
FEATURED AMD RX 480 Review list

well said!

I do believe, though, that CFX has made some significant strides lately on the scaling side, particularly with the Fiji architecture.

From what I have seen, it is a great 1080p card, a solid 1440p card, and leaves a lot to be desired for single-card 4K. If it is as fast as or slightly faster than a 970, and below a 980, it's not 4K ready. When I say 4K ready, I mean ~45-60 fps, ultra settings, in the no-AA to 4xAA range.

~45-60fps ultra settings, no AA ~ 4xAA - and this is going to cost ~$250ish?

I'm going to see how the unicorn in the backyard is doing!:chair:
 
Sorry, I was responding to the thought that it is 'doable' at 4K, cost notwithstanding. Most titles were between 20-30ish fps average. Two would get you to what I would need to be playable... a 1080 Ti or Vega, I hope. The 1080 is close.
 
Perhaps ASUS, Gigabyte, etc. might be able to offer higher-clocked versions that outperform a 970. Any multi-card CrossFire tests? This would probably be a better upgrade path (8GB version) for anybody rocking a 960, 950, 750 Ti or slower. I don't see myself upgrading any time soon, unless I move to 4K. Even then, perhaps 970 SLI would be a cheaper option than a single powerful card.
 
Guys, everybody is disappointed that it's like a GTX 970, but it's not! The reviews I saw put the reference RX 480 against an SSC GTX 970. The SSC 970 is overclocked and has a custom PCB, while the RX 480 runs stock speeds on a reference PCB.
 

Not to mention a healthy boost clock. I see the lackluster OC as a result of the boost clock. I'm not an expert, and I really should search before saying this, but 1120 base clock to 1266 boost is fair enough, I would think. For reference, I submit two cards that I own: a PowerColor 7850 and a Sapphire 270X. The 270X has an 1100 base and 1150 boost and is flaky at 1175 (stock voltages), but that little 7850 will go from a 910 base to 1050 (OC in Afterburner) on stock voltages and not break a sweat. What I am getting at is that, IMO, boost clocks are near the limit... and these cards are following the trend. Right? :)
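For what it's worth, the headroom in those numbers can be compared directly; a rough sketch in Python (the clock figures are the ones from the post above, the function name is mine):

```python
# Compare base-to-boost/OC headroom for the cards mentioned above.
# Clocks in MHz; the 7850's 1050 is the Afterburner OC, not a factory boost.
cards = {
    "RX 480 (reference)": (1120, 1266),
    "Sapphire 270X":      (1100, 1150),
    "PowerColor 7850":    (910, 1050),
}

def headroom_pct(base, boost):
    """Percent increase from base clock to boost/OC clock."""
    return (boost - base) / base * 100

for name, (base, boost) in cards.items():
    print(f"{name}: {headroom_pct(base, boost):.1f}% over base")
```

By that measure the 270X's ~4.5% factory headroom is far tighter than the 7850's ~15% manual OC, which fits the point that newer cards ship with boost clocks already near the limit.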
 
Tom's Hardware reported that the RX 480 violates the PCI-E specification by exceeding the 75W maximum, drawing 90W from the slot. They didn't overclock because they claimed that the RX 480's power consumption through the PCIe slot jumped to an average of 100W, peaking at 200W. If this is proven to be true, and not the result of some faulty measurement through a riser card, this is a HUGE issue, as it could fry the motherboard.

Very interesting, I'll have to look into this.

@Janus
I'm under the assumption that DX12 multi-GPU support is native to the API. Early keynotes that I have read have led me to believe that the engine will recognize what's available and offload. In the past, it's been up to the game developer and driver team to acknowledge the different cards and then assign tasks.
 

The boost clocks are near the limit at stock, because of the heat.
 

It's not the PCI-E slot (as you have mentioned); it's the PCI-E power connector:

Reviews for the reference design RX 480 just went live a couple of hours ago. Testing around the card’s overclocking revealed that the extremely limited power delivery is the likely culprit behind its modest overclocking capabilities. With just a single 6-pin power connector the card’s power delivery is limited to a maximum of 150W. 75W from the PCIe slot and 75W from the 6-pin PCIe power connector.


The reference RX 480 draws an average of 80 watts from the PCIe 6-pin connector. That's already 5 watts over the limit at stock clock speeds, leaving absolutely no room for any serious overclocking. It's an issue that AIB partners are addressing directly with their custom RX 480 designs through the addition of more power connectors. This is very likely why AMD's AIB partners are reporting that clock speeds of up to 1.6GHz are actually achievable: so much of the GPU's overclocking potential simply can't be tapped with the reference design.

source: http://wccftech.com/amd-rx-480-asus-strix-msi-gaming/

If it were the riser, then that would never have made it through the unit testing phase. The PCI-E 6-pin connector can tolerate larger draws, but voltage will drop at a higher rate.

There is no second PCI-E connector available on the reference board, so reference cards will always be limited to lower overclocks. The non-reference cards should be different; ASUS, Gigabyte, MSI and others will most likely add a second connector to keep their customers happy.
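The power budget being described works out to simple arithmetic; a minimal sketch (the limits are the PCIe spec values, the 80 W figure is the average quoted above, variable names are mine):

```python
# Reference RX 480 board power budget vs. measured draw, per the quote above.
SLOT_LIMIT_W = 75       # PCIe x16 slot allowance
SIX_PIN_LIMIT_W = 75    # 6-pin PCIe auxiliary connector allowance

board_limit_w = SLOT_LIMIT_W + SIX_PIN_LIMIT_W   # total budget for this design

measured_six_pin_w = 80                          # average 6-pin draw reported at stock
six_pin_overdraw_w = measured_six_pin_w - SIX_PIN_LIMIT_W

print(f"Board power limit: {board_limit_w} W")
print(f"6-pin overdraw at stock: {six_pin_overdraw_w} W")
```

With the connector already ~5 W over at stock, any overclock has to come out of a budget that is exhausted before you start, which is the point being made above.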
 
It is still dangerous and breaking spec by pulling more than 75W through the 6-pin PCIe power connection...
 

The multi-GPU support is in the API, but everything that I've read states that it still requires code written specifically for the purpose; it isn't automatic, unfortunately.
 

Correct, developers must code the support into their software. It doesn't just magically appear, and it's not retroactive.
 
Either way, it's supposed to be less stress on the coders compared to earlier multi-GPU support. But I guess time will tell.

@ATM
I agree that it's breaking spec, but those 6-pins should be able to push more. The catch is that overclockers who normally didn't have to watch for this edge case need to realize that a low-cost PSU will most likely die after long-term OC with a reference 480.
 
Why would that be? It's not like any PSU has a limit on each connector; the limits are on the rail. These connectors can handle more than their rating. Look at the 500W, dual 8-pin 295X2. ;)
 

I'm always going to assume someone has a crap PSU which barely meets ATX specifications, which would absolutely not push more.
 


Agreed. That's why you can push the 6-pin PCI-E further than its rating; you will just slowly lose voltage.
 
I'm not impressed.

"Look - here's our new product...and it's just about as good as a GTX 970 (you know, that old technology from NVIDIA?) You can buy it with more RAM (not sure what you need it for as it doesn't do well in high resolution titles which will use that RAM). And as a bonus, it draws more power."

As I have said before, I really don't understand AMD's strategy here. All NVIDIA has to do to counter is to drop the price of a GTX 970 to around $225.

The average user (which this card is targeted for) will not care about an extra 25 W or $25.
 

Here is a direct quote from the review:

"AMD’s Radeon RX 480 draws an average of 164W, which exceeds the company's target TDP. And it gets worse. The load distribution works out in a way that has the card draw 86W through the motherboard’s PCIe slot."

Further down, they mused about a CrossFire setup's effect on the motherboard:

"We’re also left to wonder what we'd see from a CrossFire configuration. Two graphics cards would draw 160W via the motherboard’s 24-pin connector; that's a tall order."

And here's one more direct quote:

"Believe it or not, the situation gets even worse. AMD's Radeon RX 480 draws 90W through the motherboard’s PCIe slot during our stress test. This is a full 20 percent above the limit.

To be clear, your motherboard isn't going to catch fire. But standards exist for a reason. All of the components around the PCIe slot and along the path from the slot to the 24-pin ATX connector will suffer from the peaks. And depending on your platform's design, audio problems may also materialize."
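Those quoted figures are easy to sanity-check with a little arithmetic; a small sketch (the 80 W per-card figure is my inference from the quoted 160 W CrossFire total, everything else is straight from the quotes):

```python
# Sanity-check the Tom's Hardware slot-draw numbers quoted above.
SLOT_LIMIT_W = 75          # PCIe slot spec limit

stress_slot_draw_w = 90    # reported draw through the slot during their stress test
pct_over_spec = (stress_slot_draw_w - SLOT_LIMIT_W) / SLOT_LIMIT_W * 100
print(f"Stress-test slot draw is {pct_over_spec:.0f}% over spec")  # matches the quoted 20%

# CrossFire: two cards pulling slot power through the 24-pin ATX connector.
per_card_24pin_w = 80      # inferred from the quoted 160 W total for two cards
crossfire_24pin_w = 2 * per_card_24pin_w
print(f"Two-card draw via the 24-pin: {crossfire_24pin_w} W")
```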
 
This is very disturbing. I received some internal verification on this, and it checks out. I was also told that Nvidia has the same issue, but I'm not sure to what extent.

So basically, the VRMs on these GPUs are not set up to correctly balance power draw between the slot and the power connector. This should be fixed with non-reference cards, but to what degree I'm not sure. I'll have to see what ASUS, Gigabyte and others do to their cards.
 

It also seems that they really should have used an 8-pin rather than a 6-pin power connection. I've not seen anyone report a significant overdraw on the PCIe connector on an Nvidia card, so if you find a link on that, please post it.
 