Well, we’d like to tell you we have a review today, but AMD says pre-launch samples were tight and wasn’t able to get one to us. They have reiterated that they will get us one post-launch, though, so we’ll bring you a full review as soon as we’re able.
In addition to being unable to seed us one, we have it on good authority that multiple US offices of board partners (big ones) hadn’t even held one of these in their hands as of late last week. Partners aren’t sampling reference boards, only AMD is, so either supply is tight or AMD is controlling this launch just as tightly. This is, in fact, a hard launch – AMD expects immediate availability. There isn’t any information on supply amounts, though. Hopefully they’ll have plenty. Time will tell.
We hope to get a sample from AMD sometime soon. If not, expect to wait a little while, until partners come out with non-reference designs, before we’re able to test a 290X. Anyway, while we don’t have a GPU yet, we can still walk you through what AMD is bringing with the new R9 290 series cards.
TrueAudio
You already know about Mantle, AMD’s low-level graphics API, and with the R7 260X launch AMD already had a card with TrueAudio on the market. Today’s R9 290X launch brings the most powerful card in the lineup and adds another to the list of cards currently available with TrueAudio, which is a combination of hardware and API.
AMD is doing this because…well, because nobody else has bothered to. Not since Windows XP has anyone really done anything with regard to sound. Back then, discrete sound cards were the norm and people used the latest and greatest SoundBlasters. Creative’s driver support has been abysmal (anecdotally) since then, and nobody has really done much to address sound in years. This has left audio codecs that use the CPU for processing holding the bag. They have improved, but not all that much. AMD wants to change the game with TrueAudio, allowing audio processing to be off-loaded from the CPU to a dedicated DSP on the GPU.
TrueAudio resides on the new GPUs but won’t take up massive amounts of processing power or memory. As noted in the architecture overview, there are a few KB here and there for caching and shared internal memory, and then up to 64 MB of the frame buffer can be used for audio data. That’s a drop in the bucket when you’re talking about cards with 4 GB frame buffers.
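To put that 64 MB in perspective, here’s the napkin math (a quick Python sketch; the 64 MB cap and the 4 GB frame buffer are the figures from AMD’s materials):

    # Share of a 4 GB frame buffer that TrueAudio can claim at most
    audio_reserve_mb = 64
    frame_buffer_mb = 4 * 1024

    print(f"{audio_reserve_mb / frame_buffer_mb:.1%} of the frame buffer")  # -> 1.6%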
After the TrueAudio DSP processes the audio, it goes out of your traditional analog and digital audio outputs. Unless you use HDMI audio (whose driver is supplied by AMD, of course), your USB and analog/digital outputs will still need a sound device installed. The difference is what does the processing. With TrueAudio, your GPU does all of the 3D processing and then sends the result through the audio codec to your sound device.
Traditional setups send audio data from the game, unprocessed, through to the audio codec. Then your audio codec is responsible for translating that into 3D, and AMD thinks those codecs haven’t been doing a great job of it.
UltraHD – AMD’s Push for 4K
With the launch of the R9 290 series, AMD has been pushing 4K really hard. They see these cards as the first step in a “generational” leap that they say gamers have been waiting for. Frankly, I find this mostly to be bunk. It’s not that gamers don’t want 4K. Sure, bring on the 4K monitors! It’s that the prices are so astronomical that nobody’s really pushing for this kind of solution. For instance, ASUS’ 4K monitor on Newegg is a jaw-dropping $3,499, and that is -literally- the ONLY 4K monitor on Newegg right now. There are a couple of extremely large TVs that cost even more, but I think the point stands – UltraHD is an UltraMarketingPloy, because nobody needs the kind of power that a 290 or 290X has for a single 1080p monitor. It’s serious overkill. …but AMD needs to sell GPUs, so you see why they’re pushing resolution.
As you can see, AMD’s analysis is that 4K is supposed to be taking off this very year, but that has obviously not happened yet. With such a tiny number of 4K displays even on the market, and their astronomical prices (were 1080p monitors ever that expensive?!), I don’t anticipate that fast of an adoption rate.
AMD is ready for the future, though, with support for these displays. There isn’t currently a way to get 4K worth of information through a single data cable, so it requires two connections. The biggest, most convenient part of their 4K-ready strategy is that the card will detect 4K monitors and automatically set up the Eyefinity profile for you. Eyefinity isn’t difficult by any stretch, but it can be a bit of a pain to set up, and having the driver do that automatically for you is a nice touch.
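For a sense of the pixel math involved, here’s a quick sketch. The assumption (true of the early UltraHD panels of this era) is that the monitor presents itself to the card as two 1920 x 2160 halves, which is exactly what that automatic Eyefinity profile is stitching back together:

    # Tiled UltraHD panel: two 1920x2160 halves side by side
    tile_w, tile_h = 1920, 2160
    uhd_pixels = (tile_w * 2) * tile_h   # 3840 x 2160 = 8,294,400 pixels
    hd_pixels = 1920 * 1080              # 2,073,600 pixels

    print(uhd_pixels / hd_pixels)        # 4.0 -- four times the pixels of a single 1080p display

That goes a long way toward explaining why AMD thinks you need this much GPU for UltraHD.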
Back to Earth – Multi Monitor Eyefinity Gaming
Multi-monitor gaming, on the other hand, has actually become affordable, so the push for higher resolution is fine. I happen to think 4K is a non-starter for at least the next couple of years, but if you’re on a budget, you can get a reasonable 3x 1080p Eyefinity setup for under $500. That is a whole heck of a lot more affordable than 4K. It’s so affordable, in fact, that we went to the effort of procuring a setup and have been testing at 5760 x 1080 for you for a couple of years now.
I guess that’s the crux of AMD’s 4K push. Multi-monitor gaming has been here, for a good while now, and AMD just wants to push something new and innovative. That’s good and all, but let’s work on making current technology work better, yes? For instance, let’s get that frame pacing driver out for Eyefinity so you can use Crossfire properly, k AMD?
Anyway, multi-monitor gaming is here, it’s reasonably priced, and thankfully AMD is working to make it easier on us. Interestingly, AMD has now gone with the same output configuration as NVIDIA – two DVI, one HDMI and one full-size DisplayPort. However, AMD has done a good thing here and lets you pick any three of those outputs. It doesn’t matter which ones, meaning you can use both DVI outputs at the same time, unlike on the HD 7970.
They also allow up to six monitors to run off of one card (not that it will have enough muscle for all those pixels), using a DisplayPort hub that lets that single port drive three monitors. Then you hook three more up to the two DVIs and the single HDMI output.
AMD R9 290 Series Architecture
Not much has changed about the R9 290 series architecture from the HD 7970 that came before. It’s still Graphics Core Next; there is just a lot more of it. Think of the R9 290 series relative to the HD 7970 the way NVIDIA’s TITAN (GK110) relates to the GTX 680 (GK104): same architecture, much bigger GPU. The R9 290s have up to 44 compute units (as opposed to 32 on the HD 7970). That, combined with a 512-bit GDDR5 memory bus (up from 384-bit) and TrueAudio, makes up the biggest changes to the GCN GPUs from a low-level GPU design standpoint.
This isn’t a process shrink, and it’s not a new architecture. Most of what you see here is already out there in the form of the GCN GPUs we’ve come to know the past two years. This is just bigger and much stronger.
Note this last slide is a significant change – the huge 512-bit memory bus is much wider than its 384-bit predecessor, allowing for much more bandwidth out of the same type of memory – now 320 GB/s vs. 264 GB/s before. There is a trade-off, though. Note the “at 5.0 Gbps” above, which is slower than the previous generation; the HD 7970’s memory ran at 5.5 Gbps. Each memory chip runs a little slower, but the much wider bus more than makes up for it.
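If you want to check AMD’s numbers, peak memory bandwidth is just the bus width in bytes multiplied by the per-pin data rate. A quick sketch:

    def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
        """Peak bandwidth in GB/s: bytes moved per transfer times transfers per second."""
        return (bus_width_bits / 8) * data_rate_gbps

    print(bandwidth_gb_s(512, 5.0))  # R9 290X: 320.0 GB/s
    print(bandwidth_gb_s(384, 5.5))  # HD 7970: 264.0 GB/s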
Side-by-side at a higher level, here are the biggest differences. Note that without a process shrink or architecture change, they’re cramming a lot more GPU in here, so the die size is inherently going to get bigger. In this case, it’s 1.24x as large as the previous generation – 438 mm² vs. 352 mm².
PowerTune has a new controller as well. AMD is starting to look more like NVIDIA with PowerTune.
There are four elements to how PowerTune controls your GPU – first it controls the power, keeping it within its PowerTune limit (which, if the HD 7970 is any indication, can be increased by up to 20%).
The controller then moves on to thermals, monitoring operating temperature and adjusting the fan as necessary to keep the temperature where the GPU wants it. AMD wants fan speed consistency with PowerTune (much like NVIDIA), ramping the fan up and down gradually rather than letting the GPU hit a set point and then sending the fan to crazy speeds. It can do that when necessary, but it’s not the preferred behavior.
Now we get to performance. Obviously we aren’t able to test this like we have with every other GPU in the last three years, so we’re a bit in the dark as to how this performs in practice. In AMD’s graph, it sure looks a lot like NVIDIA’s GPU Boost 2.0 implementation. The GPU will run at its maximum frequency until its power draw and/or temperature reaches a certain point. Once there, the GPU continuously adjusts its frequency to keep power levels and temperatures in check. Sound familiar? Yeah, that’s because NVIDIA has been doing the same thing since Boost 2.0 debuted with its TITAN GPU.
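AMD hasn’t published the exact algorithm, but conceptually it behaves like the simple control loop sketched below. This is our own illustration, not AMD’s code, and the limits and step size are made-up placeholders:

    # Conceptual PowerTune/boost-style loop: run at the maximum clock until a power
    # or thermal limit is hit, then step the clock down (or back up) to stay in bounds.
    MAX_CLOCK_MHZ = 1000   # "up to 1 GHz"
    POWER_LIMIT_W = 250    # placeholder limit
    TEMP_LIMIT_C = 95      # placeholder limit
    STEP_MHZ = 13          # hypothetical adjustment granularity

    def adjust_clock(current_mhz, power_w, temp_c):
        if power_w > POWER_LIMIT_W or temp_c > TEMP_LIMIT_C:
            return max(current_mhz - STEP_MHZ, 300)        # back off toward a floor clock
        return min(current_mhz + STEP_MHZ, MAX_CLOCK_MHZ)  # creep back up toward max

    # Hot and power-hungry at 1,000 MHz -> the clock comes down a notch
    print(adjust_clock(1000, power_w=260, temp_c=94))      # 987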
Interestingly, where the HD 7970 had the same number of ROPs as it did compute units, the R9 290 series has 64 ROPs for 44 compute units. Thus, there are more render backends to handle all the data processing that the R9 290 series’ GPUs can put out. There is a solid bump in stream processors (2816 in the 290X vs. 2048 in the HD 7970) and the ROPs were doubled from 32 to 64. All of that processing power and the larger pipeline are needed to handle gaming at 4K resolutions.
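Relative to the HD 7970, here’s where the extra muscle comes from (using the counts above and the bandwidth figures from earlier); note that the render back-end got the biggest boost:

    print(2816 / 2048)  # 1.375 -> ~38% more stream processors
    print(64 / 32)      # 2.0   -> double the ROPs
    print(320 / 264)    # ~1.21 -> ~21% more memory bandwidth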
AMD has supplied some 4K gaming numbers, and (depending on detail settings, etc.) the R9 290X is actually capable of driving a 4K display at over 30 FPS in several modern titles. That’s nothing to sneeze at.
With the R9 290 series, AMD is changing up how they handle Crossfire. Rather than using a Crossfire bridge cable, they are doing away with cables altogether and running communication between the two GPUs over the PCIe bus. This will work with Catalyst frame pacing, and they say it will have no adverse effect versus an external bridge. If this marketing slide is accurate about scaling, the new Crossfire has great potential.
The last slide we have for you today is the actual R9 290 series specification slide. Note that I’ve blacked out half of it – the embargo today covers only the R9 290X. You’ll have to wait a little bit for the R9 290. Two items not discussed yet are clock speed and frame buffer size. The R9 290X clocks in at “up to” 1 GHz. As far as I can tell, it will not operate there all the time like we’ve become used to on the HD 7970 GPUs; the clock speed will vary dynamically, just like NVIDIA’s boost does. It remains to be seen how that will go, and we’ll bring you the scoop on actual clock speeds as soon as we get one of these in our hands, which – believe me – we’re working on.
As for price, AMD is shooting for ultra-competitive versus the GTX 780 (assuming the rumors are correct and the R9 290X is able to beat it). MSRP for the reference R9 290X is $549. That’s undercutting the cheapest GTX 780 by a full hundred bucks. Let’s hear it for GPU price wars; they’re good for everybody!
Well, that’s all we’ve got for you, unfortunately. We’re working diligently to obtain an R9 290X sample to bring you results from our testing suite and, more importantly, show you how well it can overclock. In the meantime, at least we have an idea of what the new GPUs will feature and how they operate. We’ll update this article with links to sites that AMD did get cards to, so you can see how the new GPUs perform. We are slated to get a sample of the 290X’s little brother, so expect a review of that card when its NDA lifts.
As far as our opinion goes so far, it seems like another HD 7970 launch – the cards are “out” and this is a hard launch, but supply might be slim for now. Hey, at least we still have AMD’s two-year-old GPUs, and for significantly less money…with a different model number, no less!
Until next time, kiddies, thanks for reading.
Editor’s Note: The following links have performance numbers:
AnandTech – The R9 290X Review