AMD Launches R9 290 Series Graphics Cards

Well, we’d like to tell you we have a review today, but AMD says pre-launch samples were tight and it was unable to get one to us. They have assured us they will seed one post-launch though, so we’ll bring you a full review as soon as we’re able.

In addition to the fact that they were unable to seed one, we have it on good authority that multiple US offices of board partners (big ones) hadn’t even held one of these in their hands as of late last week. Partners aren’t sampling reference boards, only AMD is, so either supply is tight or AMD is controlling this launch just as tightly. This is, in fact, a hard launch – AMD expects immediate availability. There isn’t any information on supply amounts though. Hopefully they’ll have plenty. Time will tell.

We hope to get a sample from AMD sometime soon. If not, we may have to wait until partners come out with non-reference designs before we’re able to test a 290X. Anyway, while we don’t have a GPU yet, we can still walk you through what AMD is bringing with the new R9 290 series cards.

TrueAudio

You already know about Mantle, AMD’s low-level graphics API, and with the 260X launch AMD already had a TrueAudio-equipped card on the market. Today’s R9 290X launch brings the most powerful card in the Rx series and adds another to the list of cards currently available with TrueAudio, which is a combination of hardware and an API.

Multi-Directional Audio

TrueAudio Tech

AMD is doing this because…well, because nobody else has bothered to. Not since Windows XP has anyone really done anything with regard to sound. Back then, discrete sound cards were the norm and people used the latest and greatest SoundBlasters. Creative’s driver support has been abysmal (anecdotally) since then, and nobody has really done much to address sound in years. This has left audio codecs that use the CPU for processing holding the bag. They have improved, but not all that much. AMD wants to change the game with TrueAudio, letting games offload audio processing from the CPU to a dedicated DSP on the GPU.

The Problem

AMD’s Solution

TrueAudio resides on the new GPUs but won’t take up massive amounts of processing power or memory. As noted in the architecture overview, there are a few KB here and there for caching and shared internal memory, and then up to 64 MB of the frame buffer can be used for audio data. That’s a drop in the bucket when you’re talking about cards with 4 GB frame buffers.
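
To put that in perspective, here’s the quick math (the 64 MB ceiling and 4 GB frame buffer are AMD’s figures; the rest is just arithmetic):

```python
# Rough footprint math using AMD's figures: up to 64 MB of audio data
# reserved out of a 4 GB frame buffer.
audio_reserve_mb = 64
frame_buffer_mb = 4 * 1024  # 4 GB card

print(f"{audio_reserve_mb} MB of {frame_buffer_mb} MB "
      f"= {audio_reserve_mb / frame_buffer_mb:.1%} of VRAM")
# -> 64 MB of 4096 MB = 1.6% of VRAM
```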

TrueAudio Architecture

DSP Features

Hardware Features

After the TrueAudio DSP processes the audio, it goes out through your traditional analog and digital audio outputs. Unless you use HDMI audio (whose driver is supplied by AMD, of course), your USB and analog/digital outputs will still need a sound device installed. The difference is what does the processing. With TrueAudio, your GPU does all of the 3D processing and then sends the result through the audio codec to your sound device.

Traditional setups send audio data from the game, unprocessed, through to the audio codec. Your audio codec is then responsible for translating that into 3D, and AMD thinks codecs haven’t been doing a great job of it.
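
To make the contrast concrete, here is a tiny sketch of the two data flows described above. Every function name in it is a hypothetical stand-in, not AMD’s actual TrueAudio API; the point is simply where the 3D processing happens.

```python
# Illustrative sketch only -- every function here is a hypothetical stand-in,
# not AMD's TrueAudio API. It shows *where* the 3D processing happens.

def cpu_process_3d(raw):
    print("3D positioning/effects computed on the CPU (codec/driver stack)")
    return raw

def dsp_process_3d(raw):
    print("3D positioning/effects computed on the GPU's TrueAudio DSP")
    return raw

def codec_output(mix):
    print("finished mix sent out the usual analog/digital/HDMI outputs")

def play_traditional(raw):
    codec_output(cpu_process_3d(raw))   # old path: the CPU holds the bag

def play_trueaudio(raw):
    codec_output(dsp_process_3d(raw))   # new path: DSP offload, same outputs

play_traditional(b"game audio")
play_trueaudio(b"game audio")
```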

TrueAudio Data Flow

TrueAudio vs. Current Tech

UltraHD – AMD’s Push for 4K

With the launch of the R9 290 series, AMD has been pushing 4K really hard. They see these cards as the first step in a “generational” leap that they say gamers have been waiting for. Frankly, I find this mostly to be bunk. It’s not that gamers don’t want 4K. Sure, bring on the 4K monitors! It’s that the prices are so astronomical that nobody’s really pushing for this kind of solution. For instance, ASUS’ 4K monitor on Newegg is a jaw-dropping $3,499, and that is literally the ONLY 4K monitor on Newegg right now. There are a couple of extremely large TVs that cost even more, but I think the point stands – UltraHD is an UltraMarketingPloy, because nobody needs the kind of power that a 290 or 290X has for a single 1080p monitor. It’s serious overkill. …but AMD needs to sell GPUs, so you can see why they’re pushing resolution.

The push for UltraHD

4K Display Adoption

As you can see, AMD’s analysis is that 4K is supposed to be taking off this very year, but that has obviously not happened yet. With so few 4K displays even on the market, and their astronomical prices (were 1080p monitors ever that expensive?!), I don’t anticipate that fast of an adoption rate.

AMD is ready for the future though, with support for these displays. There currently isn’t a way to push a full 4K, 60 Hz signal through a single data stream, so these displays are driven over two connections (or as two tiles over DisplayPort MST). The biggest, most convenient part of AMD’s 4K-ready strategy is that the card will detect 4K monitors and automatically set up the Eyefinity profile for you. Eyefinity isn’t difficult by any stretch, but it can be a bit of a pain to set up, and having it done automatically is a nice touch.
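
For the curious, here’s roughly what the driver is stitching together. The two-tile 1920 x 2160 geometry below is an assumption based on how 60 Hz 4K panels of this era typically present themselves over DisplayPort MST, not something from AMD’s slides:

```python
# Assumed tile geometry for illustration: a 60 Hz 4K MST display showing up
# as two side-by-side 1920x2160 tiles that get grouped into one surface.
tiles = [(1920, 2160), (1920, 2160)]

surface_w = sum(w for w, _ in tiles)   # tiles sit side by side
surface_h = max(h for _, h in tiles)

print(f"Eyefinity group: {surface_w} x {surface_h} "
      f"({surface_w * surface_h / 1e6:.1f} MPix)")
# -> Eyefinity group: 3840 x 2160 (8.3 MPix)
```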

Support for Multiple 4K Types

Tiled Displays

Tiled Topology

Future – Support for One Connector 4K

Back to Earth – Multi Monitor Eyefinity Gaming

Multi-monitor gaming, on the other hand, has actually become affordable, so the push for higher resolution is fine. I happen to think 4K is a non-starter for at least the next couple of years, but if you’re on a budget, you can get a reasonable 3x 1080p Eyefinity setup for under $500. That is a whole heck of a lot more affordable than 4K. It’s so affordable, in fact, that we’ve gone to the effort of procuring a setup and have been testing at 5760 x 1080 for you for a couple of years now.
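
The pixel math backs that up; a 3x 1080p wall pushes about three quarters of the pixels of a single UltraHD panel (the dollar figures above are observed prices, not computed):

```python
# Pixel counts for a 3x1 Eyefinity wall vs. a single 4K panel.
eyefinity = 3 * 1920 * 1080   # 5760 x 1080
ultrahd   = 3840 * 2160

print(f"5760 x 1080: {eyefinity / 1e6:.1f} MPix")   # -> 6.2 MPix
print(f"3840 x 2160: {ultrahd / 1e6:.1f} MPix")     # -> 8.3 MPix
print(f"ratio: {eyefinity / ultrahd:.2f}x")         # -> 0.75x
```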

I guess that’s the crux of AMD’s 4K push. Multi-monitor gaming has been here for a good while now, and AMD just wants to push something new and innovative. That’s good and all, but let’s work on making current technology work better, yes? For instance, let’s get that frame pacing driver out for Eyefinity so you can use Crossfire properly, k AMD?

Anyway, multi-monitor gaming is here, is reasonably priced, and thankfully AMD is working to make it easier on us. Interestingly, AMD has now gone with the same output configuration as NVIDIA – two DVI, one HDMI and one full-size DisplayPort. However, AMD has done a good thing here and allows you to pick any three outputs. It doesn’t matter which ones, meaning you can use both DVI outputs at the same time, unlike on the HD 7970.

They also allow up to six monitors to run off of one card (not that it will have enough strength for all those pixels) by using a DisplayPort hub, which lets that single port drive three monitors. Then you hook the other three up to the two DVI and single HDMI outputs.
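
The counting is straightforward (the three-displays-per-hub figure is AMD’s; the rest is addition):

```python
# Display limits described above: any three of the legacy outputs directly,
# plus a DisplayPort 1.2 MST hub fanning the DP output out to three displays.
legacy_outputs = ["DVI-1", "DVI-2", "HDMI"]
dp_mst_hub_displays = 3

print(f"max displays from one card: {len(legacy_outputs) + dp_mst_hub_displays}")
# -> max displays from one card: 6
```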

Pick Any Three

Support for Six Displays

DisplayPort 1.2 Hubs

AMD R9 290 Series Architecture

Not much has changed in the R9 290 series architecture from the HD 7970 that came before. It’s still Graphics Core Next; there is just a lot more of it. The R9 290 series is to the HD 7970 roughly what NVIDIA’s GK110 (TITAN) is to the GTX 680’s GK104. The R9 290s have up to 44 compute units (as opposed to 32 on the HD 7970). That, combined with a 512-bit GDDR5 memory bus (up from 384-bit) and TrueAudio, makes up the biggest set of changes to the GCN GPUs from a low-level design standpoint.

This isn’t a process shrink, and it’s not a new architecture. Most of what you see here is already out there in the form of the GCN GPUs we’ve come to know the past two years. This is just bigger and much stronger.

GCN R9 290 Architecture

GCN Shader Engine

GCN Compute Unit

Geometry Processing

Render Back Ends

Memory Interface

Note this last slide is a significant change – the huge 512-bit memory bus is much wider than its 384-bit predecessor, allowing for much more bandwidth out of the same type of memory – now 320 GB/s vs. 264 GB/s before. There is a trade-off though. Note the “at 5.0 Gbps” above, which is slower than the previous generation; the HD 7970 ran at 5.5 Gbps. Thus, you’ve got to give up a little speed to put that much more data through the bus.

Side-by-side at a higher level, here are the biggest differences. Note that without a process shrink or architecture change, they’re cramming a lot more GPU in here, so the die inherently gets bigger. In this case, it’s 1.24x as large as the previous generation – 438 mm² vs. 352 mm².
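
Both of those numbers fall straight out of the slide figures, if you want to check them:

```python
# Memory bandwidth = bus width in bytes x effective data rate;
# die area ratio comes directly from the quoted die sizes.
def bandwidth_gbs(bus_bits, gbps):
    return bus_bits / 8 * gbps

print(f"R9 290X: {bandwidth_gbs(512, 5.0):.0f} GB/s")   # -> 320 GB/s
print(f"HD 7970: {bandwidth_gbs(384, 5.5):.0f} GB/s")   # -> 264 GB/s
print(f"die area ratio: {438 / 352:.2f}x")              # -> 1.24x
```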

GCN Architecture Efficiency Comparison

PowerTune has a new controller as well, and with it AMD is starting to look a lot more like NVIDIA.

PowerTune Architecture

New Serial VID Interface

There are four elements to how PowerTune controls your GPU – first it controls the power, keeping it within its PowerTune limit (which, if the HD 7970 is any indication, can be increased by up to 20%).

The controller then moves on to thermals, monitoring operating temperature and adjusting the fan as necessary to keep the temperature where the GPU wants it. AMD wants fan speed consistency with PowerTune (much like NVIDIA), so they ramp the fan up and down gradually rather than letting the GPU hit a set point and then sending the fan to crazy speeds. It can do that when necessary, but it’s not preferable.

Now we get to performance. Obviously we aren’t able to test this like we have with every other GPU in the last three years, so we’re a bit in the dark as to how it performs in practice. In AMD’s graph, it sure looks a lot like NVIDIA’s Boost 2.0 implementation. The GPU will run at its maximum frequency until its power limit and/or temperature reaches a certain point. Once there, the GPU will continuously adjust its frequency to keep power levels and temperatures in check. Sound familiar? Yeah, it’s because NVIDIA has been doing the same thing since Boost 2.0 debuted with its TITAN GPU.
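
As a rough mental model of that behavior, here’s a toy controller loop. It is emphatically not AMD’s PowerTune algorithm, and the power/temperature limits and step size below are made-up illustrative numbers, not official specs:

```python
# Toy boost model: run at the maximum clock until a power or temperature
# limit is exceeded, then trim frequency until both are back in bounds.
MAX_CLOCK_MHZ = 1000   # "up to 1 GHz" on the R9 290X
POWER_LIMIT_W = 250    # illustrative limit, not an official figure
TEMP_LIMIT_C  = 95     # illustrative target, not an official figure
STEP_MHZ      = 13     # illustrative adjustment granularity

def next_clock(current_mhz, board_power_w, gpu_temp_c):
    """Back off if either limit is exceeded, otherwise creep back up."""
    if board_power_w > POWER_LIMIT_W or gpu_temp_c > TEMP_LIMIT_C:
        return max(current_mhz - STEP_MHZ, 300)           # throttle down
    return min(current_mhz + STEP_MHZ, MAX_CLOCK_MHZ)     # ramp back up

print(next_clock(1000, board_power_w=260, gpu_temp_c=94))  # over budget -> 987
print(next_clock(900,  board_power_w=230, gpu_temp_c=80))  # headroom    -> 913
```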

Quadrinity

Power Control – Near Static

Fan Control

…which Controls Temps

Power Plus Temps Equals Performance Regulation

Interestingly, where the HD 7970 had the same number of ROPs as compute units, the R9 290 series has 64 ROPs for its 44 compute units, so there are more render back ends to handle all the data the R9 290 series’ GPUs can put out. There is a solid bump in stream processors (2816 in the 290X vs. 2048 in the HD 7970) and the ROPs were doubled from 32 to 64. All of that processing power and the larger pipeline are needed to handle gaming at 4K resolutions.
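
Those shader counts follow directly from the compute unit counts, since GCN packs 64 stream processors into each CU:

```python
# Stream processor totals from CU counts (64 SPs per GCN compute unit).
sp_per_cu = 64
print(f"R9 290X: {44 * sp_per_cu} SPs, 64 ROPs")   # -> 2816 SPs
print(f"HD 7970: {32 * sp_per_cu} SPs, 32 ROPs")   # -> 2048 SPs
```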

R9 290 Series – Much Stronger

Strength For the 4K Push

AMD has supplied some 4K gaming numbers and (depending on detail settings, etc.) the R9 290X is actually capable of driving a 4K display at over 30 FPS in several modern titles. That’s nothing to sneeze at.

Display Configs

Gaming at 4K

With the R9 290 series, AMD is changing how it handles Crossfire. Rather than using a Crossfire cable, they are doing away with bridges altogether and running communication between the two GPUs over the PCIe bus. This will work with Catalyst frame pacing, and they say it will have no adverse effect versus an external bridge. If their marketing slide is accurate about scaling, the new Crossfire has great potential.
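
For readers wondering what “frame pacing” actually does in an alternate-frame-rendering setup, here’s a toy illustration. It is not AMD’s driver logic; the numbers are invented purely to show how bunched-up frame times get spread into an even cadence:

```python
# Toy frame pacing: never present a frame before it's finished, and never
# closer than one target interval after the previous present.
def pace(finish_times_ms, target_interval_ms):
    presents, last = [], None
    for t in finish_times_ms:
        p = t if last is None else max(t, last + target_interval_ms)
        presents.append(p)
        last = p
    return presents

# Two GPUs alternating frames finish in bunched pairs (classic micro-stutter);
# pacing the presents turns that into an even 10 ms cadence.
finished = [0.0, 1.0, 20.0, 21.0, 40.0, 41.0]
print(pace(finished, target_interval_ms=10.0))
# -> [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]
```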

New Crossfire Technology – No Bridge

Purported Crossfire Scaling

The last slide we have for you today is the actual R9 290 series specification slide. Note that I’ve blacked out half of it – the embargo today covers only the R9 290X. You’ll have to wait a little bit for the R9 290. Two items not discussed so far are clock speed and frame buffer size. The R9 290X clocks in at “up to” 1 GHz. As far as I can tell, it will not run there all the time like we’ve become used to on the HD 7970 GPUs; the clock speed will vary dynamically, just like NVIDIA’s boost does. It remains to be seen how that will go, and we’ll bring you the scoop on actual clock speeds as soon as we get one of these in our hands, which – believe me – we’re working on.

AMD R9 290X Specifications

As for price, AMD is shooting for ultra-competitive versus the GTX 780 (assuming the rumors are correct and the R9 290X is able to beat it). MSRP for the reference R9 290X is $549. That’s undercutting the cheapest GTX 780 by a full hundred bucks. Let’s hear it for GPU price wars; they’re good for everybody!

Well, that’s all we’ve got for you, unfortunately. We’re working diligently to obtain an R9 290X sample to bring you results from our testing suite and, more importantly, to show you how well it can overclock. In the meantime, at least we have an idea of what the new GPUs will feature and how they operate. We’ll update this article with links to sites that AMD did get cards to, so you can see how the new GPUs perform. We are also slated to get a sample of the 290X’s little brother, so expect a review of that when its NDA lifts.

As far as our opinion goes so far, it seems like another HD 7970 launch – they’re “out” and this is a hard launch, but supply might be slim for now. Hey, at least we still have AMD’s two-year-old GPUs, and for significantly less money…with a different model number, no less!

Until next time kiddies, thanks for reading.

Editor’s Note: The following links have performance numbers:

AnandTech – The R9 290X Review

HardwareCanucks – AMD Radeon R9 290X 4GB Review

-Jeremy Vaughan (hokiealumnus)

Discussion
  1. Asus amd gpus are not so brilliant afaik...at least last gen second revisions kinda sucked.

    If using hot wire and asus boards I guess they can be real good for ln2, but most of the highest scores were on Lightning 7970s before the Titan and Classified 780 launched.

    http://www.overclock.net/t/1436497/official-amd-r9-290x-290-owners-club/4170#post_21159578

    Lots of water results for R9 290 pitted against a 290X and a 7970 at 1300+ core.
    Honestly I've always been happy with MSI reference stuff but I haven't heard ANYTHING good about their upper end mobos or cards. The theme I saw when researching the 780 Lightning is that it could have been great but MSI dropped the ball on firm/software. Same thing for the XPower/MPower series. So I'm probably going to steer clear of MSI for now. The Asus DC2 version should have better power delivery, right? As well as a waterblock from EK and VGA Hotwire.
    SeeThruHead
    Wow those are some impressive numbers for a 400 dollar card. Highest I've seen the 780ti is 86 in Valley. I plan on buying both cards for benching purposes at some point. I haven't bought ATI since a bad experience with the 6990. What's the best manufacturer for extreme voltage control? Asus? How about hotwire support?


    For LN2, the MSI Lightning...For air and water, right now I'd say a reference Asus card. (At least among current offerings; we have yet to see which will be the best non-reference this gen.)
    SeeThruHead
    Anyway, a little more on topic. I'm really considering picking up a 290 for benching purposes. It's so cheap and such great performance. Anyone have benchmarks under water yet?


    From ocn:



    Valley Extreme HD -- Sapphire r9 290 -- 1150/1575 -- stock bios/volts -- stock cooling



    Sapphire r9 290 @ 1150/1550 Firestrike -- 12348 gpu score -- stock volts and bios -- stock cooling

    http://www.3dmark.com/fs/1121788

    All water scores I find are 290X cards, but they already have a custom gputweak for voltage control, and you can also add +100mv in Afterburner beta16 with a command line tweak. AB support will be ready soon, looking forward to it. (Asus soft is buggy and the interface is quite horrid)

    "You can alter voltage on it even with current MSI AB beta by sending commands to VRM via MSI AB command line:

    To set +100mV offset:

    MSIAfterburner.exe /wi4,30,8d,10

    To restore original voltage:

    MSIAfterburner.exe /wi4,30,8d,0

    Use it at your own risk"

    http://forums.guru3d.com/showthread.php?t=382760&page=6

    You can get an Asus card for best compatibility with GPU Tweak, no need to flash BIOSes to get voltage control, and best compatibility with the modded PT1 BIOS (PT3 is not advisable since it seems to wreck cards eventually; better to wait for better AB support for more hardcore benching). Most PT3 users report black screens that are a pain to troubleshoot, and most get worse when you touch memory clocks.

    EDIT: We need an R9 290X/290 club thread here...I don't wanna hang out at ocn so much once I get mine :p
    Adaptive v-sync works just as STH describes. After it gets under 60 FPS, it shuts off.

    NVIDIA's Adaptive Vsync does shut itself off below 60 FPS. It does so to reduce the stuttering introduced by normal vsync, which clamps frames to multiples of the refresh rate (45/30, etc.). Vsync does tend to add some input lag, though...in my limited use of it, adaptive vsync minimizes that but does not eliminate it.

    Tearing can occur above or below; it is just most visible above. It happens below the same way as above: the panel starts to refresh the image (using the old frame) and then halfway through it gets the new frame and starts drawing that instead.
    Screen tearing happens whenever your fps is not synced to your refresh rate – at lower than 60 fps as well as above. It's much worse if your fps is a lot higher than your refresh rate; in those scenarios you could get two or three tears on screen at a time. At 45 fps you will still get tearing, just not in every frame. In order to have no tearing, the fps has to be in sync at 60 Hz, or the same frame must be displayed twice (30 fps).

    Tldr: tearing happens below 60 fps even when using adaptive v-sync. That is not really debatable
    SeeThruHead
    You aren't listening. Go look it up. Screen tearing happens both when your fps is above and below your refresh rate (vsync off).

    Adaptive vsync doesn't get rid of screen tearing if your card is only pushing out 45fps; in fact adaptive vsync doesn't do anything at all at 45fps. I don't need your hardware. I have 2x 670s and a 650ti boost. I have plenty of experience using adaptive v-sync since it came out. I understand exactly how it works. You do not.


    Yes, I'm listening; you don't understand that you don't need v-sync at 60 FPS and under, because the monitor syncs with the video card and it does not use vertical sync, it uses the refresh rate. That is why I don't get tearing at 48-60 FPS.

    You need to read how monitors sync with video cards.

    Tearing

    http://www.tweakguides.com/Graphics_9.html

    It is an unfortunate fact that if you disable VSync, your graphics card and monitor will inevitably go out of synch. Whenever your FPS exceeds the refresh rate (e.g. 120 FPS on a 60Hz screen), or in general at any point during which your graphics card is working faster than your monitor, the graphics card produces more frames in the frame buffer than the monitor can actually display at any one time. The end result is that when the monitor goes to get a new frame from the primary buffer of the graphics card during VBI, the resulting output may be made up of two or more different frames overlapping each other. The onscreen image may appear to be slightly out of alignment or 'torn' in parts whenever there is any movement - and thus it is referred to as Tearing. An example of this is provided in the simulated screenshot below. Look closely at the urinals and the sink - portions of them are out of alignment due to tearing:
    I got a single 280x so far, OC'd to 1175/6500.

    I play a lot of BF4, and in 1080p 64-player online with view 120, scale 100 and everything on ultra and 4x MSAA I can do 60+ fps and sometimes more, but it will fall down to 40 too many times, so I need another 280x to top the game so I can hit 60fps+ all the time.

    If I play high and 2x MSAA I do 60fps+ all the time and never see it under 60. So for the last Ultra settings I will buy another 280x. ;)
    SeeThruHead
    That's really not a true statement. When running adaptive v-sync, as soon as your fps drops below 60 (for a 60hz display) the software turns off v-sync. That immediately introduces tearing. (Tearing happens both at fps above the refresh rate as well as below it.)

    So if you can't maintain 60fps with your hardware, you will experience tearing when your fps drops, or you can use normal v-sync and you will eliminate tearing, but your fps will jump straight to 30fps when you can't maintain 60fps.

    Both scenarios are terrible IMO; that's why I buy a graphics card capable of maintaining 60fps over the majority of my gaming.

    Luckily we have g-sync coming, which will entirely eliminate tearing and fps halving. But sadly only for Nvidia buyers.


    That is not true, and if you had my hardware you could see that you are incorrect. Screen tearing only happens when your video card sends more frames than your monitor can handle, causing tearing.

    The artifact occurs when the video feed to the device isn't in sync with the display's refresh.

    You don't have to deal with screen tearing, so you don't understand; adaptive v-sync works great. Nvidia would not have made that setting if it did not work.

    When the video drops below 60 then it is in sync with the monitor – how do you think it works?
    SeeThruHead
    I don't understand what you're really getting at. I haven't run a single game on my PC without vsync for years. I cannot stand tearing. I also cannot stand 30fps. So if my card cannot keep up 60fps at least 80% of the time, I don't find the gameplay experience enjoyable. That's not me blaming hardware for software issues; it's just the fact that certain cards, like the 280x, do not provide enough muscle for the gameplay experience that I find enjoyable. It comes down to personal tolerances for things like FPS and screen tearing. Someone who doesn't like tearing or 30fps should not buy a 280x for 1080p, not if they want to run maximum settings. If you want to sacrifice settings, or experience tearing/30fps drops, then by all means save the money and get a 280x.

    Everyone's preference is going to be different. You can't just go and say that a card is overkill when you are basing that on your not very demanding needs.


    People don't need to deal with screen tearing by buying new hardware; you just need to enable v-sync and it will not run over your monitor's refresh rate.

    I don't have screen tearing because I run adaptive V-sync.
    I agree. V-sync is a setting in software, depending on your monitor speed. If you have a 60Hz monitor and your FPS is 70, you will get screen tearing. If you have 30-60 fps and stay there, you will not have any.
    That's a native ability of any modern hardware. When it comes to V-sync, issues are usually located in the software. But just as I said, I don't own a single PC game with V-sync issues. In the past 5 years the only game where I truly had some bad tearing was on the Xbox 360, and that's because they used bad software; the Xbox 360 has a native V-sync ability, but that doesn't mean bad software can't "disable" it. The reason is that it was a cheap PC port...ported over to console without love. With the release of a DRM-free GOG version I stopped playing it, and nowadays that game is running on my PC...so actually not a single game still in use is lacking V-sync. :D