In Greek mythology, the Titans were a race of “…immortal, huge beings of incredible strength and stamina…” It seems graphics card manufacturers are all about naming their high end offerings after gods nowadays. Titan is also the name of one of the fastest supercomputers in the world, at Oak Ridge National Laboratory.
The NVIDIA GeForce GTX TITAN’s name is certainly derived from some powerful pedigree.
Specifications & Features
NVIDIA starts off their press deck with a strong claim – the world’s most powerful GPU. It’s certainly built to qualify, with 2,688 CUDA cores, a massive 7.1 billion transistors and the capability of 4.5 teraflops worth of computing power.
Obviously a GPU isn’t nearly as versatile as an x86 CPU, but this comparison is eye-popping nonetheless. If you’re running calculations that can run on CUDA cores, it will outpace an i7 3960X by orders of magnitude.
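The quoted 4.5 teraflops isn’t magic, by the way; it falls straight out of the core count and clock speed. Here’s a quick back-of-the-envelope check (counting a fused multiply-add as two floating-point operations, the usual convention):

```python
# Rough sanity check of TITAN's quoted single-precision throughput.
# Peak FLOPS = CUDA cores x 2 FLOPs per core per clock (one fused
# multiply-add counts as two ops) x clock speed. This is a theoretical
# peak estimate, not a measured number.
cuda_cores = 2688
flops_per_core_per_clock = 2
base_clock_hz = 837e6          # TITAN's 837 MHz base clock

peak_sp_flops = cuda_cores * flops_per_core_per_clock * base_clock_hz
print(f"Peak single precision: {peak_sp_flops / 1e12:.2f} TFLOPS")
# ~4.50 TFLOPS, matching NVIDIA's quoted figure
```
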
This is what a lot of people have been waiting quite a while for. The rumor mill has been churning for some time now. The rumor thread on our forum is over eight pages long now. Well folks, the wait is over. As widely rumored, the GTX TITAN’s GPU is the GK110 core. You’ve already seen the number of CUDA cores (2,688). What’s not listed is that it has 224 texture units and 48 ROPs. By comparison, the GTX 680’s GK104 GPU has 1536 CUDA cores, 128 texture units and 32 ROPs. Yes, TITAN is a beast.
TITAN clocks in with an 837 MHz base clock and an 876 MHz boost clock. Fear not, it normally runs faster than that. Exactly how much faster will have to wait a couple more days, but I’ll say that 876 MHz is a tad conservative for stock boost clock.
At that speed, with so many cores, you can get 4.5 teraflops of computing power. Note the number underneath that – 1.3 teraflops of double precision computing power. Yes folks, for the programmers among you who enjoy tinkering with CUDA coding in your spare time, there is now a powerful consumer GPU that doesn’t cost multiple thousands of dollars (i.e., Tesla) and allows you to compute with double precision. It’s important to note that selecting double precision processing is an all-or-nothing choice. If you want to dedicate your TITAN GPU to double precision calculations, you’ll need to manually select that and get to work. When you’re done, if you want to go back to gaming, you’ll need to manually turn it off, or it will hurt your gaming performance.
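Where does the 1.3 teraflop figure come from? GK110 is widely reported to carry one double precision unit for every three CUDA cores, a 1:3 ratio; that ratio is my assumption here, not something stated in NVIDIA’s deck. Running the same arithmetic as before:

```python
# Assumed: GK110's widely reported 1:3 ratio of FP64 units to FP32
# CUDA cores. Not an official figure from NVIDIA's press materials.
cuda_cores = 2688
dp_units = cuda_cores // 3     # 896 double-precision units under that assumption
base_clock_hz = 837e6

peak_dp_flops = dp_units * 2 * base_clock_hz
print(f"Theoretical peak double precision: {peak_dp_flops / 1e12:.2f} TFLOPS")
# ~1.50 TFLOPS at the full base clock; NVIDIA's quoted 1.3 TFLOPS suggests
# the GPU runs at reduced clocks when double precision mode is enabled
```

That gap between the theoretical 1.5 and the quoted 1.3 is consistent with the all-or-nothing toggle described above costing you some clock speed.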
There’s more, but I’ll let you actually look at the specs first…
…and we’re back. With such a power packed graphics processor, NVIDIA saw fit to give people plenty of frame buffer, a massive six gigabytes to play with. People complained that the GTX 680 didn’t have enough memory, coming with a mere 2 GB to AMD’s 3 GB on the HD 7970. NVIDIA listened and tripled the amount. Not only that, it’s operating on a 384-bit bus at 6,000 MHz speeds (that’s quad-pumped GDDR5, running at 1,500 MHz). This is the largest frame buffer on any NVIDIA GPU, ever.
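Those memory numbers translate directly into bandwidth. Bus width in bytes times the effective data rate gives the peak:

```python
# Peak memory bandwidth from the quoted bus width and effective data rate.
bus_width_bits = 384
effective_rate_hz = 6e9    # 6,000 MHz effective (1,500 MHz GDDR5, quad-pumped)

bandwidth_bytes_per_s = (bus_width_bits / 8) * effective_rate_hz
print(f"Peak memory bandwidth: {bandwidth_bytes_per_s / 1e9:.0f} GB/s")
# 288 GB/s
```
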
One thing that lots of people will be surprised about is the power consumption. The card has two power connectors, a single 8-pin and a single 6-pin. It just plain doesn’t need any more, with a meager 250 W TDP. If it lives up to its billing as the fastest GPU in the world, that’s a coup considering how little power it draws. AMD’s HD 7970 GHz Edition is rated at a 250 W TDP as well; if TITAN outperforms it by a wide margin at the same TDP, that would be impressive.
The slide above is a rendering, but it’s the first glimpse at TITAN. Below, you’ll see a focus of NVIDIA for TITAN. They wanted a very strong GPU, but they also wanted one that can fit the needs of small form factor builders (and boutiques). I mean, really, who wouldn’t want the most powerful GPU in the world inside a teeny tiny mITX system?
So you’ve heard the specs; let’s talk about using TITAN.
NVIDIA GPU Boost 2.0
NVIDIA has had some time to consider and refine GPU Boost since it debuted with the GTX 680. Introducing GPU Boost 2.0 (also referred to as GPUB 2.0 since it’s easier to type).
GPU Boost 2.0 has everything GPU Boost 1.0 had but adds more voltage and higher clocks. Sounds good to an overclocker so far.
There were a couple graphs leading up to this one in the press deck, but you can see how they’ve compared 1.0 to 2.0 relative to temperature in the grouped graph. Temperature is the biggest takeaway on this graph.
With GPU Boost 1.0, much to the chagrin of overclockers everywhere, clocks were capped by available voltage, which itself was locked down. It was good for preventing silicon damage, but bad for overclocking. Some people (water cooling enthusiasts, for instance) could cool their cards to quite low temperatures, but had zero additional headroom.
GPU Boost 2.0 pulls the rug out from under hard voltage control. Now the GPU controls its boost clock by temperature. In the simplest terms, the GPU will use the highest voltage available (now higher than on the GTX 680) in conjunction with the highest stable clocks until the GPU reaches the default temperature target (80 °C). Only then will it reduce the boost clock.
In addition to that, NVIDIA is also allowing some overvoltage of the TITAN GPU. It’s not a lot (you’ll get the actual number in a couple days), but it’s more than zero. You have to accept that you may impact the long term reliability of your GPU, but once you’re through that mandatory dialog, you get access to some overvoltage control. This control is optional for board partners to include; however, since EVGA and ASUS are the two partners bringing TITAN to us in North America, I’d say it’s a safe bet they’ll give us that control.
With temperature in control instead of an arbitrarily selected ‘safe’ voltage, there is a not-insignificant increase in frequency capability. What damages silicon is voltage combined with temperature. Previously, NVIDIA only allowed voltages that were safe for long term reliability, period, no matter what temperature conditions existed. Now they’ve lifted that hindrance and let temperature control the boost.
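To make the behavior concrete, here’s a toy model of the temperature-driven boost logic as I understand it. This is strictly an illustration, not NVIDIA’s actual algorithm, and the 993 MHz max boost and 13 MHz step size are made-up placeholder numbers:

```python
# Toy model of temperature-driven boost: full boost below the target,
# stepped down above it, never below base clock. The max boost (993 MHz)
# and 13 MHz-per-degree step are hypothetical values for illustration only.
def boost_clock_mhz(gpu_temp_c, temp_target_c=80, base_mhz=837, max_boost_mhz=993):
    """Return the clock the GPU would run, given its current temperature."""
    if gpu_temp_c < temp_target_c:
        return max_boost_mhz                          # headroom available: boost fully
    # over the target: shed one boost bin per degree over, down to base clock
    stepped = max_boost_mhz - 13 * (gpu_temp_c - temp_target_c)
    return max(base_mhz, stepped)

print(boost_clock_mhz(70))    # cool GPU: full boost
print(boost_clock_mhz(85))    # over target: partially throttled
print(boost_clock_mhz(120))   # way over: pinned at base clock
```

This is also why water coolers suddenly care: drop the temperature and the card simply stays in the high-boost branch.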
Of course, the curve shifts a hair when a bit more voltage is introduced.
Only under these conditions can you adjust the max GPU voltage.
Overvoltage isn’t the only thing that you need to adjust with TITAN. Since temperature is the new metric by which your clocks are controlled, you need to use temperature targets to help decide where your GPU will end up.
Using Temperature Targets
Not only can you add a bit more voltage, you can also help your clocks along if you’re willing to tolerate higher temperatures. Overclockers have been doing this manually for years. They know the max temperature they can accept on their particular hardware and they adjust the cooling, voltage, and overclocks to reach that happy medium. It’s been this way for a long time, but this is the first time the temperature directly affects your ability to get higher clocks.
If you move your temperature target higher, it may move your clock curve even higher.
There are a few items in advanced controls that we haven’t gone over. You know about raising max voltage and adjusting temp target. There is also a Prioritization control. You can tell your GPU to prioritize either frequency or temperature. If you tell it to prioritize frequency, it will act like you expect based on everything above, sticking to the base clock no matter what and clocking up with boost as headroom is available.
However, say you want your GPU to run at a cooler temperature, frequency be damned. You can set your temperature target to, say, 65 °C and then set the GPU to prioritize temperature. With GPU Boost 1.0, once the GPU banged up against its limits, it would throttle down, potentially to the base clock. With GPU Boost 2.0, if your GPU hits its limit (temperature now, which you’ve selected to prioritize), the GPU will throttle itself until it cools down to the temperature you’ve set, potentially even below the base frequency. So now you’ve got more control over your GPU, up and down.
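The Prioritization control can be sketched the same way. Again, this is my illustration of the described behavior, not NVIDIA’s code, and the 993 MHz max boost, 13 MHz step, and 324 MHz floor are placeholder numbers:

```python
# Toy model of the Prioritization control: with frequency priority the
# clock never drops below base; with temperature priority it can, in
# order to hold the temperature target. All specific MHz values besides
# the 837 MHz base clock are hypothetical.
def clock_mhz(gpu_temp_c, temp_target_c, prioritize="frequency",
              base_mhz=837, max_boost_mhz=993, min_mhz=324):
    if gpu_temp_c < temp_target_c:
        return max_boost_mhz
    stepped = max_boost_mhz - 13 * (gpu_temp_c - temp_target_c)
    if prioritize == "frequency":
        return max(base_mhz, stepped)   # never below the base clock
    return max(min_mhz, stepped)        # temperature priority: base clock optional

# A GPU at 90 C against a 65 C target:
print(clock_mhz(90, 65, "frequency"))    # holds the base clock
print(clock_mhz(90, 65, "temperature"))  # throttles below base to cool down
```
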
The last item to talk about is display overclocking. This is a hit-or-miss operation. Some monitors will do it and others won’t. But with TITAN, NVIDIA has introduced the ability to overclock your display a bit. Most monitors operate at 60 Hz refresh rates (new 120 Hz monitors notwithstanding). Even if you’ve got a very powerful GPU pushing much higher framerates, the video output of the GPU is still only putting out 60 Hz (which is basically 60 frames per second).
Like any overclocking, display overclocking may or may not work and I have no idea whether it will have any effect on your monitor, positive or negative.
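The appeal is easy to quantify: the refresh rate caps how often the monitor can show a new frame, no matter how fast the GPU renders. As an example, an 80 Hz overclock (a hypothetical figure, assuming your monitor tolerates it at all):

```python
# Refresh rate sets the minimum time between frames the monitor displays,
# regardless of GPU framerate. 80 Hz here is a hypothetical overclock.
def frame_time_ms(refresh_hz):
    return 1000 / refresh_hz

print(f"60 Hz: {frame_time_ms(60):.2f} ms between refreshes")   # 16.67 ms
print(f"80 Hz: {frame_time_ms(80):.2f} ms between refreshes")   # 12.50 ms
# A new frame every 12.5 ms instead of 16.7 ms: a third more visible frames.
```
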
Meet the NVIDIA GTX TITAN
Now we get to the main event, TITAN itself, in the flesh. 10.5″ of GPU goodness. Its heat sink is similar to that on the GTX 690. It is clad in aluminum and there’s even a nice looking window into the fins on the heatsink.
It’s definitely a good looking GPU. As an added bonus, the GEFORCE GTX logo glows a pleasant green when the GPU is in use. Depending on the board partner’s software, you can even control it; turn it on, off or have its glow correspond with the load on the GPU.
Here are the aforementioned 6-pin & 8-pin power connectors.
The video outputs are standard fare for current generation NVIDIA GPUs: two dual-link DVI outputs, an HDMI output, and a DisplayPort 1.2 output.
The exterior certainly looks like a winner. Let’s look at the PCB.
Under the Hood
Here’s the back of the PCB, on which you can see half of the video RAM.
This photo is here for one very important reason: to dispel any rumors that this is the GTX 780, or in any way part of the GTX 700 series. The actual name of this card is the GTX TITAN and its SKU number begins with 699. Put those rumors to rest, folks. No GTX 7xx here.
Here’s a closer view of the back of the socket area.
The chosen RAM provider for the TITAN’s frame buffer is Samsung.
Regrettably, I did not have a Torx driver in the requisite size to disassemble this card, and it hasn’t been in my hands long enough to acquire one (only since Saturday morning, to be precise). So the only disassembled PCB photos come courtesy of NVIDIA.
The power section isn’t massive on TITAN. The GPU is powered by a six phase power section, with two additional phases dedicated to the 6 GB frame buffer, for a total 6+2 power section. While it can be over-volted, it might not be too wise to go volt modding these with the stock power section.
The cooler does a good job of zeroing in on the temperature target (in conjunction with the GPU itself of course). When you adjust the fan profile to be a bit more aggressive, it does a solid job of keeping temps down.
Since the photos from the press documentation started leaking, I’ve seen some mention the fins on the rear of the card behind the fan, wondering what they cool. The answer? Nothing at all. I’m not quite sure why they didn’t close that off. It does allow air to leak from the fan back into the case.
NVIDIA is quite proud of this card’s acoustics and for good reason. Its default fan profile is very quiet. Impressively so, even when temperatures cause the fan to spin up. Even after adjusting for a more aggressive fan curve (I prefer lower temperatures; they lead to higher clocks with this card, see GPU Boost 2.0 above), it’s quite quiet.
One last parting shot of the GPU itself, just for the heck of it.
There are two more photos and we’ll be done for the day. TITAN installed. This is what it looks like on a Maximus V Extreme (with a water cooled 3770K) and G.Skill DDR3-2600 RAM. While we can’t show you any performance numbers, you can guess that the system is going to perform pretty well. It even looks the part.
The last photo shows the nice green glow from the GEFORCE GTX logo. It’s really a good looking card and that’s icing on the cake.
Last, I leave you with a redux of the specs and a couple side notes regarding some things I’ve seen talked about.
- TITAN’s MSRP is $999.
- Yes, that’s the same price as the GTX 690.
- This is not replacing the GTX 690. The GTX 690 will continue to be produced.
- The TITAN is an alternative, not a replacement.
- This is not a limited run part. In the conference call with reporters, senior product manager Justin Walker said there will be “More than enough to satisfy demand for these high end gaming PCs.” The rumor mill somehow got a limited production run of 10,000 units in its head.
- EVGA and ASUS are the two board partners bringing TITAN out in North America. There may be other partners in other countries, but not in this market.
- Availability is expected to start next week (the week of February 25th).
Last, I leave you with a video that should be live by the time we hit the big blue Publish button this morning, courtesy of NVIDIA.