
Why didn't Nvidia ever release the Volta GV100 to AIB's?


magellan

It seems like every other GPU line except Volta was released to AIBs. It couldn't be because of HBM2 memory, right? AMD released HBM2 parts to AIBs. According to TechPowerUp, the Volta outperforms the 2080 Ti.
 
The GV100 is in the Quadro line, a workstation-class GPU. I don't recall the Quadro line ever being produced by AIBs.
 
I don't recall the Quadro line being sent out to card partners either...

Also, comparing a 2080 Ti to Volta isn't comparing like with like. One is a pro GPU (and uses pro drivers); the other is a gaming GPU.

EDIT: In the professional graphics world, the key is not only performance but STABILITY and power efficiency. Those aren't exactly 'partner card' staples. Partner cards tend to (but not always) raise clocks (which raises power), and driver stability on both sides (AMD and NV) can be an issue. In the professional segment, drivers are vetted more thoroughly and updated less frequently in light of those operating conditions.

Few pros give a hoot about fancy heatsinks or overclocks.
 
I see your point, Earthdog, but why not maximize profits from Volta? How many pro cards can they sell vs. selling to AIBs for gaming/cryptomining? Why go through the incredible expense of designing and manufacturing a GPU and then only sell it into what is a niche market relative to cryptomining and gaming? It would be interesting to see what Nvidia's profit margin was on Volta relative to, say, Maxwell 2.0, Pascal, Turing or Kepler.
 
Compute, AI, and other pro uses are not a 'niche' space... even compared to mining and gaming. There's PLENTY of profit to be had in the professional space.

You also need to understand that the mining space has little use for these cards. The ROI takes MUCH longer considering the high price of the units. They have more RAM, which isn't needed for mining most coins. E.g., the 2080 Ti was a $999 card; the Quadro 5000 (based on the 2080 Ti) was $2,399 with similar hash rates. Any sane miner would want the 2080 Ti, as it would take less than half the time to reach ROI.
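A quick back-of-the-envelope sketch of that ROI point (Python, with a made-up daily-earnings figure; real numbers depend on the coin, difficulty, and power cost — only the card prices come from the post above):

```python
# Rough payback-time comparison. The daily earnings value is purely illustrative
# and assumed equal for both cards, per the "similar hash rates" claim above.
def payback_days(card_price_usd, daily_net_earnings_usd):
    """Days of mining needed to recover the card's purchase price."""
    return card_price_usd / daily_net_earnings_usd

daily_earnings = 3.00  # USD/day, hypothetical
print(f"2080 Ti ($999):  {payback_days(999, daily_earnings):.0f} days")
print(f"Quadro ($2,399): {payback_days(2399, daily_earnings):.0f} days")
# With similar hash rates, the Quadro takes roughly 2.4x as long to break even.
```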
 
I'd imagine MANY more units of gaming GPUs are sold than HPC units.

Why not a cut-down Volta for AIBs? There was a cut-down Volta, the GV10B, released for the mobile market. Like I said, it would be interesting to see what kind of profit Nvidia made off Volta, because I'll bet it's nothing like what they made off Maxwell 2.0, Kepler or Pascal.

Did AMD AIBs have problems manufacturing boards with HBM? Was there an HBM shortage? Are HBM boards considerably more expensive to manufacture than GDDRx types (either because of the added expense of HBM or the additional complexity)?
 
Not only do they charge quite a lot for the individual cards, but a lot of their cards also carry licensing fees for their usage. For example, in our virtual desktop environment we have Nvidia GPUs that use a feature called 'vGPU', which requires a license for the hardware acceleration to work.
 
I'd imagine MANY more units of gaming GPUs are sold than HPC units.
Sure... but it's still not a "niche" market. Not even close, and it's growing annually... dare I say faster than gaming?

Did AMD AIBs have problems manufacturing boards with HBM? Was there an HBM shortage? Are HBM boards considerably more expensive to manufacture than GDDRx types (either because of the added expense of HBM or the additional complexity)?
1. Nope. Just look at the AMD gaming cards with HBM that AIBs put out.

2. Nope.

3. Not sure. I vaguely recall it was SUPPOSED to be cheaper, but who knows. But again, these pro cards are gutless for gaming. IIRC (in one of the articles I linked?), many of them strip the gaming goodness out in favor of compute and other tasks. In other words, for cards like that they'd have to leave the magic gaming smoke in...

...that said, to me there just isn't a point at all. It's clear they carved out plenty of capable SKUs from the top down, generation after generation. For Nvidia, the Titan cards were the crossovers (more vRAM + better compute in most of them), though only the Titan V carried HBM. For gaming, HBM was really nothing but a marketing gimmick. While memory bandwidth is important, it only goes so far (see the quick bandwidth sketch below). With GDDR5X/6 out, AMD ditched HBM for gaming cards and left it in the pro cards where the bandwidth is actually useful. I really don't see what the extra expense brings to the table in the gaming market.
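For reference, a small Python sketch of the raw-bandwidth math (figures are the commonly published specs for these cards; treat them as approximate):

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps per pin).
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

cards = {
    "Radeon VII (HBM2, 4096-bit @ 2.0 Gbps)":  bandwidth_gbs(4096, 2.0),
    "RTX 2080 Ti (GDDR6, 352-bit @ 14 Gbps)":  bandwidth_gbs(352, 14.0),
    "GTX 1080 Ti (GDDR5X, 352-bit @ 11 Gbps)": bandwidth_gbs(352, 11.0),
}
for name, bw in cards.items():
    print(f"{name}: ~{bw:.0f} GB/s")
# HBM2 wins easily on raw bandwidth, but at 1080p/1440p the GPU core is usually
# the bottleneck, which is why that headroom rarely shows up in gaming results.
```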
 
The AMD Radeon VII certainly punched above its weight class, and I think it's disingenuous of TPU to rate the Radeon VII below the 1080 Ti -- considering I've seen gaming benchmarks where the Radeon VII came in second only to the 2080 Ti.

Were the Tensor Cores in the Volta arch. at all applicable to gaming? It had a lot of them (640), almost twice as many as the RTX 3090.

TPU's evaluation of the Titan V doesn't seem to mesh with the benchmarks I saw here:
https://www.tomshardware.com/reviews/nvidia-titan-rtx-deep-learning-gaming-tensor,5971-4.html

and

https://www.hardwareluxx.de/index.p...volta-architektur-im-gaming-test.html?start=8
 
The AMD Radeon VII certainly punched above its weight class, and I think it's disingenuous of TPU to rate the Radeon VII below the 1080 Ti --
lol, really? They test across 20 games and their results are disingenuous :confused: ?? It's a slower card until it's able to use the bandwidth advantage it has. A few titles here or there may differ, but across those 20 titles it's 8% slower at 1080p, 5% slower at 1440p, and about even at 4K.
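For what it's worth, here's roughly how a "relative performance" summary across many games can be built (TPU's exact method isn't shown here; this is just a sketch with made-up FPS numbers):

```python
# Geometric mean of per-game FPS ratios vs. a reference card, so no single
# outlier title dominates the average. All FPS values below are invented.
from math import prod

def relative_performance(fps_card, fps_reference):
    ratios = [c / r for c, r in zip(fps_card, fps_reference)]
    return prod(ratios) ** (1 / len(ratios))  # geometric mean of the ratios

radeon_vii = [88, 102, 75, 64, 119]   # hypothetical per-game FPS
gtx_1080ti = [95, 110, 78, 70, 125]   # hypothetical per-game FPS

rel = relative_performance(radeon_vii, gtx_1080ti)
print(f"Radeon VII lands at ~{rel * 100:.0f}% of the 1080 Ti in this sample")
# A couple of strong titles can't hide being a few percent behind on average.
```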

HBM/HBM2 was only good for 4K gaming. It was supposed to revolutionize... something?... and it didn't do much for the consumer space. See any gaming cards with it now? So, at 4K it caught up... the problem is that the Radeon VII is more of a 1440p/60+/ultra card than a 4K/60/ultra card in the first place.

RTX cards use tensor cores for DLSS. Volta doesn't game, but it's the same tensor cores... they do the AI/HPC thing on Volta.
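If you want to see what those tensor cores actually do outside of DLSS, here's a minimal sketch (assumes PyTorch and a Volta/Turing-class CUDA GPU): half-precision matrix math like this is routed through the tensor cores automatically, and that's the bread and butter of the AI/HPC work Volta was built for.

```python
import torch

# Two large half-precision matrices on the GPU. On Volta/Turing, cuBLAS runs
# FP16 GEMMs like this on the tensor cores; no game-side feature is involved.
a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)

c = a @ b                      # tensor-core-accelerated matrix multiply
torch.cuda.synchronize()       # wait for the GPU to finish before reading results
print(c.shape, c.dtype)
```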

I don't see a review from TPU for the Titan V... are you just looking at the specs page or the 'relative performance' chart? If so, it certainly seems to jibe with the Tom's review. Look at the 4K results at Tom's; it's pretty clear the Titan leads in most tests. If you notice, the chart at TPU says 4K FOR ANYTHING 2080 Ti OR FASTER, so you should be looking at the 4K results. The Hwluxx link is sort of useless considering it doesn't have a 2080 Ti to compare against like the Tom's review does.

I just don't think you read the data right, or something. It's all right there, and generally matching (as best it can because, again, those are different games and perhaps different settings and systems, so it's apples to oranges in the first place).
 
The relative performance chart at TPU was what I was using. I don't believe W1zzard over there ever had a Titan V to review/bench. The benchmarks I did provide don't seem consistent with the Titan V being superior in performance to a 2080 Ti -- considering a 1080 Ti sometimes beats it in min. FPS. The Tom's Hardware review does provide 2080 Ti vs. Titan V benchmarks, and the 2080 Ti always beats the Titan V in min. FPS; in two cases the 1080 Ti even beats the Titan V in min. FPS. How could that be possible?
 
Relative performance seems to match. I see what you're saying now about the mins. No idea off the top of my head. I can tell you Ashes is a CPU-heavy game, but otherwise no idea.
 
Would a Titan V be significantly better at DLSS (since Earthdog stated that's what tensor cores are used for in gaming GPUs)?

Why didn't Nvidia cram some RT cores into Volta instead of stuffing in so many tensor cores? Or are tensor cores particularly suited to HPC/machine-learning apps?
 
IIRC, Volta doesn't support NGX which is what's needed for DLSS. I think that came with Turing.
 
Was the Volta GPU arch. designed from the get-go more or less just for machine learning/HPC apps? But if that was the case, why release Volta parts for the mobile market? Whoops, it looks like TPU got this wrong. There were no Volta GPUs designed for the mobile market; they were Nvidia embedded systems with their own ARM CPUs using greatly cut-down Volta GPUs.

I did find one company using the Jetson AGX Xavier NX GPU in embedded systems (incl. some sort of surveillance system)
https://connecttech.com/product/sentry-x-rugged-embedded-system/
 