
GeForce 9900 GTX & GTS Slated For July Launch

Kinda have the feeling both companies are trying to fool each other's industrial spies with false info, which the media informants also receive, so we have these confusing, contradictory rumors floating around.

Yeah I agree with you. It's all too fishy for my liking. The 9800 series cards have only been out for "five minutes" and now all of a sudden there is this rat race to release NV cards. If the 9900s are to be released with GT200-type cards following right after, then NV is either running on a mental time frame and about to **** a lot of customers off, or they are running rumors.com. Either way I'm not impressed, and I'm not buying any of it until I see it all in writing :screwy:
 
I can certainly understand most people's frustration with what seems like a quick succession of card releases. The simple truth is the G80 (8800 series) has been around since November of 2006. That's about 18 months the market has remained stagnant, which is an eternity in the tech market. We are talking about 20 months by the time the GT200 rolls around. WAY WAY overdue in my opinion.

As for the cards since the 8800 GTX: the Ultra was the same thing with premium 0.8ns RAM added. The revamped 8800GT/GTS was just a core refresh (G92), which made the cards cheaper for the consumer but saved even more money for nVidia. The 9800 GX2 was nothing more than two 8800GTS-class cards (G92 cores) slapped together. And we all know the 9800GTX was just a speed-bumped GTS (same G92 core)... everything was still based on the G80 architecture.

There was a hell of a lot of naming deception going on over the last 8 months, I will fully admit that. But if people had followed the tech and done their homework, they would have seen there has been little real technological advancement since the G80. What the G92 refresh did bring was more mainstream pricing for higher performance.

Bring on the GT200. :thup:
 
Yeah I agree with you. It's all too fishy for my liking. The 9800 series cards have only been out for "five minutes" and now all of a sudden there is this rat race to release NV cards. If the 9900s are to be released with GT200-type cards following right after, then NV is either running on a mental time frame and about to **** a lot of customers off, or they are running rumors.com. Either way I'm not impressed, and I'm not buying any of it until I see it all in writing :screwy:

Yeah, typically I jump when I see a new card. This time I'm holding out till I see numbers. I'd prefer to go back to ATI just because I like their drivers better, but at least I'm not getting any more of the .dll errors I was getting with my nVidia card, so I'm willing to give it a second chance.
 
What I still find totally bizarre about this news is the fact that this bit of rumors.com reckons there will be a GTX model in the song and dance. Wasn't there a 9800 GTX that just rolled out onto the shop floor this month? So that's April, May, June, Jul... 9900GTX!! EVGA are getting some tonkin' speeds on those 9800 GTXs, so does that mean the 9900GTX is going to take me to gaming hyperspace? Or will it be "quick, but not as quick as the newly released 9800GX2!"... Or did everyone who just recently bought a 9800 series card just get pwned by NV (probably again)? If this all happened I'd be screaming for a super discount step-up program! Also, the 8800GTS isn't exactly slow compared to the 9800 GTX and 8800GTX, so I wonder how quick that 9900GTS will be...
My only real question is: what is the GT200 core finally all about? Is it for the 9900 show, or is that another NV episode? :drool:
 
I don't see why people say a 512+ bit memory bus is needed/wanted. What you want is memory bandwidth, not memory bus width. Bus width by itself does nothing; it's only the number of bits that can be read/written per clock tick, so you can increase memory bandwidth (data per unit of time) either by widening the bus or by raising the memory clock speed.
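A quick back-of-the-envelope way to see it (a minimal sketch; the clock figures below are made-up round numbers for illustration, not the specs of any announced card):

# Peak memory bandwidth = bus width x effective data rate.
def bandwidth_gbs(bus_width_bits, effective_mts):
    # (bits / 8) bytes per transfer * million transfers per second -> GB/s
    return bus_width_bits / 8 * effective_mts * 1e6 / 1e9

print(bandwidth_gbs(512, 2000))   # wide bus, slower GDDR3-ish memory -> 128.0 GB/s
print(bandwidth_gbs(256, 4000))   # half the bus, GDDR5-ish data rate -> 128.0 GB/s

Same bandwidth either way; the bus width on its own tells you nothing.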

Now, memory bandwidth is one of those things that can create a bottleneck if it's too low (the GPU doesn't get enough data to work on and wastes time) but doesn't help if it's higher than needed (why would I want more data available than the data I can process?). In other words, it doesn't directly increase performance or image quality; it just removes a bottleneck that gets in the way of good performance under circumstances of heavy memory usage. I've always found the changes that really affect the pure processing power much more interesting and spectacular... :D

Even so, if more memory bandwidth is needed to feed the processing capabilities of the GPU in the 3D/gaming scenarios of the target market segment, you can go down the route of widening the memory interface/bus (doubling it, or increasing it without doubling by using odd quantities of VRAM), or of using faster memory. The former is the complex, expensive, less flexible way to do it: lots of additional transistors in the chip, a complex PCB, and it's dependent on the quantity of memory being used. Now you have GDDR5, roughly 2x as fast as GDDR3/4, probably cheaper than doubling the memory bus width, and less power hungry than GDDR3/4... the way to go should always be: first, use the fastest memory that is affordable (especially if it even leads to a power consumption reduction); second, if that's not enough (and only if that's not enough), widen the memory interface.

I wonder how many of that "billion transistors" of the GT200 are wasted on that 512-bit bus, instead of using GDDR5 to achieve the same performance (better said, to raise the memory-GPU communication bottleneck to the same high-enough point) with a cheaper, cooler, less power-hungry card, all because they know that releasing a high-end card with "only" 256 bits would hurt their image (and thus sales), because so many people think 256-bit is "low-end".

Since the cost of a chip depends on its size (area), and that depends on the number of transistors and the size (nm) of the process, a chip with X transistors costs the same whether those transistors are used to make shader processors, TMUs, ROPs, memory controllers, or whatever. So give me a graphics card with GDDR5 and half the memory bus width, and use all the extra transistors to add shaders or TMUs. I don't need to know that I have a 512-bit bus to feel better...
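To put a rough shape on why area drives cost, here's a minimal sketch using the classic dies-per-wafer approximation (the 300 mm wafer and both die areas are made-up, textbook-style numbers, not real GT200 figures):

import math

# Classic approximation: dies per wafer ~= wafer area / die area,
# minus an edge-loss term for the partial dies around the rim.
def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

print(round(dies_per_wafer(300, 300)))   # ~197 candidate dies per wafer
print(round(dies_per_wafer(300, 500)))   # ~112 if extra memory controllers bloat the die

Fewer dies per (fixed-cost) wafer means a pricier chip, whatever the extra transistors were spent on.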

Anyway, let's see what nVidia and AMD/ATi give us this time... :beer:
 
I don't see why people say a 512+ bit memory bus is needed/wanted. What you want is memory bandwidth, not memory bus width...

After that big long-winded post, you said it yourself in the first paragraph... lol.

The bigger bus width will yield more memory bandwidth. As for GDDR5, who is to honestly say which is cheaper? Put the 512-bit bus in now and add the quicker RAM as the price comes down. Best of both worlds.
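For what it's worth, a quick follow-on to the earlier sketch puts a number on "best of both worlds" (the data rate is again just an illustrative assumption):

# Wider bus AND faster memory, same rough formula as before:
# 512-bit bus at a GDDR5-class 4000 MT/s effective rate (made-up figure).
print(512 / 8 * 4000e6 / 1e9)   # -> 256.0 GB/s, double either change on its own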
 
After that big long-winded post, you said it yourself in the first paragraph... lol.

The bigger bus width will yield more memory bandwidth. As for GDDR5, who is to honestly say which is cheaper? Put the 512-bit bus in now and add the quicker RAM as the price comes down. Best of both worlds.

I don't know. Look at the re-hash of the HD2900 into the 55nm HD3870. The only big change to the architecture was cutting the 512-bit bus down to 256, and the transistor count fell from ~700 to 666 million. Now the rumours point to ~800 million transistors in the HD4870 core, and that is supposedly enough to add +50% shader processors (+160), +100% TMUs (+16), and so on. Which is more useful? It depends on several things.

What is true is that I (and most of us, too) don't really know which is less costly: the faster GDDR5 memory or the 512-bit bus. But GDDR5 reduces power consumption compared to the same amount of GDDR3/4 AFAIK, while going to 512 bits seems to mean a big increase in the size of the core, and therefore an increase in power consumption and heat as well.

Doubling the memory bandwidth should be more than enough for a while, whether it's done with faster memory or a wider bus, given that the current bandwidth only becomes a bottleneck at the highest resolutions and the most memory-intensive settings. So I think putting in a bigger bus now, with the idea of adding faster memory later, is overkill. The chip will probably change a lot before we need 4x the current bandwidth, and there'll be time to add a wider bus later when it's needed...

Anyway, the only thing I was trying to point out is that memory bandwidth is nothing more than a bottlenecking matter, not a true indicator of the power of a graphics card, and bus width is only one of the two major, comparable factors behind it (bus width vs memory clock speed). I think too many people give too much significance to a parameter that may not be needed to achieve a performance improvement.

Hehe, you see I have some trouble stopping once I get going on, and on, and on when talking (or writing)... I'm a lil' bit :screwy:
 
Good points Farinorco. I've been saying that all along... it's simple math, and all that matters is the final bandwidth number, not whether it comes from a wider bus or higher-speed memory (or both).

I also agree w/ surfrider in that I'd like the best of both worlds, but I'd also rather have a $300 card that rivals a $500 card. If the memory is cheaper then bring it on. I can't wait to see what ATI and nVidia have up their sleeves for this next round!

I think a lot of folks have issues w/ the current crop of 256-bit bus cards b/c the fps jump all over the place. The processing power is there, and the fps are high, but then all of a sudden they'll dip to super low levels while the GPU wastes cycles waiting on the memory. It can be very annoying, and it gets worse as you scale in more cards in SLI and push the settings higher. Your processing power goes up, but your memory bandwidth stays stagnant as you add more cards. You get super high max fps, but you also get super low min fps.
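A crude way to picture that min-fps behavior, taking the post's assumption that shader power scales with cards while the bandwidth ceiling doesn't (all numbers invented purely for illustration):

# Toy model: frame rate is capped by whichever resource runs out first.
def fps(shader_limited_fps, bandwidth_limited_fps):
    return min(shader_limited_fps, bandwidth_limited_fps)

# Light scene: shader-limited, so a second card roughly doubles the frame rate.
print(fps(60, 150), fps(120, 150))   # 1 card -> 60, 2 cards -> 120
# Heavy scene (big textures, AA): the bandwidth cap doesn't move,
# so the minimum barely improves no matter how much shader power is added.
print(fps(40, 45), fps(80, 45))      # 1 card -> 40, 2 cards -> 45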
 
I also agree w/ surfrider in that I'd like the best of both worlds, but I'd also rather have a $300 card that rivals a $500 card. If the memory is cheaper then bring it on. I can't wait to see what ATI and nVidia have up their sleeves for this next round!

Agreed, and less complex GPU memory controllers and simpler PCBs are a huge advantage for GDDR5 at equivalent bandwidth. While latency matters a little on graphics cards, bandwidth matters a lot more; most of the data is streamed rather than randomly accessed, and random access is where latency becomes important.

I'll take a $100 lower priced card that has GDDR5 with equivalent bandwidth any day.
 
I'm going to make a bold prediction that goes against all the rumors. Maybe the likes of FUDzilla have good insider info, but the track record of sites like that and the INQ isn't great when it comes to these rumors.

So I'm predicting that the summer release from NV will be 55nm G90-based cores. The reasons for this come down to business and how fast things can ramp for production. First, 55nm versions would be easy because it's an optical shrink of 65nm and requires no reworking of the chip. It could provide a decent clock bump, and the GTX might come with 0.8ns RAM again. Second, NV would have had to decide this at least 3-6 months ago to get the production pipeline going; if they only decided recently that they'd like to fight the R4000s, they wouldn't have enough time. Third, NV typically launches true new architectures in Q4, not early Q3 or Q2. Going against this is the ridiculous naming scheme NV has had with the whole 9 series; they may very well launch a new architecture as a 9 series :rolleyes: otherwise expect 55nm G90 GPUs as 9900s and the GT200 to be a 10k (?) series in Q4.

So I see AMD likely taking the lead over the summer with the R4000s, until GT200 GPUs are out in early Q4. It will be nice to see the competition, although price:performance is about as good now as it's ever been; AMD staying competitive in graphics matters more from their business perspective than from the buyer's perspective.

You heard it here first!
 
I'm going to make a bold prediction that goes against all the rumors. Maybe the likes of FUDzilla have good insider info, but the track record of sites like that and the INQ isn't great when it comes to these rumors.

So I'm predicting that the summer release from NV will be 55nm G90-based cores. The reasons for this come down to business and how fast things can ramp for production. First, 55nm versions would be easy because it's an optical shrink of 65nm and requires no reworking of the chip. It could provide a decent clock bump, and the GTX might come with 0.8ns RAM again. Second, NV would have had to decide this at least 3-6 months ago to get the production pipeline going; if they only decided recently that they'd like to fight the R4000s, they wouldn't have enough time. Third, NV typically launches true new architectures in Q4, not early Q3 or Q2. Going against this is the ridiculous naming scheme NV has had with the whole 9 series; they may very well launch a new architecture as a 9 series :rolleyes: otherwise expect 55nm G90 GPUs as 9900s and the GT200 to be a 10k (?) series in Q4.

So I see AMD likely taking the lead over the summer with the R4000s, until GT200 GPUs are out in early Q4. It will be nice to see the competition, although price:performance is about as good now as it's ever been; AMD staying competitive in graphics matters more from their business perspective than from the buyer's perspective.

You heard it here first!

Raise the stakes :thup:. I'd say that the "new" GT200/G100 architecture IS a derivative of the G92, only beefed up for a higher market segment (higher clocks, more transistors for stream processors, TMUs and so on, a wider memory bus, a much stronger product for running high res + filters, maybe a few little new features that nobody worries about, but the same basic architecture). I'd say a GTX model at ~$500/€500 that will be the new performance king when it's out, and also the new watt-eater king and the hottest and biggest card (of the current generations, of course). And a cut-down GTS or GT model at ~$300/€300 to compete against the HD4870. That would be nVidia's way of doing things, and an attempt to take back some of the initiative and do something other than reacting to AMD's moves (which is what they've been doing since the first info about RV670 leaked).

So yes, I believe the next cards will be beefed-up G92s, but I believe that THAT IS what GT200/G100 IS. Not bad; I feel G9x technology hasn't been used to produce a true high-end part yet and can be exploited further.

Now, let's wait and see. :rolleyes:
 
...NV typically launches true new architectures in Q4 not early Q3 or Q2.
GeForce 6-series (NV40), launched April 2004.
GeForce 7-series (G70/NV47), launched June 2005.
GeForce 8-series (G80), launched November 2006.
GeForce 8-series (G92), launched October 2007.

The Q4 releases are a new thing for nVidia; their older GeForce cards launched in Q2/Q3, but the latest ones were Q4. But take a look at what's been going on here: nVidia has been punctual about releasing new chips every year. Not exactly 12 months to the date, but every year there is a new chip out.

This past year, nVidia DID keep to its roots and made a successor to G80, which are the G92 chips. So they called them the 8800GTS 512MB at first, big whoop. The fact still remains that in the end they did what they should have done in the first place: slap the 9800 tag on them and call it the next gen (9800GTX and 9800GX2).

I will admit, the 7800-to-7900 die shrink was an amazing boost in power (and it was great to finally see 512MB of memory on a card at affordable prices), and so was the G80-to-G92 swap. It wasn't the "OMG, EVERYTHING RUNS 2x AS FAST NOW!!!" upgrade that we saw going from a 7900GTX to the 8800GTX, but nobody is perfect; maybe nVidia didn't come out with the perfect chip this time to upgrade from G80, but at least it was something.

If nVidia hadn't given G92 the "8800GTS" name at first, things probably would have been a lot less confusing. G92, from the start, should have been called the 9800GTS, with the 9800GTX and GX2 above it and the 9800GT below it, instead of being called the 8800GTS and 8800GT. G80 has an 8 in it, G92 has a 9 in it; it's not too hard, nVidia :p

But I'm gonna join in with madman's prediction: the 9900-series is just a G92 die shrink to tide us over till GT200 gets buffed and polished to perfection.

-Mobious-
 
Thanks for the dates. Until about 2 years ago I was out of PC hardware for a few years; I kept up with it a little, but not in detail. The Q4 launches make sense at least from a holiday-sales perspective, but those buying high-end graphics will probably buy them at any time of the year.

I disagree with this though...
The fact still remains that in the end they did what they should have done in the first place: slap the 9800 tag on them and call it the next gen (9800GTX and 9800GX2).

They should have been called 8900's, with 50's thrown on for good measure if needed.
 
They should have been called 8900's, with 50's thrown on for good measure if needed.
True, it would have been a familiar name-change for those who were around for the 7-series. But b/c G92 was a new architecture (rather than a die shrink), it should have gotten a new name (like, the 9*00's). G80 already took the 8's, G92 should have taken the 9's.

The other reason is b/c G80 already had an 8800GTS, so why make the G92 an 8800GTS as well? Why not call it the 9800GTS to go along with the 9800GTX and 9800GX2? From what I can gather, nVidia doesn't plan to use "9800GTS" or "9800GT" before the 9900's come out, so those names would have been perfect: a 9800GT followed by a 9800GTS, with the 9800GTX and 9800GX2 above them based on the same chips, rather than having people go "Why are 9800's slower than 8800's???".

From our standpoint ("our" being every member of OC Forums), it made sense in the end, you could figure out the difference and that was that. But from a computer illiterate standpoint, I see: 8800GTS 320 < 8800GTS 640 < 8800GTX < 8800Ultra; all of a sudden that becomes: 8800GTS 320 < 8800GTS 640 < 8800GT < 8800GTX < 8800GTS 512 < 8800Ultra. I showed this to a few friends of mine who know nothing about computer hardware, and they all agreed it made no sense at all.

GTS is supposed to be less than GTX, but when G92 came out, there were suddenly 8800GT's that were better than the GTS's and on par with the GTX's, and when the GTS 512's came out, they were better than the GTX's and on par with the Ultra's. It's like deciding that the standard-edition car needed an upgrade from a 4-cylinder to a V-8, while the sports-edition is still stuck with a V-6.

I honestly think it would have made more sense that way. The GT200's needed a name other than 9*** anyway, something a little more epic and off-the-beaten-path. Who knows, they might just be called the GeForce 20*** series, or even the nVidia Super-Special-Awesome-ATi-Killer-of-DOOM!!!!-Series.

-Mobious-
 
Although there weren't as extensive details when G92 came out, all the articles I read hinted at it being a tweaked (mainly in some memory bandwidth efficiency) and shrunk G80, not a new architecture, and I think that's exactly why there weren't in-depth articles. I'd say it was a new chip, not a new architecture, but that may just be arguing semantics. 'Die-shrunk and tweaked G80' about summed it up. Just because they call it a '92' doesn't make it so ;)

The real reason they couldn't call G92 the 9800 or even 8900 is because at that time there were still lots of G80 GTXs and Ultras in the channel that had to be sold. If it had a higher number than those it would have caused just as much confusion and possibly hosed sales for the cards already in the channel if you assume that the only people buying those cards are ones in the know like us.
 
[Image: nvgpulineup.gif (nVidia GPU lineup)]

VR-Zone got to know that the GeForce 9900 GTX card is 10.5" long, full height and dual slot, and that the cooler is the TM71 from CoolerMaster. Dubbed GT200 or D10U, we heard that this architecture might be dual-chip with a TDP of up to 240W.
:burn:
 
Just pray the 512-bit interface isn't a complete failure like it was on the 2900XT series from ATI when they introduced it. Huge extra cost for no performance increase, maybe even a performance decrease.
 
Although there weren't as extensive details when G92 came out, all the articles I read hinted at it being a tweaked (mainly in some memory bandwidth efficiency) and shrunk G80, not a new architecture, and I think that's exactly why there weren't in-depth articles. I'd say it was a new chip, not a new architecture, but that may just be arguing semantics. 'Die-shrunk and tweaked G80' about summed it up. Just because they call it a '92' doesn't make it so ;)

The real reason they couldn't call G92 the 9800 or even 8900 is because at that time there were still lots of G80 GTXs and Ultras in the channel that had to be sold. If it had a higher number than those it would have caused just as much confusion and possibly hosed sales for the cards already in the channel if you assume that the only people buying those cards are ones in the know like us.
Looking at it from a marketing standpoint, that does make more sense. It just seems like a waste of a good naming scheme if you ask me :screwy:

-Mobious-
 
Now that makes some kind of sense. I mean, come on... releasing three GTX models (9800GTX, 9900GTX, and a supposed GT200-powered whatever-architecture GTX) in one year would be nuts! :screwy:
Also, I think it's about time I upgraded to a completely new system (see my sig!), so the last thing I need this year is to get pwned by nVidia's slimy marketing tactics! I want to buy the real shebang and know that it'll hold the throne for at least as long as the 8800GTX did!
 
Haha, I'm with ya monstert! I wanted to build in Jan but said screw it. I'm not playing Crysis anytime soon so I don't need the awesome system. However, I do realize that my system is getting old. I was hoping DDR3 would be a lot cheaper towards the end of summer '08. But if this computer doesn't die I may wait till Jan of next year to build something for the long run. Bring on the GT200s!!
 