
Quad Core vs Quad Core with HT?

To hell with the budget, I got the Xeon and the 970.

Well, that was a bad decision, seeing as the GTX 970 is really a 3.5GB card with fewer ROPs and less L2 cache than has been advertised for the last 5 months. Nvidia sucks.
 
Well, that was a bad decision, seeing as the GTX 970 is really a 3.5GB card with fewer ROPs and less L2 cache than has been advertised for the last 5 months. Nvidia sucks.

That's actually not true. It still allocates 4GB, and performance is still really good; the only difference is that users (the ones who read the news) now know about it. You just got this info and now you think it's some kind of issue, but have you actually seen it on your PC? 99% of users won't even notice it while playing games. Not to mention it's barely a performance drop, as people have already tested:
http://www.guru3d.com/news-story/does-the-geforce-gtx-970-have-a-memory-allocation-bug.html

Regardless of whether there's an issue or not, it's still the best graphics card for the price on the market, and I see no reason not to buy it. The GTX 960 is a fail, and the GTX 980 is much more expensive but not that much faster. On the AMD side there's nothing interesting, and there won't be for the next 2-3 months.
 
[Attached image: Untitled_zps9iv5yapl.png, a diagram of the 970's memory controller layout]

This is the issue with the 970, visually explained. Basically, due to "binning", or shall we say "gimping", by Nvidia, the 970 has seven memory controllers with L2 cache controlling 3.5GB of VRAM. The remaining 0.5GB is handled by a single memory controller with no L2 cache of its own, which makes that 0.5GB of VRAM slower. That "8th" memory controller borrows cache from the 7th.

The 970 is basically a gimped 980. It's been neutered. I think everyone knew that all along.

What the heck is Nvidia doing using a 256-bit memory interface anyway? AMD uses a 512-bit interface on its high-end gaming cards.
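For what it's worth, the split everyone is arguing about falls straight out of the arithmetic. Here's a back-of-the-envelope sketch in Python, assuming the commonly quoted 7 Gbps effective GDDR5 rate and a 256-bit bus divided 7:1 between the partitions (these figures are assumptions on my part, not from any official Nvidia statement):

```python
# Back-of-the-envelope bandwidth math for the GTX 970's segmented memory.
# Assumed figures: 7 Gbps effective GDDR5 per pin, 256-bit total bus,
# split 7:1 between the fast (3.5GB) and slow (0.5GB) partitions.

PER_PIN_GBPS = 7.0        # effective GDDR5 data rate per pin (assumed)
BUS_WIDTH_BITS = 256      # total memory interface width

total_bw = PER_PIN_GBPS * BUS_WIDTH_BITS / 8   # GB/s across all 8 controllers
fast_bw = total_bw * 7 / 8                     # 7 of 8 32-bit controllers -> 3.5GB
slow_bw = total_bw * 1 / 8                     # the lone 8th controller -> 0.5GB

print(f"advertised total: {total_bw:.0f} GB/s")  # 224 GB/s
print(f"fast 3.5GB chunk: {fast_bw:.0f} GB/s")   # 196 GB/s
print(f"slow 0.5GB chunk: {slow_bw:.0f} GB/s")   # 28 GB/s
```

The 224 vs. 196 GB/s numbers being thrown around are exactly these two figures.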
 
That's actually not true. It still allocates 4GB, and performance is still really good; the only difference is that users (the ones who read the news) now know about it. You just got this info and now you think it's some kind of issue, but have you actually seen it on your PC? 99% of users won't even notice it while playing games. Not to mention it's barely a performance drop, as people have already tested:
http://www.guru3d.com/news-story/does-the-geforce-gtx-970-have-a-memory-allocation-bug.html

Regardless of whether there's an issue or not, it's still the best graphics card for the price on the market, and I see no reason not to buy it. The GTX 960 is a fail, and the GTX 980 is much more expensive but not that much faster. On the AMD side there's nothing interesting, and there won't be for the next 2-3 months.

I bought the card specifically for its 4GB of VRAM so it would still be a decent card in 2-3 years. I'm pissed that I instead got some 3.5+0.5GB kludge that requires careful driver optimization that won't be there in a year, when the new 70-series card is a rebranded 980. This is exactly the kind of GPU I wanted to avoid.
 
[Attached image: Untitled_zps9iv5yapl.png, a diagram of the 970's memory controller layout]

This is the issue with the 970, visually explained. Basically, due to "binning", or shall we say "gimping", by Nvidia, the 970 has seven memory controllers with L2 cache controlling 3.5GB of VRAM. The remaining 0.5GB is handled by a single memory controller with no L2 cache of its own, which makes that 0.5GB of VRAM slower. That "8th" memory controller borrows cache from the 7th.

The 970 is basically a gimped 980. It's been neutered. I think everyone knew that all along.

What the heck is Nvidia doing using a 256-bit memory interface anyway? AMD uses a 512-bit interface on its high-end gaming cards.

Nvidia promoted it to reviewers as a 980 with fewer CUDA cores, fewer texture units, and a slower clock rate. Not fewer ROPs, not less L2 cache, with 224GB/s of memory bandwidth instead of its actual 196GB/s, and not with a segmented memory architecture that ensures it will age much worse than a 980 does. This card is gimped a lot more than the initial specs indicated. If they offer full refunds, I'm going to get my $340 back and buy an R9 290X, as the power savings of Maxwell are meaningless to me.
 
Have you seen the results? It still uses the full 4GB of memory, but above 3.5GB there's a small performance drop, like 1-3% max. New drivers will probably make a bigger difference. Regardless of what it's called or how many ROPs, shaders etc. it has, it's still the same card that was so great for everyone over the last 4 months, but now somehow Nvidia sucks. Yes, they actually do things like that from time to time, but what would you buy that's better at a similar price? The R9 290X? The 290X is slower in many games at high resolution, with over 50% higher power usage and much more heat.
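To put a number on how bad spilling past 3.5GB could be in the worst case, here's a deliberately naive model in Python. It assumes a 196 GB/s fast partition, a 28 GB/s slow partition, and data streamed uniformly across the full 4GB; in reality the driver keeps hot data in the fast 3.5GB, which is why the measured 1-3% drop is far smaller than this pessimistic bound:

```python
# Naive worst-case model of the 970's blended memory bandwidth.
# Assumptions: 196 GB/s for the 3.5GB partition, 28 GB/s for the
# 0.5GB partition, and every byte of the 4GB touched equally often.

FAST_BW, SLOW_BW = 196.0, 28.0   # GB/s (assumed partition bandwidths)
FAST_GB, SLOW_GB = 3.5, 0.5      # GB per partition

# Time to stream each partition once, then the blended transfer rate.
stream_time = FAST_GB / FAST_BW + SLOW_GB / SLOW_BW
effective_bw = (FAST_GB + SLOW_GB) / stream_time

print(f"worst-case blended bandwidth: {effective_bw:.0f} GB/s")  # 112 GB/s
```

Even this worst case is only half the advertised bandwidth, which is why careful driver placement of data matters so much on this card.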

Do you remember when the GTX 680 appeared? 2-3 weeks before the premiere, everyone saw previews with specs nearly like the GTX 780's. When the cards were released, we saw how Nvidia had moved every card one step down, so what was supposed to be the GTX 670 was named GTX 680, and what was supposed to be the GTX 660 Ti became the GTX 670. In reality, everyone had to pay 20% more for 20% slower cards. Nvidia released full Kepler only after they fixed their production issues, which took them over a year. I would feel really cheated if I had paid $700 for a card that was replaced a year later by a 30% faster $600 card. This is how this market works.

Do you think you get exactly what you count on from Intel or AMD, either? Every Intel chipset in the last ~3 years has issues, with some parts or features turned off. Good examples are X79 with its locked additional SATA ports (and disabled SAS), or X99 with SATA/RAID issues where some functionality had to be locked in the drivers. Z87 = sleep/USB issues they knew about before the premiere but kept selling boards anyway. P67 = all boards replaced, from B2 to B3 revision.

Nvidia is using additional texture compression, which makes the memory bus less important, so even if it were 192-bit you actually couldn't see a big difference. In games the GTX 980 isn't that much faster, but its price is ~40% higher.
In the end, what counts is what you can do with the hardware, not the numbers you see in the specification.
 
Nvidia promoted it to reviewers as a 980 with fewer CUDA cores, fewer texture units, and a slower clock rate. Not fewer ROPs, not less L2 cache, with 224GB/s of memory bandwidth instead of its actual 196GB/s, and not with a segmented memory architecture that ensures it will age much worse than a 980 does. This card is gimped a lot more than the initial specs indicated. If they offer full refunds, I'm going to get my $340 back and buy an R9 290X, as the power savings of Maxwell are meaningless to me.

Well, the best thing you can do to vent your frustration is find a lawyer who only gets paid if you win and start a class-action lawsuit: "Nvidia lied to us". Go for it. If I had a 970 I'd jump on board. I'm happy with my un-gimped 980, though.
 
What the heck is Nvidia doing using a 256-bit memory interface anyway? AMD uses a 512-bit interface on its high-end gaming cards.

It really doesn't matter how wide it is; people get stuck on this too much. Even if an AMD GPU has more memory bandwidth, what if the GPU doesn't need all that bandwidth at the clocks it runs at? Nvidia's cards still hit their performance targets with a 256-bit memory interface and higher-clocked memory supplying the bandwidth the GPU needs, and Nvidia saves money by not having to run 256 extra traces on the board for memory. With next-gen memory for GPUs probably coming soon, if not here already, Nvidia has more than enough reason to keep using a 256-bit width. This isn't yesteryear's video cards with DDR/DDR2 memory rates, where they couldn't push memory speeds that high and the only option was a wider bus, since memory development for video cards moved a bit more slowly.
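The width-versus-clock point is just multiplication: peak bandwidth is bus width times per-pin data rate. A quick Python sketch, using the GDDR5 rates commonly cited for these two cards (the 7 Gbps and 5 Gbps figures are my assumptions, not from this thread):

```python
# Peak memory bandwidth = bus width (bits) * per-pin rate (Gbps) / 8 bits per byte.
# Assumed per-pin rates: 7 Gbps (GTX 970) and 5 Gbps (R9 290X) GDDR5.

def bandwidth_gbs(bus_width_bits: int, per_pin_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and data rate."""
    return bus_width_bits * per_pin_gbps / 8.0

gtx_970 = bandwidth_gbs(256, 7.0)   # narrow bus, fast memory
r9_290x = bandwidth_gbs(512, 5.0)   # wide bus, slower memory

print(f"GTX 970 (256-bit @ 7 Gbps): {gtx_970:.0f} GB/s")   # 224 GB/s
print(f"R9 290X (512-bit @ 5 Gbps): {r9_290x:.0f} GB/s")   # 320 GB/s
```

A narrower bus with faster memory closes much of the gap, and that's before counting the texture compression mentioned above.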
 