
CPUs should be faster for high-end cards

Yup, totally agree. You'd think that for the uninformed customer, buying a top-end rig for 3-4 grand would give you the results you should be expecting, not some rig that costs half as much giving better results. While I don't mind OCing to get the most out of my system, it really does show that OCing gives much better results. I'm talking bang for the buck: even if you buy the fastest CPU out there today, it still won't perform the same as just OCing what you've got, and even on stock cooling it's possible to get more performance from your equipment by OCing it.

And what's really sad is that nVidia says CPUs aren't important...
 
This is why I don't buy the top end hardware.

Ditto! I think I've only bought something top-end new twice: when I got my Ti4600 vid card, which was like $600 CAD at the time... and after that I think an X1800 card...

Otherwise I get what I need. I got a Q6600 maybe 6-7 months after they came out... and I just got a Q9650, which I plan to keep for some time (g/f needed a new rig so things got shuffled down).
 
Going from 60 to 30fps makes me think you have vsync enabled. The actual fps drop may be a lot less, but w/ vsync on the frame rate has to drop to the next step down, your refresh rate divided by a whole number (60, 30, 20...), to keep the screen from tearing. Using D3DOverrider to force triple buffering you can get more in-between steps (like 45 or 48fps).
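
Here's a rough sketch of the math I mean (a simplified double-buffered model with made-up frame rates, just to show the quantization; triple buffering is what gives you the in-between steps):

# Simplified model: with double-buffered vsync, a frame that misses the
# refresh deadline waits for the next refresh, so the displayed rate snaps
# down to refresh_rate / N for some whole number N.
def vsync_fps(raw_fps, refresh=60):
    """Displayed fps once double-buffered vsync quantizes the raw rate."""
    if raw_fps >= refresh:
        return refresh
    n = 1
    while refresh / n > raw_fps:  # find the first step at or below raw_fps
        n += 1
    return refresh / n

for raw in (75, 59, 45, 29):
    print(f"{raw} fps raw -> {vsync_fps(raw):.0f} fps displayed")
# 75 -> 60, 59 -> 30, 45 -> 30, 29 -> 20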

3DMark06 is a horrible benchmark to use to make comparisons. By default it runs at a low rez, and the CPU tests are weighted too heavily.

Still, your point is valid.


Those who can OC do, and this is part of the reason why...better gaming experience. With console games the game developer chooses which settings to turn down so the game will run smoothly. You can do the same thing on your PC....but w/ the PC you can also turn the settings up and use OCing to find a good balance. You are limited on consoles, and if a console game starts to get bad fps that is just bad coding/planning, and there is nothing you can do about it.

GPUs need CPUs to feed them data. The problem is that video processing is highly parallel, but the work the CPU does to prepare that data is not. So the GPU makers can keep adding more 'cores' (shader processors) and get better results without any software changes, but the CPU makers are not so lucky and have to wait on programmers to write software that can effectively use all of the CPU's cores.
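
A toy sketch of what that means in practice (hypothetical per-object work, nothing from any real engine): the GPU gets its parallelism for free from the workload, but on the CPU side someone has to restructure the code before the extra cores help at all.

from concurrent.futures import ProcessPoolExecutor

def update_entity(x):
    # Stand-in for per-object game work (AI, pathfinding, physics, etc.).
    return x * 1.000001 + 0.5

if __name__ == "__main__":
    entities = list(range(200_000))

    # The easy version: one core does everything, the others sit idle.
    results = [update_entity(e) for e in entities]

    # The version that actually uses a quad core: same work, but the
    # programmer had to make it splittable and deal with the coordination.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(update_entity, entities, chunksize=10_000))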

As for your OCing woes...did you run the MCH Memtest at your OC settings? You said you isolated the RAM, which is good for getting some baselines, but you still need to test at your final settings to make sure everything works well together. There are a lot of NB/RAM timings that change at different straps/RAM:FSB ratios.
 

I understand your points, and I appreciate that you take the time to be part of this discussion. But I would like to point out that I wasn't referring to the loss of performance in synthetic benchmarks; I made that clear in the first few sentences of my very first post. I was referring to the loss of performance in actual games. Additionally, no, I do not use v-sync; it's in fact forced off in the nVidia control panel, simply because I cannot reach even 50FPS in most of my games with my CPU at only 3.0GHz (stock speed). The refresh rate on my LCD, when I still had it, was 75Hz, but on my current CRT (a stand-in until I get a new LCD, since my previous one died recently) the refresh rate is set at 100Hz in most resolutions and 85Hz in the higher ones (1600x1200, 1600x1024, 1280x1024, and some others).

There's technically no need for v-sync, because the bottleneck created by the CPU at stock speed is so restrictive that, as I said and will repeat, I'm barely seeing even 60FPS in most of my games at resolutions like 1600x1200 (with or without AA/AF, it doesn't matter). And it's not just Source-based games; it's many others, on many engines, and even some in OpenGL. If I had to list my whole games collection I would, but I don't feel like it; I can say that there are exactly 37 titles installed on my HDD and that I based my very first post, and the ideas in it, on the tests I ran on 14 of them since I bought my GTX285 about two weeks ago.
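
That "with or without AA/AF, it doesn't matter" part is basically the rule of thumb I've been using to tell where the limit is; something like this, with made-up numbers just to show the logic:

def likely_bottleneck(fps_light, fps_heavy, tolerance=0.10):
    """If the frame rate barely moves when you pile on resolution/AA/AF,
    the GPU has headroom to spare and the CPU (or the engine itself)
    is the limiting factor."""
    if fps_heavy >= fps_light * (1 - tolerance):
        return "CPU/engine-bound"
    return "GPU-bound"

# e.g. 58 fps at 1024x768 with no AA vs 55 fps at 1600x1200 with 4xAA/16xAF
print(likely_bottleneck(58, 55))    # CPU/engine-bound
print(likely_bottleneck(120, 48))   # GPU-bound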

To be honest I don't care much about 3DMark scores, except out of pure curiosity. I did mention 3DMark06 in a later post, in reply to someone else, saying that if he really wanted to he could make comparisons in 3DMark06 to get a basic idea of the points in my first post. I know you said my points were valid "anyway", but as I said, I wasn't referring to 3DMark06 at all; I don't actually "play" 3DMark06 and I don't care much about it.

I'm simply baffled and absolutely disappointed when I play, for instance, Warhammer 40,000: Soulstorm, which is based on a now quite aging engine, and I see my FPS drop into the low 10s (I can prove it whenever you ask). It makes me wonder whether I actually left my previous GPU installed, or whether I ever even received my GTX285, or whether it was all a dream. Is that normal? No, it's not. Is it the fault of the engine being old? Is it only the fault of the bottleneck? I don't know, and I shouldn't have to know; my "job" isn't to know all of that. I'm not their engineer, I'm just a gamer, and the only thing I want is that if I buy a $400 or $500 component focused on processing graphics (and in the future even physics and God or Intel/nVidia knows what else), then I should not only hope for but actually get good performance, not just "decent", but actually good.
 
I hate to side with Nvidia, but it's really not their responsibility that poorly coded game engines get reused over and over by game devs to cut costs. I mean, people are still using Unreal Engine 3, and how old is that...

Then they put super fancy new graphical features on an old engine and it just doesn't work too well. (A lot of this is due to DX10's poor adoption, AKA Windows Vista.) Instead of making newer engines that ran better, they just reused the old DX9 stuff and bolted on AA and AF.

But it's not a lost cause; parallel programming for CPUs is making great strides all the time.
 

The Source engine is also very old, but it's been updated over time... the Unreal 3 engine isn't all that old, and it's one of the best imo. Many people say they have problems with errors and crashing in UE3 games, but I've yet to run into them; conflicting software/drivers on third-party PCs often cause that.

I'd honestly say my preference for gaming in general (especially online) goes to well-optimized engines like Source and UE3, rather than 'extreme graphics' ones like CryEngine 2. More people being able to play at good fps means more players, gameplay in intense firefights is smoother, and it takes less $ to run the highest settings.

But as I said before, the reason they don't make completely new engines for more advanced technology all comes back to the same thing: consoles are where the $ is, and if consoles can't run it well, then they won't bother, because devs don't care about investing much time and effort into PC exclusives. When the next gen of consoles comes out we will start seeing more advanced game engines.
 

Yup, and let me refer you back to post #19 ;)

I'm hoping for a good DX11 engine to come out in the next year. But then again I mainly play games that don't require insane amounts of graphics horsepower. Mainly RTS and RPG for me.
 

I didn't mean for the emphasis of my post to be directed at 3DMark06. I should have left that out. That was more for general info for other readers.

I do find it strange that you are having these problems, though. Maybe I'm just not playing the same games you are, but so far everything I've tried has been pegged at 60fps (I do use vsync) w/ max settings at 1920x1200 except Crysis (low 40's). I will admit I've only tried a handful of games so far w/ my new card, though. I have been playing Prey which I believe is based on the Doom III engine (it looks like it anyways) and it is pegged at 60fps for me.

My OC is only a bit more than your max OC. I do have 2 extra cores, but they are useless in the majority of games. We should be fairly equal in our gaming power when you are OC'd.

I do understand what you're saying about how it should just work after spending all that money. To be honest, if you want something that just works w/ no fuss then a console might be better for you. I am an engineer, and I enjoy all the tweaking that has to be done on the PC to get every last ounce of performance. This includes tweaking the hardware to run as fast as possible, and then tweaking the individual program/game to run as efficiently as possible. The end result is something you cannot get on any console, but it does take a lot more work.

To use a car analogy, getting a PC is like getting an older muscle car: lots of power, but you need to work on it a lot to keep it tuned for max performance, and you're always buying little things for it looking for that extra boost/edge. Buying a console is like buying a new Honda Civic: reliable w/ little fuss.



Also wanted to point out that voltage, not speed, is what reduces the life-span of your components. So you can OC to your heart's content w/o worry if you run the vcore at or below stock. In fact a very nice OC can often be had at less than stock vcore...in theory your hardware should last longer like that than running at stock speed and stock voltage.
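
Quick back-of-the-envelope on why voltage is the knob to watch (the usual CMOS approximation, with example clocks and voltages that are purely hypothetical):

def relative_dynamic_power(freq_ratio, volt_ratio):
    """Classic approximation: dynamic power ~ C * V^2 * f, so the change
    relative to stock is (f_new / f_stock) * (V_new / V_stock)^2."""
    return freq_ratio * volt_ratio ** 2

# 3.0 GHz -> 3.6 GHz overclock, first at a stock 1.25 V, then undervolted to 1.20 V
print(relative_dynamic_power(3.6 / 3.0, 1.25 / 1.25))  # ~1.20x stock power/heat
print(relative_dynamic_power(3.6 / 3.0, 1.20 / 1.25))  # ~1.11x stock power/heat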


Just a thought...have you checked your PCIe link width? Perhaps it's running at x4 or something.
 

I'm sorry about the 3DMark06 part; I clearly misunderstood your point and didn't realize it was a generalization rather than being focused on my case. My apologies. As for the PCI-E link width, according to nVidia's control panel (system information) the bus being used is 16x, although I wonder if it's just telling me the slot is capable of 16x or if the link is currently, effectively, running at 16x. Is there any way to be absolutely certain of that, other than with nVidia's control panel?

Thanks.
 

Nvidia would say CPUs aren't important because they want to promote CUDA, which isn't nearly as useful as they make it out to be. It is faster at some things, but it is limited. Playing Mirror's Edge at 1680x1050 on a GTX 260, then enabling hardware PhysX and getting your fps nearly cut in half? Really? A CPU is still needed for a great deal of things, especially gaming.


It's not really the devs. Look at CryEngine 2: for what it does, it runs well, and for a game that wasn't as open-world as Crysis it would run very well on decent hardware, but the consumer hardware base isn't ready for it. You can't code a game specifically for a quad core either; you'd be severely narrowing your target market, because I imagine that coding it to use a quad properly would mean it runs like *** on a single or dual core system. Someone else mentioned dual cores finally getting used, and that's taken what, 3 years or more?
 
Graphics rendering is really a funny thing to build a good setup for, in my opinion. A lot of different variables come into play in how different programs run on different hardware configurations. I think this is because different applications natively stress different aspects of your system: some may be more CPU-heavy, some more GPU-heavy, and some demand a lot of card or system memory bandwidth.
 

To check the link width you can use CPU-Z: the 'Mainboard' tab, near the bottom. It tells you the max Link Width, and the Link Width you are actually running.
 
Ok thanks, well I've checked in both CPU-z and GPU-z and it's actually set to 16x, no problems there.
 