Tests how well GPU performance scales with CPU clockspeed – Dominick V. Strippoli
The idea here was to take Nvidia’s latest GPU creation, the
8800GTX, and see how well GPU performance scaled with CPU clockspeed. The
benchmarks used were the synthetic but very popular 3DMark05 and 3DMark06. 3DMark05 does not report a CPU score, but 3DMark06 does, so our 3DMark06 results include the CPU score alongside the rest.
First, let's take a look at the packaging and the actual card. As
you can see, the card is a monster compared to older PCI-Express x16 hardware. You
can also take a look at the dual PCI-Express 6-pin power connectors. The card
definitely requires uber amounts of amperage and wattage.
For the record, our
Silverstone ST56ZF Zeus "560watt" power supply unit is fully capable of handling
both a Core 2 Duo clocked at 4.2 GHz and this 8800GTX fully loaded and overclocked.
Test System Specifications
- E6700 Conroe "Week 27"
- Asus P5B-Deluxe Wifi Mobo
- OCZ Titanium Alpha VX2 RAM
- EVGA 8800GTX
We will start off at the stock clockspeed of an E6600 Conroe
(2.4 GHz) and eventually work our way all the way up to 4.2 GHz, recording 3DMark05 and
3DMark06 at each clockspeed. In the following images, 3DMark05 results appear in the left screenshot and 3DMark06 results in the right.
E6700 @ 4.20 GHz
As you can see, 3DMark05 reacts ferociously to minor
changes in CPU clockspeed, while 3DMark06, tailored to newer GPUs, SMP, and Shader
Model 3.0, reacts more mildly.
I believe it is safe to say that raw
CPU power alone will increase 8800 GPU performance.
An Intel Core 2 Duo definitely will not bottleneck an 8800GTX setup, but as
shown above, higher clockspeeds on your rig ultimately mean more potential and
headroom released from the 8800 line of GPUs. So, the good old AMD
Opteron or X2 line of lower-frequency processors may need to be overclocked a
tad to really benefit from the true potential of the 8800-series GPUs.
Our overclocking results were obtained with an E6700 Conroe
clocked at 4.2 GHz under a Vapochill LightSpeed cooling unit. For the actual
overclocking of the 8800's core and memory bus, three tools came to mind:
RivaTuner, Coolbits, and nTune.
Coolbits is not compatible with the 8800, and
RivaTuner proved very buggy for me. My last resort was nTune, direct from
Nvidia. Although the software warned that it was not 8800GTX
compatible, it proved to be a great tool for its three main functions:
raising fan speed, changing the memory bus frequency, and changing the core frequency.
By setting fan speed to 100%, core clock to 653 MHz, and
memory clock to 1040 MHz (2.08 GHz effective), we were able to shatter our stock
3DMark05 score by almost 2,000 marks. Using the officially
incompatible nTune software and brand-new drivers, we achieved almost a 14% overclock on the core
and almost a 16% overclock on the memory bus, from a stock 575 / 900 (1.8) to an
overclocked 653 / 1040 (2.08).
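As a quick sanity check on those percentages, here is a short Python sketch using the clock figures above; the doubled "effective" rate simply reflects GDDR3 transferring data on both clock edges:

```python
# Overclock arithmetic for the figures reported above.
# Stock and overclocked clocks (MHz) are taken from the article.
STOCK_CORE_MHZ, OC_CORE_MHZ = 575, 653
STOCK_MEM_MHZ, OC_MEM_MHZ = 900, 1040

def oc_percent(stock: float, oc: float) -> float:
    """Percentage gain of the overclocked speed over stock."""
    return (oc - stock) / stock * 100

core_gain = oc_percent(STOCK_CORE_MHZ, OC_CORE_MHZ)  # ~13.6%, "almost 14%"
mem_gain = oc_percent(STOCK_MEM_MHZ, OC_MEM_MHZ)     # ~15.6%, "almost 16%"

# GDDR3 is double data rate, so the effective rate is twice the real clock.
effective_mem_ghz = OC_MEM_MHZ * 2 / 1000            # 2.08 GHz effective

print(f"core: +{core_gain:.1f}%  memory: +{mem_gain:.1f}%  "
      f"effective memory: {effective_mem_ghz:.2f} GHz")
```

Running it confirms the rounded figures quoted in the text.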
With stock cooling and no voltage modifications, this
overclock was plenty for the time being. I am almost certain the core
has a lot of room left in it as well: my testing ran into the morning
hours, ending when the GDDR3 topped out at 1040 MHz. I am sure the memory
was holding me back, not the core; I just didn't take enough time out of my
schedule to keep pushing the core speed. Anyway, here is the result:
For my game testing, I chose to use FEAR. As most of you
already know, this game coupled with a 1024 x 768 resolution and Ultra/Maximum
Quality setting is enough to bring any system with current technology to its
knees. I used the in-game benchmarking provided by FEAR for these two tests. The
8800GTX was overclocked to 653 / 1040 (2.08) and the E6700 was clocked to my normal
4.2 GHz. Maximum settings unbelievably produced these results:
Just for kicks and giggles, I decided to run the test on High
Quality settings. The result is even more dramatic: almost a
100% gain over my existing graphics solution, ATI's flagship X1900 line:
I would like to give my kudos and two thumbs up to Nvidia for
putting out such a powerful graphics solution, fully DirectX 10 compliant and
with plenty of headroom for overclocking. As always, I would like to thank
Overclockers.com for publishing this article.
I can be reached on overclockers.com/forums with my username: Dominick32