3DMark05

Not long ago, on September 29th, 3DMark05, the latest version of the most popular video card benchmark software, was released.

The software has come a long way since its first versions, 3DMark99 and 3DMark2000 (which were fairly simplistic benchmarks; the former used DirectX 6, the latter advanced to DirectX 7). Those versions brought a new dimension to PC performance measurement – for the first time, not from the point of view of the CPU, which until then many considered the most important component of the system (in any case, the rather primitive graphics cards of 5-6 years ago didn’t leave much room for maneuvering).

Although these programs provided a new way to measure performance, they couldn’t really isolate the graphics card’s influence from the CPU’s: a weak CPU dragged the score down dramatically, while a performance beast pushed it up.

Historical Review and Milestones

1999: MadOnion (which would later change its name to Futuremark) releases 3DMark99 – computer system performance measurement receives a new dimension. The software uses DirectX 6 and technology from Remedy Entertainment’s graphics engine – MAX-FX. It was designed to take advantage of Intel Pentium III’s SIMD and AMD’s 3DNow! multimedia instruction sets for the CPUs of that generation, and it also included an image quality test tool.

2000: 3DMark2000 sees the light of day: the software uses the then relatively new T&L technology featured in NVIDIA’s new graphics cards. It keeps the MAX-FX engine but advances to DirectX 7.

2001: The software’s quantum leap – 3DMark2001 is released: although it still uses the MAX-FX engine, it is upgraded to DirectX 8.1, enabling the software to support Vertex Shader and Pixel Shader 1.4 effects with full anti-aliasing support. Furthermore, MadOnion added two new tests based on the Havok physics engine.

2003: 3DMark03 is released – the software moves to DirectX 9.0 support, and soon the competition between the two main players in the graphics card market turns into a bitter war: NVIDIA releases its “magic” drivers that brought a huge “improvement” to its cards’ scores – drivers optimized specifically for 3DMark03 rather than delivering a real performance boost.

Futuremark (which had changed its name from MadOnion just a year earlier) had to release a special patch for the software to prevent score cheating. The whole affair only underlined 3DMark’s central role in determining and ranking graphics card performance.

29.09.2004: 3DMark05 is released to the general public – it includes support for DirectX 9.0c and for Shader Model 2.0, 2.0a, 2.0b and 3.0, an addition that completely changes the results of last-generation graphics cards and takes the new-generation cards (GeForce 6x00 & Radeon Xx00) to the front line of the battle. In addition, an artificial intelligence test has found its way into the benchmarks. The software uses a graphics engine that Futuremark developed in-house.

In the following article we will test 3DMark05 to find out exactly how the generated score depends on the main hardware components of the computer system.

Test Setup

  • AMD Athlon XP 2600+ (1909 MHz)
  • Abit NF7-S Mother Board
  • 2x Samsung 256MB PC3200 memory modules
  • ATI Radeon 9600XT 128MB 128bit VGA
  • IBM-Hitachi Deskstar IC35L040AVVN07-0 40GB 7200RPM HDD
  • Western Digital Caviar WD300AB 30GB 5400RPM HDD
  • Windows XP Professional Service Pack 1 Operating System
  • ATI Catalyst 8.07beta Driver

For our tests, we made use of the Catalyst 8.07beta driver for the Radeon 9600XT, since it’s the latest driver from ATI that has been approved by Futuremark (FM Approved) for use with its products.

The tests used the default settings:

  • Resolution: 1024 x 768
  • Anti-Aliasing: None
  • Texture Filtering: Optimal
  • VS Profile: 2_0
  • PS Profile: 2_0

First test – CPU influence

3DMark2001:

Before we proceed to testing 3DMark05, we’ll take a quick look at the CPU’s influence on some of the previous versions of the software.

To test the CPU’s influence on the score in this test and in the next couple of tests, we ran the CPU at its default 11.5 multiplier with the following FSB frequencies (a short sketch of the resulting clock arithmetic follows the list):

  • 100 MHz (x11.5 = 1,152 MHz)
  • 133 MHz (x11.5 = 1,536 MHz)
  • 150 MHz (x11.5 = 1,725 MHz)
  • 166 MHz (x11.5 = 1,921 MHz)
  • 180 MHz (x11.5 = 2,070 MHz)
  • 200 MHz (x11.5 = 2,300 MHz)
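Just to make the arithmetic explicit, here is a minimal Python sketch of how those core clocks come about (core clock = FSB × multiplier). The clocks in the list are slightly higher than the nominal products, presumably because the board’s actual bus clock runs a touch above its nominal setting:

    # Core clock = FSB frequency x CPU multiplier (a back-of-the-envelope sketch;
    # the article's measured clocks differ slightly because the board's real bus
    # clock deviates a little from the nominal value).
    MULTIPLIER = 11.5

    for fsb_mhz in (100, 133, 150, 166, 180, 200):
        print(f"FSB {fsb_mhz} MHz x {MULTIPLIER} = ~{fsb_mhz * MULTIPLIER:.0f} MHz core clock")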


The scores in the next three graphs were obtained with two 256MB Samsung memory modules (512 MB in total) at timings of 2.5-3-3-7. The graphics card ran at stock frequencies – 300 MHz for the memory (600 MHz DDR) and 500 MHz for the core.

Graph1

The final 3DMark2001 score is calculated from the average FPS of every game test, with the High Detail games given greater weight. Here’s the equation:

3DMark2001 score = (Game 1 Low Detail + Game 2 Low Detail + Game 3 Low Detail) * 10 + (Game 1 High Detail + Game 2 High Detail + Game 3 High Detail + Game 4) * 20
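To make the weighting concrete, here is a minimal Python sketch of that calculation; the FPS figures used below are purely illustrative, not results from our runs:

    def score_3dmark2001(low_detail_fps, high_detail_fps):
        """3DMark2001 score from per-test average FPS.

        low_detail_fps:  game tests 1-3 in Low Detail mode.
        high_detail_fps: game tests 1-3 in High Detail mode, plus game test 4.
        """
        return sum(low_detail_fps) * 10 + sum(high_detail_fps) * 20

    # Purely illustrative FPS values, not measured results:
    print(score_3dmark2001([120.0, 95.0, 110.0], [60.0, 45.0, 30.0, 25.0]))  # 6450.0

Every average frame per second in a High Detail test is worth twice as much as one in a Low Detail test, which is why the high-detail runs dominate the total.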

The graph shows very clearly that every increase in the CPU’s frequency has a large influence on the score (an increase of almost 4,500 3DMarks between the run at 1,152 MHz and the run at 2,300 MHz). The overall difference amounts to 33% of the score.

CPU influence – 3DMark03:

Graph2

The 3DMark03 score is also based on the average number of frames per second (FPS), but the formula was changed to the following:

3DMark03 score = (Game Test 1 * 7.3) + (Game Test 2 * 37) + (Game Test 3 * 47.1) + (Game Test 4 * 38.7)
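Again, a minimal Python sketch of this weighting, with purely illustrative FPS values:

    def score_3dmark03(gt1_fps, gt2_fps, gt3_fps, gt4_fps):
        """3DMark03 score from the average FPS of the four game tests."""
        return gt1_fps * 7.3 + gt2_fps * 37 + gt3_fps * 47.1 + gt4_fps * 38.7

    # Purely illustrative FPS values, not measured results:
    print(round(score_3dmark03(150.0, 30.0, 25.0, 28.0)))  # 4466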

As you can see, the first game test has the lowest impact on the total score (it does not require DirectX 9 – even my GeForce 2 MX 100/200 could run it, although not smoothly). The third game test has the highest impact on the score.

In 3DMark03, you start to notice that the differences between the columns are smaller (compared to 3DMark01’s graph). It’s important to note that since the score calculation method is different, we should look at the scores in proportion to the 3DMark01 scores, not in absolute terms, in order to compare the two versions’ dependence on the CPU. Still, there’s a difference of over 350 points between the run at the highest CPU frequency and the run at the lowest – a difference of almost 10% between the scores.

CPU influence – 3DMark05:

Now for the newest version of the software, 3DMark05, where we expect the differences to be smaller (as far as possible).

Graph3

Once again, in 3DMark05 the score calculation method has changed, although the average FPS of each test is still its basis:

3DMark05 score = (Game Test 1 * Game Test 2 * Game Test 3) ^ 0.33333… * 250

3DMark05 was designed to give each game test equal importance in the final score: the average FPS of each game test is multiplied by the others, the product is raised to the power of one third (a geometric mean), and the result is multiplied by 250.
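Here too, a minimal Python sketch of the calculation with purely illustrative FPS values:

    def score_3dmark05(gt1_fps, gt2_fps, gt3_fps):
        """3DMark05 score: geometric mean of the three game tests' FPS, times 250."""
        return (gt1_fps * gt2_fps * gt3_fps) ** (1.0 / 3.0) * 250

    # Purely illustrative FPS values, not measured results:
    print(score_3dmark05(15.0, 9.0, 16.0))  # ~3231.7

Because the tests enter the formula through a product, halving the FPS of any one test lowers the final score by the same factor (0.5^(1/3), roughly a 21% drop) regardless of which test it is – that is what “equal importance” means here.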

Here we can see that the difference between the run at the lowest CPU frequency and the run at the highest is only about 50 points, a little over 3%.

Futuremark has definitely managed to isolate the final 3DMark score from the influence of the CPU’s power.

Second test – Influence of the Amount of RAM

In order to test the importance of the amount of RAM and its influence on the final 3DMark score, we ran the benchmark in two configurations – one with a single 256 MB Samsung PC3200 module and the other with two of the same modules (512 MB in total). The memory remained at fixed timings of 2.5-3-3-7. Each configuration was tested at the FSB frequencies used in the previous tests.

Graph4

As we learned from the previous tests, when the system had 512 MB of DDR RAM installed, the difference between the lowest and the highest score was a little over 3%. From the graph above, we can see that this difference almost doubled when we cut the RAM amount in half – the gap between the highest score (which, by the way, was not achieved at the highest CPU frequency) and the lowest score grew to almost 6%.

Between the highest scores in the two configurations there’s a difference of slightly over 3%. Between the two lowest scores in the same configurations there’s a difference of over 5.5%.

We observed that a lack of memory has a bigger impact on the 3DMark05 score (and, if 3DMark05 really does reflect the demands of tomorrow’s games, on the overall gaming experience) than the CPU itself. If you have ever thought about upgrading your RAM, now would be a good time to do so.

Third test – Graphics Card Memory Frequencies

In the following two tests, we set the CPU to a frequency of 2,300 MHz (200 MHz FSB) in order to emulate a 3400+ PR CPU (the rating was calculated with the uAMD Tool). Total RAM stayed the same – 512 MB.

The next test, which should indicate the influence of the graphics card’s memory frequency, was conducted as follows: first we lowered the memory frequency to the lowest setting that did not produce artifacts, then gradually raised it in jumps of about 35 MHz (as accurately as the graphics card allowed). Note that we also included the score at the default stock memory frequency – 600 MHz. The graphics core was fixed at 500 MHz.

Graph6

As you can see, with every increase of about 35 MHz there is a gain of between 26 and 55 points, which becomes smaller and more moderate as the frequency rises. This leads us to conclude that memory speed plays a bigger role in slower graphics cards, while on cards with faster memory the importance of a few more megahertz here and there becomes insignificant for a better mark in the software – an overall difference of almost 12% between the highest score and the lowest.

Fourth test – Graphics Card Core Frequencies

In our fourth and last test, we used the same settings as the previous test – CPU at 2,300 MHz and 2 x 256 MB Samsung PC3200 memory modules (512 MB). The graphics card’s memory frequency was fixed at 600 MHz.

First we lowered the GPU to the lowest frequency that the ATI Tray Tools software allowed us. Then we raised the frequency in jumps of between 35 MHz and 50 MHz until we reached 570 MHz. From there we continued in smaller jumps of 10 MHz each until we reached 600 MHz (a 100 MHz overclock). At 610 MHz the software gave a spurious result that we couldn’t confirm by running the test again, since the card became unstable at that frequency.

Graph

The graph speaks for itself: the GPU’s speed is the most important element of the score (and of the future gaming experience, if the software really does reflect it).

You can see that when we raised the GPU frequency in jumps of between 35 MHz and 50 MHz, we gained between 67 and 130 points per step, again decreasing as the frequencies grew higher, just as in the graphics card memory test.

When we raised the GPU in jumps of 10 MHz, we naturally saw a more moderate rise in the score, but the gain was still about 20 points with each jump.

Between the highest score and the lowest one there is a difference of 27% (!), which shows the importance of the graphics card’s core performance above everything else. In addition, in light of these results and of the many published 3DMark05 comparisons between ATI’s Radeon 9x00 series and NVIDIA’s FX series, we can understand the importance of an efficient core architecture – something we couldn’t notice in previous versions of the software, or even in most of today’s games.

For example, in many of today’s games the GeForce FX 5900XT yields better performance than the Radeon 9600XT, but according to many benchmark results from the ORB, the Radeon 9600XT scores at least 30% more points – even compared to NVIDIA’s strongest previous-generation card, the GeForce FX 5950 Ultra. That’s because the 9600XT’s core was designed better, in a more efficient way.

We do not mention these details to slight NVIDIA; on the contrary – the company has introduced great graphics cards that deliver great performance in today’s games. We just want to emphasize the importance of GPU architecture design for future games.

A Few Words In Conclusion:

Futuremark presents its benchmarking software as being intended for gamers.

Until now, the final benchmark score wasn’t as objective as many of us would like when comparing two graphics cards installed in two completely different computer systems. With the release of 3DMark05, we finally have a new, (more or less) objective way to compare such systems – just keep in mind that in the most extreme cases there can be a deviation of up to about 8% if one system’s CPU speed and memory capacity are radically superior or inferior to the other’s.

The software also provides a more accurate diagnostic tool for failures and other faults, and allows us to determine almost for certain whether the fault lies with the graphics card.

Pros:

  • Benchmarks the graphics card almost objectively, with hardly any disruptive influence from hardware components other than the card itself
  • Provides a better diagnostic tool
  • Adds the dimension of artificial intelligence to the game tests, even if only to a relatively small extent
  • GREAT GRAPHICS

Cons:

  • A graphics card with full DirectX 9.0 support is required to run the application
  • The score still isn’t determined by the graphics card alone, although the influence of other components has shrunk

Final Score

Graph

Tomer Pe’er – Israel
