
Futuremark releases 3dmark 2003 patch. nVidia officially cheated.

MetalStorm said:
So looking back to PreservedSwine's post on the first page, what nVidia are doing is optimising the benchmark? As you said, they are clipping the sky and using a more efficient pixel shader... so basically what has happened is that Futuremark have made a crap benchmark, because it's not in the least bit optimised... what could happen is they could just have a large flat wall, put loads of things behind it which render using loads of inefficient pixel shaders that you never see, and call it a benchmark?

As far as I'm concerned there is nothing wrong with optimising anything; it can only be a good thing. It just seems it's only nVidia that have taken the initiative.

Not at all. The shaders that Nvidia is using don't even render the same image.

Check these out:

http://www.beyond3d.com/forum/viewtopic.php?t=6042

Now do you understand the difference between "optimizing" and "cheating"?

I took this post from nVnews, as it explains quite a bit. It contains Tim Sweeney's view on the matter (The MAN at Epic).

Pixel shaders are functions, taking textures, constants, and texture coordinates as inputs, and producing colors as outputs. Computer scientists have this notion of extensional equality that says, if you have two functions, and they return the same results for all combinations of parameters, then they represent the same function -- even if their implementations differ, for example in instruction usage or performance.

Therefore, any code optimization performed on a function that does not change the resulting value of the function for any argument, is uncontroversially considered a valid optimization. Therefore, techniques such as instruction selection, instruction scheduling, dead code elimination, and load/store reordering are all acceptable. These techniques change the performance profile of the function, without affecting its extensional meaning.

Optimization techniques which change your function into a function that extensionally differs from what you specified are generally not considered valid optimizations. These sorts of optimizations have occasionally been exposed, for example, in C++ compilers as features that programmers can optionally enable when they want the extra performance and are willing to accept that the meaning of their function is being changed but hopefully to a reasonable numeric approximation. One example of this is Visual C++'s "improve float consistency" option. Such non-extensional optimizations, in all sane programming systems, default to off.

3D hardware is still at a point in its infancy that there are still lots of nondeterministic issues in the API's and the hardware itself, such as undefined amounts of precision, undefined exact order of filtering, etc. This gives IHV's some cover for performing additional optimizations that change the semantics of pixel shaders, though only because the semantics aren't well-defined in the base case anyway. In time, this will all go away, leaving us with a well-defined computing layer. We have to look back and realize that, if CPU's operated as unpredictably as 3D hardware, it would be impossible to write serious software.

Although my email to Tim came with the understanding that he might not want his answers posted, Tim said "Please Post!"
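To make the extensional-equality point concrete, here's a toy C++ sketch (my own illustration, not anything from Tim's mail or from actual shader code): two versions of a small lighting function that differ only in instruction order and return identical results for every input, and a third that swaps in cheaper math and therefore computes a genuinely different function.

```cpp
#include <cmath>
#include <cstdio>

// Reference "shader": lit = ambient + diffuse * max(dotNL, 0)
float shadeRef(float ambient, float diffuse, float dotNL) {
    float d = dotNL > 0.0f ? dotNL : 0.0f;
    return ambient + diffuse * d;
}

// Rescheduled version: different instruction order, same result for every
// input -- extensionally equal, so a valid optimization.
float shadeReordered(float ambient, float diffuse, float dotNL) {
    float d = std::fmax(dotNL, 0.0f);
    float lit = diffuse * d;
    return lit + ambient;
}

// "Optimized" version that drops the clamp: it gives different answers whenever
// dotNL < 0, so it is a different function, not an optimization of the same one.
float shadeChanged(float ambient, float diffuse, float dotNL) {
    return ambient + diffuse * dotNL;
}

int main() {
    // Crude extensional check over a grid of sample inputs.
    for (float dotNL = -1.0f; dotNL <= 1.0f; dotNL += 0.25f) {
        std::printf("dotNL=%+.2f  ref=%.3f  reordered=%.3f  changed=%.3f\n",
                    dotNL,
                    shadeRef(0.1f, 0.9f, dotNL),
                    shadeReordered(0.1f, 0.9f, dotNL),
                    shadeChanged(0.1f, 0.9f, dotNL));
    }
    return 0;
}
```

The reordered version is the sort of thing ATi described doing; the changed one is closer to what the audit found in NVIDIA's drivers, where the replacement shaders produce a similar-looking but not identical image.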

--------------------------------------------------------------------------------


What ATi did

quote:
--------------------------------------------------------------------------------

The 1.9% performance gain comes from optimization of the two DX9 shaders (water and sky) in Game Test 4. We render the scene exactly as intended by Futuremark, in full-precision floating point. Our shaders are mathematically and functionally identical to Futuremark's and there are no visual artifacts; we simply shuffle instructions to take advantage of our architecture. These are exactly the sort of optimizations that work in games to improve frame rates without reducing image quality and as such, are a realistic approach to a benchmark intended to measure in-game performance. However, we recognize that these can be used by some people to call into question the legitimacy of benchmark results, and so we are removing them from our driver as soon as is physically possible. We expect them to be gone by the next release of CATALYST.

--------------------------------------------------------------------------------


In other words, what ATi did with a slight reordering of shader instructions is *NOT* cheating. What Nvidia did, replacing entire shader routines, recompiling everything with Cg, inserting clip planes, and who knows what else they pulled, *IS* cheating.

That's why there is a big difference between 1.9% and 24%. 1.9% is well within the tolerance for shader optimization.
 
DaddyB said:
Well, nVidia has now made a comment about this issue, and they didn't say that they didn't cheat. Their comment, in fact, makes it look like they did cheat:



Sounds like they are trying to say that Futuremark was cheating them, and they don't make any mention of whether they did or didn't implement those 'optimizations', which of course leads me to believe that they did do it.

Read the rest of the story here


This response is PATHETIC. I mean, come on, at least fess up and save some face. Their constant lying to the public has got to be causing doubts in even their biggest fanboy's loyalty.
 
RedSkull said:



This response is PATHETIC. I mean, come on, at least fess up and save some face. Their constant lying to the public has got to be causing doubts in even their biggest fanboy's loyalty.

Hell yeah, even overclocker550 jumped on ATi's bandwagon... he's not been around lately though :rolleyes:
 
MetalStorm said:
So looking back to PreservedSwine's post on the first page, what nVidia are doing is optimising the benchmark? As you said, they are clipping the sky and using a more efficient pixel shader... so basically what has happened is that Futuremark have made a crap benchmark, because it's not in the least bit optimised... what could happen is they could just have a large flat wall, put loads of things behind it which render using loads of inefficient pixel shaders that you never see, and call it a benchmark?

As far as I'm concerned there is nothing wrong with optimising anything; it can only be a good thing. It just seems it's only nVidia that have taken the initiative.

When 3dmark 2k3 was released they said it was not optimized for any specific hardware, meaning it won't use ATi- or Nvidia-specific extensions; it's all generic - ARB and ARB2, I would assume. What ATi did was optimize for 3dmark (IMO): when it detected game 4 it would run its shaders the way that was best for its hardware, but the end result would be the same. Nvidia detected the tests, as ATi did, but they did it with most tests and replaced the shader instructions with their own more efficient, much lower image quality instructions.

Culling is still allowed and legal; it is part of the card's (well, the driver's) architecture, and how well a card culls is part of how well the card performs. So normal culling and backface culling are fine. What Nvidia did was set up static culling planes just above and below what is visible when you run 3dmark. Doesn't seem that bad, right? BUT every other card on the planet does not cull those parts of the scene, so they all render it while Nvidia doesn't. This gives a huge advantage to Nvidia since they are not doing all the work that the other cards are doing. When the camera is moved in free look mode (only in 3dmark Pro) you can see the static culling planes because the sky just ends; if it were real culling, then when you move the camera the culling would move with it. That shows that Nvidia set up certain regions, which are not visible in normal mode, to be culled.
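To illustrate the difference, here's a hypothetical, simplified C++ sketch of my own (not anything from an actual driver): legitimate frustum culling tests objects against planes rebuilt from the current camera every frame, so the culled region moves with the view, whereas a static clip limit is hard-coded in world space for the benchmark's known camera path, so geometry goes missing the moment a free-look camera strays off that path.

```cpp
struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // points with dot(n, p) + d < 0 lie "outside"

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Legitimate culling: the six planes are derived from the camera each frame,
// so anything the camera can currently see is always rendered.
bool insideFrustum(const Plane cameraFrustum[6], const Vec3& center, float radius) {
    for (int i = 0; i < 6; ++i)
        if (dot(cameraFrustum[i].n, center) + cameraFrustum[i].d < -radius)
            return false;            // bounding sphere fully behind one plane -> cull
    return true;
}

// What the investigation described (simplified): clip limits fixed in world space
// for the scripted camera path. They never move with the camera, so a free-look
// camera can see straight past the point where the sky just "ends".
bool insideStaticClip(const Vec3& center) {
    const float skyCeilingY = 100.0f;   // made-up constants, purely illustrative
    const float floorY      = -5.0f;
    return center.y < skyCeilingY && center.y > floorY;
}
```

With the first approach the workload is the same for every card; with the second, one vendor's driver simply skips work that every other card still has to do, which is why it only "helps" on the benchmark's fixed camera path.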

As for all of this 'now benchmarks are useless' talk, it simply is not true. Maybe the GFFX does better in Doom 3 because they are cheating (granted, the static culling wouldn't work in a game, but the shader calls could be replaced); maybe the 5900 performs well in UT2K3 because it is cheating... My point is that this doesn't make benchmarks untrustworthy, it makes Nvidia untrustworthy. Any company can cheat in any benchmark/game, but we assume that when we set the game to run at a given detail level it is actually doing it; the drivers can override those settings and artificially inflate framerates in anything.
 
FleshEating Bob said:
nVidia should just cut out the cheating and lying and admit that their FXes (Except the 5900) suck compared to ATi's latest and greatest.

They already admitted it... what else should they do? Put a warning on every box saying "WARNING: THIS CARD IS WORSE THAN AN ATI CARD. PLEASE, WE BEG OF YOU, DON'T BUY THIS"? News flash: companies don't like to say their products are bad, and NVIDIA isn't unique in this regard.
 
It is not ARB... that is the OpenGL standard. Furthermore, DirectX, which 3dmark is based on, doesn't allow 'extensions' by anybody. It is standard or nothing. That is what Nvidia has been trying to pervert for the past year or so.
 
Their head has just gotten too big for their own good. The day they thought they could influence Microsoft in any way, shape or form is the day they screwed themselves.
 
Cowboy X said:
It is not ARB... that is the OpenGL standard. Furthermore, DirectX, which 3dmark is based on, doesn't allow 'extensions' by anybody. It is standard or nothing. That is what Nvidia has been trying to pervert for the past year or so.

Look at PreservedSwine's second post:

In game test 4, the water pixel shader (M_Water.psh) is detected. The driver uses this detection to artificially achieve a large performance boost - more than doubling the early frame rate on some systems. In our inspection we noticed a difference in the rendering when compared either to the DirectX reference rasterizer or to those of other hardware. It appears the water shader is being totally discarded and replaced with an alternative, more efficient shader implemented in the drivers themselves. The drivers produce a similar looking rendering, but not an identical one.

The pixel shader routine is detected, discarded, and an alternative is used; that is the kind of thing I was talking about when I said extensions (the 'alternative' Nvidia-optimized shader). DX certainly allows you to do that, whether you want to call them extensions or not and whether it deviates from the DX standard or not. If a driver contains its own shader routines, they can be substituted for the ones a program calls for if a detection routine is used (as was the case with the Nvidia drivers). I was simply saying that 3dmark doesn't optimize for any specific hardware in that way; the code is all generic and unoptimized.
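To illustrate the mechanism being described, here is a purely hypothetical C++ sketch of how a driver could special-case a known shader (not actual driver code; a real driver would most likely fingerprint compiled shader bytecode rather than source text): the driver recognizes the shader the application hands it and silently substitutes its own hand-written routine.

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical sketch of application-specific shader substitution inside a driver.
class ShaderSubstituter {
public:
    // Register a hand-tuned replacement for a known shader (e.g. a routine written
    // to stand in for 3DMark03's water shader, M_Water.psh).
    void addReplacement(const std::string& originalShader, const std::string& handTuned) {
        replacements_[fingerprint(originalShader)] = handTuned;
    }

    // Called when the application creates a pixel shader. On a fingerprint match
    // the driver uses its own routine instead of what the application asked for;
    // otherwise the shader passes through untouched.
    std::string select(const std::string& appShader) const {
        auto hit = replacements_.find(fingerprint(appShader));
        return hit != replacements_.end() ? hit->second : appShader;
    }

private:
    static std::string fingerprint(const std::string& src) {
        // Stand-in for whatever matching a real driver would actually do.
        return std::to_string(std::hash<std::string>{}(src));
    }

    std::unordered_map<std::string, std::string> replacements_;
};
```

The whole point of Futuremark's patch was to defeat this kind of matching by altering the shaders, which is presumably why the scores on the detecting drivers dropped once the patch was applied.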

I didn't realize the rendering paths ARB, ARB2, NV20, etc. were exclusive to OpenGL; I thought they were generic paths that could be called in any mode. Oh well, live and learn.
 
I think the problem is that 'extension' in OpenGL has two meanings. One is the additional-feature type, which literally adds features that a particular video card has above and beyond the standard. The other type of extension is one which to some extent changes the way the hardware/software interprets OpenGL; this can improve compatibility, speed, or both.

DirectX definitely doesn't allow the first type, since you can't add features to the API. The second type is only permissible where your way of interpreting/compiling DirectX software doesn't stray too far from what MS recommends and doesn't decrease quality.

Nvidia is trying to do all of the above.
 
I don't know, I'm getting a 9500 Pro, so in fact, I don't care.
But.
If they did the same to a game in some way, used a faster way than the game designer thought of, that would be a good thing, right?

I might be stupid, but as long as there is an equally good way to do it, if it's faster, that should be OK. If you KNOW your card ain't performing its best, why not optimize for it... but making a new sky, and so on, that's cheating... they might be on the right track, but it's still the wrong way to actually go about it...

Oh well.
B!

-Going with ATi, not because it's better, or cheaper... just coz it's the best bang for the buck right now-
 
-=Mr_B=- said:
If they did the same to a game in some way, used a faster way than the game designer thought of, that would be a good thing, right?

Yeah, that is a good thing. However, doing it in a program that is used purely to benchmark a video card, using a strategy that CAN'T be used in a regular game, is just plain cheating.
 
(joke) Now see, if we had all listened to me and bought Parhelia, then we never would've had to worry about how this affects us! (/joke) :D

Seriously, though, we need to take a long look at the enthusiast community. If ATI cheated years ago with its Quack3, and now nVidia cheats with their special-casing of 3dMark, then I think that says something about how much stock people have come to place in benchmarks - if companies are willing to go to such lengths to make their product look better, that is. Sure, we all laugh at the mainstream herd who buy computers from Dell based solely on the GHz and HDD capacity numbers, but look at how often we technologically knowledgeable people use the single number 3dMark returns as a make-or-break system/card evaluation.

This is not to say that 3dMark is worthless; all synthetic benchmarks have their place. But 3dMark cannot and should not be the only means by which video cards are compared. Instead, it should be used as it was intended - as a guide to how much processing power a card has. All too often, reviews of most video cards are simply "Here's the technology overview, here's the 3dMark + Quake3 + UT2k3 + Doom3 scores, buy-it-now (or) man-this-sucks." I never see any real "playing" results of games; it's all pre-set benchmarking sequences. How about sitting down and firing up the game itself and seeing how it performs? How about using something not from the FPS genre (impossible as that may seem)? I never see any review make mention of 3D RTS performance (e.g. Warcraft III), despite the large amount of detail that places a load on graphics cards in such games. Sound card reviewers don't go solely by CPU usage as the all-knowing indicator of performance - most soundcard reviews use listening tests and go through every major feature a card has to offer. Why don't video card reviewers do the same? How about going in-depth on a card's image quality, signal reproduction, etc.?

Finally, to close with a little more humor:

The over-zealous boycotter: "Well, I swore I'd never buy another ATI product after Quack3, and I swore I'd never buy another nVidia product after 3dMark03...Time to whip out the Voodoo 5!"
 
Seriously, though, we need to take a long look at the enthusiast community. If ATI cheated years ago with its Quack3

Wow, how many times must this be gone over???


OK, ONE MORE TIME....

ATI didn't cheat with Quake/Quack. It's an application detection in their drivers that had been in the past six driver releases for the 64DDRVIVO. It caused a total of (5) textures to be blurred on the brand new R8500, which ATI wasn't aware of. The blurred textures were repaired in the very next driver release, and the speed *increased* as well. There was only a bug, no *cheat*. The application detection *is still there*, and as I understand it, now applies to all Quake3-engine-based games. It improves performance and keeps the intended image quality.

I sincerely hope that helps...

I couldn't agree more with the rest of your post. I sure wish there were more who felt the same :clap: :beer:
 
Admiral Falcon said:
(joke) Now see, if we had all listened to me and bought Parhelia, then we never would've had to worry about how this affects us! (/joke) :D

Seriously, though, we need to take a long look at the enthusiast community. If ATI cheated years ago with its Quack3, and now nVidia cheats with their special-casing of 3dMark, then I think that says something about how much stock people have come to place in benchmarks - if companies are willing to go to such lengths to make their product look better, that is. Sure, we all laugh at the mainstream herd who buy computers from Dell based solely on the GHz and HDD capacity numbers, but look at how often we technologically knowledgeable people use the single number 3dMark returns as a make-or-break system/card evaluation.

This is not to say that 3dMark is worthless; all synthetic benchmarks have their place. But 3dMark cannot and should not be the only means by which video cards are compared. Instead, it should be used as it was intended - as a guide to how much processing power a card has. All too often, reviews of most video cards are simply "Here's the technology overview, here's the 3dMark + Quake3 + UT2k3 + Doom3 scores, buy-it-now (or) man-this-sucks." I never see any real "playing" results of games; it's all pre-set benchmarking sequences. How about sitting down and firing up the game itself and seeing how it performs? How about using something not from the FPS genre (impossible as that may seem)? I never see any review make mention of 3D RTS performance (e.g. Warcraft III), despite the large amount of detail that places a load on graphics cards in such games. Sound card reviewers don't go solely by CPU usage as the all-knowing indicator of performance - most soundcard reviews use listening tests and go through every major feature a card has to offer. Why don't video card reviewers do the same? How about going in-depth on a card's image quality, signal reproduction, etc.?

Finally, to close with a little more humor:

The over-zealous boycotter: "Well, I swore I'd never buy another ATI product after Quack3, and I swore I'd never buy another nVidia product after 3dMark03...Time to whip out the Voodoo 5!"


I hear you. I mean, you can't just run a game and tell if one card is better than another if you don't have something keeping track of FPS, and some parts of games use certain capabilities more intensely than others, but more real-game benchmarks (some RPGs, FPSes, RTSes, etc.) would be great.

Also, test more than just two settings. With 4x AA and 8x AF and without is too limited. Test the settings at the same visual quality, and not just because they are labeled the same.

I could go on forever, but I totally agree. Video card reviews are lacking (very badly) and need some improvement.

I wish I was Anand's brother :p
 
Um, excuse me, but?

Weren't the reviews of the 5900 and the 9800 done on a bunch of different benches? So why would the results be that much different just because of one optimized bench?


Honestly, I think this whole Nvidia vs. ATI stuff is for the birds. Use what you can afford or like. A top-of-the-line card by either one is STILL better than what was out last year.
 
baraka said:
I hesitated posting this link to another ET article on 3dmark cheats because the topic seems to have been discussed to death :argue:

But I thought it was kind of interesting:

http://www.extremetech.com/article2/0,3973,1105259,00.asp


Yeah, that one's been around already. Not sure if it was this thread or the other thread, but I'll re-post the quote here just in case.

[It is due to an]…optimization of the two DX9 shaders (water and sky) in Game Test 4. We render the scene exactly as intended by Futuremark, in full-precision floating point. Our shaders are mathematically and functionally identical to Futuremark's, and there are no visual artifacts; we simply shuffle instructions to take advantage of our architecture. These are exactly the sort of optimizations that work in games to improve frame rates without reducing image quality, and as such are a realistic approach to a benchmark intended to measure in-game performance. However, we recognize that these can be used by some people to call into question the legitimacy of benchmark results, and so we are removing them from our driver as soon as is physically possible. We expect them to be gone by the next release of CATALYST.
 
OC Noob - I'm with you on having more tests in reviews. My only objection is that it's difficult to be fair to both. It's very difficult, if not impossible, to measure cards fairly using dozens of games. The reviewer could simply find a spot on the map that one card does well on if he favored one company. Regardless of whether he or she had a bias, the scores could not be verified, nor could they be reproducible. Futuremark scores STILL tell you what speed the core and memory were at, the CPU, motherboard, drivers, etc. I still trust a Futuremark score to lie less than ANY reviewer. When I use the search tool, I for the most part know how certain CPUs and cards compare at different resolutions.

What I think needs to happen is some kind of program that ensures no special drivers are loaded. I don't care if you call them optimizations; I want your pure performance. I don't give a crap how well your card can do if you have your driver design teams work on it for months - that isn't the purpose of a benchmark. If you're not going to put that kind of work into EVERY SINGLE game, then I don't want to see it in the benchmark. Booting into safe mode would do this, no?
 