some new quotes from B3D:
(sorry some are long)
I would give my thoughts, yet then I would be called a fanboi (and that is soooo funny, because I'm thinking of going back to nVidia)....yet it is getting sooo easy to spot the true fanboys here.
"some people get it, and some people don't
some people understand it, and some people don't
and some people don't want to get it...they are the fanboys."-mica
mica
It seems to me that the topic has once again become buried in a confusion as to the differences between an "optimization" and a "cheat." Most people have forgotten that difference.
optimization = an approach in the drivers that maximizes hardware efficiency, is universally applied, and is not application-specific. An optimization results in no IQ degradation and better performance at the same time. Thus, optimization is good and desirable.
cheat = when the drivers are written to include application-specific code which alters the normal behavior of the drivers when specific applications are detected to create performance increases for those applications at the noticeable expense of IQ; OR when universal driver behavior is instituted such that performance increases in all applications at a noticeable expense of IQ in all applications.
An example of the first case of cheating above would be the original nVidia driver case with UT2K3, in which nVidia had completely disabled trilinear filtering for detail textures in this specific game, even when the control panel was set for "application" control, and the game instructed the drivers to provide it. By contrast, when the Catalyst control panel was set to "application" control, and UT2K3 called for trilinear on detail textures, it was applied. nVidia originally did this only for UT2K3, and people trying out U2 at the time, for instance, found that trilinear worked as expected.
An example of the second case of cheating was the so-called "compiler optimization" route nVidia took with nV3x which was done for the purpose of converting ps2.0 instructions encountered in a game into ps1.x-friendly code, simply because nV3x's ps2.0 hardware implementation was so poor compared to R3x0's, and nV3x's ps1.x implementation. This had the effect of greatly improving the performance of ps2.0 code/games on nV3x, but at the noticeable expense of IQ, which could be seen in the legion of screenshots most of us saw around the web which clearly demonstrated the negative effect on IQ that this approach had for nV3x. Of course, since R3x0 hardware natively supported ps2.0 so much better than nV3x, the Catalysts had no need to do anything apart from running ps2.0 code on R3x0's ps2.0 hardware.
Here's what ATi said on the present matter:
ATi wrote:
Our algorithm for image analysis-based texture filtering techniques is patent-pending. It works by determining how different one mipmap level is from the next and then applying the appropriate level of filtering. It only applies this optimization to the typical case – specifically, where the mipmaps are generated using box filtering. Atypical situations, where each mipmap could differ significantly from the previous level, receive no optimizations. This includes extreme cases such as colored mipmap levels, which is why tests based on color mipmap levels show different results. Just to be explicit: there is no application detection going on; this just illustrates the sophistication of the algorithm.
(emphasis mine)
Clearly, it's an optimization, not a cheat, because it is applied universally in all applications but only "where the mipmaps are generated using box filtering." In all other cases, including but not limited to, colored mipmap levels (which are not box-filter generated), the optimization does not function but normal trilinear is applied instead.
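The idea ATI describes can be roughed out in code. The sketch below is purely illustrative of the concept (checking whether one mip level looks like a box-filtered copy of the previous one, and only then taking the cheaper filtering path); it is not ATI's patent-pending algorithm, and the helper names and the `tol` threshold are my own assumptions.

```python
import numpy as np

def box_downsample(level):
    """2x2 box filter: average each 2x2 block -- the 'typical' way
    mipmaps are generated."""
    h, w = level.shape[:2]
    return level.reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def looks_box_filtered(level, next_level, tol=1.0 / 255.0):
    """Heuristic: does next_level match a box-filtered copy of level?
    If yes, the driver could safely use reduced filtering; if no
    (e.g. artificially colored mip levels), fall back to full
    trilinear. The tolerance value is an assumption, not ATI's."""
    return np.abs(box_downsample(level) - next_level).max() <= tol

# A box-generated mip passes the check; a colored test mip does not,
# which is exactly why colored-mipmap tests show different results.
level = np.full((8, 8, 3), 0.5)
print(looks_box_filtered(level, box_downsample(level)))   # normal case
colored = np.zeros((4, 4, 3))
colored[..., 0] = 1.0                                     # solid red mip
print(looks_box_filtered(level, colored))                 # atypical case
```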
Q: So why would ATi's algorithm make this distinction--why not use it all the time?
A: Obviously, the intent of the optimization is to improve efficiency and performance without sacrificing IQ, and so the optimization doesn't function in the cases where *it would* visibly degrade IQ--colored mipmap situations being only one of them. That's the way I read what ATi has said about this.
So far as regards this thread and the entire topic I've not seen any evidence presented that IQ is degraded, and have seen a few comments to the effect that it may be *improved*, actually. Finding the pixel differences between screenshots does not tell us which, if either, of the screen shots is of better IQ than the other, merely that there are slight and subtle differences between them. Hence any examination of this matter which stops at differences in pixels between frames without demonstrating that the differences have visibly altered IQ, either improving or degrading it, must be considered incomplete, I would certainly think.
If the goal is to prove that ATi is fudging, or cheating, then it is imperative to clearly demonstrate that the optimization *degrades* IQ over what would exist without it. For instance, in the screenshots illustrating the differences between ps2.0 code running on nV3x's 1.x shaders, and the same code running on nV3x 2.0 shaders, the IQ differences are clearly evident, such that an examination of "pixel differences" is not required to see the difference, or the degradation in IQ. Until that kind of IQ difference can be established here, there is simply no Catalyst cheating case that can be made, imo, as slight pixel differences between frames may be positive differences as easily as negative ones, in terms of IQ. In short, a difference in IQ, either way, has yet to be demonstrated. That leads me to believe that actual IQ differences may be either unaffected, or possibly even improved, which certainly supports a case for considering this an optimization as opposed to a cheat.
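WaltC's point about pixel diffs can be made concrete: a difference map like the sketch below (a hypothetical helper, not any reviewer's actual tool) reports *that* two screenshots differ and by how much, but a nonzero count cannot say which image has better IQ. That judgment still requires looking at the images.

```python
import numpy as np

def diff_report(shot_a, shot_b, tol=0):
    """Compare two screenshots (H x W x 3 uint8 arrays).
    Returns (percent of differing pixels, largest per-channel delta).
    Note: this measures *difference only* -- it carries no information
    about which image looks better."""
    delta = np.abs(shot_a.astype(int) - shot_b.astype(int)).max(axis=-1)
    differing = delta > tol
    return differing.mean() * 100.0, delta.max()

# Two shots identical except one pixel: 6.25% of a 4x4 image differs.
a = np.zeros((4, 4, 3), dtype=np.uint8)
b = a.copy()
b[0, 0, 0] = 10
print(diff_report(a, b))  # -> (6.25, 10)
```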
I think this topic has really exposed the flawed knowledge of some people (and possibly myself by the end of this post). The whole brilinear thing has raised trilinear filtering onto a pedestal it doesn't deserve. nVidia's method was -- to put it bluntly -- crap. Firstly, it was a "one-size-fits-all" approach and thus led to often inferior IQ, and secondly, it was easily noticeable.
Now we have people creating custom levels and special situations to try and spot ATI's method. The fact that no one noticed on the 9800XT for a year reflects pretty well on it, wouldn't you say? Not only that, but ATI won, or drew, IQ wise in every review I read involving the X800. Either every reviewer on the net is incompetent, or the algorithm is doing a damn fine job.
Now, add to this the fact that, as was mentioned, trilinear isn't always the best solution because it blurs. In fact, the less filtering used the better, it's just that you tend to need filtering, and using trilinear all the time is the safest and least noticeable option. If an algorithm can come along and dump this "dumb filtering" (or, "naive" as it was called earlier) approach with something a little more intelligent (non-naive?) then surely we should be over the moon, not crying cheat? Or have we reached a point where trilinear is a false god?
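The "dumb filtering" point can be illustrated with the underlying math. Full trilinear linearly blends the two nearest mip levels everywhere; reduced ("brilinear"-style) schemes only blend inside a band around the mip transition and snap to a single level elsewhere. The sketch below is a generic illustration with an invented `band` parameter; it is not ATI's or nVidia's actual hardware math.

```python
def reduced_blend_weight(lod, band=0.5):
    """Blend fraction between mip floor(lod) and floor(lod)+1.
    band=1.0 reproduces full trilinear (blend everywhere);
    band=0.0 degenerates to bilinear with a hard mip switch.
    The band parameter is illustrative, not any vendor's value."""
    f = lod - int(lod)            # fractional position between mip levels
    lo = 0.5 - band / 2
    hi = 0.5 + band / 2
    if f <= lo:
        return 0.0                # snap to the nearer (sharper) level
    if f >= hi:
        return 1.0                # snap to the next (blurrier) level
    return (f - lo) / (hi - lo)   # blend only inside the band

print(reduced_blend_weight(1.5, band=1.0))  # full trilinear -> 0.5
print(reduced_blend_weight(1.2, band=0.0))  # hard switch -> 0.0
```

An adaptive scheme, as discussed above, would effectively vary that band per texture instead of using one fixed "one-size-fits-all" setting.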
What people should be doing is investigating places where the algorithm is making the wrong call, then letting ATI know so they can improve it. Like the shader compiler, I don't see how the driver being smart can possibly be a bad thing, unless it ends up making bad decisions... but that's why there's always a new driver release on the horizon!
What should come out of this is the choice of language in the reviewers' guide. It's ATI's attitude, not their technical skills, that should be under the microscope.
If someone wants to play with the small Q3 level on its own:
http://www.rivastation.com/temp/aniso1.zip
unzip to quake3\baseq3 folder.
start it with:
\map aniso1
there's also a little demo included:
\demo aniso5
In some areas the textures don't fit together perfectly... this is caused by the line tool in Photoshop. It was just a quick test...
Well, in the end it might prove that ATI is indeed filtering correctly with their adaptive algorithm.
Lars (THG)
Bjorn wrote:
Drak wrote:
That's a great optimisation. Setting "Trilinear" in the CP or in-game setting gives me less than full trilinear but better image quality.
While it definitely remains to be seen how this affects IQ, I don't see where you get "less than full trilinear but better image quality" from.
Sorry, I didn't properly quote WaltC. The next line after I cut him off was:
WaltC wrote:
So far as regards this thread and the entire topic I've not seen any evidence presented that IQ is degraded, and have seen a few comments to the effect that it may be *improved*, ...
And from some other anecdotal posts:
Quitch wrote:
The whole brilinear thing has raised trilinear filtering onto a pedestal it doesn't deserve
Jabbah wrote:
Which brings us to the question: is this optimization detrimental to image quality?
From the images I have seen, the X800 looks better than the 9800. The textures stay sharper for longer, and there is no obvious banding at the mipmap transitions. I still think it is something you should be able to turn off.