
ATI Brilinear?

some new quotes from B3D:
(sorry some are long)

Seems the same happens to NV40 (TrilOpt Off) and 9800XT.

[image: atifilter2.png]

Any ideas why this happens? Why do the colored mipmaps enhance the filtering results so much - and on all cards?

*confused*

Lars (THG)


It seems to me that the topic has once again become buried in confusion over the difference between an "optimization" and a "cheat." Most people have forgotten that difference.

optimization = an approach in the drivers that maximizes efficiency for the hardware, is universally applied, and is not application-specific. An optimization results in no IQ degradation and better performance at the same time. Thus, optimization is good and desirable.

cheat = when the drivers include application-specific code that alters their normal behavior when particular applications are detected, creating performance increases for those applications at the noticeable expense of IQ; OR when universal driver behavior is instituted such that performance increases in all applications at a noticeable expense of IQ in all applications.

An example of the first case of cheating above would be the original nVidia driver case with UT2K3, in which nVidia had completely disabled trilinear filtering for detail textures in this specific game, even when the control panel was set for "application" control, and the game instructed the drivers to provide it. By contrast, when the Catalyst control panel was set to "application" control, and UT2K3 called for trilinear on detail textures, it was applied. nVidia originally did this only for UT2K3, and people trying out U2 at the time, for instance, found that trilinear worked as expected.

An example of the second case of cheating was the so-called "compiler optimization" route nVidia took with nV3x, which converted ps2.0 instructions encountered in a game into ps1.x-friendly code, simply because nV3x's ps2.0 hardware implementation was so poor compared to R3x0's, and to nV3x's own ps1.x implementation. This greatly improved the performance of ps2.0 code/games on nV3x, but at the noticeable expense of IQ, as the legion of screenshots around the web clearly demonstrated. Of course, since R3x0 hardware natively supported ps2.0 so much better than nV3x, the Catalysts had no need to do anything apart from running ps2.0 code on R3x0's ps2.0 hardware.

Here's what ATi said on the present matter:

ATi wrote:
Our algorithm for image analysis-based texture filtering techniques is patent-pending. It works by determining how different one mipmap level is from the next and then applying the appropriate level of filtering. It only applies this optimization to the typical case – specifically, where the mipmaps are generated using box filtering. Atypical situations, where each mipmap could differ significantly from the previous level, receive no optimizations. This includes extreme cases such as colored mipmap levels, which is why tests based on color mipmap levels show different results. Just to be explicit: there is no application detection going on; this just illustrates the sophistication of the algorithm.
(emphasis mine)

Clearly, it's an optimization, not a cheat, because it is applied universally in all applications but only "where the mipmaps are generated using box filtering." In all other cases, including but not limited to, colored mipmap levels (which are not box-filter generated), the optimization does not function but normal trilinear is applied instead.

Q: So why would ATi's algorithm make this distinction--why not use it all the time?

A: Obviously, the intent of the optimization is to improve efficiency and performance without sacrificing IQ, and so the optimization doesn't function in the cases where *it would* visibly degrade IQ--colored mipmap situations being only one of them. That's the way I read what ATi has said about this.
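ATI hasn't published the algorithm (it's patent-pending), but the decision its statement describes is simple to sketch. Here is a toy reconstruction in Python: the box-filter prediction comes straight from ATI's statement, while the error metric and the threshold value are my own guesses, not ATI's:

```python
import numpy as np

def box_downsample(level):
    """2x2 box filter: average each 2x2 block of texels into one."""
    h, w = level.shape[:2]
    return level[:h - h % 2, :w - w % 2].reshape(
        h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

def needs_full_trilinear(upper, lower, threshold=4.0):
    """Decide whether two adjacent mip levels differ enough to require
    the full trilinear blend.

    upper:     mip level N, float array of shape (H, W, C)
    lower:     mip level N+1, float array of shape (H//2, W//2, C)
    threshold: mean per-channel error (0..255 scale) above which the
               mip chain counts as "atypical"; the value is a guess.
    """
    predicted = box_downsample(upper)         # level N+1 as box filtering would build it
    error = np.abs(predicted - lower).mean()  # how far the real level N+1 deviates
    return error > threshold                  # big deviation -> skip the optimization
```

Colored-mipmap tests paint each level a flat, distinct color, so the box-filter prediction misses wildly, the check returns True, and the card falls back to textbook trilinear. That would explain Lars's observation that colored mipmaps "enhance the filtering results" on every card that works this way.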

So far as regards this thread and the entire topic, I've not seen any evidence presented that IQ is degraded, and have seen a few comments to the effect that it may be *improved*, actually. Finding the pixel differences between screenshots does not tell us which, if either, of the screenshots is of better IQ than the other, merely that there are slight and subtle differences between them. Hence any examination of this matter which stops at differences in pixels between frames, without demonstrating that the differences have visibly altered IQ by either improving or degrading it, must be considered incomplete, I would certainly think.

If the goal is to prove that ATi is fudging, or cheating, then it is imperative to clearly demonstrate that the optimization *degrades* IQ over what would exist without it. For instance, in the screenshots illustrating the differences between ps2.0 code running on nV3x's 1.x shaders, and the same code running on nV3x 2.0 shaders, the IQ differences are clearly evident, such that an examination of "pixel differences" is not required to see the difference, or the degradation in IQ. Until that kind of IQ difference can be established here, there is simply no Catalyst cheating case that can be made, imo, as slight pixel differences between frames may be positive differences as easily as negative ones, in terms of IQ. In short, a difference in IQ, either way, has yet to be demonstrated. That leads me to believe that actual IQ differences may be either unaffected, or possibly even improved, which certainly supports a case for considering this an optimization as opposed to a cheat.
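WaltC's point that a difference map carries no sign is easy to demonstrate for yourself. A minimal sketch using Pillow and NumPy; the filenames are hypothetical stand-ins for your own captures:

```python
import numpy as np
from PIL import Image

# Hypothetical filenames; substitute your own screenshots.
a = np.asarray(Image.open("full_trilinear.png").convert("RGB"), dtype=np.int16)
b = np.asarray(Image.open("adaptive.png").convert("RGB"), dtype=np.int16)

diff = np.abs(a - b)
print("pixels that differ:", int((diff.sum(axis=-1) > 0).sum()))
print("mean abs difference:", float(diff.mean()))

# Amplify and save the difference map. A nonzero map proves only that
# the outputs differ; it says nothing about which one looks better.
Image.fromarray(np.clip(diff * 8, 0, 255).astype(np.uint8)).save("diff_x8.png")
```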

I think this topic has really exposed the flawed knowledge of some people (and possibly myself by the end of this post). The whole brilinear thing has raised trilinear filtering onto a pedestal it doesn't deserve. nVidia's method was -- to put it bluntly -- crap. Firstly, it was a "one-size-fits-all" approach and thus led to often inferior IQ, and secondly, it was easily noticeable.

Now we have people creating custom levels and special situations to try and spot ATI's method. The fact that no one noticed on the 9800XT for a year reflects pretty well on it, wouldn't you say? Not only that, but ATI won, or drew, IQ-wise in every review I read involving the X800. Either every reviewer on the net is incompetent, or the algorithm is doing a damn fine job.

Now, add to this the fact that, as was mentioned, trilinear isn't always the best solution because it blurs. In fact, the less filtering used, the better; it's just that you tend to need filtering, and using trilinear all the time is the safest and least noticeable option. If an algorithm can come along and replace this "dumb filtering" (or "naive," as it was called earlier) approach with something a little more intelligent (non-naive?), then surely we should be over the moon, not crying cheat? Or have we reached a point where trilinear is a false god?

What people should be doing is investigating places where the algorithm is making the wrong call, then letting ATI know so they can improve it. Like the shader compiler, I don't see how the driver being smart can possibly be a bad thing, unless it ends up making bad decisions... but that's why there's always a new driver release on the horizon!

What should come out of this is the choice of language in the reviewers' guide. It's ATI's attitude, not their technical skills, that should be under the microscope.

If someone wants to play with the small Q3 level on its own:

http://www.rivastation.com/temp/aniso1.zip

unzip to quake3\baseq3 folder.

start it with:
\map aniso1

there's also a little demo included:
\demo aniso5

In some areas the textures don't fit perfectly together... this is caused by the line tool in Photoshop. It was just a quick test...

Well, in the end it might prove that ATI is indeed filtering correctly with their adaptive algorithm.

Lars (THG)

Bjorn wrote:
Drak wrote:

That's a great optimisation. Setting "Trilinear" in the CP or in-game setting gives me less than full trilinear but better image quality.



While it definitely remains to be seen how this affects IQ, I don't see where you get "less than full trilinear but better image quality" from.


Sorry, I didn't properly quote WaltC. The next line after I cut him off was:

WaltC wrote:

So far as regards this thread and the entire topic, I've not seen any evidence presented that IQ is degraded, and have seen a few comments to the effect that it may be *improved*, ...



And from some other anecdotal posts:
Quitch wrote:

The whole brilinear thing has raised trilinear filtering onto a pedestal it doesn't deserve



Jabbah wrote:

Which brings us to the question: Is this optimization detrimental to image quality?

From the images I have seen, the X800 looks better than the 9800. The textures look sharper for longer, and there is no obvious banding at the mipmap transitions. I still think it is something that should be possible to turn off.

I would give my thoughts, but then I would be called a fanboi (and that is soooo funny, because I'm thinking of going back to nVidia)... yet it is getting sooo easy to spot the true fanboys here.

"some people get it, and some people don't
some people understand it, and some people don't
and some people don't want to get it...they are the fanboys."-mica

mica
 
Quoting yourself is on par with giving yourself a nickname. Are you by chance the guy in the "I'm Spicy" Burger King commercial?
 
After reading some of your copy-pasted quotes from B3D, it sounds like I'm not the only one thinking that this "trylinear" filtering may actually be doing a better job than the "naive trilinear" we've all been accustomed to. I also have to agree that the people trying to expose this supposedly horrid secret are having to build some pretty elaborate scenarios that would never exist in a real game.

There's also the simple fact that this type of filtering has been used for 12 months or more and NOBODY noticed it; and even with the filtering enabled, the X800 drew a tie or won all the IQ tests it took part in (even against the NV40 using FULL Trilinear).

What does it say when the new method you're all blasting is actually voted by unbiased parties as better than real trilinear? In my opinion, that says this new method is simply better. Fanboy? Dunno, is my logic somehow flawed? A visibly equal (or even better) result, combined with equal (or faster) speed, applied equally to all applications that use that filtering? That's the classic definition of an optimization, isn't it?
 
Albuquerque said:
How is that the end of the discussion? I don't quite understand.

Trilinear filtering, had you paid attention, is not a mathematical constant. Trilinear filtering is a "method", and the "method" is working exactly as advertised. It is enabled by default, and unless you can show me moving pictures of mipmap boundary lines rolling across the walls and floor while moving, then it's still working.

I made my previous statement in bright yellow, how did you miss it?

I'm not sure what's confusing. ATI's claim is that it does true trilinear when, by their own admission in today's PR letter, they don't, at least not all the time. Slice it any way you want, there's no alternate technique to trilinear: you either do it full/"naive" or you're cheating. If they weren't cheating, they would give us the option to do brilinear or trilinear, which they don't currently allow. Since I'm choosing trilinear, I should be the one who decides whether to do true trilinear or gain some more fps via brilinear. All the BS about general versus game-specific optimizations and not being able to tell the difference is just that, BS. If you're not doing what you claim to be doing (as the literature clearly shows), you're cheating. The fact that it gains FPS in usage is just confirmation of this.

Stop defending ATI; they're as bad as nVidia. In fact, it now seems that the NV4x is apparently a more powerful card than the X800 series, which further lends credence to this being a cheat. Winning is the only thing that matters to these companies, ATI is just as bad as nVidia, and for those able to objectively see the truth it's finally out in the open.
 
Albuquerque said:
After reading some of your copy-pasted quotes from B3D, it sounds like I'm not the only one thinking that this "trylinear" filtering may actually be doing a better job than the "naive trilinear" we've all been accustomed to. I also have to agree that the people trying to expose this supposedly horrid secret are having to build some pretty elaborate scenarios that would never exist in a real game.

There's also the simple fact that this type of filtering has been used for 12 months or more and NOBODY noticed it; and even with the filtering enabled, the X800 drew a tie or won all the IQ tests it took part in (even against the NV40 using FULL Trilinear).

What does it say when the new method you're all blasting is actually voted by unbiased parties as better than real trilinear? In my opinion, that says this new method is simply better. Fanboy? Dunno, is my logic somehow flawed? A visibly equal (or even better) result, combined with equal (or faster) speed, applied equally to all applications that use that filtering? That's the classic definition of an optimization, isn't it?

You just don't get it. Whether it's better or not is IRRELEVANT. They're not doing what it is they claim they're doing. I want to choose naive trilinear or brilinear. I will decide what looks better, not ATI.
 
xtrmeocr said:
[image: nocheatATi.jpg]


found this funny :D

That's classic :D
 
Bar81 said:

I'm not sure what's confusing. ATI's claim is that it does true trilinear when, by their own admission in today's PR letter, they don't, at least not all the time.

Did you not read a single thing? Are you simply stuck regurgitating the same line over and over again? Let me put it in BIG COLORFUL LETTERS so that you might read:
TRILINEAR HAS NO "STANDARD". THERE IS NO MATHEMATICAL CALCULATION FOR "TRUE" TRILINEAR FILTERING. TRILINEAR IS A GENERALIZED METHOD, AND IS ABSOLUTELY IN NO WAY AN EXACT CALCULATION.

Did you see that sentence I just wrote? Continue reading:

Trilinear filtering is antialiasing of mipmap boundaries; it does nothing else. As such, there is no standard, no mathematical exact, and no "true" way to do it. This isn't an ATI thing, it isn't an NVIDIA thing, it isn't an S3 thing or an Intel thing or a Microsoft thing or an AMD thing. THIS IS THE WAY VIDEO GRAPHICS WORK.

You can ask yourself exactly one question:
1. Are mipmap boundaries being properly antialiased? Yes

Then "true trilinear filtering" is being applied. End of story.

And I'm sure you'll ask why NVIDIA was called out for brilinear. It was simply because mipmap boundaries were NOT being properly antialiased, and you could actually see the boundaries rolling across the visible screen when you moved. When you can see the boundaries, you are NOT doing trilinear filtering.
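For the curious, the distinction Albuquerque is drawing can be put into pseudocode. Full trilinear blends the two nearest mip levels by the fractional LOD across the whole gap; "brilinear" squeezes that blend into a narrow band around the transition point and uses plain bilinear everywhere else. A Python sketch; the band width here is an illustrative guess, not NVIDIA's actual figure:

```python
def trilinear_weight(lod):
    """Fraction of the smaller (blurrier) mip level blended in.
    Full trilinear: blend linearly across the entire gap between levels."""
    return lod - int(lod)

def brilinear_weight(lod, band=0.25):
    """'Brilinear': blend only inside a narrow band straddling the point
    where nearest-mip filtering would switch levels (fraction = 0.5);
    everywhere else, use plain bilinear on a single level."""
    f = lod - int(lod)
    lo, hi = 0.5 - band / 2, 0.5 + band / 2
    if f < lo:
        return 0.0               # pure bilinear on the sharper level
    if f > hi:
        return 1.0               # pure bilinear on the blurrier level
    return (f - lo) / (hi - lo)  # compressed blend across the band
```

The narrower the band, the bigger the speedup (fewer pixels need fetches from two mip levels) and the more abrupt the transition, which is exactly the rolling boundary line people caught NVIDIA's early implementation producing.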
 
Sorry, you're wrong. Putting it in technicolor doesn't change that. There is a way and ATI's not doing it. Stop being such a fanboy.
 
Bar81 said:
Sorry, you're wrong. Putting it in technicolor doesn't change that. There is a way and ATI's not doing it. Stop being such a fanboy.

Really, I'm wrong? Show me ONE SINGLE website that shows the true mathematical definition for trilinear filtering. Show me the exact definition of "true" trilinear.

Show me one, otherwise YOU are a complete liar.
 
Albuquerque said:


Really, I'm wrong? Show me ONE SINGLE website that shows the true mathematical definition for trilinear filtering. Show me the exact definition of "true" trilinear.

Show me one, otherwise YOU are a complete liar.

ok now, let's not get heated about this.

while just about everything Bar81 may have said seems really flawed to me...he may just not understand what is going on.

while he may not have read all my quotes, or may not understand them...there is no reason to take this great informative thread and degrade it by flamebaiting.

let's show everyone we can be better than this, regardless of the true outcome. ;)

we are here to share information, not get into a fight.


mica
 
There's also the simple fact that this type of filtering has been used for 12 months or more and NOBODY noticed it; and even with the filtering enabled, the X800 drew a tie or won all the IQ tests it took part in (even against the NV40 using FULL Trilinear).

The reason it wasn't noticed is that only the 9600 does it (and now the X800), and in synthetic filtering tests (i.e., when you're testing for it) it does do FULL trilinear. The fact remains that the X800 is using fewer samples than the 9800 did.
 
Bar81 said:
I'm not sure what's confusing. ATI's claim is that it does true trilinear when, by their own admission in today's PR letter, they don't, at least not all the time.
I was going to agree with you, but I'm afraid that you've read their letter wrong.

ATI claims the following:
Our algorithm for image analysis-based texture filtering techniques . . . works by determining how different one mipmap level is from the next and then applying the appropriate level of filtering.

They claim that they change the AMOUNT of filtering, not the TYPE. This has been known for a long time. When you set "16x Quality" in the control panel (which is 16x Trilinear), ATi does not do 16x filtering all the time. Depending on just how badly a texture needs filtering, it will crank up the AF level used.

They do NOT change what kind of filtering they use, though. If you set trilinear, they will keep using trilinear for all textures. However, just because it's trilinear doesn't mean squat. The "16x Quality" mode on ATi hardware performs up to 128 pixel taps. You read that right. Up to. That means that if a texture really doesn't suffer from any filtering artifacts, it may be filtered less than a "2x Performance" (2x Bilinear) texture would be :eek:

Don't go ragging on ATI over this, though. nVidia also implements an algorithm which will change the amount of filtering performed (up to a maximum set in the control panel through the AF slider) depending on just how badly a polygon needs it.


In other words, nowhere does ATI say they switch to bilinear. At best, they just confirm what we already know, which is that they change the level of anisotropy based on how badly a polygon needs it. If they are cheating, their use of PR-speak to avoid the issue is excellent :)

JigPu
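JigPu's description of adaptive anisotropy maps onto a simple per-pixel calculation: measure how stretched the pixel's footprint is in texel space and take only as many probes as that stretch requires, never more than the control-panel maximum. A rough Python sketch of the general scheme; this mirrors how the graphics literature describes AF, not ATI's or NVIDIA's actual hardware logic:

```python
import math

def af_samples(dudx, dvdx, dudy, dvdy, max_af=16):
    """Pick a per-pixel anisotropic filtering level from the
    texture-coordinate derivatives across the pixel."""
    # Lengths of the pixel footprint's two axes in texel space.
    len_x = math.hypot(dudx, dvdx)
    len_y = math.hypot(dudy, dvdy)
    major = max(len_x, len_y)
    minor = max(min(len_x, len_y), 1e-8)

    ratio = major / minor                 # how stretched the footprint is
    return min(math.ceil(ratio), max_af)  # capped by the CP slider; 1 = isotropic

# A wall viewed head-on (square footprint) needs only 1 probe even at "16x":
print(af_samples(1.0, 0.0, 0.0, 1.0))   # -> 1
# A floor receding toward the horizon gets the full 16:
print(af_samples(16.0, 0.0, 0.0, 1.0))  # -> 16
```

With trilinear enabled, each of those 16 probes touches two mip levels of 4 texels apiece, which is where JigPu's "up to 128 pixel taps" figure comes from.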
 
this may sound stupid, but imo when it comes down to it, surely as long as the action's pumping smoothly and the picture looks great to the eye, then who cares how it's achieved? i don't care if my washing machine does half the stuff it claims to do so long as it cleans.... :D
 
Albuquerque said:


Really, I'm wrong? Show me ONE SINGLE website that shows the true mathematical definition for trilinear filtering. Show me the exact definition of "true" trilinear.

Show me one, otherwise YOU are a complete liar.

You can read up on 30 pages on Beyond 3D showing that you're wrong.

Second of all, fanboy, stop getting so defensive.
 
JigPu said:

I was going to agree with you, but I'm afraid that you've read their letter wrong.

ATI claims the following:


They claim that they change the AMOUNT of filtering, not the TYPE. This has been known for a long time. When you set "16x Quality" in the control panel (which is 16x Trilinear), ATi does not do 16x filtering all the time. Depending on just how badly a texture needs filtering, it will crank up the AF level used.

They do NOT change what kind of filtering they use, though. If you set trilinear, they will keep using trilinear for all textures. However, just because it's trilinear doesn't mean squat. The "16x Quality" mode on ATi hardware performs up to 128 pixel taps. You read that right. Up to. That means that if a texture really doesn't suffer from any filtering artifacts, it may be filtered less than a "2x Performance" (2x Bilinear) texture would be :eek:

Don't go ragging on ATI over this, though. nVidia also implements an algorithm which will change the amount of filtering performed (up to a maximum set in the control panel through the AF slider) depending on just how badly a polygon needs it.


In other words, nowhere does ATI say they switch to bilinear. At best, they just confirm what we already know, which is that they change the level of anisotropy based on how badly a polygon needs it. If they are cheating, their use of PR-speak to avoid the issue is excellent :)

JigPu

Well, then you've read every post I've written wrong. What I'm referring to is brilinear; read up at Beyond3D for all the details.
 
xtrmeocr said:


because honesty doesn't sell.

That's the truth. All that matters is who "wins" whatever the cost. It's the same with almost every company when there's enough at stake.
 
Bar81 said:


Well, then you've read every post I've written wrong. What I'm referring to is brilinear; read up at Beyond3D for all the details.
Please explain exactly what you think "Brilinear" is (Beyond3D is a huge place, and even the thread referenced is rather long if I'm just looking for a single definition). I've heard several definitions, but they all seem to differ somewhat. I can't reply intelligently to your post without your own definition of Brilinear, since it involves comparing what ATI says they're doing against what Brilinear is.

I don't mean to single you out specifically, but until all the words being debated over (angle-dependent AF, AF, bilinear, trilinear, "brilinear", etc.) are defined, nothing will really happen. You're here, and seem to understand a bit about brilinear, so I welcome your definition :)
JigPu
 
btw, did anybody even read the ATI literature I posted:

"a)Trilinear on by default...

ATI recognizes that the consumer wants the best visual quality by default, with the option to decrease Image Quality later if so desired."

Indeed :rolleyes:
 