
Farcry PS3.0 test from nVidia in London

Cowboy X said:
The NV3x series can do enough of the DX9 spec and PS2.0 to be called compliant. The problem is that they left out several optional features which are now being used or are about to be used, such as high dynamic range lighting and centroid sampling, to name two. But the bigger problem is that many of the features the NV3x series can handle, it cannot use at a reasonable speed at all! So in several game situations NV3x DX9 cards have to stick to DX8 features, or lower the precision, or both.

So using that... I guess the latest incarnation from Nvidia is supposed to do all that with the 3.0 spec, but in actuality the cards either:
a) still cannot utilize the missing features, or
b) can do it, but with no real performance increase...

which is what this whole thread is about, right?

You gotta forgive the stupid newb here... Thanx guys for straightening me out on this...
 
dark_15 said:
So using that... I guess the latest incarnation from Nvidia is supposed to do all that with the 3.0 spec, but in actuality the cards either:
a) still cannot utilize the missing features, or
b) can do it, but with no real performance increase...

which is what this whole thread is about, right?

You gotta forgive the stupid newb here... Thanx guys for straightening me out on this...

The only true SM3.0 feature that Far Cry is using is "geometry instancing"...
They could not use another SM3.0 feature, called "dynamic branching", so they used static branching instead.

It's funny that they say they couldn't do static branching in PS2.0 because it used too many instructions... they use only 96 instructions max, according to Yerli.
Well below what PS2.0 can do, IIRC.

anyway, I hope this helps.

mica
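
For anyone wondering what "geometry instancing" actually looks like from the programmer's side, here is a minimal C++ / Direct3D 9 sketch of the call sequence on SM3.0-class hardware. It only illustrates the technique mica describes, not anything from Far Cry's code; all the names (DrawCratesInstanced, MeshVertex, InstanceData, etc.) are made up.

```cpp
// Minimal Direct3D 9 geometry-instancing sketch (hypothetical names throughout).
// One mesh in stream 0 is drawn numInstances times, with per-instance data
// (here a world matrix) fed from stream 1 - a single draw call instead of one
// call per object.
#include <d3d9.h>

struct MeshVertex   { float pos[3]; float normal[3]; float uv[2]; };
struct InstanceData { float world[4][4]; };   // one world matrix per instance

void DrawCratesInstanced(IDirect3DDevice9*            dev,
                         IDirect3DVertexDeclaration9* decl,
                         IDirect3DVertexBuffer9*      meshVB,
                         IDirect3DIndexBuffer9*       meshIB,
                         IDirect3DVertexBuffer9*      instanceVB,
                         UINT numVertices, UINT numTriangles, UINT numInstances)
{
    dev->SetVertexDeclaration(decl);

    // Stream 0: the geometry, repeated numInstances times by the hardware.
    dev->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | numInstances);
    dev->SetStreamSource(0, meshVB, 0, sizeof(MeshVertex));

    // Stream 1: one InstanceData record consumed per instance.
    dev->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    dev->SetStreamSource(1, instanceVB, 0, sizeof(InstanceData));

    dev->SetIndices(meshIB);
    dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                              numVertices, 0, numTriangles);

    // Restore default stream frequencies for subsequent draws.
    dev->SetStreamSourceFreq(0, 1);
    dev->SetStreamSourceFreq(1, 1);
}
```

The win is fewer draw calls in scenes full of repeated objects (vegetation, crates and so on), which is why instancing helps Far Cry even without dynamic branching.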
 
Basically what micamica1217 said is pretty much true, except that I don't think the jury is in yet on how well Nvidia actually handles specific PS3.0 and VS3.0 (collectively SM3.0) features. For now PS3.0 can be used to reduce the number of passes and speed up some shaders compared with their PS2.0 equivalents. This is what Far Cry is supposed to be doing.

The problem is that we need to know what happens when, as micamica said, stuff like dynamic branching is used. ATI's contention is that PS2.0 can reproduce any visual effect that PS3.0 can do, just with more passes or different techniques. This is pretty much confirmed, so ATI then claims that they will/have produced an architecture which is so fast in PS2.0 shaders that it could beat NV's offerings even if they (NV) use PS3.0. And they also claim that current-generation cards will not be able to do high-end PS3.0 stuff anyway, so PS2.0 is in their opinion enough for now.
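
To make the "more passes" argument concrete, here is a rough C++ / Direct3D 9 sketch of the usual trade-off: a PS2.0 renderer rebuilds the image with one additive blend pass per light, while a PS3.0 shader can loop over the lights in a single pass. All the names here (RenderLightsPS20, psLoopAllLights, DrawScene, ...) are hypothetical; this is the general technique, not Crytek's code.

```cpp
// Single-pass (PS3.0) versus multi-pass (PS2.0) lighting, sketched with
// hypothetical helpers - the same final image, produced two different ways.
#include <d3d9.h>

void DrawScene(IDirect3DDevice9* dev);                      // hypothetical helper
void SetLightConstants(IDirect3DDevice9* dev, int light);   // hypothetical helper

// PS3.0 path: the pixel shader itself loops/branches over every light,
// so the geometry is submitted once.
void RenderLightsPS30(IDirect3DDevice9* dev, IDirect3DPixelShader9* psLoopAllLights)
{
    dev->SetPixelShader(psLoopAllLights);
    DrawScene(dev);
}

// PS2.0 path: one pass per light, accumulated with additive blending.
void RenderLightsPS20(IDirect3DDevice9* dev, IDirect3DPixelShader9* psSingleLight,
                      int numLights)
{
    dev->SetPixelShader(psSingleLight);
    for (int i = 0; i < numLights; ++i)
    {
        SetLightConstants(dev, i);           // upload this light's parameters
        if (i == 1)                          // after the first pass, add onto the frame
        {
            dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
            dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
            dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);
        }
        DrawScene(dev);                      // geometry is re-drawn for every light
    }
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, FALSE);
}
```

ATI's argument is essentially that the second path can always reproduce the first; the open question is what the extra geometry submissions and state changes cost on a card as fast in PS2.0 as the X800.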

Historically this has often been the case: the first video cards of a particular Direct3D generation, despite being 'compatible', can hardly ever handle applications which use the high-end features they claim to be able to do. Look at the way Comanche (the first DX8 game, as far as I remember) hurt the GF3, R8500 and even GF4 cards. In fact only late last year were we really seeing advanced DX8 or 8.1 games on the market, for the same reason. The 9700 Pro changed that in the video card world, and for the first DX9 card it is still very capable indeed, but cards like the FX 5200, 5600, 5700 etc. have held back PS2.0's adoption across the industry. So ATI is historically correct in this case. Importantly, what they have claimed thus far looks very true, since Far Cry on the GF6800U with PS3.0 is still slower than the X800XT using PS2.0 with the exact same graphics.

Does this mean ATI has won/proved their point? For now, in the short term... yes. But Far Cry is only one game and Nvidia certainly has more driver tweaking to do, so, as I said, the jury is still out. What one buys at this time in the high-end space can be judged by several criteria:

1/ Price

2/ Linux performance ... Nvidia products win here.

3/ Today's Windows performance ... ATI wins here.

4/ Image quality ... not much difference right now, though ATI has better AA.

5/ Multi-monitor support ... Nvidia is better at this.

6/ Planned lifespan of the card ... now this is the difficult part:

If you intend to keep today's high-end card for 1 year or less, I really think the X800XT is the best card to get. If $$$ matters (as it does to me) then the 6800GT or an X800Pro would be good. At stock the 6800GT is the better card in my opinion, but if you mod the X800Pro to an X800XT then that is the best choice of all. If you have less $$$, then the GF6800 non-Ultra at under $300 is a far better bargain than yesterday's GF5950U or 9800XT.

Where things get tricky is if you are aiming for 2 years or more. In that case the PS3.0 ability of the NV40 series is a strong point to go green with NV (pun :) ). But, but, but... whether it can run PS3.0 well is still an open question. Furthermore, will they run next-gen titles in PS3.0???

Let's look at the GF5200 and 5600 series. Not so long ago many people were asking whether to buy them or a GF4 or R8500 level card. Those going with the FX toed the Nvidia line that they had DX9 ability, that this would make a big difference later (i.e. now), and that DX8.0 or 8.1 would be left in the dust. Now here we are in reality, where the FX5200 and 5600 are so slow in DX9 that many games like Far Cry and Halo shouldn't even turn on the PS2.0 features with those cards. In fact Valve has publicly stated that such cards will have to run DX8.0 in Half-Life 2. And guess what... in DX8.0 and 8.1, the GF4 and R8500 murder those so-called 'advanced', 'newer' DX9 video cards (in both new and old games).

I've said all of that to say this: even having PS3.0 is not clear-cut future-proofing. Back then, those who studied the synthetic benchmarks as well as the specs on the NV30 series could see that that line would likely be very poor in DX9.0. Futuremark (3DMark 2003's makers) took a lot of flak but so far has been proven correct on the same point. Sadly, here in 2004 we don't have enough synthetic tools to make the kind of predictions we could make in early 2003. Will the PS3.0 ability of today's NV4x cards be more than a checkbox in future games??? Only time will tell.
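
That 'checkbox versus usable feature' problem is exactly what games end up coding around at startup. Here is a minimal sketch of the kind of D3D9 caps check involved; the fallback policy is purely illustrative (real titles, Valve's included, also use device ID lists and performance data, which a version number alone can't capture).

```cpp
// Choose a rendering path from the pixel shader version the driver reports.
// Note the catch discussed above: reporting 2.0 (or 3.0) support says nothing
// about whether the card can actually run that path at a playable speed.
#include <d3d9.h>

enum RenderPath { PATH_DX8, PATH_PS20, PATH_PS30 };

RenderPath ChooseRenderPath(IDirect3DDevice9* dev)
{
    D3DCAPS9 caps;
    dev->GetDeviceCaps(&caps);

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0))
        return PATH_PS30;   // NV40-class hardware
    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        return PATH_PS20;   // R300/NV30-class: "compliant", but maybe too slow
    return PATH_DX8;        // ps_1_x hardware, or a deliberate fallback
}
```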



One other point. The other way NV can make a big difference is by pressuring game developers to utilize a lot of PS3.0 in upcoming games, while at the same time 'encouraging' them to avoid putting in PS2.0 workarounds for the ATI cards, or to take a very long time to do so (promise patches, or pretend you don't know why it isn't working on ATI hardware... TOCA Race Driver's anti-aliasing mysteriously doesn't work on ATI cards even though end users have shown the programmers that it can work (using alt-tab), and Morrowind's lack of shiny water on ATI cards was easy to fix, but the developers seemed to actively give the impression that there was something patently wrong with the ATI hardware :rolleyes: before finally fixing the issue). That would be a nasty thing to do, and people already think it is happening... but what can I say, that is business.
 
Cowboy X said:
One other point. The other way NV can make a big difference is by pressuring game developers to utilize a lot of PS3.0 in upcoming games, while at the same time 'encouraging' them to avoid putting in PS2.0 workarounds for the ATI cards, or to take a very long time to do so (promise patches, or pretend you don't know why it isn't working on ATI hardware... TOCA Race Driver's anti-aliasing mysteriously doesn't work on ATI cards even though end users have shown the programmers that it can work (using alt-tab), and Morrowind's lack of shiny water on ATI cards was easy to fix, but the developers seemed to actively give the impression that there was something patently wrong with the ATI hardware :rolleyes: before finally fixing the issue). That would be a nasty thing to do, and people already think it is happening... but what can I say, that is business.

since we think so much alike, and I agree with most of what you said above, I'd like to comment on this quote....

I think it's sad that some game developers would even consider doing such things, yet I think it has started to happen.

Far Cry may or may not be added to the list...
yet I'm thinking that HL2 might also do some underhanded things to nVidia's performance cards.

This is a slippery slope, and we the consumers will always lose in the end.
If it happens more often, we could wind up like the consoles:
buy one card for one game, another card for the next game...
this just doesn't sit right with me.

mica
 
www.tomshardware.com

Please read the whole article before you start saying 'bull**** cuz teh tom d00d is b1ased'.

It shows that the 6800GT is now really a lot faster than the X800pro.

Only thing that sux about the test is Tom's crappy screenshots; I wanna see some nice 1280x960 ones! But they show that the bad shaders of the GeForce 6 are now fixed in 1.2 and look mostly like ATI's shaders. But ATI's are not 3.0, and the graphs show some performance improvements that are not really minor.

nVIDIA > ATI in Far Cry, period.
 
"nVIDIA > ATI in Far Cry, period "

not really ........ only at a particular price point , and that is where the 16 pipeline GF6800GT comes up against a 12 pipeline X800Pro . Otherwise , even with the PS3.0 enhancements the X800XT cleans the slate at the settings which really matter in the highend ........... high res , high AA , high AF . And again this is only one game . I've seen other reviews that gave a totally different picture in other games comparing the X800pro to the 6800 GT . Drivers still need work in both camps .
 
Cowboy X said:
"nVIDIA > ATI in Far Cry, period "

not really ........ only at a particular price point , and that is where the 16 pipeline GF6800GT comes up against a 12 pipeline X800Pro . Otherwise , even with the PS3.0 enhancements the X800XT cleans the slate at the settings which really matter in the highend ........... high res , high AA , high AF . And again this is only one game . I've seen other reviews that gave a totally different picture in other games comparing the X800pro to the 6800 GT . Drivers still need work in both camps .

How do you mean? The Ultra is faster than the XT.
 
I also notice that THG hasn't given us any idea of what control panel/in-game settings they applied... Was AA set from the game or from the panel, etc.? I raise this because there is a known bug where AA on the 6800 series is broken in the driver Nvidia supplied. So far, all of the sites that show the 6800U beating the X800XT with AA and AF on made this error and published erroneous results.
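
For anyone wondering why "AA from the game or the panel" matters: forcing AA in the driver control panel is just an override the game never sees, whereas AA set in the game is requested by the application itself when the device is created. A minimal C++ / Direct3D 9 sketch of the in-game route (the surrounding device setup is assumed; this is not Far Cry's code):

```cpp
// Request 4x multisampling from inside the application, which is what an
// in-game AA setting does - as opposed to a control-panel override, which
// Far Cry ignores unless it is set in the game's driver profile.
#include <d3d9.h>

bool Request4xAA(IDirect3D9* d3d, UINT adapter, D3DFORMAT backBufferFormat,
                 BOOL windowed, D3DPRESENT_PARAMETERS* pp)
{
    DWORD qualityLevels = 0;
    if (FAILED(d3d->CheckDeviceMultiSampleType(adapter, D3DDEVTYPE_HAL,
                                               backBufferFormat, windowed,
                                               D3DMULTISAMPLE_4_SAMPLES,
                                               &qualityLevels)))
        return false;                                // 4x MSAA not supported

    pp->MultiSampleType    = D3DMULTISAMPLE_4_SAMPLES;
    pp->MultiSampleQuality = 0;                      // must be < qualityLevels
    pp->SwapEffect         = D3DSWAPEFFECT_DISCARD;  // required for multisampling
    return true;                                     // pass pp on to CreateDevice
}
```

A review that only ticks the control-panel box is effectively benchmarking without AA whenever the game doesn't honour the override, which is exactly the mistake described above.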

Have a look at the www.firingsquad.com review.
 
Would there be any reason at this point for THG not to use ATI's 4.7s? They were using special NVidia drivers...
 
Well, their review is already done, and as far as I know the Cat 4.7 only came out today. Those reviews take days to do, so no, it won't be possible unless they are willing to go back and repeat half of the tests they did.
 
They ought to, or they should have used the betas, since they felt free to use special NVidia drivers.

I'm not saying I care specifically one way or another re: ATI vs. Nvidia, I just want a fair comparison of the numbers so I know whether to use this X800 Pro I've got sitting right here or wait another week or two on a possible 6800 Ultra deal to come through with CompUSA (the 30% off PNY deal). If I'm gonna wait on the Ultra, it's 'cause I'm going to eBay the X800. As far as I'm concerned, either card is looking to be fast enough for me. The only thing I care about is visual differences with SM3.0.

Of course, if CompUSA does what we all fear they are going to do and doesn't honor the prepays, then the issue becomes moot anyway. It's too bad it's going to take forever to find out.
 
To weigh in just a bit:

I think I would wait for the PCI Express versions of the Nvidia cards. Not because of the PCIe bandwidth, but because of SLI. I know this is a bit off topic, but the thread has been talking about which is the better card for the future. If you want an AGP card, either the X800XT or the 6800GT (depending on budget) is the way to go. But for PCIe, Nvidia's SLI is a great futureproofing measure.

Imagine the 9800 Pro had SLI and you had one right now. When these new cards started to come out, the classifieds were brimming with used 9800 Pros at great prices. You pick one of those up and, voilà, you have roughly a 70% increase in performance at a lower cost than a new $500 card.

that is IMHO, of course.
 
^^^ Great futureproofing - for a small fortune. Not many people are willing to pay over $600 US for a new card, let alone $1000+ for two, plus a nice pricey mobo with dual PCIe 16x slots.

Personally I am waiting to go the SLI route whenever I get my duallie system going :D
 
^^uh, you missed the point of my post.

I said to buy the second card when they are selling for cheap in the classifieds, like the 9800 Pros that are going for $175 now. So you have, say, a $400 new card and a $175 used card; that is quite a bit cheaper than $1000.
 
The drivers everyone has been calling the "4.7 betas" were not 4.7 betas. They were a special version given out to OEMs for testing PCI-E cards. That's why there was support for R423 cards, but no inf entries for XT PE cards.

But back to the subject at hand...

From what I hear you'll need two identical (same PCB revision, same core revision, same BIOS) cards for SLI to work. So it's not like you can pick up any old 6800GT (or whatever) in 12 or 18 months and expect it to work. I'm not saying it's not doable, but it may not be that easy.

Also remember that there are no dual-PEG boards out yet, nor have I heard of any upcoming boards with two true PEG slots (PEG being PCI Express Graphics - a 16x PCI-E slot originating from the northbridge). Tumwater only supports 24 lanes from the northbridge, which means you'll only get one true PEG slot; the other can only be 8x max (even if it does have a 16x connector). And usually there will be a PXH PCI-X bridge using four of those lanes, so typically you end up with one card in a 16x slot and the other in a 4x slot.
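
The lane arithmetic is worth spelling out, since it is the whole reason the second slot ends up at 4x. A quick back-of-the-envelope sketch using the numbers from this post (not official Intel figures):

```cpp
// Rough PCI Express lane budget for a Tumwater-style northbridge,
// using the figures quoted above.
#include <cstdio>

int main()
{
    const int northbridgeLanes = 24;  // total lanes off the northbridge
    const int pegSlot          = 16;  // one true 16x PEG slot
    const int pxhBridge        = 4;   // lanes typically eaten by the PXH PCI-X bridge

    std::printf("lanes left for a second graphics slot: %dx\n",
                northbridgeLanes - pegSlot - pxhBridge);   // prints 4x
    return 0;
}
```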

[Image: Tumwater.gif - block diagram of the Tumwater chipset's PCI Express lane layout]
 
Panzerknacker said:
www.tomshardware.com

Please read the whole article before you start saying 'bull**** cuz teh tom d00d is b1ased'.

It shows that the 6800GT is now really a lot faster than the X800pro.

Only thing that sux about the test is Tom's crappy screenshots; I wanna see some nice 1280x960 ones! But they show that the bad shaders of the GeForce 6 are now fixed in 1.2 and look mostly like ATI's shaders. But ATI's are not 3.0, and the graphs show some performance improvements that are not really minor.

nVIDIA > ATI in Far Cry, period.

Now that I've had time to read the THG review on SM3.0 and Far Cry...
you might as well read what I said about the Anandtech review.

there are a few things you might be missing:

1) The Anandtech review is borked because they never ran any AA in any test.
That's right, they have 4xAA listed, yet they must have run the tests with 4xAA set in the CP and not in the game.
I gave a link to the B3D thread that is talking about this... it goes on for at least a few pages.

edit: there is now an update appended to the Anandtech review:

UPDATE: It has recently come to our attention that our 4xAA/8xAF benchmark numbers for NVIDIA 6800 series cards were incorrect when this article was first published. The control panel was used to set the antialiasing level, which doesn't work with FarCry unless set specifically in the FarCry profile (which was not done here). We apologize for the error, and have updated our graphs and analysis accordingly.

2) Who is going to run any of the high-end 6800 cards with brilinear filtering turned on?
Yet for the Anandtech review, I showed my email reply from Derek stating that brilinear filtering is running.
Turning this "optimisation" off will decrease your framerate further
(by as much as 10 fps).

3) While the new patch 1.2 adds SM3.0 support, it also increases the framerate for NV cards without PS3.0, yet decreases the framerate across all ATI cards... by as much as 20%.
This is how, when you then test with PS3.0, the nVidia 6800 cards end up winning over the x800 cards.
(No thanks, I'll stick with patch 1.1 and still beat the 6800U with PS3.0 running... look at the graphs I've posted above.)

4) Looking more at the SM3.0 tests from many sites, I'm thinking that this patch is either beta at best, or a disgrace.
I can't remember a patch that would not only add a bug, but also give a performance decrease of as much as 20% for one particular IHV while giving more frames to the other (and I'm not talking about SM3.0 here).

5) It seems I'm not the only one who sees that SM3.0 is not the only thing giving the nV cards the added boost in framerate:

From kmolazz:
"X800 cards performance drops a little with the 1.2 patch and the boost nvidia cards get isn't only from sm3 but also some other improvements in the patch."
HERE.

While I think the AA tests in the THG review were not borked...
still, I'll take my x800pro (non-OCed), non-patched of course, and beat the 6800U in every test... even if the 6800U is running PS3.0.

thank you, and good night.

mica
 