FYI to GeForce4 overclockers

Muggy

Member
Joined
Jul 11, 2001
Location
Australia
Disclaimer: Many of you may already have experienced the results I'm about to comment on. If that's the case, my apologies for rehashing old news.

I've been playing around with O/Cing my GF4 for a few months now and just noticed something very interesting. I would usually start my O/C by upping the core frequency from its stock 275MHz to 300MHz before doing anything to the memory. It's a Ti4400, so I figured it'd do this without any fuss. Well, it goes to that frequency just fine, so long as my memory frequency is no more than around 600MHz (300 x 2 for DDR). I could run it at 640MHz on the memory, but it'd lock up if I ran more than one loop of 3DMark.

I then remembered from experience O/Cing my old Radeon DDR card that memory problems usually showed up as visual artifacts. What I was seeing was just a hard lockup, which sounded more like a core problem than a memory problem. So to test my hypothesis, I lowered the core clock just a bit to 295MHz and proceeded to up the memory frequency bit by bit, with some startling results. I've run out of slider for the memory on my O/C utility (CoolBits)! It's at 690MHz and there's absolutely no evidence of any visual artifacts.
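The procedure above amounts to a simple stepping search: hold one clock fixed, raise the other in small increments, and back off at the first instability. A rough sketch in Python, where `is_stable` is a hypothetical stand-in for a looped 3DMark run (the threshold in the demo lambda is made up to loosely echo the numbers in this thread, not measured):

```python
def find_max_clock(fixed_core, start_mem, step, limit, is_stable):
    """Raise the memory clock in `step` MHz increments while the core
    stays at `fixed_core`; return the last stable setting."""
    mem = start_mem
    while mem + step <= limit and is_stable(fixed_core, mem + step):
        mem += step
    return mem

# Fake stability model just for the demo: "stable" while core + mem/2
# stays under 645. Real testing would run the benchmark loop instead.
fake = lambda core, mem: core + mem / 2 < 645

print(find_max_clock(295, 550, 10, 690, fake))  # -> 690 (slider limit)
print(find_max_clock(300, 550, 10, 690, fake))  # -> 680
```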

I'd recommend people play with this a bit to see how their results compare. Rule of thumb: if it locks up, it's most likely a core problem, so back off the O/C a bit. Oh, and I have an 80mm side fan on the case blowing fresh air on my vidcard, so your mileage may vary if you're not cooling it this way. I remember seeing a bit of a hack to enable a higher frequency option for the sliders; anyone remember how to do that?
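For reference, the CoolBits tweak on the Detonator drivers of that era was a registry DWORD; a value of 3 enabled the clock-frequency sliders. The extended-slider-range hack asked about here was a separate tweak whose details varied by driver version, so treat this as a sketch of the basic key rather than a guaranteed recipe:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\NVTweak]
"CoolBits"=dword:00000003
```

Reopening the display properties (or rebooting) after merging the file should make the sliders appear.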
 

Pinky

Member
Joined
Apr 21, 2001
Location
Narf City, USA
Well, it isn't exactly old news, but what you described (rather well) are the troubleshooting techniques for overclocking your video card: knowing which clock (core or memory) needs to be decreased and which can still be increased...
 

Muggy

Member
Joined
Jul 11, 2001
Location
Australia
Exactly! What I found weird, though, was that there seemed to be some sort of threshold between core and memory: kept below it, the card seemed stable. For example, at 300/600 it worked great, but upping the memory speed introduced some sort of extra stress that the core just couldn't keep up with. I suppose that makes sense, since the added bandwidth would tend to tax the core a bit more. Anyway, I'm going to do a bit more research into tweaking the memory slider in CoolBits to see how high this'll go.
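The bandwidth behind these numbers is easy to work out: the Ti4400 has a 128-bit memory bus, so the effective DDR clock times 16 bytes per transfer gives the raw figure. A quick check of the clocks mentioned in this thread (Python):

```python
BUS_BYTES = 128 // 8  # GF4 Ti4400: 128-bit memory bus -> 16 bytes/transfer

def bandwidth_gbs(effective_mhz):
    """Raw memory bandwidth in GB/s for a given effective DDR clock."""
    return effective_mhz * 1e6 * BUS_BYTES / 1e9

for mhz in (550, 600, 690):  # stock, first O/C, maxed slider
    print(f"{mhz} MHz -> {bandwidth_gbs(mhz):.1f} GB/s")
# 550 MHz -> 8.8 GB/s, 600 MHz -> 9.6 GB/s, 690 MHz -> 11.0 GB/s
```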
 

Pinky

Member
Joined
Apr 21, 2001
Location
Narf City, USA
Muggy said:
Exactly! What I found weird, though, was that there seemed to be some sort of threshold between core and memory: kept below it, the card seemed stable. For example, at 300/600 it worked great, but upping the memory speed introduced some sort of extra stress that the core just couldn't keep up with. I suppose that makes sense, since the added bandwidth would tend to tax the core a bit more. Anyway, I'm going to do a bit more research into tweaking the memory slider in CoolBits to see how high this'll go.

EXACTLY ;)

Typically I up the core then the memory for this exact reason...
 

Muggy

Member
Joined
Jul 11, 2001
Location
Australia
Well, I used to do it that way. But that's why I wrote this: I think it's better to O/C your memory first and then increase the core speed after you reach the memory limit. Does that make sense? Because in my case, I would have just assumed 300/600 was the highest stable O/C.
 

]-[itman

Member
Joined
Sep 24, 2001
Location
Arizona
Muggy said:
Well, I used to do it that way. But that's why I wrote this: I think it's better to O/C your memory first and then increase the core speed after you reach the memory limit. Does that make sense? Because in my case, I would have just assumed 300/600 was the highest stable O/C.

For most GF cards, upping the memory will give you a better overclock than upping the core, because Nvidia literally starves their GPUs for bandwidth. You can play around with it if you like, but 9 times out of 10 this will be true. ATI, I believe, does a better job of balancing their cards, so there are different methods for overclocking them.
 

Lunch_Box

Registered
Joined
Oct 3, 2002
Location
Southern California
If you look at many of the GF4 overclocking articles, you get higher frame rates out of CORE overclocking than MEM overclocking. For instance, take a card running 300/610 versus a card running 310/600: 310/600 wins. Check the benchmarks, as I have. The GF4's LMA2 is better at using bandwidth efficiently, while the older GF2s and GF3s benefited more from overclocking memory. Higher memory speed will probably help in FSAA, but who really uses FSAA in games when you can merely increase the resolution to 1280x1024?

Muggy:
If I had your card, I would do a voltage mod on it. It sounds like a little more juice to the core and mem will get you close to 350/700, and that is smokin'! Add a CPU heatsink and fan to the core after lapping the GPU; that also helps a little.
 

gingo

Member
Joined
Oct 23, 2002
I've heard the exact opposite of what Lunch_Box said. I heard you get better performance with more mem O/Cing.
 

Muggy

Member
Joined
Jul 11, 2001
Location
Australia
Well, to give both of you an idea of what I saw in my own tests: I only gained about 100 3DMarks after increasing my memory clock from 640 to 690. It was around 11450 at 300/640 and about 11550 with the settings at 295/690. That doesn't seem like much for my efforts. Oh, and Lunch_Box: my thoughts exactly on the voltage mod! If I could just find a decent guide on doing the mod. I've seen some for the 4200s, but I'm not satisfied with what I've seen yet. Do you know of any good sites to check out for instructions?
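For scale, those two scores work out to under a one percent gain (quick check in Python):

```python
before, after = 11450, 11550  # 3DMark scores at 300/640 and 295/690
gain = (after - before) / before * 100
print(f"{gain:.2f}% improvement")  # roughly 0.87%
```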
 

Pinky

Member
Joined
Apr 21, 2001
Location
Narf City, USA
The way it works is this:

For better anti-aliasing, overclock the core more (since bandwidth isn't an issue; the core does all this work with textures that are already loaded into memory).

That's why I overclock the core first. I use 2X AA at ALL times, so my core is crucial to my performance (and not just for a higher benchmark).

On GF3/GF2/GF cards, you gained much less from an overclocked core; bandwidth, being the true limit for those cards, was the key gain.

Well, that's from my layman's experience and analysis. I know little about the REAL technical side to all this; I only know what I've read and experienced first hand...
 

ReDeeMeR

Member
Joined
Oct 29, 2002
OK, this might be a bit off topic, but it doesn't seem that CoolBits works on the 41.xx drivers. Is this true?
 

Muggy

Member
Joined
Jul 11, 2001
Location
Australia
ReDeeMeR said:
OK, this might be a bit off topic, but it doesn't seem that CoolBits works on the 41.xx drivers. Is this true?
I believe you are correct. The option was there on my system, but my scores didn't change in the slightest. That's why I'm still using the 30.82s.
 

Muggy

Member
Joined
Jul 11, 2001
Location
Australia
Pinky said:
The way it works is this:

For better anti-aliasing, overclock the core more (since bandwidth isn't an issue; the core does all this work with textures that are already loaded into memory).

That's why I overclock the core first. I use 2X AA at ALL times, so my core is crucial to my performance (and not just for a higher benchmark).

On GF3/GF2/GF cards, you gained much less from an overclocked core; bandwidth, being the true limit for those cards, was the key gain.

Well, that's from my layman's experience and analysis. I know little about the REAL technical side to all this; I only know what I've read and experienced first hand...
I would have to say that real experience is the demonstrator of all the techie stuff behind the scenes! So your experience and comments are very much appreciated. Have you done anything to the core on your card? I've been wondering if I should lap it. I've attempted to lap the HSF, but it's a real pain! The machining marks are pretty deep, and the aluminum in the stock cooler is quite a bit harder than other aluminum sinks I've lapped.
 

]-[itman

Member
Joined
Sep 24, 2001
Location
Arizona
Pinky said:
The way it works is this:

For better anti-aliasing, overclock the core more (since bandwidth isn't an issue; the core does all this work with textures that are already loaded into memory).

That's why I overclock the core first. I use 2X AA at ALL times, so my core is crucial to my performance (and not just for a higher benchmark).

On GF3/GF2/GF cards, you gained much less from an overclocked core; bandwidth, being the true limit for those cards, was the key gain.

Well, that's from my layman's experience and analysis. I know little about the REAL technical side to all this; I only know what I've read and experienced first hand...

Ahh yes, I always forget about AA performance. I actually like Quincunx better, but it depends on the card. I think Nvidia did a good job with the GF4, making the AA performance hit much smaller than it used to be.
 

Pinky

Member
Joined
Apr 21, 2001
Location
Narf City, USA
I can't imagine playing games without anti-aliasing turned on... even at 2X it makes a BIG difference on the jagged edges and sharpens everything up just a tad.

I am using an old P3 retail (aluminum) heatsink, which I lapped. I used Arctic Alumina (the adhesive/epoxy version) to join it. Naturally, I removed all the gunk from the core first so it sat nice and even... it got me an extra 10 MHz on the core overclock.
 

WyrmMaster

I'm a little teapot Senior
Joined
Dec 17, 2000
Location
Montana, USA
I see I'm not the only person to have good luck with an eVGA card. Mine will do 300/700 with an 80mm blowhole, same setup as you have. Considering that they're the cheap alternative, they are sweet cards.
 

gingo

Member
Joined
Oct 23, 2002
ReDeeMeR said:
OK, this might be a bit off topic, but it doesn't seem that CoolBits works on the 41.xx drivers. Is this true?

I'm using the 40.72 drivers and CoolBits seems to be working. I didn't know the 41.xx drivers were even out.

I'll have to turn AA on and see if I can tell the difference. Does turning AA on decrease fps a lot?
 

Muggy

Member
Joined
Jul 11, 2001
Location
Australia
gingo said:
I'm using the 40.72 drivers and CoolBits seems to be working. I didn't know the 41.xx drivers were even out.

I'll have to turn AA on and see if I can tell the difference. Does turning AA on decrease fps a lot?
So the 40.72s work for you, huh? I couldn't make the overclocking do anything when I used those drivers. The sliders were there, but my benchmarks didn't change at all. Let me know if you had to do anything special to get them to work.
 

Muggy

Member
Joined
Jul 11, 2001
Location
Australia
Latest findings

I've found some more data to support my theory that memory speed affects the stability of the core.

I did several tests at different settings for the core and memory, all using the Nature benchmark from 3DMark, looped 4 times to make sure my tests were consistent. Tests were done at default speeds (275 core / 550 memory), with a core overclock (300 core / 550 memory), and with a memory overclock (275 stock core / 690 memory). At each interval, I checked the GPU temp and recorded the FPS for the 4-loop run.

Since this is just the Nature test and none of the others, my results may not be indicative of real in-game results. I'm at work and forgot to bring my results with me, so I'll just report the general findings (details when I get home).

Core overclocking alone yielded only about a 2% increase in FPS for the Nature bench. Temps went up only about a degree Celsius.

The memory O/C yielded a 13.5% FPS increase and about a 10 degree Celsius rise in GPU temperature!

Now bear in mind, I'm using a standard indoor/outdoor digital thermometer from RadioShack (1% accuracy is pretty standard for these, though). I've just taped the outdoor probe to the back of the card, so the readings reflect temperature differences rather than the actual GPU temperature. But the results pretty well speak for themselves: higher memory bandwidth definitely taxes the core quite a bit more, and core O/Cs don't do much for this particular demo.

Also, my card refuses to do more than 300 on the core. 310 locks up; 305 exits from 3DMark back to the desktop. With my memory at 690, even a 295 setting for the core is not 100% stable (3DMark locks up when running the full tests multiple times).

So there you have it. Comments and criticisms are welcome! I'll post more detailed results if anyone wants them. Oh, and I've decided that upping the voltage on my core should be my next step, after I've explored the option of lapping my GPU heatsink and perhaps the GPU too. Any pointers on that would be great.
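Putting those findings in proportion: neither clock bump buys FPS one-for-one, but memory comes much closer. A quick Python tally of the clock increases behind the reported FPS gains (the FPS percentages are the ones quoted in the post, not recomputed):

```python
def clock_gain(stock, oc):
    """Percent increase from a stock clock to an overclocked one."""
    return (oc - stock) / stock * 100

# Reported: 275->300 core gave ~2% FPS; 550->690 mem gave ~13.5% FPS.
core_clock = clock_gain(275, 300)  # ~9.1% clock increase
mem_clock = clock_gain(550, 690)   # ~25.5% clock increase
print(f"core: +{core_clock:.1f}% clock -> ~2% FPS")
print(f"mem:  +{mem_clock:.1f}% clock -> ~13.5% FPS")
```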
 

Pinky

Member
Joined
Apr 21, 2001
Location
Narf City, USA
I might have to run some benchies over the weekend (IF I have the time) and see what my results yield...

I think running only the Nature test may be a biased benchmark. I'll run all 4 and get a 3DMark result... and then a 3DMark result with anti-aliasing turned on, and see if my theories/observations are indeed correct...