Typically the default latency timings are close to optimal, but that depends on what you want performance out of. Most modern default timings reflect not number crunching in particular, but networking and graphics: networking first, since you need the networking data before you can produce the graphics from it.
What you really have to do is work the whole picture out; then you will see some big performance increases. Otherwise you might notice something, but you can't tell exactly what, or you only get increased performance in one area. With a proper latency timing balance, performance should increase in all areas, since the performance of the higher IRQs should reflect increased performance on the lower IRQs they depend on for information. If the IRQs and latency timings are set up properly, then even though the lower-IRQ devices have more priority and the higher-IRQ devices have more latency, CPU usage on each device should come out more or less equal, because you need data from the hard drive to do anything at all (unless you want to watch something load for two hours when you've got over 1 GB of RAM).
A flimsy but systematic method I've used to obtain small overall gains:
Make absolutely sure the latencies on lower IRQ numbers are factors of the greater latencies on higher IRQs, which in turn should be a factor of the maximum amount of data the device can handle in one interrupt. That way, when something does eventually go out to the Ethernet card or the hard drive, it is more likely to carry exactly the amount of data those valuable clock cycles can handle.
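As a toy illustration of that divisibility rule, the short Python check below walks every pair of IRQs and flags latencies that don't line up; the IRQ numbers and latency values are invented examples, not recommendations.

# Toy check of the "factors" rule: each lower-IRQ latency should divide evenly
# into the greater latencies assigned to the higher IRQs. All numbers are made up.
from itertools import combinations

latencies = {3: 32, 10: 64, 11: 128}   # IRQ number -> latency timer value (example only)

for (low_irq, low_lat), (high_irq, high_lat) in combinations(sorted(latencies.items()), 2):
    if high_lat % low_lat != 0:
        print(f"IRQ {low_irq} latency {low_lat} is not a factor of IRQ {high_irq} latency {high_lat}")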
Start with the latency setting on the lowest IRQ whose default you can change to something greater than zero.
X = 8
Y = 8
begin:
Stop when there are no higher IRQs left to benchmark.
Add X to the latency, benchmark all higher-IRQ devices, and repeat until the results turn better or worse.
Worse? Go back to the default latency, subtract X instead, and repeat until the higher-IRQ benchmarks turn better or worse.
Worse again? Keep the default latency.
Better (either way)? Set the new value as the default latency, goto begin.
Does the next latency setting > 0 have the same IRQ number? goto begin.
Otherwise, determine whether the next IRQ should
take equal priority? Move to its latency setting, goto begin.
take greater priority? X = X - Y, Y = Y - 8, swap the IRQ number settings, goto begin.
X = X + Y
Y = Y + 8
Move to the next greater IRQ with a latency setting > 0, goto begin.
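If you wanted to automate the add-X/subtract-X part of that loop, a rough Python sketch might look like the following. It is purely illustrative: set_latency_timer() and benchmark_higher_irqs() are hypothetical stand-ins for however you actually change the latency setting (BIOS, vendor tool) and whatever disk or network benchmark you trust; they are not real APIs.

# Illustrative sketch of the add-X / subtract-X search described above.
# The two functions passed in are hypothetical placeholders.
def tune_one_setting(default_latency, step, set_latency_timer, benchmark_higher_irqs):
    """Return the latency value whose higher-IRQ benchmark score is best."""
    set_latency_timer(default_latency)
    best_value, best_score = default_latency, benchmark_higher_irqs()

    for direction in (+step, -step):        # first add X, then subtract X from the default
        value = default_latency
        while True:
            value += direction
            if value <= 0:                  # never push the timer to zero or below
                break
            set_latency_timer(value)
            score = benchmark_higher_irqs()
            if score > best_score:          # better: remember it and keep going
                best_value, best_score = value, score
            else:                           # worse (or no change): stop in this direction
                break

    set_latency_timer(best_value)           # leave the best-scoring value in place
    return best_value

Each direction stops as soon as a value benchmarks worse, which mirrors the "repeat until better or worse" step above; whatever scored best is left in place as the new default.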
Afterwards, overclock or underclock your graphics card to hit the highest performance you can get without artifacts. If lowering the CPU latency produces artifacts, it means the CPU is hogging the FSB (at least as far as the IRQs needed to produce graphics output are concerned). The best solution is to lower the CPU latency and underclock the card, which gives better overall performance and benchmarks.
During a diagnostic test that measures CPU usage, you will see the CPU sit at 100%, drop a little under, go back to 100%, drop a little under, over and over again. Better system timing makes it more likely that CPU usage stays at 100%.
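If you want to watch that sawtooth yourself, here is a small sketch that simply samples overall CPU usage twice a second while your benchmark runs. It assumes the third-party psutil package is installed; the sampling is just for observation and is not part of the tuning itself.

# Print overall CPU usage twice a second; during a heavy benchmark you should
# see it bounce between 100% and a little under 100%.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

for _ in range(30):
    usage = psutil.cpu_percent(interval=0.5)   # percent busy over the last half second
    print(f"CPU usage: {usage:.1f}%")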
The truth is that you can do system timing this way; however, it probably won't be as well balanced as what the computer can do automatically.
To do it better than the computer you pretty much have to write a program to set the timings, and that program has to be better than the one that already does it. Sometimes the stock logic that works out the system timing is biased toward areas such as the IDE devices and networking, but that is for obvious reasons. Seriously, to do system timing better in your own head you have to take a huge amount of information into account; I'd leave it to the computer, unless you have overclocked your multiplier only. In that case, try reducing the CPU latency little by little, which isn't as complicated as setting up all the other devices (unless those devices were already hogging the bus).
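If you do decide to poke at these values from software rather than the BIOS, keep in mind that the latency timer is just a PCI configuration register. As a hedged example, on a Linux box the standard setpci tool from pciutils can read and write it; the sketch below assumes that tool is present, has to run as root, and uses a made-up bus address (02:00.0) that you would replace with your own device found via lspci.

# Read and optionally write a device's PCI latency timer with setpci (pciutils).
# Run as root. The bus address below is only an example -- find yours with lspci.
import subprocess

DEVICE = "02:00.0"   # example PCI address, replace with your own device

def read_latency(device):
    out = subprocess.run(["setpci", "-s", device, "LATENCY_TIMER"],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip(), 16)              # setpci prints the register in hex

def write_latency(device, value):
    subprocess.run(["setpci", "-s", device, f"LATENCY_TIMER={value:02x}"],
                   check=True)                      # the new value is also given in hex

print("current latency timer:", read_latency(DEVICE))
# write_latency(DEVICE, 0x40)   # uncomment to try 64 cycles, at your own risk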