It seems that every time I load Winbond, it reads the Vcore at around 0.15 volts less than what is set in the BIOS.
Here is what I've seen happen:
Default (1.5 V): Winbond reads it at 1.35 V
1.6 V: Winbond reads it at 1.45 V
1.65 V: Winbond reads it at 1.5 V
I haven't pushed it past 1.65 volts, so this is all I can test.
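For what it's worth, the three readings above show a near-constant offset rather than a percentage error, which is consistent with a fixed drop or sensor miscalibration rather than a scaling problem. A quick sketch, using only the numbers from this post:

```python
# BIOS Vcore settings vs. Winbond readings, taken from the post above
bios = [1.50, 1.60, 1.65]
winbond = [1.35, 1.45, 1.50]

# Per-setting offset: it comes out the same every time, about 0.15 V
offsets = [round(b - w, 2) for b, w in zip(bios, winbond)]
print(offsets)  # -> [0.15, 0.15, 0.15]
```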
Now my question is: does Winbond just misread the Vcore, and should I trust the settings in the BIOS?
A friend told me that Abit boards all undervolt, and that's why I'm getting these readings.
Has anyone else had these problems?
Your friend is correct. The board seriously undervolts. There seem to be two types of the TH7-II: one that undervolts very badly (like the one you and I have), and one that undervolts somewhat less. If you haven't done the VID pin mod, this may hurt your ability to overclock, as you will never get to the max "safe" voltage of 1.7 V for the P4. You can also see the voltages in the PC Health section of the BIOS. One more thing: a bad PSU can also cause undervolting. Be sure you have enough juice to keep your system going.
Another thing: if I overclock to 2.7 GHz and then stress test my PC or play a new game with demanding graphics, I seem to lose the USB connections to my mobo.
At the back of my motherboard (and all Abit TH7-II motherboards) sit three USB ports. If I use the top port, after a few minutes my PC thinks that port has been unplugged.
It only seems to occur when I've overclocked the FSB above 133 MHz.
Has anyone had this problem?
USB ports do not always take nicely to overclocking.
I have the latest BIOS from the abit-usa site. I would install Mr. Natural's BIOS, but I honestly haven't had a problem with this BIOS until now.
If it ain't broke then don't fix it IMHO
Plus, what does the PCI latency timer setting do? I've got it set to 32, but there are heaps of possible values, so I was wondering what they do as well.
OK, here is a technical discussion of PCI latency timing, taken from
http://www.reric.net/linux/pci_latency.html:
"PCI latency timers are a mechanism for PCI bus-mastering devices to share the PCI bus fairly. "Fair" in this case means that devices won't use such a large portion of the available PCI bus bandwidth that other devices aren't able to get needed work done.
How this works is that each PCI device that can operate in bus-master mode is required to implement a timer, called the Latency Timer, that limits the time that device can hold the PCI bus. The timer starts when the device gains bus ownership, and counts down at the rate of the PCI clock. When the counter reaches zero, the device is required to release the bus. If no other devices are waiting for bus ownership, it may simply grab the bus again and transfer more data.
If the latency timer is set too low, PCI devices will interrupt their transfers unnecessarily often, hurting performance. If it's set too high, devices that require frequent bus access may overflow their buffers, losing data.
So in theory there's a compromise that can be reached somewhere in between, where all devices can get good performance plus a reasonable guarantee that they will get bus access on a timely basis.
Here's my best impression of the various ways you can handle the timers:
The Bad Way: Don't set them at all (in all cases I've looked at this results in a setting of zero).
The Better Way: just pick a number (I use 48 in my code) and set all devices and bridges to that.
The Best Way, in theory, though I doubt anybody does it:
For each card, read the MIN_GNT and MAX_LAT values, and calculate the following:
latency_multiplier = 8 (for 33 MHz; this is the number of PCI clocks in one-quarter microsecond.)
min_grant_clocks= latency_multiplier * min_gnt
max_latency_clocks= latency_multiplier * max_lat
Then, calculate the desired latency timer values for each device such that:
each card's latency timer should be greater than that card's min_grant_clocks.
each card's latency timer should be set as high as possible. This is because the latency timer will truncate pci busmaster cycles when it expires, so a too-low setting will hurt performance.
all latency timers should be set low enough that if boards A, B, and C all start doing maximum length transfers, board D will still get the bus before max_latency_clocks has elapsed. So,
A.latency_timer + B.latency_timer + C.latency_timer < D.max_latency_clocks
Of course, the device manufacturer might have included themselves in the calculation, and in that case the requirement would be:
A.latency_timer + B.latency_timer + C.latency_timer + D.latency_timer < D.max_latency_clocks
which is easier to calculate anyway, since A+B+C+D could simply be called system_max_latency. If it's greater than any card's max_latency_clocks, you might have trouble.
Whew.
Needless to say, factory BIOSes are all over the board on this, and I suspect that most simply choose the "better" option (fixed values).
In truth, rule number 1 seems to get violated all the time without any ill effects. Since everybody else ignores rule #1 and I doubt anybody even tries to satisfy rule #3, the "better" option seems like an acceptable compromise.
" end of explanation on that site.
I myself have fiddled around with the timings somewhat, but it didn't do much for my system. I decided to leave it at the default in the end.
Thanks in advance for any help.
I hope that was a bit helpful.