
If the AGP and PCI busses are locked, then...


Tacoman667 — Member, joined Apr 28, 2001, Kingwood, TX
This is quoted from some PMs I had with another member, and we feel it needs more explanation.

Tacoman667 wrote:


Very nice, I've never clocked the FSB this high before. This new DFI LP NF2 B I got last week is screaming at 12x200 out of the box. PCI and AGP locks are key, but if those buses never rise, why would you want to push the FSB much over 200?

WarriorII Wrote:


FSB = faster SYSTEM performance.
It makes everything faster.
HDD / video / system bus — that's what it is:
how fast things are moving between each other.

got this running @ 11x 200 now.
with low memory timings too!

Ya got a good board there! 12x 200 is sweet.

I just wanted to reach 200 for the performance gain myself.

Tacoman667 wrote:


But, if the PCI and AGP are locked at 33/66, then how is there faster system performance when those 2 busses ARE the system?

WarriorII wrote:

Good point.

From what I can tell, looking at a breakdown of a system schematic:

the CPU is the starting point; it runs through the northbridge to the AGP & memory, then through everything else, with the PCI system being one of the last things it reaches.

So PCI is gonna run @ 33MHz.
AGP runs @ 66MHz.
Or should I say, processes its functions at that speed.
When it sends the info it is done processing,
it transmits(?) that info at a faster rate.

Which is held in the memory until used.

I think.

This might need to be posed & discussed a bit further.

Knowing how & why is 1/2 the battle.
Understanding information is power.

Any insight would be much appreciated. Thanks.
 
I'll try to help :).

Overclocking PCI devices (HDD, PCI cards, etc.) will not increase performance because they do not NEED any more bandwidth. 33MHz should be enough for them to run without reaching a bottleneck.

AGP 8x cards might gain an increase if they are overclocked a lot (and were fast to begin with). With my AGP 4x cards I saw a 10-point increase in 3DMark from upping the AGP bus 10MHz, but I don't think it's worth it. And playing with the AGP bus on my 9800 Pro, upping the bus past a certain point hurt overclocks. Everyone will tell you to leave the AGP bus alone... I sometimes see more 3DMarks by upping it.

So the vast majority of the time, AGP and PCI devices don't need any more bandwidth than they already have. However, one of the bottlenecks in a system is the bandwidth between the CPU, mobo, and memory. Bandwidth increases with FSB because you're able to send data at a faster rate. These devices can use as much as you can stably give them (realistically). A faster FSB for them should always be faster (again, within reason; maybe 1000MHz FSB is overkill on the bandwidth :p).
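As a rough illustration of why raising the FSB raises CPU-to-memory bandwidth: theoretical bandwidth is just transfers per second times bytes per transfer. The sketch below assumes a 64-bit double-pumped (DDR) bus, which matches the Athlon XP / nForce2 platform under discussion; the numbers are back-of-envelope only.

```python
# Rough sketch of why a higher FSB means more CPU<->memory bandwidth.
# Assumes a 64-bit DDR front-side bus (Athlon XP / nForce2 era);
# treat the figures as theoretical peaks, not measured throughput.

BUS_WIDTH_BYTES = 8  # 64-bit data bus

def fsb_bandwidth_mb_s(fsb_mhz, ddr=True):
    """Theoretical bandwidth in MB/s: transfers per second x bytes per transfer."""
    transfers_per_us = fsb_mhz * (2 if ddr else 1)  # DDR moves data twice per clock
    return transfers_per_us * BUS_WIDTH_BYTES

print(fsb_bandwidth_mb_s(166))  # DDR333: 2656 MB/s
print(fsb_bandwidth_mb_s(200))  # DDR400 (PC3200): 3200 MB/s
print(fsb_bandwidth_mb_s(220))  # 220 MHz FSB: 3520 MB/s
```

The 3200 MB/s figure at 200MHz is exactly where the "PC3200" name for DDR400 memory comes from.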

So it all comes down to being able to use the available bandwidth. The limited data rate of a HDD or a PCI device won't affect the system when they're not being used (most of the time), so they aren't limiting your system at that time.

Then of course there are memory timings... if you can run your memory at lower timings at a lower FSB, the speed gained by the lower timings may outweigh the gain of a higher FSB... so just try 'em out. Luckily my memory can run low timings at the fastest FSB my mobo can do.

Hope I uhh helped.
 
My memory is 3200 HyperX running 200 FSB at 2-2-2-5 timings. I'm pretty sure that is good. So if the PCI/AGP bus stays at 33/66, then if I took her to 220 FSB, I should see an increase in performance even if I used a lower multiplier on the CPU to equal what I get at 12x200 (2400MHz)?
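The arithmetic behind that question is simple: core clock = multiplier × FSB, so a lower multiplier at a higher FSB can land at roughly the same core speed while gaining system bandwidth. The values below are just the ones being discussed.

```python
# Core clock = multiplier x FSB, using the settings from the question above.

def core_mhz(multiplier, fsb_mhz):
    return multiplier * fsb_mhz

print(core_mhz(12, 200))  # 2400 MHz, the current 12x200 setting
print(core_mhz(11, 220))  # 2420 MHz: nearly the same core clock, 10% more FSB
```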
 
Ok, ok. Here's the deal. OCing the PCI bus does help. In some cases a lot, HOWEVER... lol!

It's very dangerous. PCI is old technology. Raising the bus 1MHz is a lot! PCI-based systems were never intended to run at such high speeds.

That is why a PCI bus over 40MHz causes HDD failure or corruption, and other PCI cards to crap out. That's because you're OCing them by almost 1/3 on old technology.

1/3 is a lot! Especially for a 10+ year old design. That is why the lock is important.

This also applies to the AGP bus. Yes, it could benefit the card. Radeons do very poorly with a high AGP bus. Why? I dunno, they just do. nVidia cards benefit heavily from it, AFAIK.

Tacoman667 wrote:
But, if the PCI and AGP are locked at 33/66, then how is there faster system performance when those 2 busses ARE the system?

^^ It is and it isn't. The PCI, AGP, and memory bus (or FSB) are totally unrelated.

The PCI and AGP buses, AFAIK, are controlled by the southbridge. The memory bus (FSB) is controlled by the NB. They are totally independent from each other.

However, some designs tie them together to avoid the stability problems associated with running things out of sync (VIA / SIS). These two companies feel that their customer base doesn't want/need a lock, since it is totally useless to 95% of all computer users.

WarriorII wrote:

Good point.

From what I can tell, looking at a breakdown of a system schematic:

the CPU is the starting point; it runs through the northbridge to the AGP & memory, then through everything else, with the PCI system being one of the last things it reaches.

So PCI is gonna run @ 33MHz.
AGP runs @ 66MHz.
Or should I say, processes its functions at that speed.
When it sends the info it is done processing,
it transmits(?) that info at a faster rate.

Which is held in the memory until used.

I think.

This might need to be posed & discussed a bit further.

Knowing how & why is 1/2 the battle.
Understanding information is power.

^^ Once again, the NB and SB are totally independent of each other. The memory controller loops with the RAM and CPU. It rarely supplies info directly to the SB; it only does so every so often, and only at 33MHz, the PCI speed.

In order to keep at this speed it uses dividers. There are 2 types of dividers: fixed and floating. There are 3 common fixed sets right now, used in conjunction with 266/333/400 FSB and RAM speeds. The floating divider is commonly referred to as a PCI lock.

Unlike the fixed divider, it constantly adjusts itself to maintain 33.3MHz regardless of your clock speed.
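The fixed-vs-floating distinction above can be sketched in a few lines. The /4, /5, /6 divider values for 133/166/200MHz FSB clocks are the commonly cited ones for this era; the "lock" function here is illustrative, not taken from any chipset datasheet.

```python
# Sketch of fixed vs. floating (locked) PCI dividers, as described above.
# Divider values are the commonly cited ones for 266/333/400 rated FSB
# (133/166/200 MHz actual clocks); the lock logic is illustrative only.

PCI_TARGET_MHZ = 33.3

FIXED_DIVIDERS = {133: 4, 166: 5, 200: 6}

def pci_fixed(fsb_mhz, divider):
    """With a fixed divider, the PCI clock drifts upward as the FSB rises."""
    return fsb_mhz / divider

def pci_locked(fsb_mhz):
    """A 'PCI lock' (floating divider) holds the bus near 33.3 MHz at any FSB."""
    return PCI_TARGET_MHZ

print(round(pci_fixed(200, 6), 1))  # 33.3 -- in spec
print(round(pci_fixed(220, 6), 1))  # 36.7 -- out of spec, risky for HDDs
print(pci_locked(220))              # 33.3 -- safe
```

This is exactly why an unlocked board at 220 FSB can corrupt hard drives while a locked one is fine.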

This is how things work, AFAIK. Feel free to get second opinions.
 
A little correction: PCI and AGP devices can, and often do, access memory directly (without going through the CPU) using DMA, so the memory controller does talk to the SB/PCI bus directly. However, most devices have plenty of bandwidth on the PCI bus to do this. Only things like massive RAID arrays eat up all the PCI bandwidth. On the other hand, some Via chipsets had a badly broken PCI bus that would mostly stop transferring data when it was heavily loaded.

The other thing is that data can move between the SB and NB on modern chipsets at a fairly high rate. It used to be that the NB and SB were connected using the PCI bus, but most modern chipsets use higher-bandwidth interconnects, for example Via's V-Link on Via chipsets, and the Nforce2, which I believe actually uses a HyperTransport bus to connect its NB and SB.

This is where the advantage of integrating things like SATA, IDE, and Gigabit Ethernet controllers into the SB comes from. They are usually cheap because they offload much of the processing to the CPU, and they don't necessarily have to deal with the bandwidth restrictions of sitting on the PCI bus. I don't know how any current chipsets determine interconnect speed (i.e. I'm not sure if it's based on the FSB or locked).
 
What all uses the 33MHz PCI bus? If RAID uses this bus, then how can they have SATA HDs at 150MB/s when the PCI bus is only 33MHz?!
 
Tacoman667 said:
What all uses the 33MHz PCI bus? If RAID uses this bus, then how can they have SATA HDs at 150MB/s when the PCI bus is only 33MHz?!

That's totally unrelated; the 33MHz is the clock speed at which the PCI bus operates, not the rate at which it can transfer information.
 
Tacoman667 said:
What all uses the 33MHz PCI bus? If RAID uses this bus, then how can they have SATA HDs at 150MB/s when the PCI bus is only 33MHz?!

tweakerxp said:


That's totally unrelated; the 33MHz is the clock speed at which the PCI bus operates, not the rate at which it can transfer information.
The PCI bus does operate at 33MHz. It is also 32 bits wide. That gives a theoretical maximum bandwidth of 32/8 × 33.3 × 1,000,000 / 1,048,576 = 127.2 MB/s. The theoretical maximum of a SATA 1.0 bus is 150MB/s, so they're not that far apart. Furthermore, no drives can actually put out 150MB/s of data. My 7200 RPM ATA133 Maxtor with a 40GB platter only puts out about 40MB/s sustained. Raptors start getting up there, to the point where a RAID 0 array may begin to saturate the PCI bus. This is one of the reasons for 64-bit, higher-speed PCI-X slots on server boards that would end up with large RAID arrays. The desktop will have to wait for PCI Express for higher bandwidth.
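That calculation is easy to double-check: bus width in bytes times clock rate gives bytes per second, then dividing by 2^20 converts to MiB/s, as the post above does.

```python
# Re-checking the PCI bandwidth arithmetic from the post above:
# (width in bytes) x (clock in Hz), converted from bytes/s to MiB/s.

def pci_bandwidth_mib_s(clock_hz=100e6 / 3, width_bits=32):
    """Theoretical peak for a 32-bit, 33.33 MHz PCI bus."""
    return (width_bits / 8) * clock_hz / 2**20

bw = pci_bandwidth_mib_s()
print(round(bw, 1))  # ~127.2, comfortably below SATA 1.0's 150 MB/s ceiling
```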
 
So before PCI Express slots hit motherboards, making a RAID 5-10 array of more than 2 HDs striped would be pointless?
 
Not really. When I had two Maxtors in a RAID 0 array, they would put out about 70MB/s sustained transfer, so four would probably just be enough to saturate the bus. On the other hand, how often do you do sustained transfers? A RAID 5 array that wasn't too huge might be alright also. If you need that kind of hard drive performance, you're probably running some sort of server, and have a 64-bit PCI bus.

Also, what is a RAID 5-10 array? I know of RAID 5 and RAID 10 (and 0+1, for that matter).
 
I meant any RAID 5 through 10 configuration. I just recently heard of RAID 3, 4, and 5. 10 is just a huge striping mode in the array, right?
 