
The Newbie's Guide to Overclocking.

Dual channel or single channel mode on an nforce2 mb is not that crucial for overall performance. The difference is only a few % (say 2-3%) at most. Also, single channel may let the FSB go a bit higher due to a smaller chance of dual dimm mismatch and memory controller stress at high FSB, I think. On the other hand, the dual channel memory controller provides some performance advantage due to its intrinsic speculative caching capability. At this point, the slightly higher FSB from single channel offsets the performance advantage of dual channel, and the two are about a tie.

Not quite. The performance gained by dual channel even on nForce2 mobos is around 12-15%. That is to say, someone running a dual channel setup at 200 MHz could match the bandwidth provided by 230 MHz in single channel. I've seen people with slower front side buses than mine crush my bandwidth. My advice would be to go dual channel for sure. Even though it almost always limits your front side bus/memory bus overclock, the added bandwidth is well worth it. If this doesn't make your jaw nearly touch the floor, I don't know what will.

Overclocking is a process, not suddenly boosting the Vcore and hoping it will go to 2.5 GHz.
Hehe... not for all of us :D. But in all seriousness, I believe it's often quite possible, even desirable, to have a good idea of the logical limits and capabilities of all of your hardware, even before the FedEx truck comes by. It is imperative to have an understanding like this, and it can be greatly aided by browsing these very forums. For example, one should not expect to hit 2.7 GHz on a Volcano7+. On the other hand, one should realize that an Abit NF7-S can provide a far higher fsb than 170 MHz. It took me an incredibly short amount of time to reach 2.5 GHz with my setup, mainly due to knowing the maximum speed of my memory and looking at past experiences with my identical processor. Research, research, research!

Great work, Altec, and Hitechjb1. A guide like this was greatly needed, and both of you did excellent jobs in providing it.
 
Gautam said:

Not quite. The performance gained by dual channel even on nForce2 mobos is around 12-15%. That is to say, someone running a dual channel setup at 200 MHz could match the bandwidth provided by 230 MHz in single channel. I've seen people with slower front side buses than mine crush my bandwidth. My advice would be to go dual channel for sure. Even though it almost always limits your front side bus/memory bus overclock, the added bandwidth is well worth it. If this doesn't make your jaw nearly touch the floor, I don't know what will.
...

Thanks for pointing that out. I have heard that the dual channel efficiency of nforce2 may have improved recently, probably due to the new C1 stepping of the NB and/or the bios. This summary was written based on my understanding of nforce2 6-8 months ago when it first came out. Things may have changed. I have to look into this.

I have been seeing the memory bandwidth efficiency for nforce2 to be around 95% for both dual channel and single channel.

memory bandwidth efficiency for nforce2 is defined as

memory bandwidth efficiency = measured_bandwidth / (FSB x 2 x 8)

I just computed it for my nforce2 and also looked at a few examples from the link you gave. Regardless of single or dual channel, I still find the efficiency number to be between 94-97%. So how can you say that dual channel can be 12-15% better than single channel?

Can you give me some exact numbers so we can look into this matter more?


I'll update the writeup or correct it if needed. Comments and mistake finding are always welcome.
 
Actually, I'm slightly mistaken. Those results don't quite prove my point, so I'll concede that 12-15% is overshooting it quite a bit. I actually got that number by comparing my memory bandwidth with the results that [OC]This got in this thread. The image no longer shows, but he exceeded 3600 MB/s at 220 MHz in dual channel, whereas I barely cross 3200 MB/s. This is around a 12% increase. My efficiency is around 90%, his eerily above 100%. That xtremesystems thread is not an adequate representation, as many of those posts are from pre-C1 northbridge stepping times. Almost all of them are with Epoxes, which, I think, fall short of the Abit NF7 in memory bandwidth. I've seen a few other results from dual channel users along the same lines, although I apologize for being unable to give you exact numbers. Again, I've got nothing definite here, but there are people who get around 600 more 3dmarks than me (my score is around 14400) with identically clocked and configured systems. The only difference I can find is dual channel. I'm going to try to borrow a 256MB stick from a friend to see what kind of improvement it yields. Due to the discrepancies caused by different northbridge steppings, latencies, etc., this is probably the only way to find out. I believe the actual amount gained, with all of this taken into consideration, will turn out to be around 10%. 15% is overshooting it, but I think that 2-3% is a bit conservative, also.

94% in single channel is very, very nice, btw. I get only 90.666% myself, and that appears to be the norm. Even going by those xtremesystems results, the efficiency appears to increase by around 5-7% with dual channel, but I think that with an identical system the gains will be far higher than this.
 
I say sticky, due to the fact that there are a lot of noobs out there who could use this post to help them overclock.


Good job to the both of you.
 
Gautam said:
Actually, I'm slightly mistaken. Those results don't quite prove my point, so I'll concede that 12-15% is overshooting it quite a bit. I actually got that number by comparing my memory bandwidth with the results that [OC]This got in this thread. The image no longer shows, but he exceeded 3600 MB/s at 220 MHz in dual channel, whereas I barely cross 3200 MB/s. This is around a 12% increase. My efficiency is around 90%, his eerily above 100%. That xtremesystems thread is not an adequate representation, as many of those posts are from pre-C1 northbridge stepping times. Almost all of them are with Epoxes, which, I think, fall short of the Abit NF7 in memory bandwidth. I've seen a few other results from dual channel users along the same lines, although I apologize for being unable to give you exact numbers. Again, I've got nothing definite here, but there are people who get around 600 more 3dmarks than me (my score is around 14400) with identically clocked and configured systems. The only difference I can find is dual channel. I'm going to try to borrow a 256MB stick from a friend to see what kind of improvement it yields. Due to the discrepancies caused by different northbridge steppings, latencies, etc., this is probably the only way to find out. I believe the actual amount gained, with all of this taken into consideration, will turn out to be around 10%. 15% is overshooting it, but I think that 2-3% is a bit conservative, also.

94% in single channel is very, very nice, btw. I get only 90.666% myself, and that appears to be the norm. Even going by those xtremesystems results, the efficiency appears to increase by around 5-7% with dual channel, but I think that with an identical system the gains will be far higher than this.

That number of 3600 MB/s at 220 MHz is probably wrong, since at 220 MHz the max bandwidth is 220 x 2 x 8 = 3520 MB/s, which is less than 3600 MB/s!!!

E.g. my FSB = 209 MHz, bandwidth = 3144 MB/s (single channel)
so efficiency = 3144 / (209 * 2 * 8) = 94%
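For anyone who wants to check their own numbers, here is a quick Python sketch of this calculation (the function names are just mine, for illustration):

Code:
# Memory bandwidth efficiency as defined earlier in the thread:
# efficiency = measured_bandwidth / (FSB x 2 x 8)
# FSB in MHz, measured bandwidth in MB/s (e.g. a Sandra reading).

def max_bandwidth(fsb_mhz):
    # x2 for DDR (both clock edges), x8 for the 64-bit (8-byte) memory bus
    return fsb_mhz * 2 * 8

def efficiency(measured_mb_s, fsb_mhz):
    return measured_mb_s / max_bandwidth(fsb_mhz)

print(max_bandwidth(220))                     # 3520 MB/s ceiling at 220 MHz
print(round(efficiency(3144, 209) * 100, 1))  # ~94.0 for the example above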

I have been using 95% efficiency for nforce2. Dual channel can be 3-5% higher for the same FSB due to the intrinsic caching advantage described in the writeup. But since single channel can usually be clocked a few % higher in FSB, the net result would be within 2-3% either way.

PC makers should definitely go with dual channel, since they don't tune or overclock the FSB of each box individually, so dual channel can give them a 3%, maybe 5%, memory bandwidth advantage.

But for us, who play with the FSB, the difference between the two would be even smaller (2-3%), due to the fact that single channel can be clocked higher.
 
I just added some info on multipliers and unlocking chips, but I could very well be wrong since I'm usually not too good with all of that stuff. Will someone please look it over and let me know if it is right? Thanks a lot guys. ;)
 
Well, even though I don't quite consider myself the noob that I was a few months ago (due to my vigorous reading of the forums), this is still a great read for me. I should be getting my final part (NF7-S) next week, and then the fun begins. Although I do want to make sure it is put together right before I mess with any overclocking. Again, thanks for the refresher, guys.
 
That number of 3600 MB/s at 220 MHz is probably wrong, since at 220 MHz the max bandwidth is 220 x 2 x 8 = 3520 MB/s, which is less than 3600 MB/s!!!

I do realize this, and this wouldn't be the only case where it's happened. One of my friends runs out of sync on a KT400 board, memory at 200 MHz, fsb at 180, and gets a reported bandwidth of over 2900. With dual channel, I have seen others go past their theoretical maximum in Sandra, but I can't find the examples again. It must be some kind of a glitch. Even considering that this data is false, which it most likely is, running in dual channel usually only costs about 2% or so of one's memory bus speed, sometimes even less.

I myself have never gotten better than 91% synchronously, yet I've seen plenty of dual channel users get close to or slightly above 97%. With proper northbridge cooling and voltage, I think dual channel can be used quite advantageously in the hands of someone determined and experienced. I'd expect close to or above a 7% increase in efficiency, but I don't want to jump to any conclusions until I've tried it out myself.
 
All looks absolutely great. Maybe add some stuff about fsb/dram ratios and when people should use them, and perhaps a basic skimming of memory timings other than CAS latency. Since you've already covered so much, might as well take the rest down! :)
 
I would do that, but I am not too much of a memory expert, so it will take me some time to do the research for the write-up.
 
Regarding memory, I have written a number of posts in the SDRAM and DDR Specifications thread in the memory section of this forum about the following. Maybe we can pick and choose some, copy some here, or rewrite, or ...:

- RAS, CAS Timing and BIOS Memory Setting
- Single Channel vs Dual Channel DDR Memory Module
- Overclocking memory module(s) vs dual channel
- 256MB vs 512 MB vs 1GB, 1 module vs 2 modules
- Dual Channel Memory (1 vs 2 sticks, sync vs async)
- Dual Channel, Nforce2, P4 and AMD FSB
- Different RAS/CAS timing on memory latency (estimate)
- What is cycle time and frequency of memory module
- Frequency, clock, period of synchronous operations
- Analogy on Bus Speed, Bandwidth and Latency
- Analogy for FSB, CAS2, CAS3 latency and bandwidth for DRAM memory

There is a lot of detail and analysis. It may be a little bit lengthy (and boring to read) for beginners. When I wrote them, I tried to put down all the things that I thought were important, as detailed and concise as possible.

At that time, I did not have access to a picture server, and the lack of pics makes them harder to read. We NEED more pictures.


03/07/19:
It seems that the sticky is no longer there and the links to the posts no longer work. I found copies of them and repost them here.


Memory

RAS, CAS Timing and BIOS Memory Setting

Single Channel vs Dual Channel DDR Memory

Overclocking memory module(s) vs dual channel

256MB vs 512 MB vs 1GB, 1 module vs 2 modules

Dual Channel Memory (1 vs 2 sticks, sync vs async)

Dual Channel, Nforce2, P4 and AMD FSB

Some Benchmarks of CAS Latency on Overall Performance

Different RAS/CAS timing on memory latency (estimate)

What is cycle time and frequency

Frequency, clock, period of synchronous operations, latency

Latency

Analogy on Bus Speed, Bandwidth and Latency

Analogy for FSB, CAS2, CAS3 latency and bandwidth for DRAM memory


For CPU, FSB, memory, ...

Summary for overclocking CPU and FSB (page 3)
 
I, for one, would appreciate an entry-level discussion on RAM. I don't have quite the same understanding of memory that I have gained of CPU functioning. For me it just helps to keep reading these guides/discussions to understand it better. Even if I have read the exact same info already, sometimes it is phrased differently/better, or there is just a part that I had missed before.

As soon as I get my system up, I figure I can start to gain some knowledge through experience but until then these types of threads are great for me.
 
These RAS/CAS magic numbers, such as 6-2-2-2 1T or 6-3-3-2 1T, refer to:

1st number: Tras or tRAS (Active to Precharge Delay)
2nd number: Trcd or tRCD (RAS to CAS delay)
3rd number: Trp or tRP (RAS Precharge Delay or Precharge to Active; different bioses name it differently)
4th number: CAS (CAS Latency)
5th number: 1T: Command rate

RAS, CAS Timing and BIOS Memory Setting

1. Active (to) Precharge Delay (aka Tras, tRAS) - usually 5 or 6 or 7, smaller the better, (Tras >= Trcd + CAS)
2. RAS to CAS Delay (aka Trcd, tRCD) - 2 is good, 3 is OK
3. RAS Precharge Delay (aka Trp, tRP, Precharge to Active) - 2 is good, 3 is OK
4. CAS Latency (aka CAS) - use 2 whenever possible
5. Cmd Rate (some bioses do not have this; it is set automatically) - 1T is better than 2T

Memory cells in a DRAM chip are organized as rows and columns. To access data, one first has to access a row (which contains a block/page of data), then access columns (a subset of the row data). Then the data is output, followed by a precharge (which restores the DRAM cell data and returns it to the ready state).

Using a commonly available (as of today) 256 MB module as an example: a 256 MB module consists of 8 DRAM chips (non-ECC). Each column access will output 8 bits of data from a DRAM chip. The 8 chips form a bank which gives 64 bits of I/O, which is the data path of a memory module. Some modules have two banks (16 chips), e.g. 512 MB, but still a total of 64 bits of I/O.

DRAM chip cycles look like this:
Active / Precharge / Active / Precharge / Active / Precharge ...

Two possibilities:
1. Single column access:
Active = row access / column access
2. Multiple column access to burst more data of a page to reduce row access overhead:
Active = row access / column access / column access + ...
Typically, data are burst in groups of 4 or 8.

As a result:
1. DRAM cycle = Trcd + CAS + Trp
2. DRAM cycle = Trcd + CAS + CAS + ... + CAS + Trp
where Trp is precharge time.

Latency:
1. Trcd + CAS = number of cycles to get 1st data out
2. Trcd + CAS + CAS + CAS + CAS = number of cycles to get all data out for a burst of 4 (if data is in the same page)
...
etc, etc

Difference between CAS2 and CAS3 memory latency

For example, typically, Trcd = 3 or 2. Make it 2 as an example.
1. For CAS2,
latency = 2 + 2 = 4 cycles for single column access
latency = 2 + 2 + 2 + 2 + 2 = 10 cycles for bursting of 4 column accesses
latency = 2 + 2 x 8 = 18 cycles for bursting of 8 column accesses
2. For CAS3,
latency = 2 + 3 = 5 cycles for single column access
latency = 2 + 3 + 3 + 3 + 3 = 14 cycles for bursting of 4 column accesses
latency = 2 + 3 x 8 = 26 cycles for bursting of 8 column accesses

So one can see that the latency (from when the read and address command is sent to the DRAM to getting the data back) is quite different between CAS2 and CAS3. Between CAS2 and CAS3, it takes 25% more time to get the first data out (5/4 = 1.25). For bursting a large block or page of data (assuming a typical burst of 8), the latency ratio is as high as 44% (26/18 = 1.44). This means it takes 44% more time to complete receiving the data for each access of 8 bits of data on each data line.
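For those who like to plug in their own timings, here is a small Python sketch of the same cycle counting (my own rough model, ignoring Trp and other overheads, with Trcd = 2 as in the example):

Code:
# Rough cycle counting for the example above (Trp and other overheads ignored).

def first_data_latency(trcd, cas):
    # cycles until the first data word appears
    return trcd + cas

def burst_latency(trcd, cas, burst_len):
    # cycles until all words of a same-page burst have been output
    return trcd + cas * burst_len

for cas in (2, 3):
    print("CAS", cas,
          first_data_latency(2, cas),   # 4 vs 5 cycles
          burst_latency(2, cas, 8))     # 18 vs 26 cycles

# Ratios: 5/4 = 1.25 (25% more to first data), 26/18 ~ 1.44 (44% more for a burst of 8).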

Don't confuse latency with bandwidth, which depends on the memory bus speed and can be the same for CAS2 and CAS3. That means the data still moves just as fast in both the CAS2 and CAS3 cases (that depends only on the speed of the FSB and memory bus), but with CAS3 the CPU receives the data 25-44% later each time it accesses the memory.

Luckily, the CPU does not access main memory all the time, statistically only 5-10% of the time; the rest of the accesses go to the on-chip L1 and L2 caches, which have much shorter latency and faster cycle time. As a result, the actual performance hit between CAS2 and CAS3 for most programs, including benchmarks, is 1-2%.
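A back-of-the-envelope sketch of that argument (my own rough estimate, not a measurement; the fractions and penalties are the figures quoted above):

Code:
# Only the fraction of time spent waiting on DRAM sees the extra CAS latency;
# L1/L2 cache hits are unaffected. Treat the output as an estimate only.

def overall_hit(dram_time_fraction, latency_penalty):
    return dram_time_fraction * latency_penalty

print(overall_hit(0.05, 0.25) * 100)   # ~1.25% if 5% of time goes to DRAM (first-data penalty)
print(overall_hit(0.10, 0.44) * 100)   # ~4.4% pessimistic upper bound (burst-of-8 penalty)

The measured 1-2% sits near the low end of that range, presumably because not every memory access pays the full burst penalty.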


Importance of these ras/cas magic numbers

CAS Latency is most important for memory bandwidth, i.e. for bursting a large block/page of data as in 3D, video, and gaming applications. Set it to 2 whenever possible. Most memory modules I have come across have no problem taking CAS 2.

Second in importance is the RAS to CAS Delay (Trcd). It is the number of cycles between the row and column access. 2 is good, but 3 is OK.

The Precharge Delay (Trp) is the precharge time after an active access. 2 is good, but 3 is OK.

Active to Precharge Delay (Tras) is the minimum time for an active access (to perform a single row and column access). It is the least important for memory performance. Usually 5 or 6 or 7 is fine. Tras >= Trcd + CAS.
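A trivial way to sanity-check a timing set against that Tras >= Trcd + CAS rule of thumb (just the arithmetic, nothing BIOS-specific; the 6-2-2-2 set is the example quoted above):

Code:
# Timing set written as Tras-Trcd-Trp-CAS, e.g. 6-2-2-2.

def tras_ok(tras, trcd, trp, cas):
    # Tras must cover at least the RAS-to-CAS delay plus the CAS latency
    # (Trp is listed only for completeness; it is not part of this rule).
    return tras >= trcd + cas

print(tras_ok(6, 2, 2, 2))   # True:  6 >= 2 + 2
print(tras_ok(5, 3, 3, 3))   # False: 5 <  3 + 3, so Tras should be raised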
 
Single Channel vs Dual Channel DDR Memory

In a motherboard, the CPU is connected to the memory modules (one or two or more) via a memory controller (inside the north bridge).

Single channel memory modules and memory bus

CPU <---- fsb ----> Memory Controller <---- memory bus ----> Memory Module (dimm1/dimm2/dimm3)

maxFsbBandwidth = fsb x 2 x 8 MB/s
maxMemoryBandwidth = memoryBus x 2 x 8 MB/s
x 2 because data is pumped at twice the bus frequency (using both rising and falling clock edges) in DDR (double data rate) form,
x 8 because the memory data bus is 64 bit or 8 Byte wide.

E.g. fsb = 200 MHz,
memoryBus = 200 MHz,
maxMemoryBandwidth = 200 x 2 x 8 = 3200 MB/s
(This is why fsb 200 MHz, DDR 400, and PC3200 refer to the same thing)

In a motherboard with dual channel, two memory dimms can be connected to the memory controller(s) in parallel to provide, in theory, twice the bandwidth to the memory controller(s).

Dual channel with two memory modules, memory buses in parallel

........................... Memory Controller <---- memory bus ----> Memory Module (dimm1/dimm2)
CPU <---- fsb ---->
........................... Memory Controller <---- memory bus ----> Memory Module (dimm3)

singleChannelMemoryBandwidth = memoryBus x 2 x 8 MB/s
maxMemoryBandwidth = 2 x memoryBus x 2 x 8 MB/s
x 2 because data is pumped at twice the bus frequency (using both rising and falling clock edges) in DDR (double data rate) form,
x 8 because the memory data bus is 64 bit or 8 Byte wide.

E.g. fsb = 200 MHz,
memoryBus = 200 MHz,
maxSingleChannelMemoryBandwidth = 200 x 2 x 8 = 3200 MB/s
(This is why fsb 200 MHz, DDR 400, and PC3200 refer to the same thing)
maxDualChannelMemoryBandwidth = 2 x 3200 MB/s = 6400 MB/s
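The same arithmetic as a small Python sketch (the function names are mine), so you can plug in your own memory clock:

Code:
# Theoretical maximums only; real-world efficiency is discussed below.

def single_channel_max(mem_mhz):
    # x2 for DDR, x8 for the 64-bit (8-byte) module data path
    return mem_mhz * 2 * 8

def dual_channel_max(mem_mhz):
    # two 64-bit channels in parallel
    return 2 * single_channel_max(mem_mhz)

print(single_channel_max(200))   # 3200 MB/s (DDR 400 / PC3200)
print(dual_channel_max(200))     # 6400 MB/s seen by the memory controller(s)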

So indeed, in dual channel, the memory controller(s) see twice the memory bandwidth if the two memory modules are put into the correct dimms (e.g. dimm1+dimm3, or dimm2+dimm3 for the A7N8X). So far so good.

For the current AMD nforce2 MB, the fsb data rate with dual channel is still the SAME as the single channel memory double data rate (DDR). Even though the memory modules are in parallel to give twice the bandwidth to the memory controller(s), the fsb CANNOT take advantage of getting twice the data rate from the memory controller(s)!!!!

Things are actually more complicated than that for the nforce2 memory controller: there is a speculative prefetch data cache in the north bridge to make use of the dual channel bandwidth, so that some data is cached in the NB and can get to the CPU faster than fetching it from memory. But the net result is that until AMD provides a quad-rate (or double-speed) fsb, there is little or no advantage for the CPU in the dual channel memory bandwidth. There are some MBs implementing integrated video (a different version of the NB) which has a 2x64 bit datapath and can take advantage of the dual channel memory bus bandwidth.

For current nforce2 MB, maxMemoryBandwidth = fsb x 2 x 8 MB/s
At fsb = 200, maxMemoryBandwidth = 200 x 2 x 8 = 3200 MB/s.
If one can overclock hard (for nforce2) to 220 fsb.
At fsb = 220, maxMemoryBandwidth = 220 x 2 x 8 = 3520 MB/s.

For Intel, the fsb is quad data rate (QDR), i.e. the fsb data rate is four times as fast as the memory bus speed.
E.g. fsb = 166, memory bus = 166 (DDR 333)
maxMemoryBandwidth (quad data rate) = 166 x 4 x 8 = 5312 MB/s
In general, the dual channel memory controller efficiency is not 100%, partly due to other bus traffic, so one cannot get twice the single channel memory bandwidth; the effective bandwidth is around 70-80% of that, i.e. ~4000 MB/s.

For P4 dual channel, the effective bandwidth running fsb:memory=1:1 (SYNC mode)= 0.75 x 4 x 8 FSB = 24 FSB MB/s.
E.g. FSB = 200 MHz, effective bandwidth = 4800 MB/s.
E.g. running fsb:memory=5:4, with FSB=250, memory=200, effective bandwidth ~ 24 x 225 = 5400 MB/s.

For the current nforce2 MB, dual channel offers little or no advantage until the AMD fsb data rate is twice as fast as the memory double data rate (DDR). Currently, the fsb data rate is only the same as the memory's data rate.

For the Intel P4 chipset, since the fsb quad data rate (QDR) is four times as fast as the memory clock (or twice the DDR rate of the memory), minus some overhead in the memory controller, the effective bandwidth is about 70-80% of the dual channel bandwidth. It is clear that 2 modules in dual channel should be used whenever possible.

Summary:

Dual channel or single channel mode on an nforce2 mb is not that crucial for overall performance. The difference is a few % (say 2-3%, maybe 5% with the new NB stepping?). Also, single channel may let the FSB go a bit higher due to a smaller chance of dual dimm mismatch and memory controller stress at high FSB, I think. On the other hand, the dual channel memory controller provides some performance advantage due to its intrinsic speculative caching capability. At this point, the slightly higher FSB from single channel offsets the performance advantage of dual channel, and the two are within a few % of each other, I think, for an AMD mb. The exception is nforce2 mbs with integrated video, which can benefit from twice the nforce2 memory bandwidth, since the bus between the video and the memory controller is 2x64 bit.

The max bandwidth between the memory controller and the CPU would be 2 x 8 x FSB = 16 FSB MB/s (x2 because of DDR, i.e. data transferred at both rising and falling edges of the FSB clock; x8 because of the 8-byte or 64-bit bus). The effective bandwidth, taking into account memory controller efficiency (~95%), would be around 15.2 FSB. E.g. FSB = 200 MHz, effective bandwidth ~ 3040 MB/s.

Dual channel makes a big difference for P4 dual channel mbs though, due to the quad pumped data of the P4 (or QDR). The max bandwidth for P4 dual channel is 4 x 8 x FSB = 32 FSB MB/s. The effective bandwidth, taking into account memory controller overhead (~75% efficiency), would be around 24 FSB MB/s.

For single channel, max bandwidth = 16 FSB, effective bandwidth ~ 15.2 FSB. Hence the improvement of effective bandwidth of dual channel = (24 - 15.2)/15.2 = 58% for P4 dual channel system over single channel.

E.g. FSB = 200 MHz, effective bandwidth ~ 4800 MB/s, which is around 60% more than that of an nforce2 mb running the same 200 MHz FSB.
E.g. running fsb:memory=5:4, with FSB=250, memory=200, effective bandwidth ~ 24 x 225 = 5400 MB/s.
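Putting the two rules of thumb side by side in a small Python sketch (my numbers from above, with the ~95% and ~75% efficiency figures assumed):

Code:
# Effective bandwidth rules of thumb from the summary above.

def nforce2_effective(fsb_mhz, eff=0.95):
    # DDR FSB: 2 transfers per clock x 8 bytes, ~95% efficiency
    return eff * 2 * 8 * fsb_mhz        # ~15.2 x FSB

def p4_dual_effective(fsb_mhz, eff=0.75):
    # QDR FSB fed by dual channel: 4 transfers per clock x 8 bytes, ~75% efficiency
    return eff * 4 * 8 * fsb_mhz        # ~24 x FSB

fsb = 200
nf2, p4 = nforce2_effective(fsb), p4_dual_effective(fsb)
print(nf2, p4)                          # ~3040 vs ~4800 MB/s
print(round((p4 - nf2) / nf2 * 100))    # ~58% advantage for the P4 setup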
 
Dual Channel, Nforce2, P4 and AMD FSB

The P4 FSB is quad pumped (or QDR), where data is transferred at a rate four times the FSB frequency, and there is no memory fast enough to match the FSB, hence running memory in dual channel to increase bandwidth. But with AMD NF2 and its dual pumped (or DDR) FSB, memory can match the FSB speed, and if you want to squeeze out the last % of bandwidth you have to run memory in SYNC with the FSB, while ASYNC dual channel will give the best price/performance.

I found that in terms of memory controller efficiency for NF2 and P4, both max out around 75-80%, not much different.

It seems price/performance effective to use PC2100-2700 memory modules in dual channel so that the combined bandwidth is more in line with an AMD dual pumped FSB running 50-100% faster. This is like an artificial emulation of a scaled-down P4 quad pumped FSB.

Actually, things are more complicated than just that:

1. In practice and in absolute terms, most current NF2 MBs can run up to about 210-220 MHz (even with the vdd mod on the NB). I think this is due to the memory controller(s) in the NB, even though the FSB by itself can physically go much higher, to 230-240 MHz, without heavy memory traffic (I did some tests on this using ASYNC).

So the max FSB bandwidth = 2 * 8 * 220 = 3520 MB/s

2. Further, there is an overhead in the dual channel memory controller, which I estimated at about 75% efficiency. This figure seems to be about right for dual channel on both the P4 and nforce2.

For P4 running 166 MHz, max bandwidth = 4 * 8 * 166 = 5312 MB/s
Actual memory bandwidth measured is around 4000 MB/s (hence ~75% efficiency).
For P4 running 200 MHz, max bandwidth = 4 * 8 * 200 = 6400 MB/s
Actual memory bandwidth measured is around 4800 MB/s (hence ~75% efficiency).

For AMD nforce2, since the FSB is only 2x, the efficiency is only 50% when the memory is running 100% SYNC in dual channel (half of the combined memory bandwidth is wasted). And the efficiency is about 75% when the memory is running much slower than the FSB (at 50-75% of it).

So if FSB = 220 MHz, memory speed = 133 MHz (PC2100),
the max memory bandwidth = 2 * 2 * 133 * 8 = 4256 MB/s
which ideally is good for filling up the FSB, ... but taking into account overhead and efficiency,
the effective bandwidth = 4256 * 0.75 = 3192 MB/s

Actually this is pretty good for price/performance, given the memory cost is 50% lower than running 100% SYNC with PC3500 memory, and it gets to within 10-15% of the max bandwidth at 220 MHz.

But for overclockers who always want the max, this is a hard sell and requires more work to optimize the overall system, since the AMD FSB is not quad pumped and one can always find fast memory, say PC3500/3700, to match a 220 MHz FSB. The effective bandwidth will then be about 3300 MB/s (subtract ~200 of overhead from 3500). In general, for an AMD nforce2 system, I found the efficiency for dual channel or single channel running in SYNC mode to be around 93-95%. This is a little bit better in terms of absolute bandwidth performance than the dual channel 3192 MB/s using the slower PC2100 (133 MHz) memory.
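Here is that price/performance comparison as a quick Python sketch (my own rough model, assuming the ~75% dual channel ASYNC figure and ~95% SYNC figure from above):

Code:
# nforce2 at 220 MHz FSB: cheap slower memory in dual channel ASYNC
# vs faster memory running 1:1 (SYNC).

def dual_channel_async(mem_mhz, eff=0.75):
    # two DDR channels, ~75% controller efficiency when memory lags the FSB
    return eff * 2 * (mem_mhz * 2 * 8)

def sync_effective(fsb_mhz, eff=0.95):
    # single or dual channel running in SYNC with the FSB
    return eff * fsb_mhz * 2 * 8

fsb = 220
print(dual_channel_async(133))   # ~3192 MB/s with PC2100 sticks
print(sync_effective(fsb))       # ~3344 MB/s with PC3500-class memory
print(fsb * 2 * 8)               # 3520 MB/s theoretical ceiling at 220 MHz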

I think this is why most overclockers will just keep pushing the FSB with faster memory running in SYNC.

3. IMHO, running slower memory ASYNC at 50-66-75% of the FSB is much more price/performance effective than 100% SYNC: 50% of the memory cost gets you to within 10-15% of the max bandwidth. This approach has been used with P4 dual channel, since its FSB is QDR (quad pumped data) and there is no memory fast enough to match that speed, so dual channel is the only way to fill up the system bandwidth.

For an AMD DDR system, people who want absolute performance will just use the fastest memory available to match the FSB and run it in SYNC to get the last 10% of memory bandwidth.

In summary, for AMD nforce2, the effective bandwidth running in SYNC mode = 0.95 x 2 x 8 x FSB = 15.2 FSB MB/s. E.g. FSB = 220 MHz, effective bandwidth = 3344 MB/s.

For P4 dual channel, the effective bandwidth running fsb:memory=1:1 (SYNC mode)= 0.75 x 4 x 8 FSB = 24 FSB MB/s. E.g. FSB = 200 MHz, effective bandwidth = 4800 MB/s.
E.g. running fsb:memory=5:4, with FSB=250, memory=200, effective bandwidth ~ 24 x 225 = 5400 MB/s.
 