Memory Basics
DRAM Memory Technologies
DRAM is available in several different technology types. At their core, each technology is quite similar to the one that it replaces or the one used on a parallel platform. The differences between the various DRAM technologies are primarily a result of how the DRAM inside the module is connected, configured and/or addressed, in addition to any special enhancements added to the technology.
There are three well-known technologies:
Synchronous DRAM (SDRAM)
An older type of memory that quickly replaced earlier types and was able to synchronize with the speed of the system clock. SDRAM started out running at 66 MHz, faster than previous technologies, and was able to scale to 133 MHz (PC133) officially, and unofficially up to 180 MHz. As processors grew in speed and bandwidth capability, new generations of memory such as DDR and RDRAM were required to deliver proper performance.
Double Data Rate Synchronous DRAM (DDR SDRAM)
DDR SDRAM is a lot like regular SDRAM (Single Data Rate), but its main difference is its ability to effectively double the data rate without increasing the actual clock frequency, making it substantially faster than regular SDRAM. This is achieved by transferring data not only at the rising edge of the clock cycle but also at the falling edge. A clock cycle can be represented as a square wave, with the rising edge defined as the transition from '0' to '1', and the falling edge as '1' to '0'. In SDRAM, only the rising edge of the wave is used, but DDR SDRAM references both, effectively doubling the rate of data transmission. For example, with DDR SDRAM, a 100 or 133 MHz memory bus clock rate yields an effective data rate of 200 MHz or 266 MHz, respectively. DDR modules utilize 184-pin DIMM (Dual Inline Memory Module) packaging which, like SDRAM, allows for a 64-bit data path, allowing faster memory access with single modules than previous technologies. Although SDRAM and DDR share the same basic design, DDR is not backward compatible with older SDRAM motherboards and vice versa.
It is important to understand that while DDR doubles the available bandwidth, it generally does not improve the latency of the memory as compared to an otherwise equivalent SDRAM design. In fact, the latency is slightly degraded, as there is no free lunch in the world of electronics or mechanics. So while the performance advantage offered by DDR is substantial, it does not double memory performance, and for some latency-dependent tasks does not improve application performance at all. Most applications will benefit significantly, though.
Rambus DRAM (RDRAM)
Developed by Rambus, Inc., RDRAM, or Rambus DRAM, was a totally new DRAM technology aimed at processors that needed high bandwidth. Rambus, Inc. entered a development and license agreement with Intel, which led to Intel's PC chipsets supporting RDRAM. RDRAM comes in PC600, PC700, PC800 and PC1066 speeds. Specific information on this memory technology can be found at the Rambus website.
Unfortunately for Rambus, dual channel DDR memory solutions have proved to be quite efficient at delivering about the same levels of performance as RDRAM at a much lower cost. Intel eventually dropped RDRAM support in their new products and chose to follow the DDR dance, at which point RDRAM almost completely fell off the map. Rambus, SiS, Asus and Samsung have now teamed up and are planning a new RDRAM solution (the SiS 659 chipset) providing 9.6 GB/s of bandwidth for the Pentium 4. It will be an uphill battle to get RDRAM back in the mainstream market without Intel's support.
DDR Memory Speeds
The speed of DDR is usually expressed in terms of its "effective data rate", which is twice its actual clock speed. PC3200 memory, or DDR400, or 400 MHz DDR, is not running at 400 MHz; it is running at 200 MHz. The fact that it accomplishes two data transfers per clock cycle gives it nearly the same bandwidth as SDRAM running at 400 MHz, but DDR400 is indeed still running at 200 MHz.
Actual clock speed/effective transfer rate
100/200 MHz => DDR200 or PC1600
133/266 MHz => DDR266 or PC2100
166/333 MHz => DDR333 or PC2700
185/370 MHz => DDR370 or PC3000
200/400 MHz => DDR400 or PC3200
217/433 MHz => DDR433 or PC3500
233/466 MHz => DDR466 or PC3700
250/500 MHz => DDR500 or PC4000
267/533 MHz => DDR533 or PC4200
283/566 MHz => DDR566 or PC4500
So how do they come up with those names? Well, the industry specifications for memory operation, features and packaging are finalized by a standardization body called JEDEC. The acronym once stood for Joint Electron Device Engineering Council, but the organization is now simply called the JEDEC Solid State Technology Association.
The naming convention specified by JEDEC is as follows:
- Memory chips are referred to by their native speed. For example, 333 MHz DDR SDRAM memory chips are called DDR333 chips, and 400 MHz DDR SDRAM memory chips are called DDR400.
- DDR modules are also referred to by their peak bandwidth, which is the maximum amount of data that can be delivered per second. For example, a 400 MHz DDR DIMM is called a PC3200 DIMM. To illustrate this on a 400 MHz DDR module: each module is 64 bits wide, or 8 bytes wide (1 byte = 8 bits). To get the transfer rate, multiply the width of the module (8 bytes) by the effective rate of the memory module (in MHz): (8 bytes) x (400 MHz) = 3,200 MB/s, or 3.2 GB/s, hence the name PC3200.
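To make that arithmetic concrete, here is a minimal Python sketch. The 8-byte module width and the doubling of the clock come from the convention described above; the function name and the rounding note are only illustrative:

# Peak bandwidth of a DDR module: 64-bit (8-byte) width x effective transfer rate,
# where the effective rate is twice the actual clock (double data rate).
def module_bandwidth_mb_s(actual_clock_mhz: float) -> float:
    effective_rate = actual_clock_mhz * 2    # "MHz", i.e. million transfers per second
    return effective_rate * 8                # 8 bytes per transfer -> MB/s

print(module_bandwidth_mb_s(200))   # 3200.0 -> DDR400 modules are sold as PC3200
print(module_bandwidth_mb_s(166))   # 2656.0 -> rounded to PC2700 in the market name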
To date, the JEDEC consortium has yet to finalize specifications for PC3500 and higher modules. PC2400 was a very short-lived label applied to overclocked PC2100 memory. PC3000 was not and will not ever be an official JEDEC standard.
Processors and Bandwidth
The front side bus (FSB) is basically the main highway or channel through which information flows between the processor and the other important functions on the motherboard. The faster and wider the FSB, the more information can flow over the channel, much as a higher speed limit or wider lanes can improve the movement of cars on a highway. Conversely, a low speed limit or narrow lanes will retard the movement of cars and cause a traffic bottleneck. Intel has been able to reduce the FSB bottleneck by accomplishing four data transfers per clock cycle. This is known as quad-pumping, and has resulted in an effective FSB frequency of 800 MHz, with an underlying 200 MHz clock. AMD Athlon XPs, on the other hand, must be content with a bus that uses a different technology, one that utilizes both the rising and falling sides of a signal. This is in essence the same double data rate technology used by memory of the same name (DDR), and results in a doubling of the FSB clock frequency. That is, a 200 MHz clock results in an effective 400 MHz FSB.
Processors have an FSB data width. This data width is much like the "lanes on a highway" that go in and out of the processor. The processor uses this highway to transfer data mainly between itself and the rest of the system. When the first 8088 processor was released, it had a data bus width of 8 bits and was able to access one character at a time (8 bits = 1 character/byte) every time memory was read or written. The size in bits thus determines how many characters can be transferred at any one time. An 8-bit data bus transfers one character at a time, a 16-bit data bus transfers 2 characters at a time and a 32-bit data bus transfers 4 characters at a time. Modern processors, like the Athlon XP and Pentium 4, have a 64-bit wide data bus enabling them to transfer 8 characters at a time. Although these processors have 64-bit data bus widths, their internal registers are only 32 bits wide and they are only capable of processing 32-bit commands and instructions, while the new AMD64 series of processors is capable of processing both 32-bit and 64-bit commands and instructions.
When talking memory, bandwidth refers to how fast data is transferred once it starts and is often expressed in quantities of data per unit time. The peak bandwidth that may be transmitted by an Athlon XP or a Pentium 4 is the product of the width of the FSB and the frequency it runs at. To illustrate:
Athlon XP "Barton" 3200+ -- 400 FSB
64 (bits) * 400,000,000 (Hz) = 25,600,000,000 bits/sec
(25,600,000,000 / 8) / (1000 * 1000) = 3200 MB/sec
Intel Pentium 4 "C" 3.2 GHz -- 800 FSB
64 (bits) * 800,000,000 (Hz) = 51,200,000,000 bits/sec
(51,200,000,000 / 8) / (1000 * 1000) = 6400 MB/sec
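The same arithmetic can be written once for either bus. This is only a Python sketch restating the calculations above; the pump factor (2 for the double-pumped Athlon XP bus, 4 for the quad-pumped Pentium 4 bus) comes from the FSB discussion earlier:

# Peak FSB bandwidth = base clock x transfers per clock x bus width (bytes).
def fsb_peak_bandwidth_mb_s(base_clock_mhz: int, transfers_per_clock: int,
                            bus_width_bits: int = 64) -> float:
    bytes_per_transfer = bus_width_bits / 8
    return base_clock_mhz * transfers_per_clock * bytes_per_transfer

print(fsb_peak_bandwidth_mb_s(200, 2))   # 3200.0 MB/s -- Athlon XP "Barton", 400 FSB
print(fsb_peak_bandwidth_mb_s(200, 4))   # 6400.0 MB/s -- Pentium 4 "C", 800 FSB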
These are the bus' theoretical peak bandwidths. There's a difference between peak bus bandwidth and effective memory bandwidth. Where peak bandwidth is just the product of the bus width and bus frequency, effective bandwidth takes into consideration other factors such as addressing and delays that are necessary to perform a memory read or write. The memory could very well be capable of putting out 8 bytes on every single clock pulse for an indefinitely long time, and the CPU could likewise be capable of consuming data at this rate indefinitely. The problem is that there are turnaround times (or delays) between when the processor places a request for data on the FSB, when the requested data is produced by the RAM, and when this requested data finally arrives for use by the CPU. Luckily, the bandwidth-killing effects of these delays are reduced through various methods; the most important being reducing the number of requests the CPU must issue.
DDR Dual Channel
Most of today's mainstream chipsets are using some form of dual channel to supply processors with bandwidth. The nForce and nForce2 are, at this time, the only two chipsets to supply dual-channel goodness for the Athlon XP. The original nForce was not on the same performance and stability level as the competing VIA chipset, but the new and improved dual-channel DDR400 nForce2 has been a smash success -- in fact, it is today's de facto choice for performance-minded / overclocker AMD desktop buyers. VIA is now about to release a Dual Channel chipset for the Athlon XP/Duron family called the KT880.
Take note that the memory isn't dual channel, the platform is. In fact there is no such thing as dual channel memory. Rather, it is most often a memory interface composed of two (or more) normal memory modules coordinated by the chipset on the motherboard, or in the case of the AMD64 processors, coordinated by the integrated memory controller. But for the sake of simplicity, we refer to DDR dual channel architecture as dual channel memory.
The nForce2 platform has two 64-bit memory controllers (which are independent of each other) instead of just a single controller like other chipsets. These two controllers are able to access "two channels" of memory simultaneously. The two channels, together, handle memory operations more efficiently than one module by utilizing the bandwidth of two (or more) modules combined. By combining DDR400 (PC3200) with dual memory controllers, the nForce2 can offer up to 6.4 GB/sec of bandwidth in theory.
However, this extra bandwidth produced by dual channel cannot be fully utilized by the Athlon XP and Duron family (K7) of processors. Data (bandwidth) will reach these processors no sooner than the system bus (FSB) allows, and the processor therefore cannot derive an advantage from memory operating faster than DDR266 on a 133/266 MHz FSB, DDR333 on a 166/333 MHz FSB or DDR400 on a 200/400 MHz FSB, even in single channel mode. Visualize a four-lane highway, symbolizing your Dual Channel configuration. As you go along the highway you come up to a bridge that is only 2 lanes wide. That bridge is the restriction posed by the dual-pumped AMD FSB. Only two lanes of traffic may pass through the bridge at any one time. That's the way it is with the K7 processors and Dual Channel chipsets.
In case you're wondering, the K in K7 stands for Kryptonite, later changed to Krypton to avoid copyright infringement. Yes, that very same fictional element from comic books that could bring the otherwise all-powerful Superman (Intel) to his knees. Intel's P4 architecture, in contrast, is designed to exploit the increased bandwidth afforded by dual channel memory architectures. The 64-bit Quad Pumped Bus of the modern Pentium 4 CPU working at 800 MHz, in theory, requires 6.4 GB/s of bandwidth. This is an exact match for the bandwidth produced by the Intel i875 (Canterwood) and i865 (Springdale) chipset families. The quad-pumped P4 FSB seemed like drastic overkill in the days of single channel SDR memory, but is paying handsome dividends in today's climate of dual channel DDR memory subsystems. This is one lasting and productive legacy of Intel's RDRAM efforts. As implemented on the P4, RDRAM was also a dual channel architecture, and mandated the quad-pumped FSB for its extra bandwidth to be exploited. This factor continues to serve the P4 well in the dual channel DDR era we are currently in, and allows the P4 greater memory performance than all other PC platforms, save the new AMD Athlon 64 FX with all its new bells and whistles.
The Athlon 64 FX processor has a fully integrated DDR Dual Channel memory controller providing a 128-bit wide path to memory, thereby eliminating the need for a Dual Channel interface on the motherboard, which traditionally was located in the Northbridge. The old term front-side bus has always represented the speed at which the processor moves memory traffic and other data traffic to and from the chipset. Since the AMD64 processors have the memory controller located on the processor die, memory subsystem traffic no longer has to go through the chipset for CPU-to-memory transfers. Therefore, the old term "front-side bus" is no longer really applicable. With AMD64 processors, the CPU and memory controller interface with each other at full CPU core frequency. The speed at which the processor and chipset communicate is now dependent on the chipset's HyperTransport spec, running at speeds of up to 1600 MHz. Although the P4 (800 FSB variety) and the Socket 940 A64 FX both share the same theoretical peak memory bandwidth of 6.4 GB/sec, the Athlon FX realizes significantly more throughput, due mainly to its integrated memory controller, which drastically reduces latency. Even so, it still suffers from the required use of registered modules, which are slower than regular modules. The upcoming Athlon 64 / A64 FX processors designed for Socket 939 will be free from this major drawback and will also feature Dual Channel memory controllers. One negative, though, of having the memory controller integrated into the processor is that to support emerging memory technologies, like DDR-2 for example, the controller has to be redesigned and the processor needs to be replaced.
Which slots to use?
- If you're using a single module, it's best practice to use the first slot. If using two or more modules in a non-dual channel motherboard, populate the first slot and use any other slots you wish. Q: I've had my single module installed in slot 2 for the last few months now, should I change it? No, it's also best practice to keep on using the slot(s) you've been using before. If you replace RAM, then insert the new modules in the same slots the older ones were in before.
You may find the system overclocks better with the RAM in a different slot. It is very hard to predict when this effect occurs, as well as which slot might work best. In the overclocking game he who tries the most things wins, and if you are running an overclocked configuration that is asking a lot of the RAM, it is a good idea to try all available slots to make sure the one you are using yields the best results.
- If you're using two or more modules of unequal size, you will get the best performance if you put the largest module(s) (in megabytes) in the lowest-numbered slot(s). For example, if your system currently has 256MB of memory and you want to add 512MB, it would be best to put the 512MB module into slot 0 and the 256MB module into slot 1.
Using Dual Channel
Dual Channel requires at least two modules for operation. It is recommended that the modules you use be of the same size, speed, arrangement, etc. Dual Channel is optional on the original nForce2 and nForce2 Ultra 400 motherboards; you can choose to run in single channel mode on these boards. nForce2 400 boards are single-channel only. Most dual channel capable nForce2 motherboards come with three slots. On these motherboards the first memory controller controls only the first slot (the slot by itself), while the second memory controller controls the last two slots (which are usually closer together). Name them slots 1, 2 & 3 respectively. To implement Dual Channel, it is necessary to occupy slot 1 (channel 0) and either of the two slots that are closer together, slot 2 or 3 (channel 1). The entire configuration would then be running in 128-bit mode.
You can use three modules in Dual Channel mode by filling the third unoccupied slot. With three sticks, slot 1 remains channel 0 while slots 2 & 3 become channel 1. To maintain 128-bit mode with all three slots filled, each channel must have an equal amount of memory. For example, slot 1 could be filled with a 512 MB module, while slots 2 & 3 are populated with 256 MB modules. If you were to use three modules of the same size, then only the equivalent of the first two modules would run in 128-bit Dual Channel mode. For example, using 3x 256 MB modules will have the first 512 MB running in 128-bit Dual Channel mode, while the remaining 256 MB will be in 64-bit Single Channel mode.
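As a rough Python sketch of those slot rules (a simplified model only; actual behavior depends on the board and BIOS, and the slot numbering follows the 1/2/3 convention above):

# Simplified nForce2 model: slot 1 is channel 0, slots 2 and 3 together are channel 1.
# Memory runs in 128-bit dual channel mode up to twice the size of the smaller channel;
# whatever is left over falls back to 64-bit single channel mode.
def nforce2_channel_split(slot1_mb: int, slot2_mb: int = 0, slot3_mb: int = 0):
    channel0 = slot1_mb
    channel1 = slot2_mb + slot3_mb
    dual_mb = 2 * min(channel0, channel1)
    single_mb = abs(channel0 - channel1)
    return dual_mb, single_mb

print(nforce2_channel_split(512, 256, 256))   # (1024, 0)  -- everything dual channel
print(nforce2_channel_split(256, 256, 256))   # (512, 256) -- 512 MB dual, 256 MB single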
Intel dual-channel systems are different. They have either two or four slots, and to run in dual channel mode must have either one or two pairs of (hopefully) matching modules. Running three modules on a P4 system will force it to run in single channel mode, and is therefore to be avoided.
Consult your motherboard manual for instructions on exactly which slots to use.
BIOS Settings
Memory timings
Memory performance is not entirely determined by bandwidth, but also by the speed at which the memory responds to a command and the time it must wait before it can start or finish the process of reading or writing data. These are memory latencies or reaction times (timings). Memory timings control the way your memory is accessed and can be a contributing factor to better or worse 'real-world' performance of your system.
Internally, DRAM has a huge array of cells that contain data. (If you've ever used Microsoft's Excel, try to picture it that way.) A pair of row and column addresses uniquely identifies each cell in the DRAM. DRAM communicates with a memory controller through two main groups of signals: Control-Address signals and Data signals. These signals are sent to the RAM in order for it to read or write data: the address is, of course, where the data is located in the memory banks, and the control signals are the various commands needed to read or write. There are delays before a control signal can be executed or completed, and this is where memory timings come from. Memory timings are most often expressed as a string of four numbers separated by dashes, read from left to right or vice versa, like this: 2-2-2-5 [CAS-tRP-tRCD-tRAS]. These values represent how many clock cycles long each delay is, but they are not expressed in the order in which they occur. Different BIOSes will display them differently and there may be additional options (timings) available.
Which timings mean what?
In most motherboards, numerous settings can be found to optimize your memory. These settings are often found in the Advanced Chipset section of the popular Award BIOSes. In certain instances, the settings may be placed in odd locations, so please consult your motherboard manual for specific information. Below are common latency options:
- Command rate - is the delay (in clock cycles) between when chip select is asserted (i.e. the RAM is selected) and when commands (e.g. Activate Row) can be issued to the RAM. Typical values are 1T (one clock cycle) and 2T (two clock cycles).
- CAS (Column Address Strobe or Column Address Select) - is the number of clock cycles (or ticks, denoted with T) between the issuance of the READ command and the moment the data arrives at the data bus. Memory can be visualized as a table of cell locations, and the CAS delay is invoked every time the column changes, which happens more often than the row changes.
- tRP (RAS Precharge Delay) - is the speed or length of time that it takes DRAM to terminate one row access and start another. In simpler terms, it means switching memory banks.
- tRCD (RAS (Row Address Strobe) to CAS delay) - As it says, it's the time between RAS and CAS access, i.e. the delay between when a memory bank is activated and when a read/write command is sent to that bank. Picture an Excel spreadsheet with numbers across the top and along the left side. The numbers down the left side represent the Rows and the numbers across the top represent the Columns. The time it would take you, for example, to move down to Row 20 and across to Column 20 is RAS to CAS.
- tRAS (Active to Precharge or Active Precharge Delay) - controls the length of the delay between the activation and precharge commands ---- basically how long after activation can the access cycle be started again. This influences row activation time which is taken into account when memory has hit the last column in a specific row, or when an entirely different memory location is requested.
These timings or delays occur in a particular order. When a row of memory is activated to be read by the memory controller, there is a delay before the data in that row is ready to be accessed; this is known as tRCD (RAS to CAS, or Row Address Strobe to Column Address Strobe delay). Once the contents of the row have been activated, a read command is sent, again by the memory controller, and the delay before it actually starts reading is the CAS (Column Address Strobe) latency. When reading is complete, the row of data must be deactivated, which requires another delay, known as tRP (RAS Precharge), before another row can be activated. The final value is tRAS, which comes into play whenever the controller has to address different rows in a RAM chip. Once a row is activated, it cannot be deactivated until the tRAS delay is over.
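To get a feel for what those cycle counts mean in absolute time, here is a minimal Python sketch converting timings into nanoseconds, using a simplified worst-case random access of tRP + tRCD + CAS (the function and the example numbers are illustrative only):

# Convert memory timings (in clock cycles) into nanoseconds at a given memory clock.
# Worst case for a random access: precharge the old row (tRP), activate the new
# row (tRCD), then wait out CAS before the data appears on the bus.
def access_latency_ns(clock_mhz: float, cas: float, trcd: int, trp: int) -> float:
    cycle_ns = 1000.0 / clock_mhz            # length of one clock cycle in ns
    return (trp + trcd + cas) * cycle_ns

print(access_latency_ns(200, 2.0, 2, 2))   # DDR400 at 2-2-2: 30.0 ns
print(access_latency_ns(200, 3.0, 4, 4))   # DDR400 at 3-4-4: 55.0 ns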
To tweak or not to tweak?
In order to really maximize performance from your memory, you'll need to gain access to your system's BIOS. There is usually a master memory setting, often rightly called Memory Timing or Memory Interface, which usually gives you the choice to set your memory timings by SPD or Auto, preset Optimal and Aggressive timings (e.g. Turbo and Ultra), and lastly an Expert or Manual setting that will enable you to manipulate individual memory timing settings to your liking.
Are the gains of perfect, hand-tweaked memory timing settings worth it over the automatic settings? If you're just looking to run at stock speeds and want absolute stability, then the answer to that question would probably be no. The relevance would be nominal at best and you would be better off going by SPD or Auto. However, if your setup is on the cutting edge of technology, or you're pushing performance to the limit as some overclockers, gamers or tweakers do, it may have great relevance.
SPD (Serial Presence Detect)
SPD is a feature available on all DDR modules. This feature solves compatibility problems by making it easier for the BIOS to properly configure the system to optimize your memory. The SPD device is an EEPROM (Electrically Erasable Programmable Read Only Memory) chip, located on the memory module itself, that stores information about the module's size, timings, speed, data width, voltage, and other parameters. If you configure your memory by SPD, the BIOS will read those parameters during the POST routine (bootup) and will automatically adjust values in the BIOS according to the module manufacturer's preset specifications.
There is one caveat, though. At times the SPD contents are not read correctly by the BIOS. With certain combinations of motherboard, BIOS, and memory, setting SPD or Auto may result in the BIOS selecting full-fast timings (lowest possible numbers), or at times full-slow timings (highest possible numbers). This is often the culprit in situations where it appears that a particular memory module is not compatible with a given board. Often in these cases the SPD contents are not being read correctly and the BIOS is using faster memory timings than the module or system as a whole can boot with. In cases like these, setting the BIOS to allow manual timings and setting those timings to safer (higher) values will usually allow the combination to work; failing that, try replacing the module with another.
Ok so I want to tweak, what do I do?
Now for the kewl stuff!!!
The first order of business, when tweaking your memory, is to deactivate the automatic RAM configuration -- SPD or Auto. With SPD enabled, the SPD chip on the memory module is read to obtain information about the timings, voltage and clock speed and those settings are adjusted accordingly. These settings are, however, very conservative to ensure stable operation on as many systems as possible. With a manual configuration, you can customize these settings for your own system and in most cases, the memory modules will remain stable even when they exceed the manufacturer's specifications.
As a general rule, a lower number (or timing) will result in improved performance. After all, if it takes fewer cycles to complete an operation, then more operations can fit within a given amount of time. However, this comes at a cost, and that is stability. It is similar to wireless networking with short and long preambles. A long preamble might be slower, but in a heavy network environment it is much more reliable than a short preamble because there is more certainty a packet is for your NIC. The same goes for memory: the more cycles used, in general, the more stable the operation. This is inherently true for all of the timings, because to access precisely the right part of the memory you have to be accurate, and allowing more time for each operation gives the memory more margin to complete it reliably. The most typical values are 2 and 3. You might ask: why can't we use 1 or even 0 for memory timings? JEDEC specifies that it's not possible for current DRAM technology to operate as it should under such conditions. Depending on the motherboard, you might be able to squeeze '1' onto certain timings, but it will very likely result in memory errors and instability. And even if it doesn't, it is unlikely to result in a performance gain.
If you are not planning on overclocking the clock speed of your RAM, or if you have fast RAM rated at speeds above that of your current FSB, it may be possible to just lower the timings for a performance gain in applications that make frequent accesses to system memory, such as games. Memory timings can vary depending on the performance of the RAM chips used. Not all memory modules will be able to use certain timings without producing errors, so testing -- trial and error -- is required.
Here are general guidelines to follow while "tweaking":
- As with CPU/video card overclocking, adjusting the memory timings should be done methodically and with ample time to test each adjustment.
- lower figures = better performance, but lower overclockability and possibly diminished stability.
- higher figures = lesser performance, but increased overclockability and more stability -- to an extent
- tRCD & tRP are usually equal numbers between 2 and 4. In tweaking for more overclockability, lower tRP first of the two.
- CAS should be either 2.0 or 2.5. Many systems, most nForce2 boards included, fail to boot with a 3.0 setting or have stability problems with it. CAS is not the most critical of the various timings, despite what many teach. In general, the importance of CAS when placed against tRP and tRCD is nominal. Reducing CAS has a relatively minor effect on memory performance, while lower tRP & tRCD values result in a much more substantial gain. In other words, if you had to choose, 3-3-2.5 would be better than 4-4-2.0 (tRCD-tRP-CAS).
- tRAS should always be larger than the aforementioned timings -- see below.
tRAS is unique, in that lowering it can lead to problems and lesser performance. tRAS is the only timing that has no effect on real performance, provided it is configured as it should be. By definition, real-life performance is the same with different tRAS settings, with a certain exception. This document from Mushkin outlines how tRAS should be the sum of tRCD, CAS, and 2. For example, if you are using a tRCD of 2 and a CAS of 2 on your RAM, then you should set tRAS to 6. At values lower than that, theory would dictate lesser performance as well as potentially catastrophic consequences for data integrity, including hard drive addressing schemes --- truncation, data corruption, etc. --- as a cycle or process would be ended before it is done. How is it possible for memory timings to affect my hard drive? When the system is shut down or a program is closed, physical RAM data that has become corrupted may be written back to the hard drive, and that's where the consequences for the hard drive come in. Also, let's not forget that physical RAM data is swapped by the operating system to virtual memory space located on the hard drive.
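A quick Python helper expressing that rule of thumb (a sketch of the Mushkin guideline above, not a guarantee for any particular board or module):

import math

# Mushkin's guideline: tRAS should be at least CAS + tRCD + 2 clock cycles.
def recommended_tras(cas: float, trcd: int) -> int:
    return math.ceil(cas + trcd + 2)

print(recommended_tras(2.0, 2))   # 6 -> e.g. CAS 2, tRCD 2 gives the 2-2-2-6 case above
print(recommended_tras(2.5, 3))   # 8 -> looser timings need a correspondingly higher tRAS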
While it's important to consider the advice of experts like Mushkin, your own testing is still valuable. Systems -- both AMD & Intel alike -- can indeed operate with stability at 2-2-2-5 timings, and even exhibit a performance gain as compared to the theoretically mandated 2-2-2-6 configuration. The most important thing in any endeavor is to keep an open mind, and don't spare the effort. Once you've tried both approaches extensively it will be clear to you which is superior for your particular combination of components.
The Anomaly: nVIDIA’s nForce2 and tRAS
An anomaly can be described as something that's difficult to classify; a deviation from the norm or common form. This is exactly the situation with tRAS (Active to Precharge) and nVIDIA's nForce2 chipset. As said before, not sparing the effort is what led to the initial discovery of this anomaly many months ago. It's pretty well known by now: in a nutshell, a higher tRAS (i.e. higher than, say, the Mushkin-mandated sum of CAS + tRCD + 2) on nForce2 motherboards consistently shows slightly better results in several benchmarks and programs. In most cases, 11 seems to be the magic number. Other chipsets do not display this "deviation from the norm", so what makes the nForce2 different?
This thread (http://forums.amdmb.com/showthread.php?s=&threadid=237991) has been on the topic for a while now, and TheOtherDude has given a possible explanation for this anomaly.
“Unlike most modern chipsets, the Nforce2 doesn't seem to make internal adjustments when you change the tRAS setting in the BIOS. These "internal" (not really sure if that’s the right word) settings seem to include Bank Interleave, Burst Rate and maybe even Auto-precharge. For optimal performance, tRAS (as measured in clock cycles) should equal the sum of burst length, plus the finite time it takes the RAM to conduct a number of clock independent operations involved with closing a bank (~40 ns) minus one clock if Auto-precharge is enabled (this factor can be slightly effected by CAS, but should not play a role in optimal tRAS). To complicate things even more, one bank cannot precharge a row while the other specifies a column. This brings tRCD into the mix.
Higher isn't always better, but the reason everything is so weird with tRAS and the Nforce2 is simply because the chipset doesn't make the internal optimizations to accommodate your inputted tRAS value like most other chipsets.”
Dealing with Memory Speeds / Frequencies
When the memory frequency runs at the same speed as the FSB, it is said to be running in synchronous operation. When memory and FSB are clocked differently (lower or higher), the system is said to be in asynchronous mode. On both AMD and Intel platforms, the greatest performance benefits are seen when the FSB of the processor is run synchronously with the memory. Intel-based systems are a slight exception, but this is completely true of all AMD-supporting chipsets. On AMD-supporting chipsets, async modes are to be avoided like the plague; these chipsets offer less flexibility in this regard due to poorly implemented async modes. Even if it means running our memory clock speed well below the maximum feasible for a given memory, an Athlon XP system will ALWAYS exhibit best performance running the memory in sync with the FSB. Therefore, a 166 FSB Athlon XP would run synchronously with DDR333/PC2700 (2*166) and give better performance than running with DDR400/PC3200, despite its numbers being bigger.
Only Intel chipsets have implemented async modes that have any merit. On the older i845 series of chipsets, running an async mode that runs the memory faster than the FSB is crucial to top system performance. And with the newer dual channel Intel chipsets (the i865/875 series) in an overclocked configuration, you must often run an async mode that runs the memory slower than the FSB for optimal results. The async modes in SiS P4 chipsets also work correctly.
To achieve synchronous operation, there is usually a Memory Frequency or DRAM Ratio setting in the BIOS of your system that will allow you to set the memory speed to either a percentage of the FSB (i.e. 100%) or a fraction (ratio), using whatever ratios your board makes available. If you want to run memory at non-1:1 speeds, the motherboard uses dividers that create a ratio of CPU FSB to memory frequency. However, it is easy to see the problem with this and why synchronous operation is preferable on all PC platforms. If there is a divider, then there is going to be a gap between the time that data is available for the memory and when the memory is available to accept the data (or vice versa). There will also be a mismatch between the amount of data the CPU can send to the memory and how much the memory can accept from the CPU. This will cause slowdowns, as you will be limited by the slowest component.
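Here is a small Python sketch of the divider arithmetic, expressed as an FSB:memory ratio the way the examples below do (the function name is illustrative):

from fractions import Fraction

# Memory clock = FSB clock x (memory part / FSB part) of the FSB:memory ratio.
def memory_clock_mhz(fsb_mhz: int, fsb_part: int, mem_part: int) -> float:
    return float(fsb_mhz * Fraction(mem_part, fsb_part))

print(memory_clock_mhz(200, 1, 1))   # 200.0 -> DDR400, synchronous
print(memory_clock_mhz(200, 5, 6))   # 240.0 -> DDR480, memory running faster than the FSB
print(memory_clock_mhz(250, 5, 4))   # 200.0 -> DDR400, memory running slower than the FSB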
Here are three examples illustrating the three possible states of memory operation:
- 200MHz FSB speed with 100% or 1:1 (FSB:Memory ratio) results in 200MHz memory speed (DDR400)
Such a configuration is wholly acceptable for any AMD system; memory should be set this way at all times for best performance. Asynchronous FSB/memory speeds are horridly inefficient on AMD systems, but may well be the optimal configuration for P4 systems.
- 200MHz FSB speed with 120% or 5:6 (FSB:Memory ratio) results in 240MHz memory speed (DDR480)
This example shows running the memory at higher asynchronous speeds. Assume we have a Barton 2500+, which by default runs at an FSB of 333 MHz (166 MHz x 2), and we also have PC3200 memory, which by default runs at 400 MHz. This is a typical scenario, because many people think that faster memory running at 400 MHz will speed up their system. Or they fail to disable the SPD or Auto setting in their BIOS. There is NO benefit at all derived from running your memory at a higher frequency (MHz) than your FSB on Athlon XP/Duron systems. In actuality, doing so has a negative effect.
Why does this happen? It happens because the memory and FSB can't "talk" at the same speeds, even though the memory is running at higher speeds than the FSB. The memory has to "wait for the FSB to catch up", because higher async speeds force de-synchronization of the memory and FSB frequencies and therefore increase the initial access latency on the memory path -- causing as much as a 5% degradation in performance.
This is another ramification of the limiting effect of the AMD dual-pumped FSB. A P4's quad-pumped FSB (along with the superior optimization of the async modes) allows P4s to benefit in some cases from async modes that run the memory faster than the FSB. This is especially true of single channel P4 systems. There are still synchronization losses inherent in an async mode on any system, but the ample FSB bandwidth of the P4 allows the additional memory bandwidth produced by async operation to overcome these losses and produce a net gain.
- 250MHz FSB speed with 80% or 5:4 (FSB:Memory ratio) results in 200MHz memory speed (DDR400)
This example is most often used in overclocking situations where the memory is not able to keep up with the speed of the FSB. On AMD platforms, there is really no point in having a high FSB if the memory can't keep up. When the memory or any other component is holding back system performance, this is called a "bottleneck". As in the example above, a memory bottleneck would occur if you were running your memory at DDR400 with a 500 MHz (250x2) system bus. The memory would only be providing 3.2 GB/s of bandwidth while the bus would theoretically be capable of transmitting 4.0 GB/s. A situation like this would not help overall system performance.
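As a quick back-of-the-envelope check of those numbers (a sketch using the same peak-bandwidth formula as earlier in this article):

# Compare single channel memory bandwidth against FSB bandwidth to spot the bottleneck.
def peak_bandwidth_gb_s(effective_mhz: int, bytes_wide: int = 8) -> float:
    return effective_mhz * bytes_wide / 1000.0

memory = peak_bandwidth_gb_s(400)   # DDR400, single channel
bus = peak_bandwidth_gb_s(500)      # 250 MHz double-pumped Athlon XP FSB
print(memory, bus)                  # 3.2 4.0 -> the memory is the limiting factor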
Think of it like this: let's say you had a highway going straight into a mall, with an identical highway going straight out of the mall. Both highways have the same number of lanes and initially they have the same 45 mph speed limit. Now let's say that there's a great deal of traffic flowing in and out of the mall, and in order to get more people in and out of the mall quicker, the department of transportation agrees to increase the speed limit of the highway going into the mall from 45 mph to 70 mph; the speed limit of the highway leaving the mall is still stuck at 45 mph. While more people will be able to reach the mall quicker, there will still be a bottleneck in the parking area leaving the mall, since the increased number of people that are able to get to the mall still have to leave at the same rate. This is equivalent to increasing the FSB frequency but leaving the memory frequency/bandwidth unchanged or set to a slower speed. You're speeding up one part of the equation while leaving the other part untouched. Sometimes the fastest memory is not affordable or available. In this case, more focus should be placed on balancing the FSB and memory frequencies while still keeping latencies as low as possible AND while still maintaining CPU clock speed (GHz) by increasing the multiplier. The benefit of a faster FSB (and higher bandwidth) will only become clearer as clock speeds (GHz) increase; the faster the CPU gets, the more it will depend on getting more data quicker. The only real benefit of async modes on AMD platforms is that they come in handy to overclockers for testing purposes: to determine their max FSB and to eliminate the memory as a possible cause for not being able to achieve a desired stable FSB speed. Even so, async modes on early nForce2 based motherboards caused many problems, some as serious as BIOS corruption.
Looking to the Intel side of the fence, async modes that run the memory slower than the FSB have merit because of how async modes are implemented in the Intel chipsets. This is extremely important, as we cannot change the CPU multiplier on modern Intel systems and therefore have to use an async mode to allow substantial overclocks on the majority of systems utilizing the current 200/800 MHz FSB family of P4 processors. To illustrate, if you increase the FSB on a new C-stepped P4 to 250 MHz (250 x 4) with a 1:1 ratio, the memory will run at 250 MHz (DDR500). This can be done in two ways. The first is with exotic PC4000 or DDR500 memory modules, but these are expensive just to run synchronously at such speeds, and their timings aren't exactly delightful either. The other way is to overclock DDR400/DDR433 to much higher speeds through overvolting, but this carries some risk and often motherboards don't provide nearly enough voltage to achieve such speeds without physical voltage mods. Therefore, to avoid expensive PC4000 or volt mods, you change the memory ratio so that a 250 FSB overclock becomes something that the memory can handle, allowing a substantial overclock of the Pentium 4. In the example, a 5:4 ratio lets PC3200 (DDR400) remain at DDR400 with a 250 MHz FSB.
Overclocking & Memory
How do I overclock my memory?
On modern systems, memory is very rarely, if ever, overclocked just for the sake of overclocking memory. Lemme rephrase that: people don't overclock memory to make it run higher than what is actually needed. There are many instances where memory is even underclocked. You first determine the default frequency of your memory; 1 MHz higher than that frequency is the point where overclocking begins. Now how do you increase that frequency? As previously discussed, best performance on all platforms is gained by running the memory frequency synchronously with the speed of the FSB. This means that for every 1 MHz the FSB is increased, so too is the frequency of the memory clock. So in effect, memory overclocking is just a part of overclocking your processor; they are done simultaneously. Since FSB frequency and memory frequency are usually made to be the same, this poses a problem, as overclockers look for the highest possible FSB while the memory may struggle behind because it's not able to keep up.
Other aspects of memory overclocking are memory timings and, of course, the amount of voltage supplied. Unlike CPU overclocking or video card tweaking, adjusting memory timings and frequencies poses very little physical risk to your system, other than the possibility of Windows failing to load or a program failing while testing. The memory will either be able to handle the overclocking/tweaking, handle it with instability, or not handle it at all. There are no grey areas in between: it either does, does with lots of problems, or doesn't at all. This makes it a bit simpler to quickly find the precise limits of any memory.
The memory timings can also play a role in how far the memory will go in keeping up with the FSB. Lower timings (numbers) will limit how fast the memory can run, while higher timings allow for more memory speed. So which is better, lower timings or higher memory speeds? Why not both? Overall data throughput depends on bandwidth and latencies. Peak bandwidth is important for applications that employ mostly streaming memory transfers. In these applications, the memory will burst the data, many words one after another. Only the very first word will have a latency of maybe several cycles, but all the other words will be delivered one after another. Other applications with more random accesses, like games, will get more mileage out of lower latency timings. So weigh the importance of higher memory clocks against lower latency timings, and decide which is most important for your application.
Memory Voltage
Sometimes a little extra voltage is all that's required to encourage your defiant DDR to straighten up and fly right. You can adjust the DDR voltage quite easily through your motherboard's BIOS, just as you would your CPU's voltage. As with CPU overclocking, raising memory voltage above default (usually 2.5 V or 2.6 V for DDR) at higher memory clock speeds may aid stability and/or enable you to use lower latency timings.
Although the DDR voltage has nothing to do with the CPU itself, it plays an integral part in the big picture. If we are running a synchronous mode (1:1), then for every 1 MHz increase in FSB speed, the RAM speed will increase by 1 MHz. So in these cases an elevated memory voltage will often prove helpful in maximizing the overclocking potential of the CPU.
A few points to consider when raising memory voltage:
- Like CPU overclocking, increasing memory voltage should be done in the smallest increments available. Put your system through a few paces of a program like memtest86 after each step. If it fails testing, bump the voltage a little more and test again.
- 0.3 volts over default - That's a bit conservative for some people (including me), but should be enough for most. This is also the maximum provided by most motherboards. On such motherboards, hardware mods or modified BIOSes may be required to gain access to more voltage.
- Some of the higher voltages (2.9v to 3.3v) available on certain motherboards may damage the RAM with long exposure, so check with other people who have your RAM to get a feel for its voltage tolerances. The memory you save may be your own.
Do I Need Ram Cooling?
Memory cooling has become very popular, most notably on video cards. The effectiveness of cooling on system RAM, however, is often fuel for lengthy discussions on many internet hardware forums, including our own message board. Does system memory get hot enough to require cooling? That depends on what you consider hot. My opinion is that memory modules never build up enough heat to require any sort of cooling. Even when overclocking, they still stay pretty cool. If extra cooling puts your mind at ease, then go for it, but you can't necessarily expect better overclocking results or even any extension of the life of your overclocked / overvolted memory. Premier manufacturers such as Corsair, Mushkin, and OCZ ship their modules with heatspreaders across the chips. They look very nice and are often solid copper or aluminum. A handful of other companies sell RAM cooling kits and other solutions for modules that come without cooling. RAM sinks are pretty much the same as standard heatsinks for graphics chips and CPUs, except they're a lot smaller and tailored to RAM chip sizes. Tests show these heatspreaders and kits do VERY little as far as cooling the memory goes. With no real benefit, placing these cooling kits on memory modules is more for looks than for cooling, and that can be appreciated.
Burning-In Memory
Burn-in can be defined as the process of exercising an integrated circuit (IC) for a period of time at elevated voltage, speed and temperature with the aim of improving performance. With CPUs it's another one of those debated topics, but in my experience "burning-in" memory does make it perform better. The time required varies and it doesn't always work, but it's worth a try. A tutorial on how to go about doing this is available. If your DDR400, for example, doesn't run stably at 220 MHz, try something lower. Leave the computer on for a few days straight. Give it a workout, then try it again at 220 MHz.
Memory Chips and their Importance to Overclocking
Very few companies in the world actually make memory chips, but literally hundreds of companies sell memory modules. Some of these few companies are Winbond, Hynix (Hyundai), Micron, Samsung, Infineon (Siemens), Nanya, Mosel Vitelic, TwinMOS and V-Data/A-Data. Like most other PC components, not all RAM is created equal. For DDR400, you can find everything from very fast modules sporting 2-2-2-6 timings from Mushkin, Corsair, OCZ or Kingston (for example) to relatively low-cost modules that aren't as favorable with their timings. Even memory of the same brand and model may exhibit varying performance levels because of the chips being used. Manufacturers use whatever memory chips are available at the time, and certain memory chips don't stay in stock indefinitely.
Take a look at the markings on the chips of your memory module. If your module has heatspreaders, they will have to be carefully removed to see the memory chips (doing so will void your warranty). Each chip is covered with numbers; those numbers tell what chips they are and may even include the logo and name of the chip maker. Why does this matter? Like motherboards, for example, not all brands offer the same performance and overclocking potential. The same goes for memory chips. So people (usually overclockers) seek out certain preferred brands of memory chips for their systems. For example, Winbond's famed BH-5 was discontinued, then reintroduced, and then discontinued again to be replaced with the newer, cheaper CH-5, although there is still relatively high demand for BH-5.
Check your module manufacturer's website; they may or may not list which chips they use in their modules.