
wth is rdram?!??

Thank you for making me feel old :sn:

RDRAM came out before dual channel DDR, maybe even before DDR itself. It was the original RAM for P4s, back in the Willamette and early Northwood era. I don't remember all the tech, but it basically used a narrow bus that ran really fast, along with dual channel, to get higher transfer rates than DDR. It ran pretty hot because of that, and started the heatspreader craze.

It was quite expensive leading tech at one time; now it's just a vague memory.

Edit: Here and here are good reads.
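The "narrow bus that ran really fast" idea is easy to put in numbers. Here's a rough sketch of the peak theoretical bandwidths involved; the figures are the standard published ones, and the helper function is just for illustration:

```python
# Peak theoretical memory bandwidth in MB/s: bus width (bits) times
# transfer rate (transfers/sec), converted from bits to bytes.
def peak_bandwidth_mb(bus_width_bits, transfers_per_sec):
    return bus_width_bits / 8 * transfers_per_sec / 1e6

pc133_sdram = peak_bandwidth_mb(64, 133e6)  # wide 64-bit bus, 133 MT/s -> ~1064 MB/s
pc800_rdram = peak_bandwidth_mb(16, 800e6)  # narrow 16-bit bus, 800 MT/s -> 1600 MB/s
i850_dual   = 2 * pc800_rdram               # two RDRAM channels -> 3200 MB/s
```

So a single RDRAM channel already had about 50% more peak bandwidth than PC133 SDRAM, and the dual channel i850 tripled it. That's where the heat (and the heatspreaders) came from.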
 
batboy said:
Also, you have to use them in matched pairs or they won't work. They are basically obsolete now.

Poor Rambus. At one point they ruled memory design. As a matter of fact, SDRAM, DDR, and DDR2 RAM are all inventions of Rambus. But AMD and others stole the technology from them and weren't willing to pay royalties. It went to lawsuits, and it was hard for them to prove otherwise. But at heart, all the memory manufacturing companies know SDRAM, DDR, and DDR2 are all Rambus's intellectual property.
Four years ago their stock was a hot item on Wall Street at $180 per share. But because of the loss of their intellectual property to so many (the biggest being AMD), they are now struggling; their stock sells for $13 right now. But the lawsuits aren't finished yet, so you never know, Rambus shares may go back up to $180. RDRAM, I think, stands for Rambus Direct Random Access Memory, an interface that reduced the bottlenecking of the memory bus.
I hope their new XDRAM makes it.
 
There are a few serious misrepresentations here. Firstly, RDRAM predates the P4. RDRAM was introduced with the i820 chipset, which, of course, supported the original 133FSB P3 CPUs. This was a long time before the launch of the first P4 processors.

Secondly, you no more inherently have to use RDRAM in pairs than you do DDR SDRAM. i820 was a single channel chipset, and as such did not use pairs. The i840 server P3 chipset introduced dual channel RDRAM, and the dual channel memory configuration itself. The only difference here is that dual channel RDRAM chipsets do not offer the option of operating in single channel mode, as the later DDR SDRAM implementations did. But i820 systems, being purely single channel, do.

i850 was the P4-supporting RDRAM chipset, with the later i850e being more successful. These are dual channel chipsets, and as mentioned, offer no single channel mode. For these chipsets, and the i840 P3 chipset, it is true that RDRAM must be used in pairs, but it is not inherent to RDRAM and is not true of i820 systems. But as the i820 was a huge market failure, and the systems that were sold are old to the point of near uselessness now, it is true that basically any RDRAM platform you find still in operation will require memory installed in pairs.

As far as the performance impact of RDRAM, it was (and still is) an extremely high-performance memory solution. But memory performance is sort of a bit player when it comes to system performance as a whole. If you look back to the 286 platform, there were no caches, either on the cpu or on the motherboard. In situations like this, the performance impact of memory is direct. Gains in memory performance produce a faster system, and losses in it produce a slower one, in nearly direct proportion.

As time advanced, so did system architecture. The i386 processor family still had no memory cache on the cpu, but a common feature of later 386 chipsets was a cache built from commercially available SRAM DIP chips, 32, 64, 128, or 256KB in size. This helped performance in its own right, but also ushered in an era of decreased dependence on the performance of the memory subsystem.

The i486 cpu took the idea to a new level. i486s integrate an 8KB memory cache on the cpu itself, a measure that produces much higher cache bandwidth and lower latency compared to the standard SRAM DIP implementations common on later 386 systems. This, in high performance 486 systems, was assisted by SRAM DIPs on the motherboard. This brings the L1 and L2 terminology into play, with the cpu's cache being L1 (Level 1) and the motherboard's being L2.

The Pentium series of cpus brought a doubling of the L1 cache size to 16KB, with the later Pentium MMX doubling it again to 32KB. These systems almost uniformly implemented the L2 cache on the motherboard as well, although there were very basic systems that did not. When EDO RAM superseded FPM types, the propaganda was that the increase in memory performance EDO brought would obviate the need for this L2 cache. This was of course a lie: although by this time changes in memory performance had only subtle effects, the loss of the L2 cache had a marked one.

The PentiumII processor had its L1 cache integrated onto the processor die itself, as the Pentium and 486 processors did, but moved toward integrating the L2 cache as well. P2s still have the L2 cache composed of commercial SRAM, but it was placed on a special circuit board along with the processor die itself. This arrangement, dubbed "Slot 1" by Intel, was dominant for several years, with i820 being the last Intel Slot 1 chipset.

Early P3 processors were fundamentally no different than P2s, with SRAM chips for L2 cache. But the later P3s, known as Coppermine, finally introduced the topology that survives to the present. The L2 cache was fully integrated into the processor's die, making the circuit board arrangement of Slot 1 unnecessary, and setting the stage for the return to a simple socket affair like those used before Slot 1 and since.

The point of all this? Well, as mentioned earlier, cache reduces the impact of changes in memory subsystem performance on the performance of the system as a whole. And with each of these progressions in cache technology, the effect becomes more pronounced.

Today we have L1 caches of substance, and really large L2 caches, carved from the same lightning-fast silicon as the cpu core itself and integrated into it. Cache hit rates are so high, and the latency so low, that it takes huge differences in memory performance to really amount to much in terms of the performance of the system as a whole. Even switching between single and dual channel memory, as a comparison of S754 and S939 A64 systems clearly shows, has only a modest impact on performance. The caching is so effective that it almost entirely removes dependence on the performance of the memory subsystem.
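You can see why with the standard average-memory-access-time model. Here's a minimal sketch; the hit rate and latency figures are made-up but plausible for the era:

```python
# Average memory access time (AMAT): the cache hit time, plus the
# fraction of accesses that miss times the cost of going to main memory.
def amat_ns(hit_time_ns, miss_rate, memory_latency_ns):
    return hit_time_ns + miss_rate * memory_latency_ns

# Hypothetical on-die cache: 1 ns hit time, 98% hit rate.
slow_mem = amat_ns(1.0, 0.02, 60.0)  # 60 ns main memory -> 2.2 ns average
fast_mem = amat_ns(1.0, 0.02, 30.0)  # twice-as-fast memory -> 1.6 ns average
```

Doubling the main memory speed only improves the average access time by about 27% here, and the effect on whole-system performance is smaller still, since plenty of instructions never touch memory at all. The better the cache, the less the memory matters.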

This was the environment into which RDRAM was born. i820 was created specifically to support the Coppermine P3, which as discussed above brought the on-die L2 cache. So while i820 also introduced RDRAM, the architectural changes that minimize the impact of such a change were already fully fledged. RDRAM was way better than SDRAM from a bandwidth standpoint, but bandwidth couldn't help you much by that point. RDRAM also had latency challenges, and the latency hurt as much as (or more than) the bandwidth, whose impact was so reduced by caching, could help. This made SDRAM P3 systems as fast as their i820/RDRAM counterparts, and as the increased cost of the RDRAM setup did not translate into a benefit to the consumer, set the stage for RDRAM's failure in the market.

The later dual-channel RDRAM implementations featured improved latency along with awesome bandwidth, and as such were performance leaders. But as detailed above, memory caching had reached such a high level of sophistication by this point that the lead was minuscule. And the price difference was not, sealing RDRAM's fate as a system-memory solution.

There used to be huge debates centered around RDRAM vs SDRAM, just like you have with AMD vs Intel, ATI vs Nvidia, and hell, even at one point IDE vs SCSI. But those who really care about computers, rather than just their computer, recognize the real trends in system architecture and their impact on the choices we are presented over time, and use this knowledge to pick the ones that really deliver value.

Even though RDRAM is but a memory (no pun intended) now, advancements in memory technology still make it to market. We now have DDR2 in place of DDR in high-end Intel platforms, but just like with RDRAM, the benefits to the consumer are hard to identify. In an age of 1-2MB L2 caches integrated into the processor die and running at full cpu clock, DDR2 can't influence system performance any more than RDRAM did at introduction. Fortunately DDR2 doesn't impose the huge cost increase that RDRAM did, leaving room for it to succeed even though the performance improvement is not compelling.

The big improvements DDR2 brings are higher clock rates and higher-density DRAMs, making larger system memory amounts feasible. But these same advantages could have been brought to DDR platforms, had development not shifted to DDR2 instead. Was this a mistake? Time will tell. As with Intel's RDRAM debacle, and with the introduction of the P4 family itself vs the continued P3 development that became the Pentium-M, new is not always better. At times a platform change is just change for the sake of change, not for the sake of advancing the PC.

And just as in the past, getting the most from your computing dollar depends in part on recognizing change that is only for the sake of change. This is one of the most common types of marketing--you create change and then tell people they have to have the new products because the old ones are obsolete. And sometimes this accurately reflects the technical situation. But at times, just the same, it does not, and is just propaganda created to hype the next attempt to separate one from one's wallet. A pickpocket with a doctorate and a three-piece suit is still a pickpocket, and PC enthusiasts do well to keep a sharp eye out for thieves no matter how well they are dressed.
 
larva said:
*lots of neat facts*

Holy crap!! That is a lot of info there! Growing up with those computers, I can see that most of it seems correct enough. I remember putting in those horrid first generation SIPP chips and bending the little pins right off! I also remember the tech support call telling us to just solder the pins back on. lol Who else remembers SIPPs? That should make ya feel even older.

I never liked RDRAM, just because of the huge price difference. The options were $500 for 256 meg of that stuff, or less than half that for SDRAM.

JT
 
I believe that for the dual channel RDRAM chipsets you could use a single stick of memory and a fake (continuity?) stick to complete the circuit.

I agree that it would be cool if Rambus makes it in the market this time with their XDR.
 
moz_21 said:
I believe that for the dual channel RDRAM chipsets you could use a single stick of memory and a fake (continuity?) stick to complete the circuit.

Not quite. An RDRAM system (dual channel) still requires 2 sticks. It has 4 slots, and also requires (fake) C-RIMMs (the continuity sticks) in the unused ones. All 4 slots need to be filled. At least all of the RDRAM stuff I have played with does.

Th7, TH711, and an Aopen rig.

steve
 
JTanczos said:
Growing up with those computers I can see that most of it seems correct enough.
I can assure you, it's all correct. I worked 60 hours a week for 10 years learning it, it's not speculation.
 
skou said:
Not quite. An RDRAM system (dual channel) still requires 2 sticks. It has 4 slots, and also requires (fake) C-RIMMs (the continuity sticks) in the unused ones. All 4 slots need to be filled. At least all of the RDRAM stuff I have played with does.

Th7, TH711, and an Aopen rig.

steve
Correct, and it is true across all RDRAM systems. i820 rigs required continuity RIMMs for the unused slots as well.
 