RAMdrives For The Masses?


The next generation of CompactFlash is going to get accelerated.

According to the article, the data transfer rate of CompactFlash is due to go up to 66MB/sec, and it will be provided with ATA 33/66 interfaces.

The head of the CompactFlash Association is quoted as saying how good this will be for digital cameras.

Yeah, right. Like you really need ATA interfaces for a digital camera.

Can you say, “RAMDrive?” Because that’s what this is about.

The Current State of RAMDrives

In a world of electronics where a nanosecond is beginning to look like a long time, any device that takes a million times longer to do something useful is S-L-O-W.

Let’s say your computer needs a file to do something, and it’s not already in memory. Just to make this easy, let’s say it’s 60KB big.

There’s basically one of two things that can happen. If you’re lucky, your hard drive has already figured out you might be needing that file soon, and has it sitting inside its cache. If that’s the case, the file will be found in a flash (at least in the world of milliseconds), and it will take about a millisecond to get it into main memory so the CPU can use it.

If you’re not so lucky, the hard drive then has to look for your file. On average, that takes about 8-10 milliseconds, and then we have another millisecond to transfer the file.

You can see that most of the delay getting data from a hard drive to the CPU is spent looking for it when it isn’t already at hand. If the data is already in a cache, there’s practically no looking-for-it delay at all. If you need multiple pieces of data, and they’re right next to each other on the hard drive, the looking-around part takes much less time than if the hard drive has to look all over the place for it.

What a RAMDrive seeks to do is virtually eliminate the looking-around-part of data retrieval, since you can find stuff much faster when it is stored in memory than when it is stored in rust. As you can also see, though, this doesn’t save a bunch of milliseconds every single time; sometimes it does, sometimes it doesn’t.

You can also see that it generally doesn’t save a lot of time by human standards. Waiting one millisecond is better than waiting ten milliseconds, but we humans are such slugs that we don’t really appreciate the savings.

Of course, this is all grossly oversimplified, but those are the basics.
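To put rough numbers on the simplified picture above (these are the ballpark figures from the text, not measurements), the gap between a cache hit and a cold read looks like this:

```python
# Rough access-time model for fetching a 60KB file from a hard drive.
# Figures are the ballpark numbers cited above, not benchmarks.

TRANSFER_MS = 1.0    # ~1 ms to move 60KB into main memory
AVG_SEEK_MS = 9.0    # ~8-10 ms average seek when the file isn't cached

def fetch_time_ms(cached: bool) -> float:
    """Total latency: seek (if any) plus transfer."""
    seek = 0.0 if cached else AVG_SEEK_MS
    return seek + TRANSFER_MS

print(fetch_time_ms(cached=True))   # cache hit: ~1 ms
print(fetch_time_ms(cached=False))  # cold read: ~10 ms
```

The seek term dominates, which is exactly why a RAMdrive, which pays essentially no seek penalty, only shines when the hard drive would have had to hunt around.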

Despite being so slow, hard drives are generally used rather than some form of memory because hard drives store data much, much more cheaply than memory. A GB of mainstream hard drive space costs less than a dollar, while a GB of mainstream system memory costs over $100.

This has generally restricted large RAMDrives to use in commercial servers, where saving a few milliseconds each on a few million records means saving real money. You can buy cards onto which you load RAM and which plug into your system like a hard drive. Unfortunately, given the small markets, prices for what are called solid state drives have been (and remain) sky-high.

Another way to get a RAMDrive is to use software to trick a portion of system memory into thinking that it’s a hard drive. Generally, though, these kinds of RAMdrives are relatively small; the free ones generally cannot use more than 64MB of RAM, and even the commercial ones don’t promise more than around 3GB due to current Windows limitations.

Both methods have major limitations. Both use dynamic RAM, which means when you turn the power off, all data vanishes, too. You either have to give a solid state card its own power supply, or you have to endlessly back up and refill the RAMdrive to and from a hard drive every time you shut the machine down and reboot.

This, among other things, prevents you from booting from the RAMdrive and greatly lengthens the time (in human time, clusters of whole seconds) it takes your machine to shut down and boot up. Many will say, “Why should I go through all this expense and bother to save split-seconds here and there, when it costs me an extra 30-60 real seconds to turn the machine on and off?” and they have a point.
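A rough estimate shows where those extra seconds go. Assuming a 3GB software RAMdrive and a hard drive sustaining about 50MB/sec (both illustrative figures, not measurements):

```python
# How long does dumping/reloading a software RAMdrive add to shutdown/boot?
# The size and throughput below are illustrative assumptions.

RAMDRIVE_MB = 3 * 1024    # a 3GB RAMdrive's worth of contents
HDD_MB_PER_SEC = 50       # assumed sustained hard drive throughput

seconds_one_way = RAMDRIVE_MB / HDD_MB_PER_SEC
print(round(seconds_one_way))  # roughly a minute, each way, every cycle
```

That back-of-the-envelope minute per shutdown or boot lines up with the 30-60 second complaint above.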

Flash: Cheap Solid-state?

Flash memory has several advantages over system memory, or rather, it doesn’t have some of the major disadvantages of system memory.

Flash memory is non-volatile. That means the data sticks around even when you turn off the lights. That means you can boot off it, and there’s no lengthy back-up/restore routine.

Until recently, flash cards haven’t been all that big, but 2GB cards are now commonplace, 4GB cards are vaguely affordable, and cards up to 8GB are at least available. That’s enough to store the OS and critical apps for many.

The killer limitation on flash memory cards has been data transfer rate. Even in theory, up to now, these cards have been capable of a maximum transfer rate of 16MB/sec (and 10MB/sec is probably more like it in real-world activities). This makes hard drives look blindingly fast in comparison, and hands back most of the time advantage you get from using memory to begin with.

That’s why increasing the maximum data transfer rate to 66MB/sec and providing hard drive interfaces is potentially a big deal. It removes most of the killer bottleneck preventing the use of these cards as RAMdrives (even assuming these drives only manage 40MB/sec in real life).
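The difference is easy to see with a little arithmetic. Here is the time to read a 512MB chunk (say, an OS plus key apps at boot) at the various sustained rates discussed above; all rates are the ballpark figures from the text:

```python
# Time to read a 512MB chunk at the sustained rates discussed in the text.
# All rates are illustrative ballpark figures, not benchmarks.

CHUNK_MB = 512

rates = {
    "old flash, real-world (~10MB/sec)": 10,
    "new flash, real-world guess (~40MB/sec)": 40,
    "new flash, theoretical (66MB/sec)": 66,
}

for label, mb_per_sec in rates.items():
    print(f"{label}: {CHUNK_MB / mb_per_sec:.1f} seconds")
```

Going from 10MB/sec to even a conservative 40MB/sec turns a 50-second read into about 13 seconds, which is the difference between unusable and interesting.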

Why Now?

There have been consistent rumors that at least some models of the next generation of consoles will use flash memory in addition to, or as a substitute for, a hard drive. That would seem to be the driving reason for these things happening now.

Not An Unconditional Blessing

Rest assured in the next year or so, we’ll see flash card RAMdrives. Will they become a must-have?

Probably not, for the following reasons:

First, they’ll cost money: it’s hard to see how a 4GB high-speed flash drive won’t add at least a few hundred dollars more to the cost of a computer.

Second, they are unlikely to do much to reduce delays where they remain substantial: long sequential reads. Remember, having data in memory only helps a lot in situations where your hard drive spends a lot of time having to look around. If your hard drive knows exactly where to look and what to do, a flash drive might actually be slower than a hard drive (due to slower transfer rates).

So if you think a RAMdrive will load the next level of your game in a second rather than a minute, you are very likely to be disappointed.
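A toy comparison makes the point. All figures below are assumptions chosen only to illustrate the trade-off: the hard drive pays a big seek penalty but streams faster, while the flash card barely seeks but streams more slowly.

```python
# Sketch: when does a flash drive beat a hard drive? Assumed figures:
# hard drive seeks in ~9 ms but streams at ~50MB/sec; flash seeks in
# ~0.1 ms but streams at only ~40MB/sec. Illustrative numbers only.

def hdd_ms(files: int, kb_each: int) -> float:
    """Per-file seek plus transfer time, in milliseconds."""
    return files * (9.0 + kb_each / 50_000 * 1000)

def flash_ms(files: int, kb_each: int) -> float:
    return files * (0.1 + kb_each / 40_000 * 1000)

# 1,000 scattered 4KB files: flash wins by a wide margin.
print(hdd_ms(1000, 4), flash_ms(1000, 4))

# One 100MB sequential read: the hard drive's streaming rate wins.
print(hdd_ms(1, 100_000), flash_ms(1, 100_000))
```

Lots of small scattered reads favor flash enormously; one big sequential read favors the hard drive. Game level loads are mostly the latter.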

Finally, the transfer rate of these cards is just aesthetically displeasing to a geek. Memory is supposed to be practically instant, moving at GBs a second, not some dinky 40MB/sec.

x86-64 Helps Out

What we really need is fast static system memory, and lots of it. It’s very unlikely we’ll get that anytime soon, but there ought to be one advance on the dynamic memory front, and that is x86-64.

x86-64 ought to remove the current limitations on RAM use, which means that big RAMDrives of 4GB, 8GB, or more using system memory ought to become possible. For those who ask, “What can you do with more than 4GB of RAM?” a big RAMDrive is one answer.

That still doesn’t address the problems associated with dynamic RAM, but given that using memory rather than hard drives is likely to become more fashionable (at least at the high end) in the next couple of years, the odds that someone will try to address these problems one way or another will get a lot better.

Conclusion

In the long run, the computing world is going to end up going from here to there. Hard drives are just too slow, too mechanical, too prone to failure to remain the primary means of storage on the desktop forever.

However, “long run” may prove to be a long, long time. What is most likely to happen in the next five to ten years is that we’ll go to a hybrid system. Memory storage will supplant hard drive storage for the most important programs and data. Hard drives will continue to be used for archival purposes until we start seeing things like dirt-cheap terabyte data cubes, maybe ten to fifteen years from now.

Addendum: 1/8/05

Some have written me to say that flash memory is unsuitable for a RAMDrive because it wears out after a relatively limited number of write cycles. They cite either 10K or 100K write cycles and note that this is not exactly what you want hosting a swap file.

A few points:

10/100K Isn’t Cast In Stone: AMD, for instance, has guaranteed a million write cycles on many of its previous flash products, and plans to do the same with its latest MirrorBit flash. So these things can be toughened up considerably, and any device that will even vaguely be offered as a RAMdrive will be.
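A rough lifetime estimate shows why million-cycle endurance changes the picture, assuming the card's controller spreads writes evenly across the whole card (ideal wear leveling); the daily write load below is an illustrative assumption:

```python
# Rough flash-wear lifetime estimate under ideal wear leveling.
# All inputs are illustrative assumptions, not vendor specs for any
# particular card.

CARD_GB = 4
WRITE_CYCLES = 1_000_000      # the AMD-class endurance figure cited above
GB_WRITTEN_PER_DAY = 10       # assumed heavy daily write load

total_gb_writable = CARD_GB * WRITE_CYCLES
lifetime_years = total_gb_writable / GB_WRITTEN_PER_DAY / 365
print(round(lifetime_years))  # on the order of a thousand years
```

Even if real-world wear leveling and write amplification cost you a couple orders of magnitude, the card outlives the machine.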

Why Do You Need A Swap File These Days? There’s a more fundamental issue at hand here, though. If you’re afraid that a swap file will wear something out, do you really need the wearer-outer?

Swap files had (and have) their purpose when memory is scarce. Now that memory is relatively cheap, why bother? Swap files should be an option, and the last resort, not a requirement or first resort. After all, what’s the purpose of a swap file? It is a substitute for real RAM. Why use a substitute when you have plenty of the real thing?

After all, for the foreseeable future, all this discussion is about high-end systems. If you’re ready to pay for a multigigabyte RAMDrive, you’re hardly likely to scrimp on system RAM, and with x86-64 coming, the sky (and wallet) will be the limit.

The problem isn’t really flash memory wearing down; the problem really is a worn-down way of doing things. Windows insists on having and using a swap file even when there is plenty of free RAM available to it, and I’ve yet to see a tweak that really does something about this for NT/W2K/XP.

That’s the problem, and one that will inevitably be fixed.

Ed
