
8mb cache on a hard drive ?



Oct 24, 2002
The Dark Side of the Moon
This is probably a dumb question, but what the heck, here goes. Can someone explain to me the significance of having an 8MB cache on a hard drive, as opposed to, say, a 2MB cache? I know it's obvious that 8MB is going to perform better. But I guess I just don't understand what the function of a cache is on a hard drive. Can someone enlighten me? I'm thinking about buying one of those 80GB SE Western Digital hard drives, and I'd like to understand just what I'm buying.


Mar 3, 2002
Barcelona, Spain

A cache is, by definition, a fast place (relative to the device) to store frequently used data, just so you avoid going back to the device to re-read the same data that was written/read a few seconds ago... This was very important in the past, when PCs didn't have much main memory.

If it were just for the above paragraph, then a hard drive cache would only achieve *marginal* benefits under a modern operating system, because OS caches are usually bigger than 8 MB nowadays, and sit closer to the program (in main memory), which makes them a lot faster from the application's point of view.

BUT there is also the fact that they act as 'placeholders' or intermediate buffers, holding data that still needs to be written (write-behind cache) or data that the disk logic guesses will be needed next (read-ahead cache).

In the first case, the disk can fill its cache with write data and return an 'OK' well before the data hits the media, freeing the application from waiting for the disk write to complete. Note that this is quite risky in case of power loss, but it's common practice nowadays for home usage.
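To picture the write-behind idea, here's a toy Python sketch (nothing like real firmware, just the concept; all names here are made up for illustration):

```python
# Toy write-behind cache: writes are acknowledged immediately and
# only hit the "media" when flush() runs later.
class WriteBehindCache:
    def __init__(self):
        self.pending = {}   # sector -> data sitting in the cache
        self.media = {}     # what has actually reached the platter

    def write(self, sector, data):
        self.pending[sector] = data
        return "OK"         # returned BEFORE the data is on the media

    def flush(self):
        # If the power dies before this runs, self.pending is lost --
        # that's the risk mentioned above.
        self.media.update(self.pending)
        self.pending.clear()

cache = WriteBehindCache()
cache.write(7, b"hello")  # application gets "OK" instantly
cache.flush()             # only now is sector 7 really on the media
```

The application sees the fast 'OK'; the slow mechanical write happens behind its back.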

In the second case, since many disk reads are sequential, the disk firmware can start reading sector n+1 right after serving sector n, even before the OS has requested it, simply because it has the space (cache) to put the data.
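And a similar toy sketch for read-ahead (again, invented names, just to show the guess-the-next-sector idea):

```python
# Toy read-ahead: after serving sector n, also pull sector n+1 into
# the cache on the bet that a sequential read will want it next.
class ReadAheadDisk:
    def __init__(self, media):
        self.media = media       # sector -> data on the platter
        self.cache = {}
        self.media_reads = 0     # counts slow mechanical accesses

    def _fetch(self, sector):
        self.media_reads += 1
        self.cache[sector] = self.media[sector]

    def read(self, sector):
        if sector not in self.cache:
            self._fetch(sector)
        # speculatively read ahead one sector while we're here
        if sector + 1 in self.media and sector + 1 not in self.cache:
            self._fetch(sector + 1)
        return self.cache[sector]

disk = ReadAheadDisk({0: "a", 1: "b", 2: "c"})
disk.read(0)   # touches the media for sectors 0 AND 1
disk.read(1)   # sector 1 comes straight from cache
```

The second read never waits on the platter, which is the whole payoff.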

There is a third case, a refinement of the previous two, called 'command queuing'. This means the disk can store and reorder read/seek/write commands looking for the fastest sequence, then serve the resulting data to the host (PC) in that *optimal* order. Basically, if there are 3 read requests for sectors 1, 1000, and 2, it's a lot faster overall to serve them in the order 1, 2, 1000. For effective command queuing, the bigger the available cache, the better (more commands and intermediate results can be held simultaneously).
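The reordering can be pictured as a simple nearest-sector-first pass over the queue. Real firmware is smarter (elevator scheduling, rotational position, etc.), but this toy sketch reproduces the 1, 1000, 2 example:

```python
# Toy command queuing: greedily serve whichever queued request is
# closest to the current head position.
def reorder(queue, head=0):
    order = []
    pending = list(queue)
    pos = head
    while pending:
        nxt = min(pending, key=lambda s: abs(s - pos))  # nearest sector
        order.append(nxt)
        pending.remove(nxt)
        pos = nxt  # the head is now parked over that sector
    return order

print(reorder([1, 1000, 2]))  # -> [1, 2, 1000], as in the example above
```

Sectors 1 and 2 get served back to back before the long seek out to 1000, instead of seeking out and back again.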

Note that only SCSI drives and the IBM ATA drives implement command queuing (and under ATA you need special drivers to take advantage of it).

All in all, as a summary: a big cache is always better than a small one, but probably not by as much as in the past, especially under ATA and modern operating systems.

Hope this helps
