
Drive Feature Performance Guide


Dukeman

Member
Joined
Jul 7, 2004
Location
Rocky Mountian High
I know that many people here understand everything I've written at a low level, but they've gotten there through extensive research. I wanted to write something quick that would help people get a grip on the real impact of all the performance features found in currently available drives. I won't go into excessive detail; I just want to provide an idea of what each feature does and how it affects performance.

I welcome any comment or real world observation to help add to the usefulness of this post.


Spindle Speed:
This one seems pretty straightforward: the faster the drive spins, the faster the data moves under the head, and the faster it gets into your system. All other factors being the same, AND granting that the firmware and other components can handle the higher speed, a faster spindle speed will translate directly into a faster data rate. Faster spindle speed also reduces rotational latency and average access time. Performance benefit is proportional to spindle speed increase.
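
To put a rough number on the rotational latency part: the average wait is half a revolution, so latency drops in direct proportion to spindle speed. A quick back-of-the-envelope sketch in Python (the RPM values are just common examples):

Code:
# Average rotational latency = half a revolution = 0.5 * (60 / RPM) seconds
for rpm in (5400, 7200, 10000, 15000):
    latency_ms = 0.5 * (60.0 / rpm) * 1000
    print(f"{rpm:>5} RPM -> {latency_ms:.2f} ms average rotational latency")
# 5400 RPM -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.00 ms, 15000 -> 2.00 ms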


Bit Density:
This is something of a hidden performance feature. For a given spindle speed, a higher bit density translates into a faster internal data transfer rate because more bits are passing under the head per revolution. This is a primary reason that, generation to generation, drives with the same spindle speed still get faster. Performance benefit is proportional to the increase in bit density.
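
As a rough sketch of that scaling (the numbers here are made up purely for illustration): sustained transfer rate goes roughly as linear bit density times spindle speed, so at a fixed RPM the rate grows with density.

Code:
# Hypothetical example: same 7200 RPM spindle, new platters pack 33% more bits per track
old_rate_mb_s = 60.0        # assumed sustained rate of the older generation (made-up figure)
density_gain = 1.33         # 33% higher linear bit density (made-up figure)
new_rate_mb_s = old_rate_mb_s * density_gain
print(f"New sustained rate: ~{new_rate_mb_s:.0f} MB/s")   # ~80 MB/s at the same spindle speed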

Bus Speed:
Bus speed is how fast the drive can transfer data between the controller and the drive's buffer, NOT the media. A WD Raptor has an internal transfer rate of 72MB/sec, which is only 72% of Ultra ATA 100 bus speed, so anything faster than UATA is not going to translate into faster speeds for a SINGLE drive system. Generally a bus can only support up to about 75% of its rated speed in actual data transfer due to command and timing overhead. The reason for faster bus speeds is generally to support multi-drive systems and EVENTUALLY faster single drive systems. RAID systems (especially SCSI) can actually use multiple drives to load the bus to its maximum transfer rate, but single drives cannot (unless you hooked a Raptor to an Ultra ATA 66 controller, but then I would have to question your priorities). SATA II and Serial Attached SCSI (SAS) specify expanders (hubs) that allow multiple drives to be attached to a single cable, allowing RAID on a single channel. In single drive systems, only if the drive has the requested data completely in the cache will it actually transfer at burst speed. This is rare, as most files large enough to notice a performance difference are larger than the cache. Performance benefit is generally only realized in multiple drive systems.
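
Here is a rough way to see why the faster buses mostly matter for multiple drives. Using the ~75% usable-bandwidth rule of thumb from above and the Raptor's 72MB/sec sustained rate (the drive counts are just illustrative):

Code:
# Rough sketch: usable bus bandwidth vs. what a single drive can actually deliver
buses = {"Ultra ATA/100": 100, "SATA 150": 150, "SATA II 300": 300}   # rated MB/s
drive_sustained = 72          # MB/s, single WD Raptor internal rate (from above)
for name, rated in buses.items():
    usable = rated * 0.75     # command/timing overhead eats roughly 25%
    drives_needed = usable / drive_sustained
    print(f"{name:<13} ~{usable:.0f} MB/s usable -> ~{drives_needed:.1f} Raptors to fill it")
# One Raptor can just about fill Ultra ATA/100; it takes roughly two to three
# drives in RAID before the faster buses earn their keep.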

Cache:
Cache is also an easy number to publish but difficult to quantify in daily use. The reason is that a large cache is only better than a small cache if you are using files that are larger than the small cache AND the file is contiguous on the disk, so that the file can be loaded into the cache at the fastest internal transfer rate. If the file is not contiguous, then in the time it takes the drive to find the next block the cache is already being unloaded to the bus. Performance benefit can be significant when working with files larger than 8MB (for a 16MB cache drive); the benefit is less when large files are fragmented.
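
A rough sketch of the timing involved (reusing the illustrative rates from elsewhere in this post): the bus can empty the cache faster than the platters can refill it, which is why bursts at cache speed are short-lived.

Code:
# Illustrative only: how long it takes to fill a 16MB cache from the media
# versus how long the host takes to drain it over the bus
cache_mb = 16
internal_rate = 72       # MB/s off the platters (Raptor figure from above)
bus_rate = 150           # MB/s, SATA 150
fill_time = cache_mb / internal_rate     # ~0.22 s to load the cache
drain_time = cache_mb / bus_rate         # ~0.11 s to push it to the host
print(f"Fill: {fill_time:.2f} s, drain: {drain_time:.2f} s")
# The bus drains the cache about twice as fast as the media refills it, so a
# long transfer quickly settles back to the drive's internal rate.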

Command Queuing:
SCSI has had command queuing for some time. Command queuing is the ability of the drive to reorder queued commands to optimize them for accessing the data. The data is then reassembled in the order the host requested it. This feature is used to access data physically scattered on the disk in a more efficient manner. Performance benefit can be 5-15% in heavy multitasking environments or with heavily multi-threaded applications where each thread is accessing data on the drive. The benefit is less in light or non-multitasking environments.
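
To illustrate the reordering idea, here is a toy sketch. Real NCQ/TCQ firmware also accounts for rotational position; this only reorders by block address so the head makes one sweep instead of bouncing back and forth, and all the addresses are made up.

Code:
# Toy sketch of command reordering: service requests in address order
head = 5000                                    # current head position (LBA, made up)
requests = [12000, 300, 9000, 4800, 25000]     # pending reads, in arrival order

def head_travel(start, order):
    travel, pos = 0, start
    for lba in order:
        travel += abs(lba - pos)
        pos = lba
    return travel

fifo_order = list(requests)            # service commands as they arrived
queued_order = sorted(requests)        # reordered into one sweep across the disk
print("Arrival order travel :", head_travel(head, fifo_order))    # 51800
print("Reordered travel     :", head_travel(head, queued_order))  # 29400
# The drive still returns the data to the host in the order it was requested.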

What Affects Data Transfer Speeds:
The hard drive is at the end of a chain that starts with the application. The data path looks like this:

Application
Operating System
Driver
Controller (This includes DMA controller and memory)
Cable
Drive


Each of these components affects transfer rate, adds overhead, and can affect the perceived performance of the drive. My favorite analogy is snail mail. Data is like a letter that goes through several steps, some slow, some fast, on its way to its destination. Even with identical brand parts and installations there are differences in performance due to sample-to-sample variation.

My advice when looking to improve drive performance is to strive for a noticeable improvement at the best cost, since this is what you will be experiencing every day.

Thanks to:
silentdebuggers - Note on Burst Speed
JCLW - Notes on Bus Expanders, Native Command Queuing and Access Time
 
Good information. I like the summaries in color for readability.

How about burst speeds though? Although the internal workings of a Raptor are slower, I can get a single SATA II drive (interface limit 300MB/s) to burst over 166MB/s using HDTach. Although that's never going to be a sustained rate, bus interface does matter for bursting in my experience with HDTach.
 
Thanks and good question. HDTach and other benchmarks are good for testing that the bus and cables are working properly (way faster than the drive's sustained transfer rate); however, they do these tests by artificially exploiting the bus speed. Only if the drive had the requested data completely in the cache would it actually transfer a file at burst speed. This is rare and not found in average real world use.

I will add this to the post.
 
Very nice, but:
Dukeman said:
A WD Raptor has an internal transfer rate of 72MB/sec which is only 72% of Ultra ATA 100 bus speed so anything faster than UATA is not going to translate into faster speeds for a SINGLE drive system. Generally a bus can only support up to 75% of its rated speed in actual data transfer due to command overhead and timing overhead. The reason for faster bus speeds is generally to support multi-drive systems and EVENTUALLY support faster single drive systems.
Each drive has its own channel, so currently bus speed is only a factor in burst speeds. In a SATA RAID array, each drive will still have 150MB/s available for use. So really anything faster than UATA is not going to translate into faster speeds for any current drive setup (i.e. I can burst over 400MB/s and sustain close to 250MB/s on my SATA-150 RAID 0 array).

I say current because the SATA II specs allow SATA hubs where you can connect multiple SATA drives to one SATA II controller. In this scenario you could max out the SATA channel. But I haven't actually seen any SATA hubs around yet.

--------

Dukeman said:
Performance benefit is modest for multi-threaded operations on a single-core processor. It can be significant in multi-core/multi-processor systems with multi-threaded applications; however, desktop use and applications have not exploited this feature yet.
Command queuing can improve performance in any instance of disk-intensive heavy multitasking, no matter how many processors you have (as long as your computer is not running at 100% CPU load). If you're not a heavy multitasker then it won't matter to you. How applications are coded won't change anything, as applications don't have any say in where they actually put data on the disk. The drive and controller manufacturers will probably be able to tweak their NCQ implementations a little more in the future.

--------

Access speed is very important. Increased spindle speed will reduce rotational latency (as the data comes around quicker). Unfortunately everyone's favourite app, HDtach, doesn't graph it so no one pays any attention to it.
 
JCLW, Good points. I'll make an edit tomorrow.

For SATA II there are supposed to be expanders for SATA and SAS (Serial Attached SCSI) that will let you mix and match the two and, as you say, expand a single channel to multiple drives. When these are available, then just like SCSI, bus speed improvements will have more significance.

Access time (seek time) and latency are difficult to measure from the host due to the number of layers in between the drive and the software. Besides, I wanted to provide a rundown on major enhancements. Still, adding more detail for those who want it can't hurt.

Do you have some sites that I can look at on Command Queuing performance impact? I'd like to reference them before I make any edits on that subject. Thanks.
 