I know that many people here understand everything I've written at a low level, but they've done this through extensive research. I wanted to write something that was quick and would help people get a grip on what the real impact was of all the performance features found in currently available drives. I won't go into excessive detail, just want to provide an idea of what each feature does and how it affects performance.
I welcome any comment or real world observation to help add to the usefulness of this post.
Spindle Speed:
This seems pretty straightforward: the faster the drive spins, the faster data moves under the head and into your system. All other factors being equal, and granting that the firmware and other components can handle the speed, a faster spindle translates directly into a faster data rate. Faster spindle speed also reduces rotational latency and average access time. Performance benefit is proportional to the spindle speed increase.
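To put numbers on the rotational latency claim, here is a quick back-of-the-envelope sketch. Average rotational latency is half a revolution, so it falls in direct proportion to spindle speed:

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency in milliseconds (half a revolution)."""
    return 0.5 * 60_000 / rpm

for rpm in (5400, 7200, 10000, 15000):
    print(f"{rpm:>6} RPM -> {avg_rotational_latency_ms(rpm):.2f} ms")
```

Going from 7200 RPM to 15000 RPM roughly halves the average wait for the data to rotate under the head, which is exactly the proportional benefit described above.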
Bit Density:
This can be a hidden performance feature. For a given spindle speed, a higher bit density translates into a faster internal data transfer rate because more bits move under the head per revolution. This is a primary reason that, generation to generation, drives with the same spindle speed still get faster. Performance benefit is proportional to the increase in bit density.
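The scaling is easy to see in a small calculation. The media rate is bits per track times revolutions per second; the drive figures below are made-up round numbers, just to show the proportionality:

```python
def media_rate_MBps(bits_per_track, rpm):
    # bits passing under the head per second, converted to MB/s
    return bits_per_track * (rpm / 60) / 8 / 1_000_000

old = media_rate_MBps(4_000_000, 7200)   # hypothetical older, lower-density platter
new = media_rate_MBps(6_000_000, 7200)   # same RPM, 1.5x the bit density
print(old, new)  # the denser drive reads 1.5x faster at the same spindle speed
```

Same 7200 RPM, but the newer-generation platter moves 50% more data per revolution, hence a 50% faster internal rate.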
Bus Speed:
Bus speed is how fast the drive can transfer data between the controller and the drive's buffer, NOT the media. A WD Raptor has an internal transfer rate of 72MB/sec, which is only 72% of the Ultra ATA/100 bus speed, so anything faster than UATA is not going to translate into faster speeds for a SINGLE-drive system. Generally a bus can only sustain about 75% of its rated speed in actual data transfer due to command and timing overhead. The reason for faster bus speeds is generally to support multi-drive systems and EVENTUALLY faster single-drive systems. RAID systems (especially SCSI) can use multiple drives to load the bus to its maximum transfer rate, but a single drive cannot (unless you hooked a Raptor to an Ultra ATA/66 controller, but then I would have to question your priorities). SATA II and Serial Attached SCSI (SAS) specify expanders (hubs) that allow multiple drives to be attached to a single cable, allowing RAID on a single channel. In a single-drive system, the drive only transfers data at burst speed if the requested data is already completely in the cache. This is rare, as most files large enough for the difference to be noticeable are larger than the cache. Performance benefit is generally only realized in multiple-drive systems.
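The single-drive bottleneck argument above can be sketched in a couple of lines: sustained throughput is the minimum of the drive's internal media rate and the bus's usable bandwidth (taken here as roughly 75% of rated speed, per the overhead estimate above):

```python
def sustained_MBps(internal_rate, bus_rated, bus_efficiency=0.75):
    # Whichever is slower, the media or the usable bus bandwidth, wins.
    return min(internal_rate, bus_rated * bus_efficiency)

print(sustained_MBps(72, 100))   # Raptor on UATA/100: drive-limited at 72
print(sustained_MBps(72, 150))   # same drive on SATA/150: still 72
print(sustained_MBps(72, 66))    # on UATA/66: now bus-limited at 49.5
```

The faster bus only helps once the drive side (or multiple drives sharing the cable) can actually fill it.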
Cache:
Cache is an easy number to publish but difficult to quantify in daily use. The reason is that a large cache is only better than a small one if you are working with files larger than the small cache AND the file is contiguous on disk, so that it can be loaded into the cache at the fastest internal transfer rate. If the file is not contiguous, then in the time it takes the drive to find the next block, the cache is already being unloaded to the bus. Performance benefit can be significant when working with files larger than 8MB (for a 16MB-cache drive), and smaller when large files are fragmented.
Command Queuing:
SCSI has had command queuing for some time. Command queuing is the ability of the drive to reorder pending commands to optimize access to the data; the data is then reassembled in the order the host requested it. This feature accesses data physically scattered across the disk more efficiently. Performance benefit can be 5-15% in heavy multitasking environments or with heavily multi-threaded applications where each thread accesses data on the drive; the benefit is smaller in light or non-multitasking environments.
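A toy sketch of why reordering helps: if we measure head travel as the distance between successive block addresses, servicing requests in sorted on-disk order (one elevator-style sweep) moves the head far less than servicing them in arrival order. The block addresses below are arbitrary examples:

```python
def head_travel(lbas, start=0):
    # Total distance the head moves servicing the requests in the given order.
    pos, total = start, 0
    for lba in lbas:
        total += abs(lba - pos)
        pos = lba
    return total

fifo = [500, 20, 480, 40, 460]       # order the host issued the requests
queued = sorted(fifo)                # order a queuing drive might service them
print(head_travel(fifo), head_travel(queued))
```

The drive still hands results back in the host's original order; only the physical service order changes, which is where the 5-15% under heavy load comes from.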
What Affects Data Transfer Speeds:
The hard drive is at the end of a chain that starts with the application. The data path looks like this:
Application
Operating System
Driver
Controller (This includes DMA controller and memory)
Cable
Drive
Each of these components adds overhead and can affect the perceived performance of the drive. My favorite analogy is snail mail: data is like a letter that goes through several steps, some slow, some fast, on its way to its destination. Even with identical-brand parts and installations, there are differences in performance due to sample-to-sample variation.
My advice when looking to improve drive performance is to strive for a noticeable improvement at the best cost, since this is what you will be experiencing every day.
Thanks to:
silentdebuggers - Note on Burst Speed
JCLW - Notes on Bus Expanders, Native Command Queuing and Access Time