SSDs, TRIM, and Available Disk Space

There is a lot of discussion on forums about the impact of available disk space on SSD performance. A pertinent question was raised: at what point does drive utilization impact SSD performance? Some quick tests produced rather insightful results.

Questions and Answers

Earthdog raised that question in this post on Overclockers Forums. It grew out of the prevailing wisdom professed across forum communities that available disk space has a significant impact on the performance you get from an SSD. There is considerable evidence supporting that stance, too.

Supporting Evidence for Excess Drive Capacity

For example, one method people use to reap the benefits of extra drive capacity is termed “over provisioning” or “short stroking” the SSD, as discussed in this article or this thread at the OCZ Forum. Check out the thread – their forum is a great source of information, especially the “Stickies”. Both terms are rather obtuse for what’s actually going on, however. “Short stroking” has previously been used in regard to optimizing RAID arrays built from traditional HDDs, where it fits well: it gains its advantage by shortening the stroke distance of the read/write head when accessing data from the platter, hence “short stroking”. Given the obvious absence of rotating platters and read/write heads in SSDs, the term doesn’t really mean anything for an SSD. “Over provisioning” is a bit more accurate, but still not very descriptive of the method. Whatever you want to call it, you gain the performance benefits by partitioning and formatting only a fraction of the drive’s total capacity. The remaining unformatted capacity is then used by the firmware’s TRIM/Garbage Collection (GC) to keep NAND blocks prepared – “free” – for writes.

What does “free” mean? Writing to a previously “used” NAND block requires a read/erase/write cycle that really bogs things down. If TRIM/GC has already operated on the block, the drive can simply write to it – things get done much faster.
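As a toy illustration of that difference, here is a tiny cost model. The microsecond figures are invented for illustration only, not measured from any drive:

```python
# Toy cost model: writing into a "free" (pre-erased) NAND block is a single
# program operation, while writing into a "used" block forces the drive to
# read the block out, erase it, and program it again. All timings below are
# invented illustrative constants, not measurements.
PAGE_PROGRAM_US = 200   # program one page (hypothetical figure)
BLOCK_READ_US = 100     # read the block's existing contents (hypothetical)
BLOCK_ERASE_US = 1500   # erase the whole block (hypothetical)

def write_cost_us(block_is_free: bool) -> int:
    """Approximate cost, in microseconds, to write a page into a block."""
    if block_is_free:
        return PAGE_PROGRAM_US                      # just program the page
    # "Used" block: read out, erase, then reprogram.
    return BLOCK_READ_US + BLOCK_ERASE_US + PAGE_PROGRAM_US

print("free block:", write_cost_us(True), "us")
print("used block:", write_cost_us(False), "us")
```

Whatever the real numbers are for a given drive, the erase step dominates, which is why keeping blocks pre-erased pays off.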

While we’re on the topic, I’ll share a few short words on TRIM/GC. TRIM requires support from both the drive firmware and the OS. It’s similar in concept to an application API, if you know anything about programming – at an elementary level, an API can be considered a built-in set of functions, but those functions don’t do anything unless they are called by some external process. Windows 7 is the only OS which issues TRIM commands natively – Linux can support it by patching the kernel, but kernel hacker Mark Lord doesn’t recommend that for the masses yet, and I’d take his advice. GC operates purely in firmware, so the drive’s controller keeps itself tidy – this benefits any OS. On OCZ Vertex drives like mine, you need to be running firmware v1.4 to get TRIM, or v1.41 to get GC. Firmware versions vary across makes and models, so you’ll need to do your homework for the specific drive you’ve purchased.

But let’s get back to understanding the original question, as well as why we are seeing the answers we’re seeing. At what point does drive utilization impact performance? I performed some rudimentary benchmarks, and this is where things started to get interesting to me.

Testing Platform

The tests were performed on a Thinkpad T400, running an OCZ Vertex 60GB drive I purchased locally at Microcenter. Searching the shelves, I couldn’t find a single OCZ drive – they keep them in the back, and a really helpful sales associate by the name of Paul grabbed one for me. Microcenter stores aren’t everywhere, but if you have one near you it’s worth a visit – they typically have a relatively solid staff of tech enthusiasts and geeks compared to other major retail outlets I’ve visited, and their hardware selection is great.

OCZ Vertex

The OS is 64-bit Arch Linux, and Linux doesn’t yet support TRIM natively. It does, however, have a manual TRIM utility written by Mark Lord called wiper, which can be downloaded here. This enables a level of control in this testing which isn’t possible on any other platform – specifically, it controls exactly when TRIM operations run, and therefore the state of the drive immediately prior to each test.

Test Methodology

My testing methodology is as follows:

1. Fill the drive to the desired capacity with garbage data
2. Run the Linux TRIM utility to ensure we’re writing to “free” NAND
3. Perform a test timing the amount of time it takes to write 4.1GB of data, and outputting the average throughput for that write operation
4. Delete the test data, repeat from step 2
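The loop above can be sketched in a few lines. This is a scaled-down illustration: the real runs wrote ~4.1 GB and ran wiper as root against the actual SSD, while the file path and size here are arbitrary placeholders so the sketch can run anywhere:

```python
import os
import tempfile
import time

# Scaled-down sketch of steps 2-4 above. The article's runs wrote ~4.1 GB;
# 64 MB is used here so the sketch runs quickly on any machine. Step 2
# (running wiper) needs root and a TRIM-capable SSD, so it is only noted.
TESTFILE = os.path.join(tempfile.gettempdir(), "ssd_write_test.bin")
SIZE_MB = 64

# Step 2 would be: sudo wiper.sh on the mounted filesystem (omitted here).

# Step 3: time a sequential write and report average throughput, as dd does.
buf = b"\x00" * (1024 * 1024)
start = time.time()
with open(TESTFILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # force data to disk, like dd's conv=fdatasync
elapsed = time.time() - start
print(f"{SIZE_MB} MB written in {elapsed:.2f} s, {SIZE_MB / elapsed:.0f} MB/s")

# Step 4: delete the test data so the next run starts from the same state.
os.remove(TESTFILE)
```

The fsync call matters: without it you mostly measure the OS page cache rather than the drive.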

The Results

Test runs on a trimmed drive at 87.3% capacity (93.9% capacity upon completion of each run):

4096000000 bytes (4.1 GB) copied, 20.917 s, 196 MB/s
4096000000 bytes (4.1 GB) copied, 20.9674 s, 195 MB/s
4096000000 bytes (4.1 GB) copied, 22.1089 s, 185 MB/s
4096000000 bytes (4.1 GB) copied, 20.4842 s, 200 MB/s

Test runs on a trimmed drive at 62.9% capacity (69.4% capacity upon completion of each run):

4096000000 bytes (4.1 GB) copied, 20.4235 s, 201 MB/s
4096000000 bytes (4.1 GB) copied, 22.7503 s, 180 MB/s
4096000000 bytes (4.1 GB) copied, 21.9374 s, 187 MB/s
4096000000 bytes (4.1 GB) copied, 21.1049 s, 194 MB/s

Understanding, Not Just Answers

While the testing was by no means rigorous, it’s clear from the results that the amount of available disk space has no tangible impact upon the performance of the drive in these tests. That contradicts what we’re commonly reading across the forums, as well as the performance improvements achieved by “short stroking” or “over provisioning” SSDs. So what gives?

In these tests, the available disk space is in a purely clean state where TRIM has been executed prior to each run. The drive is able to get directly to writing throughout the test, without any read/erase/write hoopla. These results demonstrate that drive performance is not directly related to available disk space – performance is relatively equivalent between a drive that’s half full and a drive that’s almost completely full.

So if drive utilization is just a bit of seasoning in the SSD performance cuisine, why are people seeing the advantages they are seeing from over provisioning/short stroking? The improvements they’re leveraging are a direct result of how TRIM/GC is implemented at the firmware/OS level. That implementation limits the drive’s performance relative to the amount of disk space available for TRIM/GC to work with. More available space lends itself to more efficient TRIM/GC operation, thereby improving real-world performance.
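One way to picture why free space helps TRIM/GC is some toy arithmetic. The block geometry below is invented for illustration and doesn't model any real controller:

```python
# Toy arithmetic: before GC can erase a victim block, it must first copy
# that block's still-valid pages elsewhere. The fuller the drive, the more
# valid pages an average block holds, so each reclaimed block costs more
# copying work. PAGES_PER_BLOCK is a hypothetical figure.
PAGES_PER_BLOCK = 64

def gc_copy_work(avg_fill: float) -> int:
    """Pages GC must relocate to reclaim one block at a given fill level."""
    return int(avg_fill * PAGES_PER_BLOCK)

for fill in (0.50, 0.90):
    print(f"{fill:.0%} full: copy ~{gc_copy_work(fill)} pages per reclaimed block")
```

On top of the per-block cost, a fuller drive also forces GC to run more often, so the two effects compound.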

Look forward to part 2 of my testing coming soon, which will quantify the effect drive utilization has on benchmarks running on Windows 7. Part 2 will examine and demonstrate real world SSD performance as it relates to available disk space.




Marshmallow64's Avatar
Thanks! Good article.
Mr Alpha's Avatar
Interesting article. I wonder if you wouldn't see bigger differences with random writes.
I.M.O.G.'s Avatar
I wonder also. I've installed and am evaluating sysbench currently - it's designed for benchmarking database performance, but it does random I/O testing on Linux, so it may suit our interests.

EDIT: To answer your question, preliminary testing gives no indication that the results are any different with random write performance, in relation to available disk space.

Preliminary random write performance testing using sysbench at 63.9% capacity:

Preliminary random write performance testing using sysbench at 83.5% capacity:

These followup tests were performed with the following command:
TollhouseFrank's Avatar
So you have tested Sequential and Random writes/reads apparently.

One thing I have wondered is how the drives stack up to true 'multi-tasking' like a mechanical drive... how fast is it while reading and writing at the same time?
Conumdrum's Avatar
I got one of the Intel 40 GB SSDs that was on sale in the middle of Jan. W7 sets it up, disabling Defrag; TRIM is okay I guess.

I use it as a boot drive. I have 11.5 GB free, is that enuff to allow room for it to do its thing? What do you think is the minimum empty area for all the goodness to continue?
I.M.O.G.'s Avatar
I can't say really, these tests imply available space doesn't have a noticeable impact. Under different conditions, maybe it does... I still haven't gotten around to loading Windows 7 and running a more thorough gamut of tests. When I get that done and the article is published, I'll have more to say that will be useful for people in your situation.

You also said "trim is okay I guess", you can issue this command from the command prompt:
fsutil behavior query DisableDeleteNotify

If disabledeletenotify comes back as 0, then trim is enabled and recognized correctly by your OS.
Conumdrum's Avatar
Yep, I'm Trim enabled, thanks.
bcsizemo's Avatar
I'm just not getting it. If Trim has been run on the drive then it really shouldn't matter how much space is being used. All the free space would be available to be written to. Isn't that the idea of Trim? (I haven't done much research, but that is the take away I get from this article.)

Now on an SSD that isn't using Trim I could see issues as the free space decreases. I would think the same thing would also apply to a drive that is written to heavily regardless of free space...

I'm looking forward to the rest of the testing.
I.M.O.G.'s Avatar
That's what the results suggest, that if Trim has been run it doesn't matter how much free space is available when the test is run. However, in real life daily usage people report better performance with more free space available, and they're making configuration changes to leverage that.
Mr Alpha's Avatar
Well kinda, TRIM alleviates the issue, but does not completely solve it on its own. Over provisioning the flash and data compression also help. I would guess the Vertex 2 Pro and other Sandforce controller based SSDs with the combination of TRIM, heavy over-provisioning and data compression have this problem solved.

As I understand it, the issue stems from how data is written to flash and the fact that you can't overwrite data on flash. Writing a piece of data to a flash block is simple: you just write it to a page in the block. Writing another piece is just as simple - write it to another page in the block. But modifying a piece of data is more complicated. Since you can't overwrite the data, what the controller does is write the modified data to a new page in the block and mark the old page as invalid. When the block starts to fill up, garbage collection happens: the controller, in the background, writes the valid data to a new block and wipes the old one.

The performance problem with full SSDs is that when the blocks are mostly full of valid data, even a little modification of data on a block fills it up with extra invalid pages, forcing GC to run. With a really full drive you are forced to run GC almost constantly, resulting in a performance drop.

What TRIM does to help is that without TRIM, even deleted data looks like valid data to the SSD controller, meaning that the amount of what the SSD thinks is valid data is much higher than what the OS reports. TRIM tells the SSD controller that the deleted data is invalid, letting it wipe that data out during GC.

You can still get the performance drop even with TRIM by just really filling up the SSD. The type of writes you do also matters: lots of small writes produce fragmentation, which exacerbates the issue. Since you are TRIMing, you might not be filling the drive up enough. You would probably have to go over 90% full to see the effect. You also need to be modifying the data to trigger the problem.

P.S. The GC I am talking about is not the same GC that is an alternative to TRIM people usually talk about when referring to SSD.

P.P.S. Because the Indilinx controller uses a largely static LBA mapping, it only uses the over-provisioned flash as a scratch-pad, while the Intel controller uses all empty flash blocks. This gives the Intel controller extra performance from having lots of empty flash blocks that the Indilinx controller doesn't get. Conversely, the Intel controller suffers a form of performance drop from filling up that the Indilinx controller doesn't.

P.P.P.S. Having lots of empty space also helps with wear-leveling, since the controller has a better chance of finding a flash page that has been written to relatively little.
link1305's Avatar
OCZ released version 1.5 of the firmware which supports both TRIM and GC without requiring different versions. You may want to run your tests again.
I.M.O.G.'s Avatar
Yes, I have it loaded, and it might be interesting to see the tests run again. Time lately has been crunched by work, and I place the Windows 7 capacity tests at a somewhat higher priority. But it's on my "want to do" list. Thx for the heads up.