
RESULTS: Short-Stroke Single 640GB WD Black

Can you do another test: set up a 200gig partition, then partition the rest, and bench it to see if there is a big difference?
 
I never see the point in "short stroking".

Modern (and not so modern) filesystems do that for you already anyways. They store data starting from the beginning (outer ring). So if you only have 20GB of your 640GB drive (partition) filled, it will perform the same as if you "short stroked" it to 20GB. Except you can use more space if you want, and you will always be using the fastest part.

You are just arbitrarily limiting your hard drive space without any benefit.

It only looks good in benchmarks because benchmarks only report the "average".

If a 20GB partition gives you 200MB/s on average and a 640GB partition gives you 50MB/s on average, then if you only use 20GB of the 640GB partition, that 20GB will also give you 200MB/s.

If it's such a great idea (sacrificing the space that's not needed anyways for speed), why aren't hard drive manufacturers doing it?
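For anyone who wants to see the outer-vs-inner-ring difference on their own drive, here's a rough Python sketch. The device path and the 640GB size are just placeholders for whatever drive you're testing; it needs admin/root rights, and it only reads, so it won't touch your data:

[CODE]
import os, time

DEVICE = r"\\.\PhysicalDrive1"   # Windows raw-device path; use "/dev/sdb" or similar on Linux
DISK_SIZE = 640 * 10**9          # rough size of the 640GB drive
CHUNK = 64 * 1024 * 1024         # read 64MB at each test position

def read_speed(offset):
    # open read-only; O_BINARY only exists (and is only needed) on Windows
    fd = os.open(DEVICE, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        start = time.time()
        remaining = CHUNK
        while remaining > 0:
            data = os.read(fd, min(remaining, 4 * 1024 * 1024))
            if not data:
                break
            remaining -= len(data)
        return (CHUNK - remaining) / (time.time() - start) / 1e6  # MB/s
    finally:
        os.close(fd)

# sample the start, middle and end of the disk
for fraction in (0.0, 0.25, 0.5, 0.75, 0.95):
    offset = int(DISK_SIZE * fraction) // 4096 * 4096  # keep offsets sector-aligned
    print(f"{fraction:4.0%} into the disk: {read_speed(offset):6.1f} MB/s")
[/CODE]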
 
Because looking at a benchmark that remains much the same regardless of what you do is boring.

Missed my point, I think. Snappiness in the OS and responsiveness in the majority of tasks are related to those two. So for HDDs, is how fast they can sequentially read/move data all we care about?

How much would a 200GB SSD cost? :thup:

Oh sure, it would cost exponentially more, but it offers exponentially more performance if you want to start comparing access times, random 4k MB/sec performance, and random IOPS.
 
I never see the point in "short stroking".



You are just arbitrarily limiting your hard drive space without any benefit.

It only looks good in benchmarks because benchmarks only report the "average".

It's all about latency, man...

But it's all hearsay till you try it yourself :beer:
 
Can you do another test: set up a 200gig partition, then partition the rest, and bench it to see if there is a big difference?

I tried to bench this first, but I couldn't find a good program that would bench only the partition. HDTach benches the whole drive. (I foresee the results being the same if I had 2 partitions and only benched the first partition)
 
I tried to bench this first, but I couldn't find a good program that would bench only the partition. HDTach benches the whole drive. (I foresee the results being the same if I had 2 partitions and only benched the first partition)

Atto should allow you to do this.
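If ATTO doesn't cooperate, a file-level test confined to a folder on the partition gets at the same thing. Here's a quick-and-dirty Python sketch; the D:\bench path and 2GB file size are just examples, so point it at whatever partition you're benching:

[CODE]
import os, time

TEST_DIR = r"D:\bench"        # any folder on the partition you want to measure (example path)
FILE_SIZE = 2 * 1024**3       # 2GB test file
BLOCK = 1024 * 1024           # 1MB blocks

os.makedirs(TEST_DIR, exist_ok=True)
path = os.path.join(TEST_DIR, "testfile.bin")

# sequential write
buf = os.urandom(BLOCK)
start = time.time()
with open(path, "wb") as f:
    for _ in range(FILE_SIZE // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())      # make sure it actually hit the disk
print(f"write: {FILE_SIZE / (time.time() - start) / 1e6:.1f} MB/s")

# sequential read (numbers are optimistic if the file still fits in the RAM cache)
start = time.time()
with open(path, "rb") as f:
    while f.read(BLOCK):
        pass
print(f"read:  {FILE_SIZE / (time.time() - start) / 1e6:.1f} MB/s")

os.remove(path)
[/CODE]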
 
Modern (and not so modern) filesystems do that for you already anyways. They store data starting from the beginning (outer ring). So if you only have 20GB of your 640GB drive (partition) filled, it will perform the same as if you "short stroked" it to 20GB. Except you can use more space if you want, and you will always be using the fastest part.
Wow, talk about dead wrong. What have you read to espouse such nonsense? Install an aftermarket defragmenter and do a scan which will show you the physical location of data on the drive. You will find the data scattered all over the drive until you actually complete a defragmentation run. Windows based defragmenters won't even do this.
 
Wow, talk about dead wrong. What have you read to espouse such nonsense? Install an aftermarket defragmenter and do a scan which will show you the physical location of data on the drive. You will find the data scattered all over the drive until you actually complete a defragmentation run. Windows based defragmenters won't even do this.

He's right... Windows will scatter stuff over a complete partition... it doesn't leave it to the outer ring.

OT... umm, wth happened to the smilies?
 
It's all about latency, man...

But it's all hearsay till you try it yourself
Haha, yes, I know latency matters. I use an SSD.

But it's the same with latency. If you only use 20GB of your 640GB partition, it will be more or less the first 20GB, and latency for those files will be the average latency of a 20GB partition.
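If anyone wants to put numbers on the latency side of this, here's a rough sketch that times random reads confined to the first 20GB of the raw device versus reads spread over the whole 640GB. The device path and sizes are placeholders; it needs admin/root and is read-only:

[CODE]
import os, random, time

DEVICE = r"\\.\PhysicalDrive1"   # or "/dev/sdb" on Linux; placeholder, adjust to your drive
SECTOR = 4096                    # keep reads sector-aligned
SEEKS = 200                      # number of random reads per test

def avg_access_ms(span_bytes):
    # time SEEKS random single-sector reads confined to the first span_bytes of the device
    fd = os.open(DEVICE, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    try:
        start = time.time()
        for _ in range(SEEKS):
            offset = random.randrange(span_bytes // SECTOR) * SECTOR
            os.lseek(fd, offset, os.SEEK_SET)
            os.read(fd, SECTOR)
        return (time.time() - start) / SEEKS * 1000
    finally:
        os.close(fd)

print(f"random reads within the first 20GB: {avg_access_ms(20 * 10**9):.1f} ms average")
print(f"random reads across the full 640GB: {avg_access_ms(640 * 10**9):.1f} ms average")
[/CODE]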

Wow, talk about dead wrong. What have you read to espouse such nonsense? Install an aftermarket defragmenter and do a scan which will show you the physical location of data on the drive. You will find the data scattered all over the drive until you actually complete a defragmentation run. Windows based defragmenters won't even do this.
If it's a new drive, and you start writing files to it, it will start filling up from the beginning.

Fragmentation happens because files are deleted, and it would be costly (in terms of performance) to defrag whenever such a gap appears. The reason why they are not doing it is because they figured the performance increase of moving everything to the front is not enough to justify the time it takes to do it. That's the reason why almost-full partitions are slower. You are forcing the filesystem driver to eliminate any gap in order to fit in more data. And you are essentially telling it to defrag on every write, which is slow.

So, can they make sure there is no gap? Of course. Why are they not doing it? Because they thought it would actually make your hard drive slower. By short-stroking, you are forcing the driver to do it.

Even with those gaps, data will be clustered at the beginning. If you have 20GB of data, it will probably be within 30GB of the beginning of the partition.

So if you short-stroke it to 20GB, and use 19GB of it, chances are, it's going to be slower than using 19GB of a 640GB partition.
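That claim is checkable, at least on Linux: filefrag (from e2fsprogs) reports each file's physical extents. Here's a rough sketch that walks a partition and reports the highest physical block any file actually occupies. The mount point is hypothetical, it may need root, and the parsing assumes the usual filefrag -v output format:

[CODE]
import os, re, subprocess

MOUNT_DIR = "/mnt/data"       # hypothetical mount point of the partition to inspect

max_block = 0
for root, _, files in os.walk(MOUNT_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            out = subprocess.run(["filefrag", "-v", path],
                                 capture_output=True, text=True).stdout
        except FileNotFoundError:
            raise SystemExit("filefrag not found; install e2fsprogs")
        # extent lines look like "  0:  0..  1279:  34816..  36095: ..."
        for line in out.splitlines():
            m = re.match(r"\s*\d+:\s*\d+\.\.\s*\d+:\s*(\d+)\.\.\s*(\d+)", line)
            if m:
                max_block = max(max_block, int(m.group(2)))

print(f"highest physical block used by any file: {max_block}")
print("compare that against the filesystem's total block count to see how far in the data sits")
[/CODE]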
 
Fragmentation happens because files are deleted, and it would be costly (in terms of performance) to defrag whenever such a gap appears. The reason why they are not doing it is because they figured the performance increase of moving everything to the front is not enough to justify the time it takes to do it. That's the reason why almost-full partitions are slower. You are forcing the filesystem driver to eliminate any gap in order to fit in more data. And you are essentially telling it to defrag on every write, which is slow.
File fragmentation and data placement are two different things. Speaking of data placement as I was, programs like Perfect Disk do this when defragmenting files, and the benefits of doing so are documented.
If it's a new drive, and you start writing files to it, it will start filling up from the beginning.
BS. Can that happen? Sometimes. I certainly don't claim to know exactly how data placement is determined, but the software is there to determine where it is. Do a fresh install and map the data; it's all over the place. Take an empty drive used for data, throw some MP3s and video files on it, map it. Heck, I've thrown my page file on an empty drive and I still have to use Perfect Disk to move it to the front of the disk before dumping data on it.
 
Those are some very strong words.

No, I have not worked on a filesystem driver, much less the NTFS driver (which I am assuming is what we are talking about), and my statement was based on logical speculation and my understanding of UNIX filesystems and how they do block allocations, so I stand corrected if what you said is true.

It does not make much sense, however. Why WOULDN'T the driver do it?

Do you have any scientific sources, besides personal anecdote, to back up your claim?
 
I tried researching this issue, and it seems like information is few and far between.

It may be easier for us to just conduct the test ourselves.

Has anyone tried a file-based (not disk or partition based) benchmark? bonnie++? (It's pretty unheard of here, but it's a very popular, extensive benchmark outside of the Windows world.)

I'm not exactly in the mood to re-format my hard drives a few times, and besides, I am on an SSD. Since Bios24 already has this set up, can you run it? (Or anyone else, if you want to.)
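For whoever does run it, here's roughly how I'd point bonnie++ at the short-stroked partition from a Linux live CD or similar. The mount point is hypothetical, and double-check the flags against the man page since I'm going from memory:

[CODE]
import subprocess

subprocess.run([
    "bonnie++",
    "-d", "/mnt/shortstroke",  # a directory on the partition under test (hypothetical mount point)
    "-s", "8192",              # total test file size in MB; use at least twice your RAM
    "-n", "0",                 # skip the small-file creation tests
], check=True)                 # add "-u", "<user>" if you start it as root
[/CODE]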
 
Do you have any scientific sources, besides personal anecdote, to back up your claim? Nothing is anecdotal about it. The results can be repeated over and over.

I tried researching this issue, and it seems like information is few and far between. I agree.

It may be easier for us to just conduct the test ourselves. I've done so many times on multiple builds and installs (the Windows environment, where most of us are). I stated my findings and stand behind them. I'd bet anyone else's results would be similar.

Has anyone tried a file-based (not disk or partition based) benchmark? bonnie++? (It's pretty unheard of here, but it's a very popular, extensive benchmark outside of the Windows world.) Not a benchmark, but Perfect Disk, Diskeeper, O&O Defrag, etc. can all optimize Windows system files ("smart placement") in order to speed up the boot process. I've picked up a few seconds here and there testing a fresh vs. optimized install. I used to keep such data; I really don't care to anymore. You can use the software to analyze file placement as well.

I'm not exactly in the mood to re-format my hard drives a few times, and besides, I am on an SSD. Since Bios24 already has this set up, can you run it? (Or anyone else, if you want to.) I'm not interested either; I know how the story ends. Take care.
 
Not a benchmark, but Perfect Disk, Diskeeper, O&O Defrag, etc. can all optimize Windows system files ("smart placement") in order to speed up the boot process. I've picked up a few seconds here and there testing a fresh vs. optimized install. I used to keep such data; I really don't care to anymore.
Note that I am not arguing whether defrag helps. It's whether short stroking helps.

Have you tried short-stroked optimized vs non-short-stroked optimized?
 
Note that I am not arguing whether defrag helps. It's whether short stroking helps.

Have you tried short-stroked optimized vs non-short-stroked optimized?
Not with a hardware-based short-stroke like the OP's, which I consider the hard way to do what a partition or another RAID array via Intel's Matrix RAID would do. And yes, the files end up in the same place with the same boot times and program launch times. What's behind it in the empty space, whether just free space, another partition (formatted or uninitialized), or say another RAID array using Intel Matrix RAID, doesn't matter in my tests. My final say on short-stroking is that if you have a program like Perfect Disk, it will improve file placement instead of just squeezing files closer together with no optimization, which negates the benefit of short-stroking. This may very well be worth a handful of seconds, but that's part of the "hobby", right?
 
Ah, so you DO agree that short stroking really doesn't help :). Might as well have free space behind the data.
My argument was with your statement on default file placement on hard drives within Windows OSes. It isn't "fastest to slowest", it's a bit random. Short-stroking confines the random placement, and $30 software guarantees the best placement. Either is worth a handful of seconds at best.
 
Either is worth a handful of seconds at best.

Apart from those 1 or 2 seconds: I've seen a rig running a few VMs, each with its own 10 to 20GB virtual disk, and when those vdisks were mildly fragmented, those poor VMs would sometimes come to a screeching halt when running apps with disk-intensive activity. I mean, we're talking about 5 to 10 seconds or even more. :rolleyes:

Once fully defragmented, the heavy stuttering problem was gone. :D
 
I still find it strange that Windows, by default, allocates blocks THAT poorly (if what you said is true). It certainly makes sense to leave some space (leaving no gap means files will get fragmented after each append), but are you sure it's random? If I were to guess, they are doing it for a reason. If even we know drives are faster near the beginning, I'm guessing they know, too.

It would be a very serious performance issue (since NTFS is M$'s main FS) that can be easily fixed.

I tend to be wary of programs that claim to more or less magically give you better performance. Programmers make up all kinds of BS to get our money.

If it's such a useful and simple concept, why isn't M$ doing it?

Remember "memory cleaner" programs that were very popular a while ago? They are less than useless (actually make computers slower), yet many people swore by them. Placebo effect is significant.
 
If the allocation is that bad by default, how would it work on say a 3 platter drive? Would it do things like allocate the data unevenly to the platters (favoring one over the other)? Would short stroking then use the fastest part of each platter or could you end up with a situation where the partition you've created is made entirely on 1 platter (out of the 3) thus hurting performance?
 