
Testing SSDs to help eliminate defectives


Max0r (Member, joined Oct 18, 2005, Chicago Burbs)
For a long time I have been running newly acquired HDDs through two passes of quad bit-flip scans, as well as doing huge data copies and then actually using the copied data, to help quickly filter out defective drives. My HDDs have gone through these tests and are still with me years later.
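
For reference, by "quad bit-flip scans" I mean something roughly like the sketch below: fill the whole drive with each of four alternating byte patterns, then read each back and compare, in the spirit of badblocks -w. The device path is a placeholder and the whole thing is destructive, so it is only for a blank drive:

```python
import os

DEVICE = "/dev/sdX"                  # placeholder target: this test wipes the drive
BLOCK = 1024 * 1024                  # work in 1 MiB chunks
PATTERNS = (0xAA, 0x55, 0xFF, 0x00)  # the four "bit-flip" fill patterns

def device_size(path):
    # seek to the end of the block device to find its size
    fd = os.open(path, os.O_RDONLY)
    try:
        return os.lseek(fd, 0, os.SEEK_END)
    finally:
        os.close(fd)

def pattern_pass(path, pattern):
    """Fill the whole device with one byte pattern, then read it back and compare."""
    size = device_size(path)
    buf = bytes([pattern]) * BLOCK
    fd = os.open(path, os.O_WRONLY)
    try:
        for off in range(0, size - size % BLOCK, BLOCK):
            os.pwrite(fd, buf, off)
        os.fsync(fd)
    finally:
        os.close(fd)
    bad = 0
    fd = os.open(path, os.O_RDONLY)
    try:
        for off in range(0, size - size % BLOCK, BLOCK):
            if os.pread(fd, BLOCK, off) != buf:
                bad += 1
    finally:
        os.close(fd)
    return bad

for p in PATTERNS:
    print("pattern 0x%02X: %d bad chunks" % (p, pattern_pass(DEVICE, p)))
```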

However, with SSDs this kind of testing, even with TRIM and the automatic housekeeping some drives do on their own (it is called "garbage collection", I believe), seems to age the drives very quickly compared to normal usage. It seems a lot of torture tests end up killing drives that might otherwise have lasted years under normal use. Or is the defect rate on these drives simply very high?

How would you recommend stress testing an SSD to make sure it is a good one, without wearing it out in the process? I am new to the SSD game.
 
I myself have been thinking about the very same issue.

First thing: do not do those quad bit-flip passes. There are several problems with that. It is about the fastest way imaginable to wear out an SSD. It may also be useless for finding flaws, especially on a more advanced SSD, like something SandForce based, which does compression and data de-duplication, so you may not end up actually writing to most of the flash anyway.
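
That is also why a repeating pattern fill proves very little on such a controller: only incompressible, non-repeating data is guaranteed to actually reach the flash. A rough sketch of a single, limited write-and-verify pass with random data; the file path and sizes are made up for the example, and it only writes as much as you tell it to:

```python
import os, hashlib

TEST_FILE = "/mnt/ssd/testfile.bin"    # placeholder path on the mounted SSD
CHUNK = 4 * 1024 * 1024                # 4 MiB at a time
TOTAL = 1024 * 1024 * 1024             # write 1 GiB for the example

h_write = hashlib.sha256()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        chunk = os.urandom(CHUNK)      # random data: cannot be compressed or de-duplicated
        h_write.update(chunk)
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())

h_read = hashlib.sha256()
with open(TEST_FILE, "rb") as f:
    for chunk in iter(lambda: f.read(CHUNK), b""):
        h_read.update(chunk)

print("data read back intact" if h_write.digest() == h_read.digest() else "MISMATCH")
```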

One really needs to think about what kind of failures one is trying to detect. I suspect many of the early failures are the result of cutting-edge tech that hasn't had all the kinks worked out yet; modern SSD controllers are much more complex than what you find in hard drives, so you get bugs in the controller design or the firmware. That is not really something you can test for without a massive QA lab and thousands of SSDs; it is really something the SSD maker needs to do.

The other failure mode is the flash media wearing out from writes. Much of the SSD controller exists to deal with just this issue, and I don't know how much you can do about it from the outside. The problem is that the controller sits as an abstraction layer between the actual flash memory and the view the OS gets of it. Anything you want to do or test on the flash, you can only do with the controller as an intermediary.

You run into the same problem of the controller being in the way if you want to test for production flaws in the flash. To really test the underlying flash you either need some function in the controller that lets you do this, or a detailed enough understanding of how the controller works that you can get through it whether it wants to let you or not.
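
About the closest you can get from the OS side is asking the controller for its own health and wear counters over SMART. A minimal sketch using smartmontools, assuming the drive is /dev/sda; the attribute names differ between vendors (Intel reports Media_Wearout_Indicator, Samsung Wear_Leveling_Count), so the ones below are only examples:

```python
import subprocess

DEVICE = "/dev/sda"   # assumption: adjust to your SSD
WEAR_HINTS = ("Media_Wearout_Indicator", "Wear_Leveling_Count",
              "Reallocated_Sector_Ct", "Program_Fail_Cnt_Total")

# smartctl returns a non-zero bitmask for some healthy drives, so don't raise on it
out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=False).stdout

for line in out.splitlines():
    if any(hint in line for hint in WEAR_HINTS):
        print(line)
```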

Ultimately I'm at a loss.
 
With HDDs you have different things to worry about. Because of their moving parts and delicate head-to-platter design, they can become damaged between the manufacturer's testing and the final installation in a PC. Thus, a thorough test is in order to make sure nothing has happened in between.

With SSDs this is basically a non-issue. There can still be flaws, but SSDs are designed to be "flawed" out of the gate. That's why they have so much spare, over-provisioned space that can be mapped in as those flaws show up.

HDDs have become very large over time, so it has become more and more important to run these checks on new disks before putting them into service. No one wants to lose 2 TB worth of data if it can easily be prevented. SSDs are still fairly small, so I guess the best practice is to back up any SSD(s) in use to HDD(s), hope that your SSD is OK, and if it isn't, hope it fails before the warranty expires.
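
For what it's worth, a minimal sketch of that backup idea, assuming the SSD is mounted at /mnt/ssd and the HDD at /mnt/hdd (both paths are placeholders); rsync only copies what changed on repeat runs:

```python
import subprocess

SRC = "/mnt/ssd/"              # placeholder SSD mount (trailing slash = copy contents)
DST = "/mnt/hdd/ssd-backup/"   # placeholder folder on the HDD

# -a preserves permissions and timestamps, --delete mirrors removals on the backup side
subprocess.run(["rsync", "-a", "--delete", SRC, DST], check=True)
```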
 