
What to expect after SSD health hits 0%?


Doco

Member
Joined
Feb 25, 2010
What exactly happens when an SSD's health hits 0%? I was always curious about that.

ssd.png
 
You buy another one? :D
(Sorry, couldn't pass that up.)

From what I understand (so far), various cells will no longer read back what was written correctly. This is caught with a parity check or something similar, and those cells or whole blocks are then logged out so they are not used. The data is then written to a good location, taking a bit more time, and you go on. Eventually many sections will be logged out and a SMART error (on supported systems) will pop up warning you of impending doom, even though many sections of the drive are still in very good shape.

Wear-leveling features then discontinue the use of heavily used sections of cells and use fresh ones instead, so the wear is balanced. A lot of this depends on the controller: how much spare space it has for retiring end-of-life areas, and what kind of data-consistency checking it does.
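The remap-and-retire behavior described above can be sketched in a few lines. This is only a toy model: real controllers do this in firmware, and the block counts and erase limit here are made-up numbers.

```python
# Toy model of wear-leveling plus bad-block retirement.
# All sizes and the erase limit are illustrative assumptions, not real firmware values.

class ToySSD:
    def __init__(self, blocks=8, spares=2, max_erases=5):
        self.erase_counts = [0] * (blocks + spares)
        self.retired = set()          # blocks "logged out" as worn
        self.max_erases = max_erases  # pretend cells wear out after this many erases

    def write_block(self):
        # Wear-leveling: always pick the least-worn block still in service.
        live = [b for b in range(len(self.erase_counts)) if b not in self.retired]
        if not live:
            raise RuntimeError("drive exhausted")
        target = min(live, key=lambda b: self.erase_counts[b])
        self.erase_counts[target] += 1
        # Once a block hits its erase limit, retire it; a spare takes over.
        if self.erase_counts[target] >= self.max_erases:
            self.retired.add(target)
        return target

ssd = ToySSD()
for _ in range(30):
    ssd.write_block()
print("retired blocks:", len(ssd.retired))
```

Because the least-worn block is always chosen, the erase counts climb evenly across all blocks, which is exactly why a worn drive tends to have many blocks reach end-of-life around the same time.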

It can't be factually determined when a cell will actually fail, because each charged gate is a sort of Schrödinger's cat: we know it will die, but it could also live for years longer. And because cells are used in blocks, one bleeding paw means the whole cat is dead anyway :screwy: :cool:

It really doesn't look so bad. I quickly figured that one could have run for half a year at 12 hours a day, and will make it three more half-years, so more than 1.5 years total.
A hard drive spun up 24 hours a day would also be dead after 3-4 years, or about 6-8 years at 12 hours a day.

It is all about how many times the cells have actually been changed: you could do a lot more reading and less writing and be OK.

Then, as my last bit of analysis: only 1TB of data written and it's already a third gone :cry:

But I wouldn't claim to have completely understood it all yet. I am waiting for that counter to reach 0 and for you to tell us :)
 
Last edited:
As I understand it, the failure of a flash cell from being worn out by writes is relatively predictable. In theory, the oxide layer that keeps the electrons in the cell wears out, the electrons escape, and you lose the data. Given good wear-leveling, this should happen to all the flash blocks in the SSD at pretty much the same time. I suppose it is possible for a smart controller to figure out that the flash is nearing the end of its life and stop writing to the SSD to prevent data loss, basically turning it into a read-only device.

A problem with that application is that it seems to assume 10,000 write cycles, which may be true for some sorts of flash but is not at all true for others. This is especially problematic for the Agility-type drives, which use different types of flash.
 
I see, so the percentage it tosses up there is from some third-party software that just takes the numbers it has and assumes a maximum number of cycles.

This guy says they last 2 million cycles, and manufacturers have quoted 5 million.
http://www.storagesearch.com/ssdmyths-endurance.html
Some 54 years of operation :-O
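A figure in that ballpark falls out of a simple worst-case calculation like the one below. The drive size, cycle rating, and sustained write speed here are illustrative assumptions, not the article's exact inputs.

```python
# Worst-case endurance estimate: how long until every cell is written out,
# assuming perfect wear-leveling and nonstop writing at full speed.
# All three inputs are illustrative assumptions.

capacity_bytes = 64 * 10**9    # a 64 GB drive
cycles_per_cell = 2 * 10**6    # the 2 million erase/program cycles quoted above
write_speed = 80 * 10**6       # 80 MB/s of continuous writes

total_writable = capacity_bytes * cycles_per_cell   # total bytes before wear-out
seconds = total_writable / write_speed
years = seconds / (365 * 24 * 3600)
print(f"{years:.0f} years of nonstop writing")
```

Note how hard it is to actually hit that limit: nobody writes to a desktop drive at full speed around the clock for decades.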

Which doesn't explain why, in the other place, some people noted that when an SSD fails it just shows up as a system error. They haven't been making them for 54 years, have they :)
And from the same site:
He (David Flynn) said one organization (which he named) had installed RAID systems using Intel SSDs in a high performance environment.
About half the SSDs had "burned out" after a year.
Worse than that - when the customer investigated more closely they found that some SSDs had failed in a way which had not been detected by the RAID controllers.
(Design Recalls)

Way back in 2008, a loose consensus put the failure rate at ~10% annually, and that was when tech news was reporting it as news.


Many other places would say 100,000 cycles for today's versions, which would make the displayed SSD an infant.

OCZ Vertex claimed a 1.5 million MTBF.
Intel claimed their drives will last >5 years if you write 5GB per day to them.
Kingston SSDNow claimed a failure rate of 0.5% a year versus 4.9% for hard drives.
Corsair Nova: 3-year warranty, 1 million.
(Each claimed at one time or another.)
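Claims like Intel's can be sanity-checked with simple arithmetic: 5GB/day for 5 years is only around 9TB written in total, which even a modest drive could absorb many times over on cycle wear alone. The 80GB size and 10,000-cycle rating below are assumptions for illustration.

```python
# Sanity-check of a "5 GB/day for >5 years" style endurance claim.
# Drive size (80 GB) and cycle rating (10,000) are assumed for illustration.

daily_writes_gb = 5
years = 5
total_written_tb = daily_writes_gb * 365 * years / 1000   # host writes over 5 years

drive_gb = 80
cycles = 10_000
raw_endurance_tb = drive_gb * cycles / 1000               # raw writes before wear-out

print(f"written over 5 years: {total_written_tb:.1f} TB")
print(f"raw cell endurance:   {raw_endurance_tb:.0f} TB")
```

Even allowing for heavy write amplification, cycle wear isn't the binding limit under that workload, which fits the point made later in the thread that drives usually die of something else first.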

From AnandTech, this was interesting (and on topic :))
3rd Gen Intel X-25M
Standard MLC will last for 12 months after all erase/program cycles have been consumed. Enterprise-grade MLC will last only 3 months after exhausting all erase/program cycles, but will instead support many more cycles per cell.
 
The mistake people make is assuming wear-out is the only kind of failure SSDs suffer from. I don't know of anyone who has worn out an SSD yet, although I suppose some of those early, small SSDs might be starting to wear out by now. The thing is: just because it will take 50 years for an SSD to wear out doesn't mean it will last 50 years. It means it will likely fail for some other reason before then.

SSDs are cutting-edge tech, which means there will be unexpected failures. I do know of people whose SSDs simply died. They just didn't die from being worn out, but from a flaw in the production of the SSD, or maybe a flaw in the firmware design. Those kinds of things lead to unpredictable failures.

Another thing to remember is that the 10,000-write-cycle figure is generally for 5x nm MLC-class flash. The 3x nm MLC used in SSDs today only lasts around 5,000 write cycles, and the upcoming 2x nm flash will probably cut that in half again. On the other hand, if you double the size of the SSD, it will last twice as long.
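That trade-off works out linearly: total endurance is roughly capacity times cycle rating, so halving the cycles while doubling the capacity leaves it unchanged. A quick illustration (the drive sizes are assumed; the cycle counts are the rough figures from the post above):

```python
# Total raw writes before wear-out scale with capacity * cycle rating.
# Cycle counts are the rough per-node figures from the post; drive sizes are assumed.

def endurance_tb(capacity_gb, cycles):
    return capacity_gb * cycles / 1000  # TB of raw writes before cells wear out

old = endurance_tb(120, 10_000)   # 5x nm MLC, 120 GB drive
new = endurance_tb(240, 5_000)    # 3x nm MLC: double capacity, half the cycles
print(old, new)
```

Both come out the same, which is why shrinking flash hasn't made drive-level endurance collapse as fast as the per-cell numbers suggest.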

One also needs to keep in mind that there is a big difference between server and desktop usage. An SSD that is supposed to last more than long enough in a desktop (failing for some other reason before it wears out) may not last all that long in a server, both because the server workload can be many times heavier, and because the firmware in a desktop SSD isn't tuned for the kinds of I/O a server does, so wear-leveling algorithms and other protection measures don't do as well.
 
Bleeding edge: when the 1TB hard drives first came out, the failure rate was high, then the 1.5TB, then the 2TB; months later you could buy one and it might even work for years on end.
(Not to mention user error on all this new stuff.)

-------------------

SSD: the best place for them is the worst place for them :)
If they will read and read till the cows come home, then putting them in data-serving servers should be wonderful: one write, and masses of people feeding off it, reading hundreds of times. But they aren't multi-terabyte size like data-serving servers use, and out-of-the-blue failures of more items are inconvenient. I can pretty much predict when my hard drives are going to go, from the dust on them :)

Put it in as my "data" disk, write the data there once, and access that data for the rest of my life. But other than music, I am not likely to reuse that data many, many times, so there's no real need for its speed.

Put it in as my OS disk, and the system's stuff going on all the time, the great "improvements" that I prefer to shut off, will put as many writes on it as reads. So: control the OS and configure it for an SSD.

----

Like my regular flash chips, I am never going to have a write-too-much issue. My use is to pour all the data on, then access it over time, often many times.
Write 16GB to a camera flash chip, and it takes at least a whole day for one cycle. Put 8GB of music or movies on the phone's flash chip, and it can last the whole week. Put 32GB of movies on a media chip, and even in Blu-ray it could last a whole week.
 
You buy another one? :D
(Sorry, couldn't pass that up.)
Lol, nah. I've had it since June 4, 2010, so it's been almost 5 months since I installed Windows 7 on it.

So from these replies, the SSD becomes worthless after it's worn out and cannot be used anymore?
 
Lol, nah. I've had it since June 4, 2010, so it's been almost 5 months since I installed Windows 7 on it.

So from these replies, the SSD becomes worthless after it's worn out and cannot be used anymore?

Unless they let you open it up and replace the chips. And it is unlikely that the program will determine its exact moment of death.
I agree with Mr Alpha: it won't necessarily be due to "erase" cycles or writing. Many components make up the whole, each with its own possibility of failure, and the technology is new and changing.
 