
SSD Reliability...is it better than HDD? (Answers!?)


EarthDog
So I was taking a break from running wires under the floor and a buddy of mine sent me this link. I'm just going to pull a few things out of it, so please be sure to read the article to get a better grasp of the context they were taken from.

It's more difficult to overcome the challenges presented by flash manufactured at 25 nm than it was at 34 nm. But today's buyers should still expect better performance and reliability compared to previous-generation products. Succinctly, the lower number of program/erase cycles inherent to NAND cells created using smaller geometry continues to be overblown.

Drive               | P/E Cycles | Total Terabytes Written (JEDEC formula) | Years till Write Exhaustion (10 GB/day, WA = 1.75)
25 nm, 80 GB SSD    | 3,000      | 68.5 TBW                                | 18.7 years
25 nm, 160 GB SSD   | 3,000      | 137.1 TBW                               | 37.5 years
34 nm, 80 GB SSD    | 5,000      | 114.2 TBW                               | 31.3 years
34 nm, 160 GB SSD   | 5,000      | 228.5 TBW                               | 62.6 years

You shouldn’t have to worry about the number of P/E cycles that your SSD can sustain. The previous generation of consumer-oriented SSDs used 3x nm MLC NAND generally rated for 5000 cycles. In other words, you could write to and then erase data 5000 times before the NAND cells started losing their ability to retain data. On an 80 GB drive, that translated into writing 114 TB before conceivably starting to experience the effects of write exhaustion. Considering that the average desktop user writes, at most, 10 GB a day, it would take about 31 years to completely wear the drive out. With 25 nm NAND, this figure drops down to 18 years. Of course, we're oversimplifying a complex calculation. Issues like write amplification, compression, and garbage collection can affect those estimates. But overall, there is no reason you should have to monitor write endurance like some sort of doomsday clock on your desktop.
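
For anyone who wants to sanity-check the table, the "years" column is just the terabytes-written figure spread over the assumed 10 GB/day. A minimal Python sketch; the TBW values are copied straight from the table (the JEDEC formula that produces them, which also folds in the 1.75 write amplification, isn't reproduced here):

```python
# Back-of-envelope write-endurance check, following the article's reasoning.
# Illustrative arithmetic only, not a JEDEC endurance tool.

def years_until_exhaustion(tbw_terabytes: float, daily_writes_gb: float = 10.0) -> float:
    """How long the rated write budget lasts at a given daily write volume."""
    total_gb = tbw_terabytes * 1000          # TB -> GB (decimal units, as in the table)
    days = total_gb / daily_writes_gb
    return days / 365

# Reproducing the table's "years" column from its TBW column:
for label, tbw in [("25 nm, 80 GB", 68.5), ("25 nm, 160 GB", 137.1),
                   ("34 nm, 80 GB", 114.2), ("34 nm, 160 GB", 228.5)]:
    print(f"{label}: {years_until_exhaustion(tbw):.1f} years at 10 GB/day")
# -> roughly 18.8, 37.6, 31.3 and 62.6 years, matching the table within rounding.
```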

This PAGE in the referenced article goes back to what Mr Alpha posted a while ago about SSD failure rates.

What Does This Really Mean For SSDs?

Let's put everything we've explored into some rational perspective. Here is what we know about hard drives from the two cited studies.

1. MTBF tells you nothing about reliability.
2. The annualized failure rate (AFR) is higher than what manufacturers claim.
3. Drives do not have a tendency to fail during the first year of use. Failure rates steadily increase with age.
4. SMART is not a reliable warning system for impending failure detection.
5. The failure rates of “enterprise” and “consumer” drives are very much similar.
6. The failure of one drive in an array increases the likelihood of another drive failure.
7. Temperature has a minor effect on failure rates.

At the end of the day, a piece of hardware is a piece of hardware, and it'll have its own idiosyncrasies, regardless of whether it plays host to any moving parts or not. Why is the fact that SSDs aren't mechanically-oriented immaterial in their overall reliability story? We took the question to the folks at the Center for Magnetic Recording Research... ...Dr. Gordon Hughes, one of the principal creators of S.M.A.R.T. and Secure Erase, points out that both the solid-state and hard drive industries are pushing the boundaries of their respective technologies. And when they do that, they're not trying to create the most reliable products. As Dr. Steve Swanson, who researches NAND, adds, "It's not like manufacturers make drives as reliable as they possibly can. They make them as reliable as economically feasible." The market will only bear a certain cost for any given component. So although NAND vendors could continue selling 50 nm flash in order to promise higher write endurance than memory etched at 3x or 25 nm, going back to paying $7 or $8 per gigabyte doesn't sound like any fun either.


Reliability is a sensitive subject, and we've spent many hours on the phone with multiple vendors and their customers trying to conduct our own research based on the SSDs that are currently being used en masse. The only definitive conclusion we can reach right now is that you should take any claim of reliability from an SSD vendor with a grain of salt.
Giving credit where it is due, many of the IT managers we interviewed reiterated that Intel's SLC-based SSDs are the shining standard by which others are measured. But according to Dr. Hughes, there's nothing to suggest that its products are significantly more reliable than the best hard drive solutions. We don’t have failure rates beyond two years of use for SSDs, so it’s possible that this story will change. Should you be deterred from adopting a solid-state solution? So long as you protect your data through regular backups, which is imperative regardless of your preferred storage technology, then we don't see any reason to shy away from SSDs. To the contrary, we're running them in all of our test beds and most of our personal workstations. Rather, our purpose here is to call into question the idea that SSDs are definitely more reliable than hard drives, based on today's limited backup for such a claim.

Anyhoo, a nice read from Tom's Hardware (Source).

The reason why I took the time to post this was more or less because of the extremely common misconception about writes killing drives. :thup:

Another great link from XS about writes and drive endurance: http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm
 
Great information. I'd say stick this at the top, as there are always threads about SSD reliability. I've been on the SSD train from the early beginnings of it and have stood by the fact that they wouldn't release a product that would be worthless in a few years due to usage.

Glad to see this posted up. Maybe more people will buy SSDs and lower the cost for me :D God knows I've put a lot into the SSDs I've had over the years, lol. I've had many mechanical drives fail over the years in a shorter time than any of my 6 SSDs have. Honestly, the failure rate of the mechanical drives I've purchased vs. SSDs is ridiculous, and from my personal experience mechanical HDDs are more of a liability. Granted, the longest I used an SSD was 1 year at most, but I've had many HDDs fail in under a year.
 
Very nice compilation.
Now about those 5000 cycles, how many times can I do that to my HD?

It's like my power bill: it only costs me 30 cents for a kilowatt-hour, but the bill at the end of the month isn't 30 cents :-(
Say in some world of sheer magic the OS only writes 2 times in one hour.
That is 48 times a day, 1,440 times a month, and about 86,400 times in 5 years.

The MS OS is instead writing some useless junk to disk more like 2 times a second:
120 times a minute, 7,200 times an hour, 172,800 times a day, 5,184,000 a month, 62,208,000 times a year, and 311,040,000 in 5 years.
(Course I was never good at math, so someone might need to check that.)
Did my hard drive really handle 300 million writes to basically the same few locations without skipping a beat?

Luckily we can READ that stuff millions of times; it is only the writing that kills them.

And the ISP claims that normal people only use 3-4 GB of bandwidth a month too; that is ~100 MB a day, less than the size of one pig driver.
Anyone here normal?

The fast SSD items are RAID-type controllers; how does that affect losses? How many flash chips work as a team in a really fast SSD device?
.
 

Turn off virtual memory and system restore as well as have 6-16 gigs of memory:) should fix it.
 
The MS OS is instead writing some useless junk to disk more like 2 times a second ... Did my hard drive really handle 300 million writes to basically the same few locations without skipping a beat? ... And the ISP claims that normal people only use 3-4 GB of bandwidth a month too; that is ~100 MB a day, less than the size of one pig driver. Anyone here normal?
I'm really not sure what this post means (sorry!)...

Where/what in the OS is writing something to a drive 2x/second constantly? If that is true, which I find incredibly hard to believe, it's not writing to the same NAND cell anyway, is it?

How often are you DLing "pig" drivers? Once I DL ALL my drivers (which are actually on a storage HDD anyway), that's only a couple hundred MB, including my "pig" Nvidia drivers and print drivers. So yes, I'm quite normal, and that's with my wife, who also (legally) DLs music!

What people sometimes tend to forget is that most users here at this website are NOT 'normal' users. We play games more, have faster systems, generally know more about said systems, and DL more than, say, your average user (think mom/dad/aunt/uncle) who barely knows how to turn on and use a PC.
 
Turn off virtual memory and system restore as well as have 6-16 gigs of memory:) should fix it.

I have that stuff off; there are just a few writes left.
But I think that is a good point: controlling the MS OS itself could mean a very different life for an SSD. Logging and tracing can be turned off, paging moved, system restore tossed, hibernation not used; the event log causes trouble when you turn it off :-(

Firing up Sysinternals' "File Monitor" should be a given for SSD users on MS operating systems. There are just some things a user shouldn't have to see, but in this case maybe they should.
 
I'm really not sure what this post means (sorry!)...

.

Neither do I; it's me still wondering how to work an SSD.
I want it to make things FASTER, and that means using it for all the stuff: the web cache, the paging (as rarely as that's needed), and to make stuff fast it is useful to have writes go to it. The web is a lot faster with a RAM cache (for example), even though the bottleneck is the net itself.
Scratch disks on hard drives are waste-my-time slow; temps in RAM can really speed stuff up, lots more than one would imagine. I would want those on the SSD, but it is not practical.
I am not the type of person who reads the same thing over and over again, so I figure I will want it for many write cycles too.
I normally didn't cling to caches either. Keeping stuff in caches, assuming it will ever be used again, becomes more valuable; the problem is it doesn't get used again.
Say you go to YouTube: how many times are you gonna watch the same thing? It's one and done, but having a speedy cache for it to drop into is cool.
If I go to play Flash games or various downloadable ActiveX toys on the web, the downloads folder, maybe that stuff begins to live there instead of getting tossed?

In just one day of browsing, 100 cookies :) taking a cluster each. Even go clean up some "normal" users' cookies and they've got 175-900, and I swear they only visit one or two sites :)
A web site nowadays can have like 2-9 of these useless cookie items, many of which can just be stopped. The tracking ones keep updating (writing).
Advertisements galore; we're paying to "write" the life out of an SSD.

Maybe there are some new differences in how a person might tune and clean an SSD versus an HD, and how they would tune the OS for speed or longevity.
Everyone likes to clear and clean an HDD; with an SSD, clear it out and it could just encourage more writes again.

.
 
I will admit I do not know the inner workings of how an OS caches things, but so long as you are not constantly dumping the web cache and other caches so that they get refreshed, the data is already there and not being written to, so excessive writes in that respect aren't a problem.

The entire point of this thread was to shed some light on the MASSIVE (to me) amount of write data you can put on a drive before cell failure. Did it fail?

AFAIK, SSDs do not write like HDDs, in that they spread writes across the many NAND cells in the entire drive (wear leveling, sketched below). Seek/access time is so low that this is possible without experiencing slowdowns towards the 'end of the drive' like you do on platters.

Basically, it's much ado about nothing. I'm certain your concerns were already thought of (as that is how a normal user uses his PC). ;)
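
A toy sketch of the wear-leveling idea described above, assuming nothing more than a naive round-robin allocator (real SSD controllers use far more sophisticated flash translation layers, but the effect on any single group of cells is the same):

```python
# Toy model of wear leveling: even if the host rewrites the *same* logical block
# over and over, the controller remaps each rewrite to a different physical block,
# so no single group of NAND cells absorbs all of the erases.
from itertools import cycle

PHYSICAL_BLOCKS = 8
erase_counts = [0] * PHYSICAL_BLOCKS
allocator = cycle(range(PHYSICAL_BLOCKS))   # naive round-robin stand-in for a real FTL

for _ in range(80):                         # 80 rewrites of one logical block
    erase_counts[next(allocator)] += 1

print(erase_counts)                         # -> [10, 10, 10, 10, 10, 10, 10, 10]
```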
 
Assume IE.
Check out the web cache settings in Internet Properties.
Test the "Never" setting; this stops the normal refreshing of a web page that happens far more often on "Automatic". The pages you're viewing go "cold", but the speed-up indicates that more stuff was refreshing than necessary. I don't know if that has changed?
At times I have had it set to "Never" and just refreshed based on NEED instead; it was many times faster, with way fewer write items. Good for certain types of browsing, bad for forum updating.

Some sites are so clouded with junk that flat-out putting them into Restricted Sites (which stops all scripting) means 10 fewer garbage scripts run. Good for sites that have too much unnecessary stuff going on.
Those scripts run on each refresh, each page turn, and are again often updating with new (useless) data. 10 more useless writes in 2 seconds.

We just jump to a new site, and how many clusters get used? Then the big questions are how many of them get re-written just by going to the next page or clicking a link.

I am saying that in a flash we get a load of writes, and many of them useless. Even RAM-cache-based browsers will write out to disk to secure data, even though the cache is served from RAM again.

With SSDs and now much larger RAM capacities, maybe it is time to return to some of the RAM disk techniques that were abandoned because of the problems they can have.
 
Much ado about nothing, Psycho. What I think you are attempting to describe is normal use, and it would of course be included in the data written. There is no need to eliminate those types of writes (or the page file), as I believe those are the small files that SSDs actually excel at.
 
Just a thought.

SSD, emphasis on the solid state. All it has to do is sit there. On the other hand, an HDD is a complex machine in itself, with the moving head, the spinning platters, seeking to the right spot, and so on. Now which do you expect to fail first, and which to be more reliable? Especially with laptops, I see dead hard drives more than anything else.
 
Much ado about nothing, Psycho. What I think you are attempting to describe is normal use, and it would of course be included in the data written. There is no need to eliminate those types of writes (or the page file), as I believe those are the small files that SSDs actually excel at.

Yes, random small; SSDs should be pulling off random small reads very well.

There are other things that bug me too.
I see the tests: "we open up this program, it only takes 10 seconds with an SSD".
I am like, freak, it takes you 10 seconds to open that program? :) It only takes me 6.
"We do this task in this many seconds," and my well-tuned system does it in half that time using RAID 0.
"We sped up the boot, it only takes 59 seconds now." Wow, the old PK5 over there full-reboots in 55 with no RAID and no SSD, plus it changes the wallpaper and sounds, and by the time they are back in the system, the user is already browsing again.
And I did nothing to improve it other than a few minor tweaks and keeping it cleaned up.
And it had little to do with the HD; most of it is the BIOS time, drive recognition, and driver initialization.

They test a game: oooh wow, they open a game in 14 seconds. I am playing the game by the time they get it running :) just tossed out a few videos at the front, defragged the hard drive, and run RAID 0.

And I can write to my disk more than 5,000, or 100,000, times too?

I would put a disposable speed item in the computer to get speed, but they need to make a 2-terabyte one for, say, $200 :)
.
 
I run cache and a 6 GB page file on my ADATA S599 SSD that is in my laptop. Six months into it and I see no issues whatsoever. It still benches within 99% of original spec with ATTO. I use the crap out of my laptop, as it's my main rig until I finish my next build. So as for your fear of Windows running your SSD into the floor, I don't believe it to be a real issue.
 

People have had HDs or SSDs that read SMART data showing them at 100%, and they croaked the next day.
SMART predictability for SSD write cycles and HDD spin-up times is certainly useful, but look how many drives of either kind croak before the user gets even a few days of warning from SMART.

For laptops it is excellent, wouldn't have it any other way; one good bump of a hard drive, even with their magic bump fixers that allege to solve all those problems, and the price of an SSD is less than the first few head crashes and the resulting loss of data.

Here is another thing I don't get: there seems to be little effort on the part of the makers to utilize the external casing to cool the internal components; some disassemblies show more insulation than heat removal.

Are any of the manufacturers making any big efforts on longevity based on cooling, at least cooling of the RAM buffers and controllers?
How many of us would box that stuff up with no thought as to cooling? And I don't mean water cooling :)

They use MLC, even though it is known that SLC is many times more reliable. SLC is fully 0-and-1 digital; MLC needs more interpretation, and could be vastly improved, or even greatly increased in bits per cell, with new technology. SLC is rock solid and many times more reliable; MLC, I think, will be improved on greatly in the next few years.
When it comes to the cost of a fully reliable, many-many-times-write memory, it is SLC that should be compared cost-wise to HDs.

There are times when the reliability of SLC is what they discuss, while they sell MLC to us.
As we can see, EarthDog doesn't give us the kinds of BS we can get from other locations :thup:
.
 
Just a thought.

SSD, emphasis on the solid state. All it has to do is sit there. On the other hand, an HDD is a complex machine in itself, with the moving head, the spinning platters, seeking to the right spot, and so on. Now which do you expect to fail first, and which to be more reliable? Especially with laptops, I see dead hard drives more than anything else.
That's actually the main point of this thread... you may want to reread the highlighted portions. That sure was MY thinking too before this... :thup:


@ Psycho - If I see SDD one more time... :p ... SSD!

Psycho, I'm not sure what to tell you. There is plenty of data posted in this thread, and in many other places, that should put your concerns to rest. As far as the RAID 0 mechanical drives, again I'm not sure what you are going on about there. I would bet my life that most applications would open faster on an SSD vs. two of any mechanical drive in RAID 0. Access times are 70-140 times faster (assuming 0.1 ms for an SSD and 7-14 ms for mechanical), and in the case of last-gen SandForce, reads and writes are faster as well. With this generation of SF drives it would take 5 mechanical drives in RAID 0 to beat the throughput, but then you are still 70+ times slower on access.

It just seems like it's not for you, and that's OK, even though the writing is on the wall regarding the improvements you gain from just one drive.
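
For what it's worth, the 70-140x figure above is just the ratio of the assumed access times; a two-line check in Python using those same numbers:

```python
# Ratio of the assumed access times quoted above (0.1 ms SSD vs. 7-14 ms mechanical).
ssd_ms, hdd_min_ms, hdd_max_ms = 0.1, 7.0, 14.0
print(hdd_min_ms / ssd_ms, hdd_max_ms / ssd_ms)   # -> 70.0 140.0
```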

Here is another thing I don't get: there seems to be little effort on the part of the makers to utilize the external casing to cool the internal components ... Are any of the manufacturers making any big efforts on longevity based on cooling, at least cooling of the RAM buffers and controllers?
I implore you to look through the first post and the actual links/articles again. :)

Heat is really a non-issue with these drives. Even when beating on them for hours at a time, they are barely warm to the touch. Can't say the same for a Caviar Black or a Raptor. ;)

From above:
7. Temperature has a minor effect on failure rates.
 
Psycogeec, you're only talking about the number of writes done by the OS. That is not the relevant part in determining how many writes are done to the flash. It is the total amount of data written to the SSD that determines how many writes are done to the flash (along with write amplification). The OS writing 32 KB of data four times or 128 KB once has the same effect. There is a lot of logic in between the OS writing something and the writes happening to the SSD.
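
To put that point in code: a minimal sketch, borrowing the 1.75 write-amplification figure used in the first post's table (illustrative numbers, not a model of any real controller):

```python
# Flash wear tracks total bytes written (times write amplification),
# not how many individual write calls the OS makes.
def flash_bytes_written(host_bytes: int, write_amplification: float = 1.75) -> float:
    """Bytes actually programmed into NAND for a given amount of host data."""
    return host_bytes * write_amplification

KB = 1024
# Writing 32 KB four times costs the flash the same as writing 128 KB once:
print(flash_bytes_written(4 * 32 * KB) == flash_bytes_written(128 * KB))   # True
```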

Any data coming over the internet is largely irrelevant from the point of view of wearing out an SSD. Internet connections simply aren't fast enough to wear out an SSD in any reasonable amount of time.


What I found curious about the Tom's Hardware article is the claimed 10 GB/day of writes, since I've been doing way more than that. Now, I know I'm not an average computer user, but I'm surprised I would be so far outside the norm.
[Attachment: MySSDwrites.png]
Still, I'm not in the least bit worried. Firstly, it is a 300 GB SSD, so there is a lot of flash to wear out, and secondly, what everybody forgets is that the 3000-5000 P/E cycles of modern 25 nm flash is the minimum number of cycles it can handle. The average can be significantly higher than that.
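
If you want to see your own totals the way the attachment above does, many drives expose a lifetime host-writes counter via SMART. A rough sketch using smartmontools on Linux; the attribute name ("Total_LBAs_Written") and its unit vary by vendor (many drives report 512-byte sectors, some report larger units), so treat the result as an estimate and check your drive's documentation:

```python
# Rough estimate of lifetime host writes from SMART, via smartctl (smartmontools).
import subprocess

def total_host_writes_gb(device: str = "/dev/sda", bytes_per_lba: int = 512) -> float:
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Total_LBAs_Written" in line:
            raw = int(line.split()[-1])          # last column is the raw attribute value
            return raw * bytes_per_lba / 1e9     # bytes -> GB (decimal)
    raise ValueError("Drive does not report this attribute")

print(f"~{total_host_writes_gb():.0f} GB written so far")
```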
 