So I was taking a break from running wires under the floor when a buddy of mine sent me this link. I'm just going to pull a few things out of context, so please be sure to read the full article to get a better grasp of where they came from.
It's more difficult to overcome the challenges presented by flash manufactured at 25 nm than it was at 34 nm. But today's buyers should still expect better performance and reliability compared to previous-generation products. Succinctly, concern over the lower number of program/erase cycles inherent to NAND cells created using smaller geometry continues to be overblown.
Drive             | P/E Cycles | Total Terabytes Written (JEDEC formula) | Years till Write Exhaustion (10 GB/day, WA = 1.75)
25 nm, 80 GB SSD  | 3000       | 68.5 TBW                                | 18.7 years
25 nm, 160 GB SSD | 3000       | 137.1 TBW                               | 37.5 years
34 nm, 80 GB SSD  | 5000       | 114.2 TBW                               | 31.3 years
34 nm, 160 GB SSD | 5000       | 228.5 TBW                               | 62.6 years
You shouldn’t have to worry about the number of P/E cycles that your SSD can sustain. The previous generation of consumer-oriented SSDs used 3x nm MLC NAND generally rated for 5000 cycles. In other words, you could write to and then erase data 5000 times before the NAND cells started losing their ability to retain data. On an 80 GB drive, that translated into writing 114 TB before conceivably starting to experience the effects of write exhaustion. Considering that the average desktop user writes, at most, 10 GB a day, it would take about 31 years to completely wear the drive out. With 25 nm NAND, this figure drops down to 18 years. Of course, we're oversimplifying a complex calculation. Issues like write amplification, compression, and garbage collection can affect those estimates. But overall, there is no reason you should have to monitor write endurance like some sort of doomsday clock on your desktop.
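Just to make the table's math concrete, here's a quick Python sketch (mine, not from the article) that reproduces the "years till write exhaustion" column from the TBW figures above, assuming 10 GB of host writes per day. The table's TBW values already bake in the WA = 1.75 assumption, so the write-amplification knob below defaults to 1.0; the function name is just something I made up for illustration, and the rounding can be a hair off from Tom's figures.

```python
# Quick sketch: turn a rated TBW figure into "years of writes at 10 GB/day".
# The TBW values below come from the table above (they already assume WA = 1.75).

def years_until_write_exhaustion(tbw_terabytes, gb_per_day=10.0, write_amplification=1.0):
    """Estimate drive lifetime from rated host-write endurance.

    tbw_terabytes       -- rated total terabytes written (host writes)
    gb_per_day          -- average host writes per day
    write_amplification -- extra NAND writes per host write; left at 1.0 here
                           because the table's TBW already includes WA = 1.75
    """
    effective_gb_per_day = gb_per_day * write_amplification
    days = (tbw_terabytes * 1000.0) / effective_gb_per_day
    return days / 365.0

drives = {
    "25 nm, 80 GB  (68.5 TBW)": 68.5,
    "25 nm, 160 GB (137.1 TBW)": 137.1,
    "34 nm, 80 GB  (114.2 TBW)": 114.2,
    "34 nm, 160 GB (228.5 TBW)": 228.5,
}

for name, tbw in drives.items():
    print(f"{name}: ~{years_until_write_exhaustion(tbw):.1f} years")
```

The takeaway is the same as the article's: even at a heavy 10 GB/day, write exhaustion is decades out, and tweaking the write-amplification or daily-write numbers only moves it by years, not months.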
This page in the referenced article ties back to what Mr Alpha posted a while ago about SSD failure rates.
What Does This Really Mean For SSDs?
Let's put everything we've explored into some rational perspective. Here is what we know about hard drives from the two cited studies.
1. MTBF tells you nothing about reliability.
2. The annualized failure rate (AFR) is higher than what manufacturers claim.
3. Drives do not have a tendency to fail during the first year of use. Failure rates steadily increase with age.
4. SMART is not a reliable warning system for impending failure detection.
5. The failure rates of “enterprise” and “consumer” drives are very similar.
6. The failure of one drive in an array increases the likelihood of another drive failure.
7. Temperature has a minor effect on failure rates.
At the end of the day, a piece of hardware is a piece of hardware, and it'll have its own idiosyncrasies, regardless of whether it plays host to any moving parts or not. Why is the fact that SSDs aren't mechanically-oriented immaterial in their overall reliability story? We took the question to the folks at the Center for Magnetic Recording Research... ...Dr. Gordon Hughes, one of the principal creators of S.M.A.R.T. and Secure Erase, points out that both the solid-state and hard drive industries are pushing the boundaries of their respective technologies. And when they do that, they're not trying to create the most reliable products. As Dr. Steve Swanson, who researches NAND, adds, "It's not like manufacturers make drives as reliable as they possibly can. They make them as reliable as economically feasible." The market will only bear a certain cost for any given component. So although NAND vendors could continue selling 50 nm flash in order to promise higher write endurance than memory etched at 3x or 25 nm, going back to paying $7 or $8 per gigabyte doesn't sound like any fun either.
Reliability is a sensitive subject, and we've spent many hours on the phone with multiple vendors and their customers trying to conduct our own research based on the SSDs that are currently being used en masse. The only definitive conclusion we can reach right now is that you should take any claim of reliability from an SSD vendor with a grain of salt.
Giving credit where it is due, many of the IT managers we interviewed reiterated that Intel's SLC-based SSDs are the shining standard by which others are measured. But according to Dr. Hughes, there's nothing to suggest that its products are significantly more reliable than the best hard drive solutions. We don’t have failure rates beyond two years of use for SSDs, so it’s possible that this story will change. Should you be deterred from adopting a solid-state solution? So long as you protect your data through regular backups, which is imperative regardless of your preferred storage technology, then we don't see any reason to shy away from SSDs. To the contrary, we're running them in all of our test beds and most of our personal workstations. Rather, our purpose here is to call into question the idea that SSDs are definitely more reliable than hard drives, given the limited data available today to back up such a claim.
Anyhoo, a nice read from Tom's Hardware (Source).
The reason I took the time to post this was, more or less, the extremely common misconception that writes kill drives.

Another great link from XS about writes and drive endurance: http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm