SSD for Database Server

McGrace

Member, joined Dec 21, 2006
SSD newbie here. Already posted one question in a thread. Now I have a new question.

Is it a bad idea to use an SSD for a database server, purely based on the technology (ignoring cost, as I am assuming over time it will level out at a reasonable price point)?

Uptime and reliability would obviously be important. But if performance were phenomenally increased, I don't see why a good backup regimen to mitigate the chance of catastrophic loss wouldn't be sufficient.

I ask the question in case the technology and usage pattern would basically cause you to burn through an SSD in a week, so to speak, or lose data significantly more often than with a mechanical drive.

Again, I am taking cost out of the equation.

Thanks!
 
How large of a database are we talking about and how critical is it?

If this is a production business-critical system, please research, think out and test anything I suggest before actually going live. I don't have experience with systems like this, but I can speculate and give ideas.

Killing a consumer-level drive through writes would still take years of heavy use: wear leveling spreads writes across the whole drive, and each cell survives many write cycles before it dies. Reliability of the drive should not be an issue, short of a faulty or DOA drive. I don't see any issues using a consumer drive.
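To put a rough number on that, here's a back-of-envelope estimate (the capacity, cycle count, and daily write volume are assumed figures, not taken from any datasheet):

# Rough wear estimate under assumed figures (not from any drive's datasheet)
#   120 GB drive, ~3,000 program/erase cycles per cell, ~20 GB written per day
CAPACITY_GB=120
PE_CYCLES=3000
DAILY_WRITES_GB=20
TOTAL_WRITE_GB=$((CAPACITY_GB * PE_CYCLES))    # ~360,000 GB the flash can absorb in total
DAYS=$((TOTAL_WRITE_GB / DAILY_WRITES_GB))     # ~18,000 days at that write rate
echo "Roughly $((DAYS / 365)) years of writes" # ~49 years under these assumptions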

You could even run RAID 1, if you want. The only downside I see to this is that TRIM does not go through RAID and I don't believe GC does either. I haven't needed to research this, so I don't know it off the top of my head.
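If you do go that route, a two-drive mirror with Linux software RAID is only a couple of commands (untested sketch; run as root, and the device names and mount point are examples):

# Mirror two SSDs with Linux software RAID (md); /dev/sdb and /dev/sdc are example devices
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0                      # format the mirror
mount /dev/md0 /var/lib/mysql           # example mount point for the database files
cat /proc/mdstat                        # confirm the mirror is syncing and healthy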

While the server is running, you could make periodic backups to another drive or as the database changes (if it doesn't change often).
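For example, with MySQL/InnoDB a consistent dump can be taken while the server stays up; a cron entry along these lines would do it (database name, user, password, and target path are placeholders):

# Nightly logical backup at 02:00; --single-transaction avoids locking InnoDB tables
# (dbname, backup user/password, and /mnt/backup are placeholders; % must be escaped in cron)
0 2 * * * mysqldump --single-transaction -u backup -p'secret' dbname | gzip > /mnt/backup/dbname-$(date +\%F).sql.gz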
 
For a server with lots of random I/Os, the performance improvement of an SSD is far bigger than what you see on the desktop. Here is somebody who tested an ioDrive for their MMO database.

For server usage I would definitely go with SLC-based flash. That will give you an order of magnitude more writes than what you get with MLC. That said, I wouldn't trust any data solely to the drive, but would have a robust backup solution in place.

While TRIM doesn't work on a RAID array, GC is internal to the drive and always works.
 
A backup would definitely be important for any live, production database. That of course is a given. What the question is directed at is whether the nature of typical hard drive access for a database causes any clear problems on an SSD.

Basically, is there a very large and obvious canyon I am driving towards? It's even ok if there is a canyon, I am just trying to avoid going into one that everyone else sees and already knows about.

The database would be a standard, small-business database running MySQL or something comparable. Talking probably a few hundred MB of data, no more. Reporting would be the heaviest use, so a lot of reading and searching. But of course, there is also a substantial amount of data entry.

Ideally, I was thinking of having the web server running on an SSD, with the database server on a separate machine. I want the customer to be waiting on their internet connection, not on my end responding to the call. (In other words, if they have cable internet, they see data essentially in real time from simple, small queries.)
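A quick way to sanity-check that split would be to time a trivial query from the web server against the database box (hostname, credentials, and database name below are made up):

# Run from the web server; db.internal, the app user/password, and shopdb are placeholders
time mysql -h db.internal -u app -papppass shopdb -e "SELECT 1"   # round-trip latency to the DB host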

I would not be concerned over data loss, since I am a backup maniac. I am worried about downtime: I want the best performance with a negligible change in uptime. From the info I have, it seems an SSD would give me that.
 
As long as you don't get a dud drive, you should be pretty good. Test it out before you put it into production. I'm doing the same, but not for SQL: HDDs for OS/backup, SSDs for the heavy I/O.
 
One thing to note is that many enterprise-level drives are over-provisioned by 20%, while their consumer-level counterparts only have about 7% spare area. The extra "hidden" flash on the drive is used for wear leveling, and from everything I've read, 20% seems to be the magic number for sustained performance. Lots of SSDs slow down as the drive fills up, and extra flash that the controller can use for wear leveling reduces that degradation.

So does that mean you need to get enterprise-level drives? No; it simply means that whatever drive you do get, research how much flash is physically on it and how much you can actually access. Then, when formatting, under-partition the drive so at least 20% of the total flash can be used as spare area. For example, a 120 GB Vertex 2 has roughly 128 GB of flash on board, so when formatting, only allocate 100 GB or so.
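In practice that just means not allocating the whole device when partitioning; a rough sketch (the device name is an example, and this assumes a blank, secure-erased drive):

# Leave the tail of a ~120 GB drive unallocated as extra spare area (/dev/sdb is a placeholder)
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary ext4 1MiB 100GB   # only ~100 GB allocated; the rest stays unused
mkfs.ext4 /dev/sdb1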
 
We've been using SSDs for our production database servers for about a year now. The performance difference switching from 15K SCSI to SSDs was so dramatic that it defies description. Using 8-core servers, 32 GB of RAM, 1.5 Gbps SATA, RHEL/CentOS 5/6, and PostgreSQL 8.4, we saw at LEAST a 90% reduction in query times. Real numbers are more like a 92 to 95% reduction (queries completing in roughly 1/20th of the time they used to take, while under load).
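For anyone wanting to reproduce numbers like these on their own workload, PostgreSQL's EXPLAIN ANALYZE and the bundled pgbench tool are enough for a before/after comparison (the database name and query below are placeholders):

# Time an individual query (mydb and the query are placeholders)
psql -d mydb -c "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;"
# Or run a canned benchmark before and after the storage swap
pgbench -i -s 50 mydb          # initialize a test dataset at scale factor 50
pgbench -c 10 -T 60 mydb       # 10 clients for 60 seconds; compare transactions per second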

Any database admin who doesn't insist strongly on using SSDs for the stunning performance increase is likely to be incompetent, IMHO.

We tried a number of different drives, and we did have some problems. Stay away from the SandForce drives. Intel 720s seem to be fine, as do the Crucial M4s. We didn't find that TRIM support was all that beneficial; instead, we run ours in a software RAID 1 configuration and are happy with the boost in reliability.

As stated before, we have had some failures, so it hasn't all been perfect, but the performance increase is well worth it. RAID 1 had a negligible impact in our testing, with about 10% write / 90% read DB usage.
 