
SSD and Linux


perato (New Member, joined Jun 14, 2010)
Does anyone run Linux on an SSD? I use Arch Linux and followed the Arch Wiki's SSD page to set up partition alignment (heads/sectors), change the I/O scheduler, and enable TRIM via the discard mount option.
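For reference, the TRIM part just ends up as mount options in fstab; a minimal sketch, assuming ext4 and a TRIM-capable drive (the UUIDs are placeholders):

Code:
# /etc/fstab - example entries only; replace the UUIDs with your own
UUID=xxxxxxxx-root   /       ext4   defaults,noatime,discard   0  1
UUID=xxxxxxxx-home   /home   ext4   defaults,noatime,discard   0  2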

I use an HDD for /var. Does anyone use an SSD for /var? If so, how has it worked out? Currently I have / and /home on the SSD, and /var on a reiserfs partition on an HDD that also holds a partition for MythTV recordings.
 
I'm currently running Fedora 14 on a Kingston 128GB drive. I host everything off this, but I intend to move /home to another drive once I have enough leftover drives to do RAID 1.

It has worked great; it boots in less than 6 seconds and loads the desktop almost instantly. It takes longer to POST than to load the OS.

I have a 30GB SSD in my file server, but it isn't even formatted yet. I'm trying to decide whether to wait and get a larger drive.
 
I run Ubuntu 10.10 off an Intel SSD at work. The whole system is on the SSD, and I have a separate drive for larger junk (which I don't really need at work). I use ext4 everywhere, and my main use is programming; with this drive I can compile my work project (a few million lines of code) about 3 times faster than with my previous SATA drive (though the computer is completely different too).

Having the SSD at work has made me feel like my home computer is broken.
 
Is it just me (sorry for getting off topic...), or does it seem that POST is taking longer than it used to? I run Win7 off an SSD and I feel like most of the time spent booting up is POST. Or is it just that the rest of the boot is so much faster than on a normal HDD that the POST feels like it takes ages?
 

YYYYEPPPP! It's the darn BIOS! I get that with my Asus P5QL Pro.
 
You need to change your default scheduler from cfq (the default) to noop. This is the most important thing you can do. That can be changed by rebuilding the kernel (or by writing to a file under /sys, I forget exactly where - it would have to be done on each boot).

/tmp needs to be mounted either on a hard drive or on a RAM disk. I chose the latter. It's faster, but you lose the contents if you lose power, so autosaves don't always work right, etc. Also, the swap partition needs to either be turned off or put on a magnetic drive.
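(For anyone wondering, the RAM-disk route is just a tmpfs entry in fstab; a minimal sketch, with the size cap being an example value you'd tune to your RAM:)

Code:
# /etc/fstab - keep /tmp in RAM; contents are lost on reboot or power loss
tmpfs   /tmp   tmpfs   defaults,noatime,size=2G   0  0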
 

I have a couple of comments on this. Firstly, just as an FYI, you can
Code:
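# must be run as root - with plain sudo the redirect won't work; use: echo noop | sudo tee /sys/block/sda/queue/scheduler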
echo noop > /sys/block/sda/queue/scheduler

to change your scheduler. I have found that in Mint 10 this holds through reboots. That said, I have not noticed ANY appreciable difference from doing so, even though I continue to run this config.
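(If it doesn't stick for you, one common approach on GRUB2-based distros of that era is the elevator= kernel boot parameter; you can also check which scheduler is active at any time - it's the one in brackets:)

Code:
cat /sys/block/sda/queue/scheduler
# -> noop deadline [cfq]    (the bracketed one is active)
# To make it permanent, add elevator=noop to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then run: sudo update-grub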

On the partitioning side of things I would move /var off the drive as well, but this is only my personal preference. As for /tmp, I actually did have it in tmpfs, but my problem is that because /tmp was originally mounted on a different hard drive anyway, it fails to mount, so if I don't drop to the command line before logging in to GNOME and mount it manually (or just run mount -a), I get all kinds of weird errors in GNOME.

I didn't bother trying to fix this; I just put /tmp back on the mechanical drive.
 
I also did not notice any difference from changing the scheduler, but I would imagine the effect really depends on your workload.

Regarding the swap partition, I personally prefer to keep it on the SSD; if it does get used, at least it doesn't put the system into as much of a crawl. I have plenty of memory, so it shouldn't be used anyway. To make sure of that, I change the swappiness value to a lower setting so the system is more reluctant to start swapping in the first place (but will still do so if needed).

The default swappiness value is 60; it can be checked with

Code:
cat /proc/sys/vm/swappiness

You can change it to 10 by editing /etc/sysctl.conf and adding the following to the end:

Code:
# swap less
vm.swappiness=10

This requires a reboot, but if you want it to take effect immediately you can also set it at runtime:

Code:
sudo sysctl vm.swappiness=10

(this last one is lost on reboot, so remember to also add it to /etc/sysctl.conf)
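Alternatively, assuming your distro loads /etc/sysctl.conf at boot (most do), you can add the line to the file first and then apply it without rebooting:

Code:
sudo sysctl -p    # re-reads /etc/sysctl.conf and applies the values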
 
The scheduler matters less on drives with high random-write IOPS. Switching it still helps, though - and on some drives it's critical.

You want that swap partition on a magnetic drive, or turned off entirely. That's not only about speed - it's about drive life. Swap partitions see a lot of write cycles, which can really shorten the life of an SSD.
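If you want to see whether swap wear is even a concern on your box, it's easy to check how much swap is actually being used:

Code:
swapon -s    # lists swap devices and how much of each is in use
free -m      # the Swap: row shows used vs. total, in MB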
 