
Gentoo SSD


Since the Seagate 500 GB 7200 RPM drive has not come back in stock for 4 weeks, I decided to buy an OCZ Vertex 60 GB instead. I feel it is destiny. I always love running bleeding-edge hardware/software :D.

~$140 for 500 GB vs ~$180 for 60 GB. Hopefully it will be worth it. I remember when I got my first Raptor: totally worth it!!

Anyway, on to the real business. I am trying to prepare everything for the new drive so that once it arrives I can update the firmware and jump right in. I plan on running everything bleeding edge: probably either btrfs or ext4, and trying out all the latest and greatest packages.

I have a few concerns though. I am running this on a laptop and I only have 4 GB of RAM. I plan on running at least /tmp and Portage's build directory in memory with tmpfs. I am worried that /tmp will get really big if the laptop is on for a few days, and that Portage will eat all my memory when emerging large packages. Do you guys find 4 GB to be enough RAM for this?
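
Something like this in /etc/fstab is roughly what I had in mind (sizes are just guesses for a 4 GB machine, and /var/tmp/portage assumes the default PORTAGE_TMPDIR):

    # /tmp and Portage's build directory in RAM
    tmpfs   /tmp               tmpfs   size=1G,noatime,nodev,nosuid   0 0
    tmpfs   /var/tmp/portage   tmpfs   size=2G,noatime                0 0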

Second concern is long-term performance and restoring the drive to its original state. Obviously I do not want to reinstall everything every time performance degrades. Any tips on backups and restoring them?
 
I have a few concerns though. I am running this on a laptop and I only have 4 GB of RAM. I plan on running at least /tmp and Portage's build directory in memory with tmpfs. I am worried that /tmp will get really big if the laptop is on for a few days, and that Portage will eat all my memory when emerging large packages. Do you guys find 4 GB to be enough RAM for this?

My system has 4 GB of RAM and I hardly see it use any swap space. I have had the system on for over a week with two Folding@home clients running and a Firefox session with an absurd number of tabs open, along with some other random stuff, and had barely 2 GB of RAM in use. My /tmp folder only had 10 MB of files in it. I'm not too familiar with how Portage works; doesn't it basically download and compile the source but otherwise work like a package manager? The largest updates I've seen aren't more than a few hundred MB in precompiled form. You also shouldn't have lots of junk piling up in your /tmp folder after the computer has been on for days. 4 GB of RAM is a lot, and I find it hard to use it all doing day-to-day things. You could try a tmpfs of 2 GB and see if you ever use that much; if not, just lower the size. I doubt you will use the 2 GB of free RAM while doing most non-gaming tasks. But if you do, then at least your swap will still be in RAM lol.
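
If you want hard numbers before picking a size, the usual tools will tell you (nothing fancy here, just standard commands):

    free -m          # overall RAM and swap usage, in MB
    df -h /tmp       # how full the tmpfs mount actually is
    du -sh /tmp      # total size of what is currently sitting in /tmp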

Second concern is long-term performance and restoring the drive to its original state. Obviously I do not want to reinstall everything every time performance degrades. Any tips on backups and restoring them?

Not too sure what you mean by your performance breaking. I can't imagine an SSD having issues with fragmentation, not to mention that ext4 and btrfs use delayed allocation, which significantly reduces and possibly eliminates fragmentation. Unless you are talking about the issue those Intel SSDs have where, after a certain amount of writes, the drive slows itself down. I don't know much about that or whether it can be reset. Does the Vertex suffer from this?

But as far as backup and recovery goes, you can just copy everything from the drive to your backup source, do what you will with the drive, format it, and then copy everything back. Either use your file manager as root or use the cp command, also as root. You may also need to restore your boot loader, depending on what was done to the drive.
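
Something along these lines is what I mean; the mount points and device are just examples, and the rsync flags assume you care about hard links, ACLs, and xattrs:

    # back up the running system to an external drive, staying on one filesystem
    rsync -aHAXx --numeric-ids / /mnt/backup/
    # later: repartition/format the SSD, mount it at /mnt/ssd, copy everything back
    rsync -aHAXx --numeric-ids /mnt/backup/ /mnt/ssd/
    # reinstall the boot loader if the MBR was wiped (GRUB legacy shown)
    grub-install --root-directory=/mnt/ssd /dev/sda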
 
Every SSD is going to suffer from internal fragmentation, and currently there are not many effective ways to recover from it without starting over from scratch. The OCZ Vertex does use write combining to reduce it, but it is still going to happen no matter what.

I rarely see anything go to swap either, but there have been times when I was doing heavy computation and my memory usage got close to 90%. That was an extreme case, while I was working on an evolutionary computation project for school. I just want to make sure that I have enough memory to run without a swap partition.
 
I have a few concerns though. I am running this on a laptop and I only have 4 GB of RAM. I plan on running at least /tmp and Portage's build directory in memory with tmpfs. I am worried that /tmp will get really big if the laptop is on for a few days, and that Portage will eat all my memory when emerging large packages. Do you guys find 4 GB to be enough RAM for this?
For most stuff, it should be enough. When it isn't enough, it'll start swapping.

Second concern is long-term performance and restoring the drive to its original state. Obviously I do not want to reinstall everything every time performance degrades. Any tips on backups and restoring them?
Do you mean with emerge -e or wiping the SSD out to clear the controller?
 
For most stuff, it should be enough. When it isn't enough, it'll start swapping.

Yeah, that's what I am afraid of. I will not have a swap partition, since I only have one drive in the laptop. At home I have an external eSATA drive I could run swap off of, but that doesn't help me when I am not at home. Would it be safe to make maybe a 512 MB swap partition and set the swappiness very low, so it only resorts to swap in extreme cases?
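
Roughly what I am picturing; the swappiness value is a guess and the device name is just an example:

    # /etc/fstab: a small swap partition on the SSD
    /dev/sda2   none   swap   sw   0 0

    # tell the kernel to avoid swap unless memory is really tight
    echo "vm.swappiness = 10" >> /etc/sysctl.conf
    sysctl -w vm.swappiness=10    # apply it now without rebooting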

Do you mean with emerge -e or wiping the SSD out to clear the controller?

In order to restore the drive to its former glory you must do a low-level format to reset everything (AFAIK?). I don't really want to format everything and then re-emerge the whole system; that is obviously very inefficient. I was thinking of making an image of my system, but I hear that's not the best idea either. Just wondering if anyone knows of a better way. My hope is that I don't have to resort to these measures for a while, but you never know: some Intel SSD users report it happening in a matter of days. Hopefully the OCZ drive will handle itself a bit better, but I always think worst case.
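
From what I have read, the ATA secure-erase feature (through hdparm) is what people use to put a drive back to its factory state. Roughly like this, though I have not tried it myself and the device name is only an example, so don't quote me on it:

    hdparm -I /dev/sdb            # check the Security section; the drive must not be "frozen"
    hdparm --user-master u --security-set-pass somepass /dev/sdb
    hdparm --user-master u --security-erase somepass /dev/sdb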
 
Yeah, that's what I am afraid of. I will not have a swap partition, since I only have one drive in the laptop. At home I have an external eSATA drive I could run swap off of, but that doesn't help me when I am not at home. Would it be safe to make maybe a 512 MB swap partition and set the swappiness very low, so it only resorts to swap in extreme cases?



In order to restore the drive to its former glory you must do a low-level format to reset everything (AFAIK?). I don't really want to format everything and then re-emerge the whole system; that is obviously very inefficient. I was thinking of making an image of my system, but I hear that's not the best idea either. Just wondering if anyone knows of a better way. My hope is that I don't have to resort to these measures for a while, but you never know: some Intel SSD users report it happening in a matter of days. Hopefully the OCZ drive will handle itself a bit better, but I always think worst case.

And why would an image not be the best idea? Links to evidence?

I'm genuinely curious, because that would mean I've been doing it wrong for so long :(
 
I have not really looked into it all that much, but I did see it somewhere on the OCZ forums (I believe). That was a while ago, so I am not sure how much of it still applies today; SSD hardware/software tech is evolving rapidly.

If I remember correctly (I read this a while ago), there are a few problems. One of the major ones is that the image being written back may not end up properly aligned with the erase blocks, so you must make sure everything is aligned when you write the image back, and supposedly that is hard to do?? The next problem has to do with imaging SSDs repeatedly. If you image a drive, write the image back to it, and then make an image again, you would expect the second image to be exactly the same, but if you repeat the process over time you will notice that the image slowly gets bigger and more fragmented. It has to do with how an SSD stores your data anywhere on the flash, whereas an HDD usually writes files in some kind of order (it works from the outside in). So repeatedly writing the same image causes extra internal fragmentation.
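
If you want to see where your partitions actually start, something like this shows the start sectors; whether they line up with the erase block size is another question, since that size varies by drive:

    fdisk -lu /dev/sda    # -u lists partition boundaries in 512-byte sectors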

This translates into: you create an original image, then over time you install new things, so you create a new image. You use that new image, but eventually you update it again. So you are basically doing the same thing, just not in such an exaggerated way. I guess using the same image over and over would be fine, but I don't think that is realistic in a home/desktop situation where things constantly change.

Again, don't quote me on any of this until I can find where I read it, but I think that's the basic gist of it. Kinda makes sense, I guess? I was hoping someone else knew something about it :D.
 
I have not really looked into it all that much, but I did see it somewhere on the OCZ forums (I believe). That was a while ago, so I am not sure how much of it still applies today; SSD hardware/software tech is evolving rapidly.

If I remember correctly (I read this a while ago), there are a few problems. One of the major ones is that the image being written back may not end up properly aligned with the erase blocks, so you must make sure everything is aligned when you write the image back, and supposedly that is hard to do?? The next problem has to do with imaging SSDs repeatedly. If you image a drive, write the image back to it, and then make an image again, you would expect the second image to be exactly the same, but if you repeat the process over time you will notice that the image slowly gets bigger and more fragmented. It has to do with how an SSD stores your data anywhere on the flash, whereas an HDD usually writes files in some kind of order (it works from the outside in). So repeatedly writing the same image causes extra internal fragmentation.

This translates into: you create an original image, then over time you install new things, so you create a new image. You use that new image, but eventually you update it again. So you are basically doing the same thing, just not in such an exaggerated way. I guess using the same image over and over would be fine, but I don't think that is realistic in a home/desktop situation where things constantly change.

Again, don't quote me on any of this until I can find where I read it, but I think that's the basic gist of it. Kinda makes sense, I guess? I was hoping someone else knew something about it :D.

Yep, it makes sense, but it also seems like an easy problem to fix. Wonder when we'll see imaging software geared toward SSDs...
 
I am sure we will see lots more software geared towards SSDs; there are many areas where we could easily see improvements. Look at btrfs: the author specifically says he built it with SSDs in mind. So people are starting to recognize them.

The more I read about SSDs, the more I want mine!! Even with a well-used drive, the performance degradation isn't enough to turn me off. Sure, it is bad, but we are comparing that to the original state of the drive, not to the old HDD you switched from. Most well-used SSDs will still outperform a mechanical HDD in most benchmarks. The drive I am coming from is absolutely terrible, so anything is better than it. Even an OCZ Core with stuttering would be better than what I have now. :) People seeing 5-second lags: welcome to my world, and I have a mechanical drive. I dare not enter Vista, it is horrible.
 
I'm not seeing much info on this internal fragmentation outside of those Intel X25-M drives.

Some background info. First, sector remapping (a custom solution from Intel) is a method that makes sure wear and tear on the drive is spread over the entire space instead of just a small area (which would cause the drive to fail earlier). Intel's algorithm unfortunately makes files become fragmented eventually, and the defragmenting software currently on the market just screws things up further.

I've also found a few sources saying that, in general, NTFS is a problem on all SSDs.

http://www.reuters.com/article/pressRelease/idUS223074+28-Oct-2008+MW20081028
"The problem goes back to the NTFS file system, which is employed by all current Microsoft operating systems. This file system is optimized for hard drives, but not for SSDs. As data is saved to an SSD, free space is quickly fragmented. Writing data to these small slices of free space causes write performance to degrade to as much as 80 percent -- and this degradation will begin to appear within a month or so of normal use. The problem erodes speed, which is of course a primary value of an SSD."

My understanding is that NTFS writes too much into the free space after files, which causes fragmentation. Really, I'm not sure why this is a problem, since SSDs do not have the latency caused by a drive head seeking out parts of files scattered all over the drive (fragmentation); reading one point on the SSD versus another should take the same amount of time. But anyway, ext4, and I would suppose btrfs, are more intelligent than NTFS about where they write files and how much space they leave after them for file growth. I'm pretty sure ext3, ReiserFS, and XFS all do this as well.
 
While fragmentation is not an issue when reading from an SSD, it might be when writing. SSDs are VERY slow when doing a lot of small, random writes.

I run a quad-core AMD with 4 GB of RAM and a 60 GB SSD. Write performance would continuously lock up the system. DO NOT run a swap partition on an SSD. Not only does it degrade the SSD, but more importantly it kills your performance (when the SSD is waiting to be written to, the whole system temporarily locks up).

There are some things you can do to speed things up.

1) Do not run /usr/portage off the SSD. I run mine off an NFS mount.
2) Do not run /tmp off the SSD; mount it on a ramdisk (tmpfs). Be aware that this makes some things really temporary (if you reboot, they are gone).
3) Change your I/O scheduler to noop; never use anticipatory or cfq (it's a kernel setting, see the sketch below).
4) Do not run ccache off the SSD.
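
For reference, this is roughly how I set the scheduler; sda is just an example, and the runtime change does not survive a reboot while the kernel parameter does:

    # switch the SSD to the noop elevator at runtime
    echo noop > /sys/block/sda/queue/scheduler
    cat /sys/block/sda/queue/scheduler    # the active scheduler shows up in [brackets]

    # or make it permanent by adding this to the kernel line in grub.conf
    elevator=noop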

Basically, you want to minimize writing to the disk. Reads are fast, but writes are horribly slow if not sequential.

I never use 4 GB of RAM, even with part of it set aside as a ramdisk.

I ended up adding a 40 GB magnetic drive to speed things up for apps that do a lot of writing. I mount a swap partition there now, and I try to keep the SSD to static content. I even mount /var/tmp on the magnetic drive.
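
The relevant fstab lines for the magnetic drive look roughly like this (the partition layout is just an example):

    /dev/sdb1   none       swap   sw        0 0
    /dev/sdb2   /var/tmp   ext3   noatime   0 2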

You will see constant freezing if you are trying to frequently write to an (MLC) SSD.

I am not sure how suspend works with this; my system is a desktop, and suspend-to-disk writes to disk, so if that is mounted on a ramdisk, it might cause issues.
 
Here is a really good article that explains why SSD drives degrade over time, why they stutter, and how they actually work. It is a bit long but a good read. It basically boils down to the fact that SSDs don't erase blocks when you delete files, but rather when the drive writes to that block again, so the write lag comes from erasing the block and then writing to it. This is why people are saying that adding the TRIM function to SSDs will help so much.

http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=1

While fragmentation is not an issue when reading from an SSD, it might be when writing. SSDs are VERY slow when doing a lot of small, random writes.
This is not so true of the newer SSDs that are not based on the JMicron controller. The Vertex can perform faster random writes than a VelociRaptor, even when worn. (These tests are based on an older firmware as well; they are even faster now.)
http://www.anandtech.com/storage/showdoc.aspx?i=3531&p=25

1) Do not run /usr/portage off the SSD. I run mine off an NFS mount.
2) Do not run /tmp off the SSD; mount it on a ramdisk (tmpfs). Be aware that this makes some things really temporary (if you reboot, they are gone).
3) Change your I/O scheduler to noop; never use anticipatory or cfq (it's a kernel setting).
4) Do not run ccache off the SSD.

Good catch, I did not even think about ccache. Since I am going to be running this on a laptop, I don't really have the luxury of putting some of these things on a secondary drive, so I might just have to keep /usr/portage on the local drive. Unless you can actually run the system without it, i.e. at home run it off an NFS mount and on the road just live without it (no new programs).

I am not too worried about the life of the drive; if it lasts a year I will be happy. Hopefully by then newer, bigger, badder drives will be out. This drive will be an experiment.
 
I would run it all over NFS or on ramdisks (/usr/portage over NFS, /tmp on a ramdisk, etc.). As long as you don't often need to emerge programs while away from the LAN, it should be fine. You could also set up a VPN.
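
The /usr/portage share is just a normal NFS entry in fstab, something like this (the server name is an example):

    fileserver:/usr/portage   /usr/portage   nfs   noatime,intr   0 0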

Of course, if you really had to do it, emerge --sync will recreate /usr/portage locally.

My OCZ is 60 GB, but I am not sure whether it uses a JMicron controller or the one in the Vertex, or how to tell. It is about 4-5 months old, I believe, if that helps.

My reason for making the mods I did was really performance, not drive life, although that is an added bonus.
 
I was thinking about a VPN as well; it isn't a terrible solution. It would probably be slower than paint drying, but statistically I am at home more often than not. Then again, when using a computer you don't notice things when they are fast, only when they are slow. Maybe when I get the drive I will try it both ways; if I find it degrades performance quickly, I can put Portage on a network share. I don't often emerge --sync or even update, I am just too lazy. This is an experiment, after all.

My OCZ is 60 GB, but I am not sure whether it uses a JMicron controller or the one in the Vertex, or how to tell. It is about 4-5 months old, I believe, if that helps.
OCZ Core = JMicron
OCZ Apex = JMicron
OCZ Core V2 = JMicron
OCZ Solid = JMicron
OCZ Vertex = Indilinx
OCZ Summit = Samsung

Anything other than JMicron shouldn't have any problems with stuttering. All drives currently out will suffer performance hits the more you write to them; stuttering is different from aging, though.
 
I "think" mine was a core v2. I cannot swear to that though. That being said, I was disappointed by performance before I tweaked it.

I just keep one /usr/portage shared over NFS from the file server. I actually do not run Gentoo on my laptop, although it runs on everything else (except a router/firewall/proxy/etc. box running IPCop). However, my laptop is a netbook with a 1.6 GHz Intel Atom. My main reason for avoiding Gentoo was compile time on that slow processor. I also wanted to preserve the SSD (which, interestingly, is perfectly fine performance-wise, although I did do some tweaking, like the change to noop, which any SSD user should make). Gentoo is hard on SSDs, especially smaller ones (less space to spread the wear over), with all the compiling and the updating of the Portage cache.

Maybe I will look at an upgrade to my SSD. I did feel I jumped into that market a bit early, but I was doing a totally new build, so I went with what I could get. I chose it over the Raptors because it was cheaper (more per GB, but I use a tiny amount of my drive since all storage/media runs off my file server, so it just holds the OS). If I didn't have more than one OS, I'd be OK with 16 GB.
 
Maybe I will look at an upgrade to my SSD. I did feel I jumped into that market a bit early, but I was doing a totally new build, so I went with what I could get.

I think it is still early for SSDs, but I have a feeling that by the end of the year they will have worked out most of the bugs. I would look at getting a new drive when they start supporting the TRIM function and SATA 6Gb/s.

My main reason for avoiding Gentoo was compile time on that slow processor.

That's why we have distcc :D. Even with my faster machines I still run it. It is kind of a pain to set up cross-compiling, but once it is set up you don't have to worry about it.
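
The Gentoo side of it is pretty small once the helper boxes are running distccd; roughly this, with the host IPs as examples and the -j value depending on how many cores you have in total:

    # /etc/make.conf
    FEATURES="distcc"
    MAKEOPTS="-j8"

    # point distcc at the machines that will help compile
    distcc-config --set-hosts "localhost 192.168.1.10 192.168.1.11"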
 
Too much cannot be compiled with distcc, and the install alone would take eons. Also, you can't install new software on the road.

Also, Gentoo changes too fast... bad for SSDs.
 
I read that AnandTech article. Wow, very nice, and now I want a Vertex SSD lol!

The slowdown from usage is interesting, and I see how it applies to all SSDs. I still feel that delayed allocation can help postpone this issue. Also (not sure if this is a Linux or filesystem-specific thing), Linux tries to keep space after written files to allow for future growth, which should limit how many blocks contain multiple unrelated files or parts of files. All of this together will ultimately not remove the need to read all the data into cache, erase the block, and write everything back with the changes, but I would much rather have that expensive write operation performed only on the file(s) being changed, not on files that have not changed and are just in the wrong place at the wrong time. On the other hand, the SSD's wear leveling may still put unrelated data in the same blocks regardless of what the filesystem wants.
 
I still feel that delayed allocation can help postpone this issue.

Delayed allocation does have some very negative consequences, namely data loss. To the filesystem, data loss is unacceptable, and delaying allocation greatly increases the chances of it.

Also (not sure if this is a Linux or filesystem-specific thing), Linux tries to keep space after written files to allow for future growth, which should limit how many blocks contain multiple unrelated files or parts of files.

This is definitely a filesystem-specific implementation, but with an SSD you cannot think like that, because the drive does not actually put data in a specific place. The filesystem is a basic abstraction over the hardware: it says "write to point x," and the drive takes point x and applies f(x) to find the physical location where the data goes. For rotating drives this mapping is much more straightforward: the best place to put data is on the outside of the platter, so the drive tries to do that as best it can. With an SSD your data could be anywhere, and this does not hurt performance because access time is almost negligible. You actually gain performance by having your data split across different memory chips because of parallelism (obviously there is a limit to how much splitting helps). This is why you see such good read speeds from SSDs: you are reading multiple chips at the same time. A single chip cannot read at 250 MB/s, probably not even half (or even a quarter of) that, so you read multiple chips to get that performance. An SSD's strength is in parallelism; as with multi-core processors, doing multiple things at the same time is a good thing.

The next thing to consider is that HDDs/SSDs are dumb: they don't know which data is live and which is dead. If you delete a file, the drive does not actually physically erase it. Here is the major reason why SSDs degrade over time: rotating HDDs don't mind overwriting data, they can do it many, many times, but SSDs cannot overwrite in place and try to avoid it at all costs, so they just put the new data into fresh space (normal HDDs do this as well, but have no trouble overwriting old data). Once that fresh space runs out, the drive must start erasing old blocks, hence the slower writes: you must erase a block before you can write to it. Here is where the TRIM command comes into play. With TRIM, the SSD isn't so stupid: instead of blindly writing into all the free space and waiting until it actually needs to erase, it can free space as you modify your filesystem (i.e. delete or update files). So you are making holes for new data to go into, and the cool thing is that it doesn't matter that these holes are scattered all over, because again the access time is very low. Also, the disk does all of this in the background, so you don't notice it either.
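
To put rough numbers on the erase-before-write penalty (ballpark figures, since page and block sizes vary by drive): flash pages are typically around 4 KB while erase blocks are more like 128 to 512 KB. Changing a single 4 KB page inside a full 512 KB block means the controller reads the whole 512 KB, erases the block, and writes the 512 KB back, so roughly 128 times more data gets moved than the 4 KB you actually changed. Pre-erased (TRIMed) blocks skip that whole cycle, which is where the speedup comes from.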

In the end it doesn't matter where the data is, because the drive will not erase anything until it has to. So arranging data to eliminate the "wrong place at the wrong time" overhead probably wouldn't save you that much, because it isn't much to begin with. I would be curious to see a very low-level image of an SSD to see what kind of old files you could find on it.

Hopefully most of this information is accurate; this is at least my idea of how the world works. :beer:

P.S. I dislike UPS; they messed up my 2-day shipping and turned it into 4 days. 8:44 A.M. INCORRECT ROUTING AT UPS FACILITY. I paid like 2/3 more for 2-day shipping :(. So hopefully tomorrow I will have my drive.
 