
Anyone using a RAMDISK?


GoD_tattoo
Member, joined Jul 30, 2009
Just curious if anyone is using a RAMDISK for folding, and if so, have you seen any improvement?
 
I think it would take a heck of a supercomputer to make Folding become disk-speed constrained.
 
I personally do not/have not, but from what I know, ptty is right. The HDD has very little, if anything, to do with PPD, so I can't imagine a RAM disk helping at all... but again, I have no clue.
 
I've used a ramdisk briefly for folding in the past, and also in a huge data project (unrelated to FAH). It was very useful in the data project, but not helpful for folding.

The monitoring software wasn't so good in those days, but I saw no gain in ppd by using a ramdisk. I've never tried using it again for folding.
 
I remember people saying the bigadv clients have a long clean-up time when they finish, before sending the data off. I have never run bigadv, so I'm not sure of all the details. But people were justifying keeping F@H on an SSD to speed up this process. So while you may not see actual PPD increases, the time between starting new WUs could be reduced. Depending on how many you are able to pump out per day, this could add up to more time your cores spend folding rather than waiting on data.

My issue would be losing the data in the RAM disk. I know I would forget to back it up before a reboot, not to mention you lose any ability to recover your WU after a lock-up.
 

The long waits to get the data ready to be returned were solved. Ubuntu changed their default file system, and while it's technically better, perhaps, it means a lot of extra work for FAH, just because of the way the client was written.

Now, whenever you install Ubuntu, don't choose the default file system (ext4); select ext3 instead. It makes a LOT of difference!

There's no need to go with an SSD; it takes maybe two or three minutes on an old HD for Tanker to get a bigadv data file ready to be returned.

Note that this happens only once per day. It's a small percentage of the overall time needed. I'd be worried about using an SSD, because FAH writes to it a lot, and SSDs do wear out rather quickly compared to mechanical HDs.
 

Alternatively, find the mount options that change the behavior and retain the benefits of ext4, or ask F@H to figure out what the heck their program is doing, since I have never seen such behavior in any other application (though I may just not have run enough applications to find any).
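For what it's worth, experimenting along those lines means tweaking ext4 mount options. A sketch (the device, mount point, and the guess that relaxing journaling ordering is what helps here are all my assumptions, and the writeback option trades crash safety for speed):

```shell
# Hypothetical /etc/fstab entry to experiment with (device/path are examples):
#
#   /dev/sda1  /  ext4  noatime,data=writeback  0  1
#
# noatime skips access-time updates; data=writeback relaxes journal ordering.
# A less invasive test is to remount a live filesystem with noatime (root needed):
sudo mount -o remount,noatime /
```

If the slow WU write-out really comes from ext4's defaults, this is where you'd see it; if not, nothing is lost by reverting the fstab line.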
 

Yeah, that is very strange. I've always found ext4 to be worlds faster than ext3. For what it's worth, I have always run F@H on my RAID array, which has been formatted ReiserFS 3 since day one, and I have never seen any issues. Also, the new standard is becoming btrfs, which may not be susceptible to this issue either; last I knew, a lot of the functionality in ReiserFS was written into btrfs.
 
SSDs are so cheap, I use them for a couple of folders, including my 4P. That way you can use ext4 and not suffer the delay when a WU finishes.

I have one that's 2.5 years old (Kingston), so it's been holding up well. Maybe setting the write interval to 30 min helps; that would cut the HD activity in half, which would contribute to longevity.
 
And this is why I never use an SSD for Windows or running programs; for backup, storing photos, music and stuff like that, it's great. I have an MP3 player that had a 30GB HDD, and I replaced it with a 32GB IDE SSD; it'll never wear out. :)

 
The writes that F@H does every 15 mins (that is what I took away from this thread) are nothing. I wouldn't hesitate one bit to run an SSD with F@H on it... and if you are not putting Windows and apps on your SSD, quite frankly you are missing out for no real reason. These things are A LOT more robust than this thread makes them out to be.
 

I have two 64GB SSDs running in RAID 0. Per my SMART data they have been up and running for 1.1 years with zero block reassigns. I have my OS on them and do the updates that pop up every few days. The only thing I have done to mitigate writes is leave swap disabled; otherwise I let everything do what it is going to do.

If I remember the numbers correctly, the Crucial m4 SSDs (like what I am using) were rated at 1000 write cycles per cell. If you wrote the full capacity of the drive (64GB in my case) once each day, it would in theory take 1000 days to wear it out, which equals about 2.7 years. Most SSDs have 10-30% additional storage for block reassigns, so call it more like 3 years of writing 64GB to the drive every day before it dies. So, that being said, if there is a benefit (a snappier OS, faster F@H turnaround), I say take advantage of it.
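That estimate works out as quick shell arithmetic (the 1000-cycle rating and the one-full-drive-per-day figure are the recollections above, not datasheet values):

```shell
# Back-of-the-envelope SSD endurance estimate. Inputs are the poster's
# remembered numbers, not manufacturer specs.
capacity_gb=64        # drive size
cycles=1000           # rated write cycles per cell
daily_writes_gb=64    # one full drive capacity written per day

# Full-drive writes until the rated cycles are used up:
days=$(( capacity_gb * cycles / daily_writes_gb ))
years=$(awk -v d="$days" 'BEGIN { printf "%.1f", d / 365 }')
echo "$days days, about $years years"
```

Writing less than a full drive per day stretches this out proportionally, which is why the 15-minute checkpoint writes from F@H barely register.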

Back on the original topic: I am now actually thinking a RAM disk might be fairly easy to manage. You just need a startup script that copies the data back to the RAM drive, a cron job that does an rsync to disk every 5-30 minutes or whatever, and a shutdown script that saves before shutdown. Worst case you lose 5-30 minutes of folding time, depending on how paranoid you make those cron jobs. Saving a few minutes every day could mean an extra frame here and there that you could be processing. It might not add up to huge numbers, but it wouldn't take much RAM or more than a small bit of fussing with shell scripts. Why not optimize?
 

Because until somebody can point out other apps having the same issue, a measly 100MB write-to-disk taking 30 minutes, I'm going to keep blaming F@H, and I shouldn't have to optimize for a broken program (not that I can even run bigadv anymore, since I have no 16-core machines).
 

There might be a fix already in place; I'm not sure, but CJ is showing great write times despite using ext4. Maybe his drive has a larger cache or he's using a RAID, I'm not sure.

But you can be sure the data writes that were optimized for ext3 will be redone IF other distros settle on ext4 as their default file system.

Any idea how popular ext4 file system is?
 

It's been the default standard for a few years now with Fedora and Ubuntu. As I said earlier, btrfs (Butter FS) is aiming to replace ext4 within a year or two as the default standard for most distros. Ext4 was mostly made to fix many shortcomings in ext3 while btrfs came up to speed and stabilized. As it stands right now, btrfs is very usable and fast; I just wouldn't commit any mission-critical data to it yet, but for a 24/7 folding machine it should work just fine. Last I checked, SUSE Linux was already trying to push btrfs as the default.
 

In that case, I doubt they will ever optimize the data write-out for ext4. They'll just wait and optimize it for Butter FS, if that becomes the popular file system.
 
Where'd that come from? If you want to pronounce it, use the "real" name B-Tree FS... Same number of syllables and more descriptive.

Haha yeah the real name is B-Tree FS but people pronounce btrfs as butter fs.

Wikipedia
Btrfs (B-tree file system, variously pronounced "Butter F S", "Butterfuss", "Better F S",[1] or "B-tree F S"[2])

So I actually now have an SMP and two GPU clients running from a RAM disk. Client startup is immensely faster, and I'm assuming the 30 or so seconds it takes to prep and send a WU back will be nonexistent. Why? Well, I have 8GB of RAM I don't use and was bored.

I still need to set up and test the cron job, but I have it mounting a tmpfs on boot; then during my user login it copies all the files from a backup location to the RAM drive. On logout or shutdown it does an rsync from the RAM drive back to the backup drive. I just need to get this rsync into a cron job and run it every few minutes.
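For anyone wanting to try the same thing, the tmpfs side can look something like this (the mount point, 2G size, and backup path are my own guesses, not the actual setup described above):

```shell
# Create the mount point once:
sudo mkdir -p /mnt/fahram

# Mount an in-RAM filesystem by hand. The 2G size is an arbitrary cap;
# tmpfs only consumes RAM as files are actually written to it.
sudo mount -t tmpfs -o size=2G tmpfs /mnt/fahram

# Or make it permanent with an /etc/fstab line:
#   tmpfs  /mnt/fahram  tmpfs  defaults,size=2G  0  0

# On login, pull the saved client state back into the RAM drive:
rsync -a /home/fah/fah-backup/ /mnt/fahram/
```

Everything in the tmpfs vanishes on power loss, which is exactly why the rsync back to disk has to run regularly.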

Anyone want instructions?
 