
Best Filesystem for 6+ Terabyte HTPC Server? [ext3 vs XFS]


Teque5

Member
Joined
Apr 5, 2006
Location
Los Angeles
After the catastrophe that happened over in this thread, it is now time to reformat my raid5.

I had previously chosen ext3, which had worked great. In hindsight, though, maybe it wasn't the best choice, since it lacks any kind of file recovery.

Last time I made this decision I had narrowed my choices down to either XFS or ext3. In the end I chose ext3 since it seemed better supported by the community.
Does anyone have any additional suggestions I should consider?

My Requirements
  • Starting with 6TB Filesystem [80% 300MB+ Files, 10% 5MB files]
  • Must be able to grow.
  • Will be under constant usage.
  • Be as robust as possible.

What I Know
  • XFS is better at handling large files
  • XFS & ext3 are both journaling
  • XFS & ext3 have no file recovery

I am not considering ReiserFS.

useful: wikipedia comparison of filesystems
 
So why did you choose ext4 over XFS?

Last I checked, ext4 can both grow and shrink, whereas XFS can only grow. ext4 was faster with smaller files and about equal on large files, and overall there was nothing that made me look at XFS that closely.
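To make the grow/shrink point concrete, here is a minimal sketch using the e2fsprogs tools (the file names are my own, not from the thread). It runs against a sparse file image so it needs no root or real device; on a real array you would point resize2fs at the md device instead.

```shell
truncate -s 64M ext4.img     # sparse backing file standing in for the block device
mkfs.ext4 -F -q ext4.img     # create the filesystem
truncate -s 128M ext4.img    # pretend the underlying device grew
e2fsck -f -p ext4.img        # resize2fs wants a freshly checked filesystem
resize2fs ext4.img           # grow to fill the new device size
e2fsck -f -p ext4.img        # check again before the next resize
resize2fs ext4.img 64M       # ext4 can also shrink -- XFS cannot
```

The shrink on the last line is the operation XFS has no equivalent for; xfs_growfs only ever goes up.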

To the comment on why not ZFS?

Last I checked, ZFS was still predominantly a "Solaris" FS... Im not running Solaris and have NO interest in running it either.
 
On my raid5 array, I run xfs. I don't really care about not being able to shrink my fs; I don't ever intend to need to shrink it. All I really need to be able to do is expand it. I would say that performance is better than stock ext3.
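The expand-only workflow described above might look like the following hypothetical sketch (device names and the mount point are invented, and the commands need root on a real system). XFS grows online while the filesystem is mounted:

```shell
# Defined as a function here so the sketch can be sourced without touching real disks.
grow_xfs_array() {
    mdadm /dev/md0 --add /dev/sde            # add the new disk as a spare
    mdadm --grow /dev/md0 --raid-devices=5   # reshape the RAID5 onto 5 disks
    xfs_growfs /mnt/media                    # grow XFS online to fill the array
}
```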
 
I used to run XFS on my 10TB media server on a RAID 6. I had a number of issues that led to file corruption (mostly raid related and not xfs's fault) and spent months researching ways to recover my data from various XFS partitions. What I ended up accepting was that xfs just doesn't handle file recovery very well. Now, I tend to run some rather unusual setups under unusual circumstances, so my issues were not really xfs's problem. However, xfs made dealing with the issues more difficult in my situation.

So now that I have accepted, and nearly finished grieving over, the loss of 10TB worth of data, I will be rebuilding my media server with ext4. It is stable enough for production, has plenty of community support behind it, and handles large files just as well as xfs while maintaining better performance with smaller files. There's no reason not to use it.

Unless you are running a Solaris-based OS, ZFS will require you to use it via FUSE. On top of that, ZFS makes expansion a bit more involved if you are using any iteration of raid-z (which is one of its biggest attractions). ZFS is great for enterprise use, sure, but home servers don't need the same things enterprise servers do. And that comes from someone who builds some pretty extreme configs and comes up with some outrageous use cases.
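The "expansion is more involved" point deserves a concrete illustration. A hypothetical sketch (pool and device names are invented): a raid-z vdev cannot be widened one disk at a time, so growing a raid-z pool means adding a whole second raid-z vdev, i.e. buying several disks at once.

```shell
# Wrapped in a function so the sketch can be sourced without real ZFS devices.
expand_raidz_pool() {
    zpool create tank raidz sdb sdc sdd   # initial 3-disk raid-z vdev
    zpool add tank raidz sde sdf sdg      # expansion = adding a full new vdev
}
```

Compare that to mdadm's one-disk-at-a-time `--grow`, which is part of why raid-z feels heavier for a home server that grows incrementally.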

I would recommend ext4 for what you are looking to do. If you don't want it, I would recommend xfs over ext3.
 
I use EXT4 exclusively on my server and client systems. The server has 25 TB of raw disk (nine 2TB disks in RAID 6 and seven 1TB drives in RAID 10). If something were to happen to the filesystem, I want to be able to recover it quickly and easily. Since ZFS is new to Linux, I'd rather wait before using it, and EXT seemed to have a better chance of recovery than XFS.
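The recovery story above comes down to e2fsck. A minimal sketch (assumes e2fsprogs; run against a file image here so it needs no root; on the real server the target would be the unmounted /dev/mdX device):

```shell
truncate -s 64M recover.img   # stand-in image for the array device
mkfs.ext4 -F -q recover.img
e2fsck -f -y recover.img      # force a full check; -y auto-answers repair prompts
```

On a damaged filesystem the same invocation walks the journal and repairs what it can, which is the kind of well-trodden recovery path ext gives you.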
 