Project: Rackmount Overkill

I'm in the process of moving all my images from (crappy) Photobucket to my own server; let me know if you can see the images in this thread.
 
Well, quite a few things have happened recently. I started my own website (www.thideras.com) and will use that for my images, videos, etc. If you like WoW raiding, feel free to download the videos; I have unlimited storage/bandwidth.

Besides that, I've been speaking with my father (who builds/orders/designs massive storage servers for the government), and he recently mentioned ZFS and all this amazing stuff it can do. I started researching today and found my new file server operating system: Solaris.

To quote a few places:

ZFS presents a pooled storage model that completely eliminates the concept of volumes and the associated problems of partitions, provisioning, wasted bandwidth and stranded storage. Thousands of file systems can draw from a common storage pool, each one consuming only as much space as it actually needs. The combined I/O bandwidth of all devices in the pool is available to all filesystems at all times.

ZFS introduces a new data replication model called RAID-Z. It is similar to RAID-5 but uses variable stripe width to eliminate the RAID-5 write hole (stripe corruption due to loss of power between data and parity updates). All RAID-Z writes are full-stripe writes. There's no read-modify-write tax, no write hole, and — the best part — no need for NVRAM in hardware. ZFS loves cheap disks.

Like ECC memory scrubbing, the idea is to read all data to detect latent errors while they're still correctable. A scrub traverses the entire storage pool to read every copy of every block, validate it against its 256-bit checksum, and repair it if necessary. All this happens while the storage pool is live and in use.

There are no arbitrary limits in ZFS. You can have as many files as you want; full 64-bit file offsets; unlimited links, directory entries, snapshots, and so on.


ZFS provides built-in compression. In addition to reducing space usage by 2-3x, compression also reduces the amount of I/O by 2-3x. For this reason, enabling compression actually makes some workloads go faster.
I've highlighted the important features above.
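For anyone curious what those features look like in practice, the commands are dead simple. A rough sketch (the c1t0d0-style disk names are placeholders; run format on your box to see yours):

    # Build a RAID-Z pool out of four disks -- no hardware RAID, no NVRAM needed
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

    # Filesystems draw from the shared pool; no partitions, no fixed sizes
    zfs create tank/movies
    zfs create tank/backups

    # Compression is set per filesystem
    zfs set compression=on tank/backups

    # Scrub the live pool: read every block, verify every checksum, repair from parity
    zpool scrub tank
    zpool status tank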


To NTFS: Go suck and die.
 
Noticed something when looking at your first couple of pictures: are you not running the memory in dual channel on purpose in the AMD rig? It looks like you have it in a black and a yellow slot, when both should be in the same colour, unless I am wrong and it doesn't matter on that board.

Anyway, this looks friggin' great.
 
I'll check the manual again, but I'm sure that it is dual channel.
 
Check CPU-Z; if it says dual, it's dual. I had the problem when I set up an AMD rig for a friend and put them in the 1/3 slots; I thought it was like Intel boards, and then I noticed it when I checked CPU-Z for the first time. Changed them to the same coloured slots and it was dual. Might be different for your board, but it never hurts to check.
 
Server is sitting on a bookcase with nothing hooked up. I'm waiting to purchase the case I want ;)
 
Rubbish, run it caseless :p

And I just checked the manual online: A1 and B1 installed is dual channel, and those are the yellow ones (Chapter 2.1). I'm also assuming you can run in the black slots and achieve dual channel as well.
 
Interesting, thanks for looking; I was actually working :p

I'll change it when I get home.
 
Honestly, I'm not sure how to set it up. I would still like to use the Perc 5's since those seem to be the cheapest way to add SATA ports. But what kind of RAID level (if any) should I use?

I'm currently down one Perc 5; it was defective and I had to send it back. So I still have the option of selling my current Perc 5 and just going with other cards. Not sure what to get or where to get them, though.

On Solaris: I got the "enterprise" version; what is the difference? I know a couple of other operating systems (FreeBSD) have ZFS, but I just wanted to try Solaris since I have never used it.
 
Solaris is VERY difficult to get going, a downright pain in the butt.

I don't know how much *nix experience you have, but you will definitely be hitting Google up hard.

How many arrays are you looking at doing total?
How many volumes?

Personally, I would look at doing one array (per card), 2-3 volumes (depending on what you want to do with it), and then at least one hot spare per array.

You don't really have to put the OS on a separate drive. I built a server for a customer on Friday (rebuilding it Monday AM) with six ES2 1 TB drives in RAID 5 with a hot spare (4 TB usable): a 100 GB partition for the OS etc., and the rest is all storage.

My way may not be the best, but short of the computer being destroyed, you won't lose data.
 
I'm going for at least 16 drives (8/card). I can always use the onboard SATA ports if I use ZFS to expand.

I think I figured it out (and I know why I was confused). I thought that ZFS needed more than one drive to function, but it isn't RAID. I think I'll just go with my original plan of two 8 TB RAID 5 arrays (one per controller). I believe they are called "pools"; I'll use one card and its drives as one pool. Then I can use the onboard controller as another pool if I want.
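If I understand it right, each Perc would just present its RAID 5 array to Solaris as one big disk, so the pools would be something like this (device names made up; use whatever the arrays show up as):

    # One pool per controller; each "disk" here is really a Perc RAID 5 array
    zpool create pool1 c2t0d0
    zpool create pool2 c3t0d0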

I've got OpenSolaris up in a VM right now; I was going to test the ZFS system later.
 
If you are going to make separate arrays like that, I would honestly suggest RAID 6. You lose one more drive's worth of storage, but with as much data as you will be putting on these, the extra safety is well worth it IMO.

(RAID 6 is like RAID 5 but you can lose two drives at once without data loss, which is huge IMO, because I have been unlucky enough to lose two drives in a RAID 5 at once before lol)
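And if you end up letting ZFS handle the redundancy instead of the Percs, raidz2 is the same idea as RAID 6: double parity, any two drives can die. Something like this (made-up device names again):

    # raidz2 = double parity; the pool survives losing any two of these disks
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0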
 
I'll look into that, thanks for the suggestion. I'm almost tempted to just let the ZFS file system handle it and run the Percs in passthrough... kind of defeats the purpose of those cards, though :-/
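If I do go that route, ZFS would see the raw disks and could even keep a hot spare right in the pool, like the one-spare-per-array idea above. Rough sketch with made-up disk names:

    # RAID-Z over seven passthrough disks, plus one hot spare ZFS manages itself
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 spare c1t7d0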
 