
Project: Rackmount Overkill

Even if I could run ZFS on XCF, I wouldn't; I wouldn't gain any benefit because that system just stores huge virtual machine files.

If you are talking about the file server, I'm leery since it is new to Linux.

He was just suggesting what should be the default answer for 75% of storage questions. ZFS. :D
 
I guess the problem is ZFS's way of adding volumes to a RAID: you have to add whole vdevs to a pool, and each vdev needs at least 3 drives for a RAID5-style layout.
The problem is that it hasn't been around long enough to be proven completely stable. I don't trust it enough to put my data on it. Adding disks in sets is a bit annoying, but less of an issue. Memory requirements will also get to be constraining when more disks are added. I'm running 25 TB of raw disk in the server right now, so the usual 1 GB of RAM per TB guideline suggests 25 GB just for ZFS. I have another 8x 1.5 TB disks that I'm adding shortly, which will bring it up to 37 TB and call for 37 GB of RAM. That server only has 48 GB at the moment. If I add much more disk, I'm probably going to see performance losses. It is much easier to simply go with mdadm or a RAID card, which has nearly no memory overhead.
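Just to put rough numbers on it (the 1 GB of RAM per TB figure is only a guideline that gets repeated, not a hard requirement, and the single 8-drive RAID-Z1 vdev below is purely an illustration, not how I'd necessarily lay the disks out):

```python
# Back-of-the-envelope sketch. Assumptions: the common "1 GB RAM per TB of
# raw disk" rule of thumb, and an example RAID-Z1 vdev built from the
# 8x 1.5 TB disks mentioned above.

def raidz1_usable_tb(drives: int, drive_tb: float) -> float:
    """Rough usable space of one RAID-Z1 vdev (one drive's worth of parity)."""
    assert drives >= 3, "a RAID-Z1 vdev is usually built from at least 3 drives"
    return (drives - 1) * drive_tb

def suggested_arc_ram_gb(raw_tb: float, gb_per_tb: float = 1.0) -> float:
    """RAM suggested by the 1 GB-per-TB rule of thumb."""
    return raw_tb * gb_per_tb

current_raw_tb = 25.0            # raw disk in the server today
added_raw_tb = 8 * 1.5           # the 8x 1.5 TB disks going in next
total_raw_tb = current_raw_tb + added_raw_tb

print(f"raw disk after upgrade: {total_raw_tb:.0f} TB")
print(f"suggested RAM for ARC:  {suggested_arc_ram_gb(total_raw_tb):.0f} GB "
      f"(the server has 48 GB total)")
print(f"one 8-drive RAID-Z1 vdev of 1.5 TB disks is roughly "
      f"{raidz1_usable_tb(8, 1.5):.1f} TB usable")
```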

Does that mean FreeNAS won't work either?
While FreeNAS is good, it wouldn't last more than an hour in my setup. It doesn't have the features I need or give me the control I want.
 

ZFS loves memory and SSDs, but it does not require all that memory; it is just suggested for the best performance. ZFS has a neat feature called the ARC (Adaptive Replacement Cache) that tries to cache as much as it can in RAM, in order to keep everything super snappy.

If you installed ZFS on a dedicated VM with 1 TB of RAM and a 1 TB pool, it would eventually cache the whole pool to RAM. If you install it on a VM with just 8 GB and, say, a 32 TB pool, it won't deliver all the performance it could, but it will still work properly.

I am no ZFS guru though; this is just what I understood from the ZFS ninja video.
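To illustrate that point with a toy model (this is just a size-capped LRU cache standing in for the ARC, with made-up sizes; the real ARC balances recency and frequency and is much smarter), a cache smaller than the pool still serves every read, it just misses more often and has to go to disk:

```python
import random
from collections import OrderedDict

# Toy model only: a capped LRU cache in place of the ARC.
class ToyCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block_id: int) -> None:
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)    # recently used
        else:
            self.misses += 1                     # "read from disk"
            self.blocks[block_id] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used

pool_blocks = 100_000                            # pretend pool size, in blocks
workload = [random.randrange(pool_blocks) for _ in range(500_000)]

for ram_fraction in (1.0, 0.25, 0.03):           # "RAM" as a fraction of the pool
    cache = ToyCache(int(pool_blocks * ram_fraction))
    for block in workload:
        cache.read(block)
    rate = cache.hits / (cache.hits + cache.misses)
    print(f"cache = {ram_fraction:>4.0%} of pool -> hit rate {rate:5.1%}, "
          f"every read still succeeds")
```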

EDIT: BTW, FreeBSD was first released in 1993, so that's 20 years of experience; I guess they have had time to iron out some bugs. OpenIndiana is a fork of OpenSolaris. It is nowhere near what OpenSolaris was, but Oracle killed OpenSolaris, so it's what we have. The wiki is nowhere close to Arch's, but it can clear up some doubts, and you can always use the Solaris documentation with some adaptations.

EDIT2: Oh, and there's also NexentaStor Community. It is pretty much OpenIndiana with Xen support and a pretty face.
 
I'm talking about ZFS on Linux, which is the only way to get ZFS running on CentOS. It hasn't been around that long. I'm not willing to run anything else as the host OS right now.

If the memory allocation isn't a base requirement, then I was told wrong in quite a few places. Additionally, if I hit the "memory limit" (if there is one) and I get degraded performance, why wouldn't I just run normal RAID? There are a few features I'd like to have, but I don't think it is worth the risk. I might try it out on a single array to see how it runs.
 

It all depends on what you are using it for. I've been on OpenIndiana for quite a while and have yet to have any major issue to speak of. It just works, and it's far faster than what I need for a home network.

Obviously the more RAM the better, but only for certain workloads; much of the documentation for OpenIndiana seems geared more towards an enterprise environment than an average home environment.
 
Trays for the R710 arrived today, so that server is completely functional. I'm still deciding on whether I want to swap out the RAID card for something else. While the server was down, I installed the fiber cards in all the servers.

r710_trays_arrived_1.JPG


r710_trays_arrived_2.JPG


dell_r710_xencenter_1.png
 
Glad to see my minions got them packaged and delivered to you in one piece, unbent.



We have almost 250 Dell R-series servers, ranging from the R310, R410, R510, and R610 up to the R710 and R720.

Never have I seen any issues like you describe. It sounds like you are running the servers in a poorly cooled environment with bad hot aisle/cold aisle management.

Our fans all run at low speed, and the data center is mostly quiet.

I am basing my opinion of them on my current and two previous jobs where I supported these systems (admittedly they were being run outside of a data center, but still). I love the R210 through R910 (hell, I am hoping I can get a decommissioned R300), and as long as they have ideal cooling and power, they are awesome and virtually silent. But if there are issues, they are quick to complain, and they can get LOUD.

Visbits, you have no idea how bad the server room I "inherited" at my current job is... It should be used as the example of EVERYTHING THAT IS WRONG to do in a server room. When I started, I figured out that my predecessors had decided redundant cooling meant adding a second unit to the same drycooler on the roof (aka a split loop, NOT separate loops). It took a literal fire to finally get better cooling approved, but the end result is that the cooling is now actual overkill.

I still refuse to look up when I walk into the server room so I don't see the high-voltage lines "braided" through the network cabling, which is all strapped to sprinkler pipes and hangers for fluorescent lighting...

"Luckily" they are downsizing and shutting my site down so the issues I couldn't get approval to get resolved will not matter anyways (Ill be jobless too but that's a separate issue.)

Thid, sorry to hijack your thread here. BTW, how goes the UPS research? Separate question: what, if any, KMM/KVM are you using?
 
Doesn't hurt my feelings to have other discussion here.

I'm still procrastinating on the UPS pretty badly. I want to know if the Dell UPS units can interact with the DRAC devices on the servers and shut them down over the network. Doing some searches online, I couldn't find much. Then I need to decide on what size UPS I want and if I should get two of them.
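One way to handle the "shut them down over the network" half, assuming the DRACs have IPMI-over-LAN enabled and ipmitool is installed, would be something like the sketch below. Whatever watches the UPS would call it when the battery runs low; the hostnames and credentials are placeholders, not anything from the actual rack, and this is untested.

```python
import subprocess

# Sketch: ask each server for an ACPI soft power-off through its DRAC,
# using IPMI over LAN via ipmitool. Hostnames and credentials below are
# placeholders for illustration only.
SERVERS = ["drac-r710.example.lan", "drac-r510.example.lan"]

def soft_shutdown(drac_host: str, user: str, password: str) -> bool:
    """Request a graceful (soft) shutdown of one server via its DRAC."""
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", drac_host, "-U", user, "-P", password,
        "chassis", "power", "soft",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"{drac_host}: shutdown request failed: {result.stderr.strip()}")
        return False
    print(f"{drac_host}: soft shutdown requested")
    return True

if __name__ == "__main__":
    # Would be triggered by the UPS monitoring side (Dell's software, NUT, etc.)
    for host in SERVERS:
        soft_shutdown(host, user="shutdown", password="CHANGE_ME")
```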
 

If you remember, ping me before you buy. I might possibly have 2x 2200s (NEMA 5-20R plug) that will need batteries (IIRC ~$300 per unit) and new bezels (not required, but you gotta look good when you pimp the rack!). I'm not positive I will, but I am trying to find out.
 
I saw that as well, but you can't filter by ones that come with BBUs.

Cool, keep me updated then. :)
 
The R510 is running, but effectively on life support. There is nothing wrong with it; I just had to resort to hackish methods to get the server running for the Chimp Challenge. There is no RAID controller in the server, so the drives in the backplane have nothing to connect to. I could buy a RAID controller, but I still haven't decided what I'd like to do yet, and that will hurt how much it can do folding-wise.

The problem is that the R510 has no power available for drives outside of the backplane, and there certainly is no space. Instead, I grabbed a long SATA cable and used the spare spot in Ruby. The file server is providing power, and the SATA cable is run externally to the R510.

r710_chimp_challenge_1.JPG


r710_chimp_challenge_2.JPG


r710_chimp_challenge_3.JPG




dell_r_servers_rack.JPG
 