
Project: Rackmount Overkill

The server is moving files back over to a temporary ZFS array. Since the SAS expander was full, I had to test the drives while they sat on top of the server (pictured previously) and copy the data over to them. Once that copy finished, I could delete my existing, nearly-dead ZFS array, remove the 1.5 TB drives from the SAS expander, and create a smaller array from the 2 TB Hitachi drives. I created a six-drive RAIDZ2 vdev as a temporary storage point so I can move data onto it while I test the remaining two disks, and then build the final ZFS array. Then I'll be done.
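
For anyone following along at home, creating a pool like that is a one-liner; the pool name and device paths below are placeholders rather than the real ones from my build:
Code:
# Six-drive RAIDZ2 pool; /dev/disk/by-id names survive reboots
# better than sdX names (all names here are placeholders)
zpool create tank_tmp raidz2 \
    /dev/disk/by-id/ata-Hitachi_disk1 \
    /dev/disk/by-id/ata-Hitachi_disk2 \
    /dev/disk/by-id/ata-Hitachi_disk3 \
    /dev/disk/by-id/ata-Hitachi_disk4 \
    /dev/disk/by-id/ata-Hitachi_disk5 \
    /dev/disk/by-id/ata-Hitachi_disk6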

Just one more data move to go. The upside is that as the data lands on the temporary ZFS array, it gains double-disk redundancy, which makes it a lot safer than it was.
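
If you want to sanity-check that redundancy claim on your own pool (pool name is a placeholder again), the vdev type in zpool status tells you; raidz2 means any two drives in the vdev can die without data loss:
Code:
zpool status tank_tmp    # look for "raidz2-0" under the pool's config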

I've also found the throughput limits of the SAS expander. Connected directly to the controller, outside of the SAS expander, each of the drives read and wrote 150 mb/sec without an issue. Behind the expander, they're reading from the Reds and writing to the temporary array at 250 mb/sec total. Once I started the badblocks test on the remaining two disks, throughput dropped to around 200 mb/sec.
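
For anyone curious, the drive test is badblocks; a burn-in run is roughly the following. The write-mode (-w) test destroys everything on the disk, which is fine here since these drives are empty anyway (sdX is a placeholder):
Code:
# Four-pattern destructive write test, verbose with progress shown;
# wipes the disk, so only run it on drives being burned in
badblocks -wsv /dev/sdX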

10-second averages:
Code:
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
xvda              7.50         0.13         0.00          1          0
scd0              0.00         0.00         0.00          0          0
sdb             201.70         0.00        49.13          0        491
sdc             228.10         0.00        48.84          0        488
sda             217.30         0.00        49.20          0        492
dm-0              7.50         0.13         0.00          1          0
dm-1              0.00         0.00         0.00          0          0
sdd             213.50         0.00        48.89          0        488
sde             211.30         0.00        49.54          0        495
sdf             205.90         0.00        49.13          0        491
sdg               0.00         0.00         0.00          0          0
sdh             527.90        65.96         0.00        659          0
sdi             512.20        64.03         0.00        640          0
sdj             517.00        64.61         0.00        646          0
sdk             278.50         0.00        69.62          0        696
sdl             279.60         0.00        69.90          0        699
The server is also spending a fair amount of time in I/O wait (wa) right now.
2013-08-27 23_00_21-root@vm-fileserver__mnt.png
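
For reference, the table above is iostat output from the sysstat package; the exact flags are my guess, but the invocation would have been close to this, and the same tool's CPU report shows the iowait that's loading the box:
Code:
iostat -m 10    # per-device stats in MB, repeating every 10 seconds
iostat -c 10    # CPU summary; %iowait is time stalled waiting on disk I/O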
 
250 millibits/sec seems awfully slow :p

How much data in total did you have to move?
According to `df`, I only moved 4.3 TB of data over. It includes a lot of big files and many, many small files.

Stupid millibits :argh:
 
OK, maybe it was more than 4.3 TB; ncdu was lying to me. I'm way past that and I still have a bunch left to copy.

e8wqOgT.png

I still have a folder with a whole bunch of really small files (the rsync backup directory for my laptop/desktop/roommate) and a bunch of really big files (media) to go. The fun part is that when this copy is done, I create the real ZFS array with the WD Reds and do the whole copy again.
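
The copies themselves are nothing fancy; a sketch of the kind of rsync invocation I mean is below (paths are placeholders, not my real datasets). As for the ncdu/df mismatch above: on ZFS, du-style tools and df can legitimately disagree thanks to compression, snapshots, and sparse files, so neither number is exactly "wrong."
Code:
# Preserve permissions, hard links, ACLs, and extended attributes;
# both paths are placeholders for the real datasets
rsync -aHAX --progress /mnt/tank_old/ /mnt/tank_tmp/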
 
Yes, I am running the IT firmware to pass through the disks to the operating system.

Just wanted to quote this. I was running the stock firmware on my M1015, and I recently migrated my file server to a virtualized one on an mdadm RAID 5 with the M1015 passed through (as JBOD). I was tearing my hair out because XBMC on my HTPC was having trouble playing back streaming media: it stuttered, wouldn't load half the time if I seeked too far, etc. I thought NFS was configured incorrectly, since iperf showed 1 Gbit transfers like it should. I messed with NFS; SMB seemed a little better, but it still wasn't acting like it should. Then I thought it was an issue with the way Proxmox handled traffic... it turns out the stock firmware wasn't doing so hot.

Flashed the IT firmware for disk pass-thru and everything is right again. :thup:
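
For anyone who lands here from Google with the same M1015 symptoms: the crossflash guides boil down to wiping the stock IBM flash with megarec and then loading the LSI IT firmware with sas2flash. Roughly the following; file names vary by firmware download, so treat this as a sketch rather than a recipe:
Code:
# After the megarec wipe step from your crossflash guide of choice
sas2flash -listall                 # confirm the controller shows up
sas2flash -o -f 2118it.bin         # flash the IT-mode firmware
sas2flash -o -b mptsas2.rom        # optional boot ROM, only if you boot from the card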
 
Just updated the first two posts, which were last updated in Jan 2013.
 
Recently had a power outage, caused by a nearby lightning strike, that took down both servers along with the firewall. It was really fun because I was working at the time and had a customer on the phone. He thought it was pretty funny. At the very least, I need to get the firewall on battery backup, but the other servers should be on one as well.

I tried looking back through my private messages to see who had talked to me about them, but I can't find the conversation.
 
Finally got Team Foundation Server 2012 backing up to the file server. Please excuse the jargon in the next few paragraphs; it's purely for Google, in case someone else is trying to do this. I could find no information on anyone else attempting this sort of setup. My guess is that people who can afford this software aren't running a Samba server, or one outside of a domain, and students aren't really doing setups like this. However, if you are crazy like me, I hope this helps!

TFS is running on Server 2012 Standard with SQL Server 2012 and is part of a domain (the domain controller is another Server 2012 server). I want to back it up to my actual file server, which is CentOS 6.4 running Samba. The problem is that Microsoft expects you to do this totally in-house, meaning you back up to a domain-joined Windows file server, or at the very least to something controlled by an LDAP server (still under the domain's control).

Problem is, my file server isn't part of this domain and I'm not going to join it to one. The backup would attempt to run and fail with a "username/password error" in the log, yet nothing would show up in the Samba log on the file server, which was a little confusing. To fix it, I created a new domain account (name/password irrelevant) and set the SQL and TFS services to run as this new account. I restarted the server and made sure everything came up OK and nothing was on fire. After that, I created a new Samba user on my file server specifically for TFS backups, created a new ZFS share, and gave the new user access to it. Back on the TFS server, I logged in as the new domain user and stored the Samba account's credentials in the "Windows vault" so that it would remember them. Once I logged back in with my administrative account, I was able to run the backups.
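
The file-server side is only a few commands; every name below is a placeholder rather than my real dataset/user name. On the Windows side, the vault step is just storing the credential while logged in as the service account, either through Control Panel's Credential Manager or with cmdkey from a prompt.
Code:
# CentOS side: dataset, dedicated backup user, Samba password
zfs create tank/tfs-backup
useradd -M -s /sbin/nologin tfsbackup
smbpasswd -a tfsbackup          # sets the password TFS will use

# /etc/samba/smb.conf share stanza, then 'service smb restart':
# [tfs-backup]
#     path = /tank/tfs-backup
#     valid users = tfsbackup
#     writable = yes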

This would have been a heck of a lot easier if the error had told me which account was failing to do the backup. After I switched the SQL server over to the new account, it kept failing the backup; I thought it wasn't using the stored credentials and wasn't going to work. On a guess, I switched the TFS services over as well, and MAGIC HAPPENED. If the error had named the account, it would have been a direct giveaway of what was wrong and I could have fixed it in minutes. Instead, it took me all of last night and a few hours today.

However, I now have backups of our code repository, so I can't really complain.

2013-10-16 12_57_07-vm-windows2012s.png
 
I had some strange devices shipped to me today. I wonder what they could be for? Guess I'll have to wait and see. :shrug:

DSC_0350.JPG
 
Those mysterious cards that showed up a few days ago seem to fit into the UPS units. A network port on the UPS? Weird.

DSC_0370.JPG
 