
I also built a Plex Server (but not as cool as the other thread)

Yup, I'll pull the 2650Ls, drop in the 2680v2s, and get it running. Should be good for about 30k PassMark, LOL.

I'm still up in the air about what I want to do with the NAS.

I want to run all 8 drives. I want redundancy, I want speed, I want expandability.

I'll be running 8x2TB drives.

RAID 10
gets me 8TB space (50% efficient)
good read/write performance
single disk failure: pretty quick rebuild from the RAID 1 level (straight copy)

RAID 5 (or SHR1)
gets me 14TB space (87.5% efficient)
what would read/write performance look like? I don't know how performance scales at this RAID level
risk of a second disk failure during rebuild
slow rebuild

RAID 6 (SHR2)
gets me 12TB space (75% efficient)
lower risk, a second disk failure during rebuild is covered
painfully slow rebuild time
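
Quick sanity check on those capacity numbers (a rough Python sketch, just arithmetic, assuming 8x2TB raw and ignoring filesystem overhead):

drives, size_tb = 8, 2
raw = drives * size_tb

raid10 = (drives // 2) * size_tb   # half the disks hold mirror copies
raid5 = (drives - 1) * size_tb     # one disk's worth of parity
raid6 = (drives - 2) * size_tb     # two disks' worth of parity

for name, usable in [("RAID 10", raid10), ("RAID 5", raid5), ("RAID 6", raid6)]:
    print(f"{name}: {usable}TB usable ({usable / raw:.1%} efficient)")
# RAID 10: 8TB (50.0%), RAID 5: 14TB (87.5%), RAID 6: 12TB (75.0%)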

Any opinions on the matter?

Trying to wrap my head around all the different options.
 
I'm doing RAID 5. Speed doesn't scale that great, but even with 3 disks it's plenty to cap a gigabit connection. I'm on 4x2TB drives, about to expand it to 5; online RAID expansion is painfully slow, btw. Took 3 days to expand it from 3 to 4 IIRC, lol. As for the rebuild failure on RAID 5, do some research and you will see that it rarely ever happens, and the "12TB 100% failure wall" is a joke.
 
Maybe you should post in the storage area where you may get a few more hits. I wish I could comment, but I'm using Storage Spaces and dunno if I would recommend it to anyone since no one seems to be running it, lol.
 
I run RAID 10 across 4x5 TB drives. I'm using a QNAP TS-453A NAS. The NAS has 4x1 Gbps network connections that you can gang together (with the correct type of switch) to get 1x4 Gbps connection.

A standard 1 Gbps connection will not be enough to max out a RAID 10 array. Using this setup, I can have 2 PCs writing (or reading) to the NAS at the same time without causing any throttle (maxes out the 1 Gbps network connection on each PC).

As far as your comments earlier about using Handbrake... I still have to post my Handbrake settings. While my raw BluRay rips are 40 GB or more, my typical converted file is under 8 GB... I'm not sure what you are doing to get 25 GB+ converted files. Also, there is a setting in Handbrake to match the output framerate to the source file framerate.

I also agree with using H.264 as the format. This is the most supported video format, and many of the newer players have hardware decoders for this format.

My objective for using H.264 and Handbrake was to use the "direct play" mode for Plex as much as possible (to avoid transcoding).

My daughter and son-in-law stream off my Plex server (running on my TS-453A NAS) all the time. Even though my internet upload speed is about 15 Mbps (115 Mbps down), they don't see throttling, queuing, or stutter...even on high bit-rate BluRays.
 
Yeah, after looking at the options I think I'm going to do 8x2TB drives in a RAID 10-like config under FreeNAS (ZFS).

Advantages I see:
-Speed. This will essentially be 4x mirror vdevs (RAID 1), striped together (RAID 0).
-Redundancy. I can have up to 4 disk failures IF AND ONLY IF they fail in separate vdevs.
-Expandability. I could theoretically add more space by adding more mirror vdevs to the pool with more pairs of disks, OR (more likely for me since I'll be populating all 8 of my drive bays) replace disks one by one to expand the individual vdevs.
-Fast rebuilds. Recovering from a drive failure only has to do a straight copy from the mirror in that single vdev, reducing the workload on all other drives during the rebuild.
-ZFS bonus: rebuilds only copy actual written data, not the whole drive bit for bit like other RAID solutions.

Disadvantages:
-Not great space efficiency (50% usable space).
-2 disk failures in a single vdev (taking out the whole vdev) kills the whole pool. (Using RE4 drives, however, so I'm doing my best to mitigate that.)
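
To double-check the redundancy point, here's a rough Python sketch (disk labels are made up, just standing in for the 8 drives):

from itertools import combinations

# Planned layout: 4 mirror vdevs of 2 disks each (labels are hypothetical).
vdevs = [("d0", "d1"), ("d2", "d3"), ("d4", "d5"), ("d6", "d7")]
disks = [d for v in vdevs for d in v]

def pool_survives(failed):
    # The pool only dies if some mirror vdev loses all of its members.
    return all(any(d not in failed for d in v) for v in vdevs)

pairs = list(combinations(disks, 2))
ok = sum(pool_survives(set(p)) for p in pairs)
print(f"{ok} of {len(pairs)} possible 2-disk failures are survivable")  # 24 of 28

So the only fatal two-disk combination is both halves of the same mirror; failures spread across separate vdevs are fine.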

Thoughts? Am I correct on all this? I think I'll cross-post in the storage section.
 
Just make sure you back up your important data off the RAID.

RAID is not a backup.
 
Best Buy should have their 8TB external drives (WD Reds AFAIK) for $130 on Black Friday. I suggest you pick up one to keep as a backup.
 
I have a hodgepodge of spare disks at the moment

1x 2TB WD external drive
2x 2TB WD green internal 3.5" drives
1x 1TB WD black
1x 1TB WD external passport (2.5")
and a 500GB and a 640GB.

I currently have about 2.25TB of data on my Synology NAS, so I have enough spares for backups for now. But an 8TB single drive sounds great, thanks for the tip. In-store only, or can I buy online?
 
It should be both, but they went super fast in their pre-sale (like in a matter of 20 minutes).
 
OK, so it's been a long 2 weeks.

The plan was to get the server and swap out the 2650L (V1) low-power 8-core chips for my 2680v2 chips.

already had:
1x E5-2680v2
1x Noctua NH-D9L 3U cooler

bought:
1x E5-2680v2 - eBay - $175
1x Noctua NH-D9L - eBay - $58
1x Cisco SG200-18 managed 18-port switch - eBay $70
1x CyberPower rack mount power strip/PDU/surge - Amazon $50
1x StarTech 12U mini rack (don't need much here)
1x 40GB Intel 320 SSD (for FreeNAS) - eBay $19


When I got the server, I got it mounted in the rack, ran all the Ethernet and power connections, and wire-managed it all.

Powered it up and, as expected, it was ungodly loud. 6x 7000RPM 80x38mm 72CFM fans will do that, lol. Each fan is like 55dB at full blast, probably over 60 with all of them going. Even with the speeds ramped down by the onboard PWM control, they were still too loud for me, but damn do they push a lot of air. Since the server is in my computer room, just adjacent to my living room, you can still hear it even with the door closed.

So I replaced all 6 fans (4 center and 2 rear) with Noctua NF-A8 PWM fans. They are just 80x25mm fans but push 30+ CFM at 2200RPM, and are near silent when ramped down by PWM. At full load they are a little louder than my current server (which only has the 240mm AIO cooler on it). I loaded up Windows on it and ran Prime95 for an hour with it all buttoned up. Max temps on CPU2 were about 55C on the 2650Ls (will retest on the 2680v2s, more on that in a minute).
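
As a rough check on the noise math (a quick Python sketch, assuming six identical fans adding as incoherent sources, i.e. +10*log10(n) dB):

import math

per_fan_db = 55   # per-fan level at full blast, from above
n_fans = 6
combined_db = per_fan_db + 10 * math.log10(n_fans)
print(f"~{combined_db:.1f} dB with all {n_fans} fans at full speed")  # ~62.8 dB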


So now the bad news/setback.

Swapped in my 2680v2 chips aaaaaaaaand no POST. Tried a bunch of different stuff, but it wouldn't POST until I swapped the 2650Ls back in. I researched this MB before buying, and Supermicro's site lists that this board is compatible with v2 CPUs starting with BIOS version 3.0. I'm at 3.2, hmmm.

Some more research revealed that there are 2 different revisions of this board: 1.10 and 1.20. Supermicro doesn't mention this difference in their specs, but most people online seem to confirm that only the rev 1.20 boards will truly take the v2 chips. Of course, with my luck, the board in this server is rev 1.10 (the board revision was not listed in the eBay listing, just the model). Soooo, poop. Guess I'll have to buy the right board and swap it in. Will probably run me about $300 used.

Another slight annoyance is that the seller did not include the mini-SAS -> 4x SATA cable, limiting me to just the 6x onboard SATA ports. That doesn't jibe too well with 8x drive bays, an SSD, and an optical drive. I could get everything working with just the needed SAS cable, but I decided to just get an HBA and 2 more cables for cleanliness of install and future expansion.

Sooo, off to eBay again.
1x new Dell LSI SAS 9207-8i - eBay $63
2x Supermicro SAS -> 4x SATA cables - eBay $20

So in the meantime, I'll play around with FreeNAS 11 and see how it works and what I need to do to get it working. Setting up LACP was a massive headache (all 4 LANs linked together); it was so easy to do on the Synology NAS.
 
Dang, bummer about the board. I wonder what they changed in the revision? At least you could make some money selling the old board and CPUs, I hope.
 
OK, I need some help of my own. Maybe one of you can help a brother out.

I recently purchased the mobile app for Plex for $5, as I was unable to get my phone to sync to the new data on my pleasure Plex server (now that's a funny auto-correct). Now I have no access to the server from mobile (it doesn't see it), and even the server was not accessing the data.

I updated to the latest Plex server, 1.9.x IIRC, but it's stuck looking for servers on the install. I've also deleted the appdata folders prior to re-installation.

I'll start a new thread if we don't come up with a quick solution.

TIA
 
So Plex server config info is stored in the registry under
Computer\HKEY_CURRENT_USER\Software\Plex, Inc.

You could try to remove that and reinstall and see if your server works then?
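
If you want to keep a copy of what's in there before removing it, a quick read-only Python sketch along these lines should dump the values (Windows only; it only prints, it doesn't delete anything):

import winreg

def dump(root, path):
    # Recursively print every value under the given key (read-only).
    with winreg.OpenKey(root, path) as key:
        n_subkeys, n_values, _ = winreg.QueryInfoKey(key)
        for i in range(n_values):
            name, value, _type = winreg.EnumValue(key, i)
            print(f"{path}\\{name} = {value!r}")
        for i in range(n_subkeys):
            dump(root, path + "\\" + winreg.EnumKey(key, i))

dump(winreg.HKEY_CURRENT_USER, r"Software\Plex, Inc.")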
 
Yeah, I would just uninstall the server software and reinstall it, especially if you don't have too many files in its database. It will just rebuild when you reinstall the server software and set up the libraries again.
 
Re the registry key: I haven't cleaned the registry; hopefully that's what I'm missing.

Re uninstalling and reinstalling: I have tried this, but it is still looking for the server after the re-install. Hopefully Custom's suggestion of cleaning out the registry takes care of it.
 
Double-check the UPnP/DLNA settings on both devices. While Plex should take care of that (and usually does, to my knowledge), it's worth checking. You will have to run a search on your specific device to find where the UPnP stuff is, but it may have been turned off in an update or something.
 
Ye, I have a similar setup, just dual-gigabit NICs teamed. Pretty handy for multiple PCs reading/writing; I get over 300 MB/s from my RAID 5 on my el cheapo controller. Even my RAID 1 array keeps it busy when we're having a LAN party with my lancache setup and everyone is updating games.
I just can't justify RAID 10: 50% of the disk space goes poof before it's even running, plus another equivalent because you need the capacity to back up externally (as we all know, RAID is not a backup). I've never been able to see the validity of RAID 10 with how reliable disks are these days. I've been living on the edge until recently; I just bought a large external for backups because I wanted a good backup for when I expand my array, in case something goes wrong. I have so much stuff that it would take forever to rebuild that kind of a library. Though, the last time I did an online RAID expansion it went fine, luckily.
 
I'm going to do RAID 10 across 8 disks, with 4x RAID 1 mirrors:
poolsize = vdev0 + vdev1 + vdev2 + vdev3

What's good with ZFS is that I can run different-size vdevs, so I don't have to upgrade ALL drives; I can do 2 at a time.

So I'll start with 8TB across the 4x 2TB RAID 1s.
Say in the future I can pick up 2x 8TB drives at a good price:
pop one drive out of vdev0, put in an 8TB drive, then rebuild (copy, fast, no parity calc)
pop in the second 8TB drive, rebuild, expand.

Then I'll be left with 2+2+2+8 = 14TB, and so on.

Say instead I do 2x RAID 5 with 8 disks. This needs 2x 4-disk RAID 5 (RAIDZ) vdevs; with both in the pool, it essentially becomes RAID 50.
poolsize = 6TB + 6TB = 12TB. I have 2-disk redundancy, but only if they fail in separate vdevs.

Rebuilds are significantly more time-consuming, calculating parity.
Upgrading drives has to be done 4 at a time:
replace drive 1, rebuild whole array
replace drive 2, rebuild whole array
replace drive 3, rebuild whole array
replace drive 4, rebuild whole array
expand.

That's a lot of stress on the drives, and that stress increases the probability of a failure during the rebuild. Then I lose everything.

And the striped 4x mirrors should be faster than the striped 2x RAID 5.
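
Rough numbers behind the two layouts (a quick Python sketch, ignoring ZFS overhead; sizes in TB):

def mirror_pool_tb(vdevs):
    # Striped mirrors: each vdev contributes its smallest member's size.
    return sum(min(v) for v in vdevs)

def raidz1_pool_tb(vdevs):
    # Striped RAIDZ1 ("RAID 50"-ish): each vdev loses one disk to parity.
    return sum((len(v) - 1) * min(v) for v in vdevs)

print(mirror_pool_tb([(2, 2)] * 4))                      # 8  -> today, 4x 2TB mirrors
print(mirror_pool_tb([(8, 8), (2, 2), (2, 2), (2, 2)]))  # 14 -> after upgrading one mirror to 8TB
print(raidz1_pool_tb([(2, 2, 2, 2)] * 2))                # 12 -> 2x 4-disk RAIDZ1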

I dunno, I'm still not set in stone, but I think I like 10 better.
 