Sorry, wasn't implying that you didn't know. Just wasn't sure what you were using.
No apology required. I appreciate the offer, but I created the testing methodology for the NAS server tests I was going to do before I got a bit busier. To get real results, you need to use real files, which is what I'm doing. Synthetic tests can tell you important information, but I find it best to use my own files, since that is what I will actually be seeing when I use the system, and it is a bit more random/realistic.
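A minimal version of that real-file approach, assuming a hypothetical directory of representative files and a hypothetical pool mountpoint, might look like:

```shell
# Sketch only: time a copy of real files onto the pool. Drop the page
# cache first so cached reads don't inflate the result (requires root).
# /data/sample-set and /zfstest/benchmark are placeholder paths.
sync
echo 3 > /proc/sys/vm/drop_caches
time cp -a /data/sample-set /zfstest/benchmark/
```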
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfstest     5.35T  87.0G  2.15K     95   275M   348K
zfstest     5.35T  87.0G  1.83K    116   234M   377K
zfstest     5.35T  87.0G  1.93K    121   248M   455K
zfstest     5.35T  87.0G  2.15K    138   275M   579K
zfstest     5.35T  87.0G  2.17K     64   277M   323K
zfstest     5.35T  87.0G  2.31K      0   295M      0
38999659736 bytes (39 GB) copied, 132.507 s, 294 MB/s
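As a quick sanity check on the dd line above, throughput is just bytes over elapsed seconds, in dd's decimal megabytes (1 MB = 10^6 bytes):

```shell
# 38999659736 bytes in 132.507 s, expressed in dd's decimal MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 38999659736 / 132.507 / 1e6 }'
# prints: 294 MB/s
```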
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sdb               0.00         0.00         0.00          0          0
sda               1.00         0.00         0.01          0          0
sdc             421.80        79.27         0.00        396          0
sdh             423.80        79.21         0.00        396          0
sdd             438.00        79.21         0.00        396          0
sdf             408.60        79.62         0.00        398          0
sdi             406.00        79.74         0.00        398          0
sde             411.60        79.87         0.00        399          0
sdg               0.00         0.00         0.00          0          0
dm-0              2.00         0.00         0.01          0          0
dm-1              0.00         0.00         0.00          0          0
[root@ruby ~]# zpool status zfstest
  pool: zfstest
 state: ONLINE
  scan: scrub in progress since Fri Apr 19 21:16:09 2013
    50.9G scanned out of 5.35T at 343M/s, 4h30m to go
    0 repaired, 0.93% done
config:

    NAME        STATE     READ WRITE CKSUM
    zfstest     ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdh     ONLINE       0     0     0
        sdd     ONLINE       0     0     0
      raidz1-1  ONLINE       0     0     0
        sdf     ONLINE       0     0     0
        sdi     ONLINE       0     0     0
        sde     ONLINE       0     0     0
    spares
      sdg       AVAIL

errors: No known data errors

[root@ruby ~]# zpool list
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
zfstest  5.44T  5.35T  87.0G  98%  1.00x  ONLINE  -
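For reference, a layout like the one shown above (two three-disk raidz1 vdevs plus a hot spare) could be created with something along these lines. The device names come from the status output and would differ on another system, and the command destroys any existing data on those disks:

```shell
# Sketch only -- wipes the listed disks. Two 3-disk raidz1 vdevs
# striped together, with sdg attached as a hot spare.
zpool create zfstest \
    raidz1 sdc sdh sdd \
    raidz1 sdf sdi sde \
    spare sdg

zpool status zfstest    # confirm the vdev layout
zpool list zfstest      # check size/alloc/free
```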
What "external" 1.5TBs?

At the bottom of this picture, you can see my Omnistar (Rackable Systems) external SAS expander tray, which is currently holding eight 1.5 TB drives. Looking back through the thread, it seems I never posted a picture of just the unit installed in the rack. Please ignore the mess; it is temporary while I wait for new parts and participate in the folding competition.
Oh, that. Why would they get disconnected or shut off?

If the power goes out, the unit does not turn back on. I'm fairly sure I can configure it via a serial connection, but I need a cable to do so. Plus, things can go wrong; I don't want to design the system assuming everything is always working.
[root@ruby mnt]# zpool status
  pool: StoragePool
 state: ONLINE
  scan: none requested
config:

    NAME             STATE     READ WRITE CKSUM
    StoragePool      ONLINE       0     0     0
      raidz2-0       ONLINE       0     0     0
        sdh          ONLINE       0     0     0
        sdf          ONLINE       0     0     0
        sdp          ONLINE       0     0     0
        sdq          ONLINE       0     0     0
        sdn          ONLINE       0     0     0
        sdo          ONLINE       0     0     0
        sdg          ONLINE       0     0     0
        sdb          ONLINE       0     0     0
        sdi          ONLINE       0     0     0
      raidz2-1       ONLINE       0     0     0
        sdy          ONLINE       0     0     0
        sdx          ONLINE       0     0     0
        sdv          ONLINE       0     0     0
        sdu          ONLINE       0     0     0
        sdr          ONLINE       0     0     0
        sdt          ONLINE       0     0     0
        sdw          ONLINE       0     0     0
        sds          ONLINE       0     0     0
      raidz2-2       ONLINE       0     0     0
        sdc          ONLINE       0     0     0
        sde          ONLINE       0     0     0
        sdd          ONLINE       0     0     0
        sdj          ONLINE       0     0     0
        sdm          ONLINE       0     0     0
        sdl          ONLINE       0     0     0
        sdk          ONLINE       0     0     0

errors: No known data errors

[root@ruby mnt]# zpool list
NAME          SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
StoragePool  33.4T  4.02T  29.4T  12%  1.00x  ONLINE  -
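As a rough cross-check of those numbers: each raidz2 vdev gives up two disks to parity, so with vdev widths of 9, 8, and 7 the pool has 18 data disks. Assuming all 24 drives are 1.5 TB (decimal; the thread mentions 1.5 TB drives, though the pool may mix sizes), a back-of-the-envelope estimate is:

```shell
# (width - 2) data disks per raidz2 vdev; ignores metadata/slop overhead.
awk 'BEGIN {
    disk = 1.5e12                       # assumed drive size in bytes
    raw = (9 + 8 + 7) * disk
    usable = ((9-2) + (8-2) + (7-2)) * disk
    printf "raw ~ %.1f TiB, usable ~ %.1f TiB\n", raw / 2^40, usable / 2^40
}'
# prints: raw ~ 32.7 TiB, usable ~ 24.6 TiB
```

That is in the same ballpark as the 33.4T SIZE figure above (for raidz pools, zpool list reports raw capacity including parity).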
Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
sda               0.00         0.00         0.00          0          0
sdy             195.80        13.45        10.36         67         51
sdx             195.20        13.48        10.32         67         51
sdv             202.80        13.49        10.54         67         52
sdu             199.40        13.52        10.17         67         50
sdr             205.20        13.62        10.80         68         54
sdt             197.00        13.35        10.38         66         51
sdw             187.60        13.44        10.84         67         54
sds             199.40        13.44        10.60         67         53
sdc              70.20         0.00        12.26          0         61
sde              72.40         0.00        12.20          0         61
sdd              64.00         0.00        11.71          0         58
sdj              63.40         0.00        12.33          0         61
sdh             265.20        13.09        11.26         65         56
sdf             170.80        16.10        11.82         80         59
sdm              64.00         0.00        12.10          0         60
sdp             260.80        13.31        11.27         66         56
sdq             267.80        13.13        11.26         65         56
sdl              79.60         0.00        12.11          0         60
sdn             164.80        16.05        11.53         80         57
sdk              63.00         0.00        12.19          0         60
sdo             266.80        13.04        11.21         65         56
sdg             265.80        13.14        11.25         65         56
sdb             170.60        15.94        11.56         79         57
sdi             264.60        12.99        11.30         64         56
dm-0              0.00         0.00         0.00          0          0
dm-1              0.00         0.00         0.00          0          0
LOL maybe I should use you for my offsite backup

Well, I am already offsite storage for Google, as per the Storage Megathread title.
The file server is behaving better now with the SAS drive off of the SAS expander.

That didn't last long. I was configuring CrashPlan on the server, now that all the data is in its final location and in the right pool, when suddenly all my terminals popped up with "kernel: journal commit I/O error". Great, this again. I tried to run a command that I hadn't run before and got an input/output error. I went down to the rack and it was covered in /dev/sda device errors (the SAS drive). I have the disk cloning over to another disk right now, but it is cloning as "raw", which isn't a good sign. I'm not going to lose anything important other than configuration files, but it is still pretty annoying. The replacement should be here shortly.
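For a failing disk like this, GNU ddrescue (if it's available) is usually a better cloning tool than plain dd, since it skips and logs bad areas on a first pass and can retry them later; a sketch with hypothetical target and map-file names:

```shell
# Sketch only: /dev/sdX and the map file are placeholders, and the
# target disk is overwritten -- double-check device names first.
ddrescue -f -n  /dev/sda /dev/sdX /root/sda-rescue.map   # fast pass, skip bad areas
ddrescue -f -r3 /dev/sda /dev/sdX /root/sda-rescue.map   # retry bad sectors up to 3 times
```

The map file lets the second invocation resume exactly where the first left off instead of re-reading the whole disk.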