
NAS Software Reviews - Your input is wanted!

At least at the company I work for, there really isn't much more they do with their systems. If I had full access to the servers, you can bet I'd be running a PXE server with the tools I need for my job. But I don't have that, so I'm forced to use their crappy setup. :shrug:
 
Decided to put the server back in the rack tonight. Frees up some space and I don't have to listen to it.

server_racked.JPG
 
At least at the company I work for, there really isn't much more they do with their systems. If I had full access to the servers, you can bet I'd be running a PXE server with the tools I need for my job. But I don't have that, so I'm forced to use their crappy setup. :shrug:

Roger that, thanks for the info.

Decided to put the server back in the rack tonight. Frees up some space and I don't have to listen to it.

server_racked.JPG

Schmexy bastahd ;)

snipped for off topic :)
 
The replacement hard drive arrived. It is the same model as the one I sent in, just with a different serial number. I'll probably pop it in later tonight and get the system up and running.

I put my spare Intel gigabit PCIe NIC in the NAS test server and it seems to be working, so that may not have been a FreeNAS issue.

The server has been given the name BlooP. I chose this because the server is experimental and so is the programming language.
 
Drives are behaving much better now that the bad one is gone. I'm over 1 GB/sec with all the hard drives running.

drive_tests.png
 
NAS is up and running. I did some tests with IOMeter and it seems perfect for the job. I have everything set up to run in sequence, so I can simply let it run. I've attached the save file if you want to look it over.

Here are the settings:
Code:
Workers = 1
Maximum disk size = 10,000,000 sectors (~5 GB)

Read/write mixes: 100% read, 50/50 read/write, 100% write
Sizes: 32k, 1M, 10M
Pattern: 100% Sequential, 100% Random
Run time: 60 seconds
Stepping IO: Start - 1; End - 512; Power 2; (10 steps, 1, 2, 4, 8, 16, 32, 64, 128, 256, 512)
Total tests: 18
Total run time: 180 minutes (18 tests x 10 steps x 60 seconds)
Also attached is the .xlsx file from Excel that has the graphs. If you don't have Excel, you can view it in Google Docs. Input? Comments?
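If anyone wants to sanity-check how those settings multiply out to 18 tests and 180 minutes, here's a quick Python sketch (the labels are just illustrative, not IOMeter syntax):
Code:
from itertools import product

mixes = ["100% read", "50/50 read/write", "100% write"]
sizes = ["32k", "1M", "10M"]
patterns = ["sequential", "random"]
queue_depths = [2 ** i for i in range(10)]  # 1 through 512, ten steps
run_seconds = 60

tests = list(product(mixes, sizes, patterns))
print(len(tests))  # 18 tests (3 mixes x 3 sizes x 2 patterns)
print(len(tests) * len(queue_depths) * run_seconds // 60)  # 180 minutes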
 

Attachments

  • IOMeter.zip
    1.2 KB · Views: 37
  • Prelim tests.zip
    16 KB · Views: 34
I've updated the methodology document to reflect the changes to the tests.

I'm also running the first FreeNAS performance tests now. Screenshot for now, actual results later. I will be making a run against my (actual) file server to serve as an "ideal" scenario; otherwise my first review would have nothing to compare against.

freenas_testing.png
 
Wasn't sure how NCQ would affect NAS until you did the test :)

Last time I looked, 1, 8, 64, and 512 suffered, right? (powers of 8)

I found that interesting.
Have you updated the original Excel file?
 
Wasn't sure how NCQ would affect NAS until you did the test :)
It is actually queue depth, not NCQ, but I didn't expect that large of a difference either. I started playing around with it when I found my 10M test doing only 10 MiB/sec.

Last time I looked, 1, 8, 64, and 512 suffered, right? (powers of 8)
All the IO queue depth numbers are a power of two.
2^0 = 1
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
2^7 = 128
2^8 = 256
2^9 = 512

Have you updated the original Excel file?
I'll do that right now. I forgot the test would take 180 minutes to run, which put it past the time I was willing to stay up.
 
Cool, looking forward to it.

As for the powers of two... I just found it odd that it was 1, 8, 8^2, and 8^3 that were low.
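Out of curiosity I checked that in Python; the four depths you called out really are the powers of eight hiding inside the power-of-two series:
Code:
depths = [2 ** i for i in range(10)]          # 1 through 512
powers_of_eight = [8 ** k for k in range(4)]  # 1, 8, 64, 512
print(all(d in depths for d in powers_of_eight))  # True: every third step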
 
I keep forgetting to post information here, sorry. Those tests were done in a virtual machine on my main system, which I found gives wildly incorrect numbers. I restarted into Windows and ran the tests again, which gave much more sane values.

I did notice a few trends, and I'm going to test whether my main server does the same thing. When writing values to the server ("mixed" or "write"), it stops sending data for two seconds, at an interval of exactly twenty seconds. "Read" tests don't have this issue. See the attached picture. I'm watching top on the FreeNAS system, but I don't see anything that would indicate a problem. Maybe it is flushing data to the hard drives or doing something else that temporarily suspends write operations?
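For anyone who wants to reproduce the stall pattern, here's a rough Python probe I could run against a share. The mount path is hypothetical and the exact numbers will vary, but a two-second stall should show up as near-zero intervals in the per-second log:
Code:
import time

CHUNK = b"\0" * (1 << 20)  # 1 MiB per write

# Hypothetical path; point it at a file on the NAS share being tested.
with open("/mnt/nas_share/stall_probe.bin", "wb") as f:
    start = time.time()
    while time.time() - start < 120:      # probe for two minutes
        tick = time.time()
        written = 0
        while time.time() - tick < 1.0:   # roughly one-second window
            f.write(CHUNK)
            written += len(CHUNK)
        f.flush()
        print(f"{time.time() - start:6.1f}s  {written / 1e6:8.1f} MB/s")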

The second issue may simply be too much data. In my "10m" tests, it shows "0" as the end result for queue depths of 128, 256, and 512. This only happens on "read" and "write" tests; the mixed tests show fine.

write_anomaly.png
 
I figured out why the large queues sometimes give erroneous results (10m Seq Read at a 512 queue got 320 megabytes a second!). IOMeter waits for the entire queue to be processed before it calculates the values. So, if draining one queue takes longer than the test itself, it reports 0 for that test. And if a queue finishes during the wrong test, which is very easy when the transfers outlast the test period, the results get counted toward the next test, giving wrong numbers.

Because of this, I'm removing the 10m tests. It is too difficult to get consistent results without making the test extremely long. Dropping them will also allow me to run the remaining tests for longer, which should give better results.
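The back-of-envelope math makes it obvious. Assuming a roughly gigabit-limited 100 MB/s just for illustration:
Code:
# Why the 10M tests had to go: draining a deep queue of 10 MiB transfers
# can eat most of the 60-second test window by itself.
queue_depth = 512
transfer_mib = 10
link_mb_per_s = 100.0  # assumed ~gigabit ceiling, for illustration

outstanding_mb = queue_depth * transfer_mib * 1.048576  # MiB -> MB
print(f"{outstanding_mb / link_mb_per_s:.0f} s to drain")  # ~54 of 60 s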
 
I've attached the first test run in ODS format, graphs included. I'm very happy with how the read tests are showing, along with a few of the mixed and write ones. I think some of the inconsistencies will be fixed with a longer run time (2 minutes) and a small ramp-up time (10 seconds). I've attached the updated IOMeter configuration file.


[Attachments removed because results were inaccurate]
 

Attachments

  • IOMeter Configuration.zip
    1.1 KB · Views: 25
I found out that my server can cheat if I only use a 5 GB file. Going by my server monitoring tools, it caches the entire thing in memory and updates the hard drives when it feels like it (i.e., never). This gave me very skewed results, since it was basically treating the file as a RAM drive. That isn't a very fair setup to test against. So, I increased the size of the file a hundredfold to make sure it can't cache it. After watching the random tests, it seems to be giving much more accurate results.

Since the test server is capable of caching in memory as well, I'm going to just use the 500 GB file across all reviews. Assuming everything goes according to plan, I should have the first comparison test done tomorrow morning and the first FreeNAS performance test later tomorrow. I hope I don't have to change the tests again; this is taking a lot of time. On the upside, I can run it while I'm at work.
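The sizing logic, in case anyone wants to adapt it: the file just has to dwarf the server's RAM. The RAM figure below is an assumed example, not the test server's actual spec:
Code:
file_gb = 5 * 100     # original 5 GB file scaled up 100x = 500 GB
server_ram_gb = 16    # hypothetical server RAM, for illustration
print(f"{file_gb / server_ram_gb:.0f}x RAM")  # ~31x: caching can't hide the disks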
 
Tests are done for the file server, but I'm going to run the "1M Seq Read" tests again as they seem incorrect. Blue is the server cheating and orange is the true speed. The random tests are where it was able to gain the most by keeping the file in RAM, as seen in the graphs:

fileserver_comp_1.png

fileserver_comp_2.png
 

Attachments

  • file_server_comparison.zip
    66.3 KB · Views: 50
I love OCF :)

Apparently my thanking privileges have been revoked...

Thanks Thiddey, keep doing what you're doing :)
 
I have near-complete results for FreeNAS. Attached is my ODS file showing all three runs, the average for each test, and how far off the average each run is. See the zip file if you want to look these over.
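For reference, the average/deviation columns in the spreadsheet boil down to this (the throughput numbers below are placeholders, not real results):
Code:
runs_mb_s = [112.4, 108.9, 110.7]  # placeholder values for one test point

mean = sum(runs_mb_s) / len(runs_mb_s)
for i, r in enumerate(runs_mb_s, start=1):
    pct = (r - mean) / mean * 100
    print(f"run {i}: {r:6.1f} MB/s ({pct:+.1f}% off the {mean:.1f} average)")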

Additionally, I have the near-complete results for FreeNAS compared against my server in graph form. Bear in mind that I've only done one run on my server, whereas FreeNAS got three. FreeNAS does OK with sequential reads, but anything random, mixed read/write, or write comes in substantially below the "ideal" setup. That isn't to say the results are bad, just that the numbers are lower than I expected.

freenas_comp_1.png

freenas_comp_2.png

Look at those sexy graphs. Have some sexy spreadsheets:
 

Attachments

  • file_server_comparison.zip
    66 KB · Views: 43