
File transfer speed discrepancies


Stratus_ss

Overclockix Snake Charming Senior, Alt OS Content
Joined
Jan 24, 2006
Location
South Dakota
I am wondering what would cause this.

Hardware

The desktop, media-centre, and server are all connected via Cat-6e to a 16-port gigabit switch.

media-centre

Code:
Base Board Information
	Manufacturer: ASUSTeK Computer INC.
	Product Name: P7P55D LE
i7 860
10G of RAM
raid-z with 3 x 3TB NAS drives
1 x 4TB regular drive
32G SSD for OS
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03)

The server

Code:
Base Board Information
        Manufacturer: ASUSTeK Computer INC.
        Product Name: KGP(M)E-D16
2 x 16-core AMD Opteron(tm) Processor 6xxx series
64G of RAM
raid-z with 4 x 1.5TB drives
2 x 2TB OS drives in RAID 1
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection

Software

Server: CentOS 6.5 with OpenVZ fully up to date (64bit): 2.6.32-042stab108.2

Media centre: Ubuntu 14.04 (64 bit): 3.13.0-48-generic


I have benched the disks themselves and they look like this (all numbers are from writing a 32GB file):

Iperf tests

Code:
Desktop to Media Centre:  0.0-59.3 sec  6.50 GBytes   941 Mbits/sec
Desktop to Server: 0.0-60.0 sec  6.58 GBytes   941 Mbits/sec

Media centre:

Code:
4TB = 135 MB/s
Raid-z = 265 MB/s

Server:

Code:
Raid-1 = 72.5 MB/s
Raid-z = 258 MB/s
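For reference, numbers like the above can be reproduced with a plain dd sequential write. A minimal sketch (hypothetical path, a much smaller file here; conv=fdatasync makes dd flush before reporting, so the page cache doesn't inflate the rate):

```shell
# Sketch of a sequential-write bench; conv=fdatasync makes dd wait
# for the data to reach the disk before it reports a throughput figure.
dd if=/dev/zero of=/tmp/bench.bin bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/bench.bin
```

Run it against a file on the array you want to test rather than /tmp; without conv=fdatasync (or oflag=direct) a write this small mostly measures RAM.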

Now the weird thing is the network transfers. If I push from my desktop, which has RAID-0 SSDs, I get the following results (with a 24GB MKV) using rsync:

Media Centre:

Code:
Desktop to 4TB = 111.95MB/s
Desktop to Raid-z = 111.55MB/s

Server

Code:
Desktop to Raid-1 = 42.64MB/s
Desktop to Raid-z =  43.01MB/s


I am wondering what would cause this? Network cards were my first thought, but since the iperf results are almost identical I'm not sure. Could the OS disk cause this, i.e. the SSD can just push more throughput? I thought NIC performance depended more on the CPU, cabling, and switch.
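One guess worth testing (an assumption on my part, nothing above proves it): rsync runs over SSH by default, so the transfer is capped by what a single core can encrypt, and ~43 MB/s is in the range an older Opteron core manages for AES without hardware acceleration. A rough local sketch of single-core AES throughput, using openssl as a stand-in for the SSH cipher:

```shell
# Rough, hypothetical check: push zeros through AES on one core.
# If the reported rate lands near the ~43 MB/s rsync figure, the SSH
# cipher (not the NIC or the disks) is the likely bottleneck.
dd if=/dev/zero bs=1M count=128 2>/dev/null \
  | openssl enc -aes-128-cbc -pass pass:dummy 2>/dev/null \
  | dd of=/dev/null bs=1M 2>&1 | tail -n 1
```

If that is the cause, the same test run on the i7 should come out much faster, which would fit the media centre hitting ~112 MB/s while the server stalls at ~43 MB/s.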

EDIT: Just for S&G's I ran it against 2 machines that are using Ethernet-over-power (EoP) adapters

Code:
laptop (EoP): 0.0-60.5 sec   292 MBytes  40.5 Mbits/sec
nuc: 0.0-60.7 sec   280 MBytes  38.6 Mbits/sec
laptop (10/100 ethernet):  0.0-60.2 sec   676 MBytes  94.1 Mbits/sec
 
Do a nc connection between the server and desktop to isolate the network devices. It should transfer at the max network speed all day long. If not, you know where to start.
 
Do a nc connection between the server and desktop to isolate the network devices. It should transfer at the max network speed all day long. If not, you know where to start.

Isn't this what iperf proves out? Not sure why one would use nc over iperf.
 
A few of your tests showed slower results. I would start by isolating the components. nc allows you to isolate the network entirely. Drop /dev/zero or /dev/urandom to the network, have another system read it into /dev/null. If the network is always showing full speeds, you know it is an issue with the shares or disk access.
 
From the following results it looks like iperf and netcat agree:

Code:
stratus@stratus-desktop /big_space/git_projects/antergos-packages/deepin_desktop  $ dd if=/dev/zero bs=16000 count=50000 | nc -v stratus-server 12345
stratus-server [192.168.99.99] 12345 (italk) open
50000+0 records in
50000+0 records out
800000000 bytes (800 MB) copied, 6.79395 s, 118 MB/s

stratus@stratus-desktop /big_space/git_projects/antergos-packages/deepin_desktop  $ dd if=/dev/zero bs=16000 count=50000 | nc -v media-centre 12345
media-centre [192.168.99.195] 12345 (italk) open
50000+0 records in
50000+0 records out
800000000 bytes (800 MB) copied, 6.79211 s, 118 MB/s

Code:
stratus@stratus-desktop /big_space/git_projects/antergos-packages/deepin_desktop  $ dd if=/dev/urandom | nc -v media-centre 12345
media-centre [192.168.99.195] 12345 (italk) open
^CExiting.
1342791+0 records in
1342790+0 records out
687508480 bytes (688 MB) copied, 41.7882 s, 16.5 MB/s

stratus@stratus-desktop /big_space/git_projects/antergos-packages/deepin_desktop  $ dd if=/dev/urandom | nc -v stratus-server 12345
stratus-server [192.168.99.99] 12345 (italk) open
^CExiting.
3515459+0 records in
3515458+0 records out
1799914496 bytes (1.8 GB) copied, 107.796 s, 16.7 MB/s
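One note on the /dev/urandom runs above: with no bs= argument dd uses 512-byte blocks, and /dev/urandom output is generated by the CPU, so on kernels of that era it tops out well below gigabit wire speed. Both hosts landing at ~16.5 MB/s suggests the random source is saturating, not the network. This is easy to confirm without the network at all:

```shell
# /dev/urandom is CPU-bound; reading it straight into /dev/null shows
# the generator's ceiling with dd's default-sized (512-byte) blocks.
dd if=/dev/urandom of=/dev/null bs=512 count=100000 2>&1 | tail -n 1
```

If this local read reports roughly the same 16-17 MB/s, the urandom tests say nothing about the NICs; the /dev/zero runs at 118 MB/s are the meaningful ones.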

How much difference do SATA cables make? Further to this, I would assume that a 3.0 Gb/s SATA link should be enough to handle the network transfer at full capacity (as opposed to the 6.0 Gb/s of the NAS drives). My basic understanding is that 7200 RPM disks on 3.0 Gb/s links in a raid-z arrangement should be able to push more data than the network can deliver. Or is this incorrect?
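To put numbers on that: SATA line rates are in gigabits per second with 8b/10b encoding (10 bits on the wire per payload byte), so even the slower 3.0 Gb/s link carries far more than gigabit Ethernet can deliver. A quick sanity check using the standard line rates (these are spec figures, not measurements):

```shell
# SATA II: 3.0 Gb/s line rate, 8b/10b encoding -> divide by 10 for payload bytes.
echo "SATA II payload ceiling: $(( 3000 / 10 )) MB/s"
# Gigabit Ethernet: 1.0 Gb/s -> divide by 8 for raw bytes (before framing overhead).
echo "GigE raw ceiling:        $(( 1000 / 8 )) MB/s"
```

So the SATA link speed is not the limit; the ~112 MB/s seen to the media centre is already close to the practical gigabit ceiling.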
 
Depends on the setup of the ZFS array. I'm seeing about half network speed transfers, but I only have one vdev. I need to order more disks...
 
Each computer has 1 pool with 1 vdev.

The server has 4 disks, the media centre has 3. So you think it may have something to do with the ZFS setup on the server? Some breakdown between the network card and the hard drives, since each component individually tests fine.
 
I see the same thing, so let me know if you find a solution. There seems to be a ton of activity on the disks when accessed through Samba, compared to the same actions done from a local command line. However, I recently went through the array and deleted millions of (tiny) files, which might have caused my speed issues in the first place.
 