
Fibre Channel NICs for Personal Use?


smoth

Member, joined Jul 26, 2009
I have recently taken up an interest in big-boy toys after having nothing but problems with consumer tech. I have been running finite element simulations, and their demands have pushed my memory usage from 4-6 GB up to 60 GB on a server at my university. When I then run simple tasks on the same files on my personal computer, it just isn't cutting it, so I am in the initial stages of planning a new rig for this summer, possibly based around server-grade hardware, since I don't game and I need to be able to run at 100% for days on end without errors.

In my search for a workaround for gigabit networking (I will be transferring 10-150 GB files around), I came across 4Gb Fibre Channel NICs on eBay for about $30 each that fit into a PCIe x4 slot, which seems like it could be an obvious solution.

Would I be able to use them as you would a standard NIC? I have read a lot about Fibre Channel-specific drives, but I haven't been able to find any info about people using these older Fibre Channel cards for a higher-performance home network. Ideally I would like to put one in my file server and one in my workstation and just connect them directly (switches are very expensive, even on eBay).
 
Well, here is the thing: do you have the bandwidth available to even saturate a Gb network? If you can, then maybe that is a direction to go (if NIC teaming doesn't work on your server-level hardware aspirations).

Consider that your network is only as fast as its slowest part, so if you aren't saturating a Gb network, upgrading to fibre at the PC really won't help at all.

If you had Verizon FiOS Quantum's highest plan (in MD anyway), that's 300 Mbps, or around 37 MB/s... far from what a Gb card can push (125 MB/s).
 
Initially I will be reading from a 5 x 2 TB drive ZFS RAID-Z pool into memory/SSD scratch on my workstation.

Estimates based on other results I can find online put sustained reads for such a configuration in the 400-600 MB/s range... easily saturating a gigabit connection and borderline saturating a 4Gb connection.
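As a sanity check on those numbers: RAID-Z streaming reads scale roughly with the number of data disks (total disks minus one parity disk). This is a rule-of-thumb estimate, not a benchmark, and the per-disk rates below are assumed typical for 2 TB 7200 rpm drives, not measured. A quick Python sketch:

# Back-of-envelope RAID-Z sequential read estimate: streaming reads scale
# roughly with the number of data disks (total disks minus one for parity).
def raidz_read_estimate(disks, per_disk_mbs, parity=1):
    return (disks - parity) * per_disk_mbs

for per_disk in (100, 150):  # assumed per-disk sequential read, MB/s
    est = raidz_read_estimate(5, per_disk)
    print(f"5 disks @ {per_disk} MB/s each: ~{est} MB/s pool read")

That lands right in the 400-600 MB/s range quoted above.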

The output files I create with the data are on the order of a few MB at most, so write performance is a non-issue.
 
Ok, so this is an internal transfer? I didn't get that from the first post. That said, isn't it transferring from that RAID array to another drive, which uses the SATA lanes? What does the network have to do with it? I'm a bit lost here...



Note: 600 Mbps = 75 MB/s. Perhaps I am wrong, but that doesn't saturate a Gb NIC card either (1 Gbps = 125 MB/s).
 
The hard drive reads are in MBytes (MB/s), not Mbits:

SSD write: 300+ MB/s = 2.4 Gbps
5-drive RAID-Z read: 400 MB/s = 3.2 Gbps
10-drive RAID-Z read: 700 MB/s = 5.6 Gbps
RAM write: 5 GB/s = 40 Gbps

Correct me if I am wrong, but all network gear is rated in bits. So when loading data sets larger than 10 GB, my speed will be limited by the SSD scratch disk, which still easily saturates a gigabit connection; on smaller data sets that can be stored entirely in memory, I will be limited by the RAID-Z read, which will borderline saturate the 4 Gbps Fibre Channel bandwidth and will exceed it as I add more drives.
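For anyone double-checking those conversions, here is a quick Python sanity check (assuming decimal units, i.e. 1 MB/s = 8 Mbps and 1 Gbps = 1000 Mbps):

# Verify the MB/s -> Gbps conversions in the table above.
def mb_per_s_to_gbps(mb_per_s):
    return mb_per_s * 8 / 1000  # 8 bits per byte, 1000 Mbps per Gbps

rates = [("SSD write", 300), ("5-drive RAID-Z read", 400),
         ("10-drive RAID-Z read", 700), ("RAM write", 5000)]
for name, mb_s in rates:
    print(f"{name}: {mb_s} MB/s = {mb_per_s_to_gbps(mb_s):.1f} Gbps")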

RAID-Z sequential read tests:
http://bildr.no/image/779457.jpeg
 
Ok, I see... MB, not Mb... LOL! Still, aren't those transfers going over the SATA bus anyway, or are we talking different PCs?
 
Based on my understanding, the connection speeds throughout the read chain go as follows:

request from workstation -no bottleneck->
CPU/caching on ZFS file server -find/request sectors on array->
HDD -300 MB/s (x5-10)->
CPU/ZFS RAM -decompress files->
network interface -125 MB/s or 500 MB/s->
workstation RAM -5 GB/s->
SSD scratch disk -300-400 MB/s

ZFS file server -> network -> Workstation -> write new file on workstation (~10MB)

The processing requirements are low enough to allow higher throughput than the network interface, which is the slowest step if you have an array larger than a few disks and are reading into memory or onto a high-performance SSD. If I am investing a few thousand dollars in the file server and drives, and I can quadruple the bandwidth of the bottleneck for under $100, it seems it would be foolish not to...
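To make that concrete, here is a minimal sketch that models the read chain as its slowest link, using the estimates from this thread (these are planning numbers, not measurements; swap the 125 MB/s gigabit entry for 500 to model a 4Gb FC link):

# Model the pipeline: effective throughput is the minimum link in the chain.
chain = {
    "RAID-Z array read": 400,   # MB/s, 5-disk estimate from above
    "server RAM": 5000,
    "network (gigabit)": 125,   # use 500 for a 4Gb Fibre Channel link
    "workstation RAM": 5000,
    "SSD scratch write": 300,
}

bottleneck = min(chain, key=chain.get)
rate = chain[bottleneck]
file_gb = 150  # largest data set mentioned in this thread
print(f"Bottleneck: {bottleneck} at {rate} MB/s")
print(f"{file_gb} GB transfer: ~{file_gb * 1000 / rate / 60:.0f} minutes")

With gigabit, the network is the floor (about 20 minutes for 150 GB); with the 4Gb card, the SSD scratch write becomes the new bottleneck, which matches the reasoning above.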
 
A 4Gb fibre switch is going to cost you way more than $100. Try several thousand. You'd be much better off doing link aggregation (802.3ad), which would require an operating system that is capable of it (any Linux distro will do it) and a switch that can be configured for it (such as a Dell PowerConnect or similar).
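If you do end up bonding on Linux, the kernel's bonding driver reports its state under /proc/net/bonding/. A minimal Python sketch to check it; the bond name "bond0" is just the common default, not something from this thread:

# Read the kernel bonding driver's status file for a bond interface.
from pathlib import Path

status = Path("/proc/net/bonding/bond0")  # assumed default bond name
if status.exists():
    for line in status.read_text().splitlines():
        # The mode line reads "IEEE 802.3ad Dynamic link aggregation"
        # when LACP is active.
        if line.startswith(("Bonding Mode", "MII Status", "Slave Interface")):
            print(line)
else:
    print("bond0 not configured (or bonding module not loaded)")

One caveat worth knowing: 802.3ad balances traffic per flow, so a single file transfer will typically top out at one link's speed; the aggregate mainly helps with multiple simultaneous streams.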

However, I would suggest keeping any files that you need now locally, as that is going to be faster than any network connection and far cheaper, even if you have to double up on your solid-state or mechanical drive storage. Then, when you are done processing a file, move it to the server.
 
Yes, I saw that switches were very expensive, which was the basis of my original question: can you use a pair of Fibre Channel NICs as you would standard Ethernet NICs and directly connect two machines, or is additional hardware required?

In my browsing, I saw these cards:
http://www.ebay.com/itm/Qty-2-Emule...201?pt=LH_DefaultDomain_0&hash=item1e748efe81

and you can pick up a 3' cable for $10-20, so about $80 total cost... if it will work...

Then I figured I could just connect both systems to a gigabit network to reach everything else.

Is this possible?
 
Honestly, I don't have enough experience to answer that, as I've never tried. Doing a quick search online, it looks like it may be possible to do a point-to-point connection (FC-P2P).
 
Looks like I have some reading to do...


UPDATE:
I realized this morning that if I changed the order of some of my scripting, I could avoid reading the entire data set into memory at once before performing operations on it, so I rewrote some of my Matlab script to calculate after reading smaller data sets. Not sure why I didn't do it like this originally, since I usually only need the original data to generate the smaller output files and then am done with it, so this works great.

However, now I am completely disk-read limited, as the calculations take less time than reading the data over my university's 100 Mbit network, which takes anywhere from 20-120 min. So reading the data into memory is my limiting factor. I ran a test with a test data set on my local machine, reading one element at a time, and again it was limited to about 80 MB/s (likely the read speed of the drive).

So ideally I would keep all my data local, but with several TB, that starts to become difficult if I want any room to expand and redundancy. The other option is temporarily moving my data to the workstation with an external SSD (still faster with two transfers (file server > SSD, SSD > workstation) than over gigabit), but that is half as fast as a single 4Gb connection directly into memory (I don't even need a scratch disk that way, and it decreases my memory demands from 17GB to 800MB).
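For anyone wanting to do the same restructuring outside Matlab, here is a minimal Python sketch of the chunked-read idea; the filename, chunk size, and per-chunk calculation are placeholders, not my actual script:

import numpy as np

CHUNK_BYTES = 64 * 1024 * 1024  # 64 MB per read; tune to the disk

def reduce_chunk(chunk):
    # Placeholder for the real per-chunk calculation; a running sum here.
    return chunk.sum()

total = 0.0
# Hypothetical raw float64 data file (size assumed a multiple of 8 bytes).
with open("dataset.bin", "rb") as f:
    while True:
        buf = f.read(CHUNK_BYTES)
        if not buf:
            break
        total += reduce_chunk(np.frombuffer(buf, dtype=np.float64))
print(f"result: {total}")

Peak memory stays at one chunk instead of the whole data set, which is the point of the restructuring described above.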
 
I bet it's doable, but keep in mind you need the Fibre Channel connects (mostly a fibre patch cable, if I recall), and the longer you go, the more expensive it gets :)


You're better off, like thiddy said, channel-grouping your switch if possible. Two Intel 4-port NICs (one in each box) will give you 4 gigs if it's set up correctly.


But it sounds like you have it down a little more :) so good luck, and let us know what you do.
 
The two boxes will be in adjacent slots in a rack, so at most I need a 1 m cable (~$10 at Newegg). Since they should offer similar performance, and both will probably be equally difficult to set up, I will just choose whichever I can find cheaper cards for. Right now the 4Gb fibre cards are running $30-40 each, while the 4-port gigabit NICs are running $50-80 each, so roughly $70 vs $110 all in. Will let you know what I end up doing... still have a lot of research to do on this build.
 
If performance outweighs cost considerations, go 10Gb. NICs are more expensive (around $300-400) but will give you more than double the FC performance.
 