  1. #1
    gorilly (Uxbridge, London, UK; joined Jul 2004; 898 posts)

    NAS and SAN storage

    I know this could probably go in the storage section, but I feel it's more to do with networking.

    I've been looking all over the web for a good guide / explanation of NAS (network attached storage) and SAN (storage area network) but couldn't find one anywhere.

    I have a close contact at Dell advanced server support and asked him to explain it to me, expecting a two-line email back, but what I actually got is fantastic.

    I just thought I would put it on here for anyone else who is looking for answers!

    I would credit him, but maybe Dell wouldn't like that, as this is more of a personal opinion...


    No problem at all Michael,

    I've been looking for a NAS v SAN document to send you but couldn't find any external-facing docs that solely deal with that distinction. So, here it is: ;-)

    NAS - Network Attached Storage

    All systems which connect to the NAS system do so through the public network (Ethernet etc.). This is an easy solution to implement: it is normally just a case of racking the system, connecting it up, configuring it on the network, creating your shares and away you go. It's basically adding another dedicated file server with extra storage onto the network. But there are other benefits. Storage Server 2003 comes with all the OS clients installed, so Unix/Linux/NetWare/Apple/Windows users can all access the storage. Also, there is no limit on the number of clients that can attach to the NAS system, so you do not need to worry about CALs. Finally, the NAS systems have a web-based management utility, so you can configure the system over the network without needing to know Windows inside out. (Good for Linux people who have strong opinions on using Windows ;-))
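
    To make the "just another file server on the network" point concrete, here is a hypothetical sketch of a Linux client attaching to a NAS share over the ordinary network; the host name, share name and username below are made-up examples, not anything from the AX100/PV745n docs:

    ```shell
    # Mount an SMB/CIFS share exported by a NAS box (hypothetical names).
    # Needs the cifs-utils package on most Linux distributions.
    sudo mkdir -p /mnt/nas-share
    sudo mount -t cifs //nas01/public /mnt/nas-share -o username=mike

    # If the NAS also exports the same storage over NFS for the Unix/Linux
    # clients, the equivalent mount would be:
    sudo mount -t nfs nas01:/exports/public /mnt/nas-share
    ```

    Either way, the NAS is reached over the public network with standard file-sharing protocols; no special hardware is needed on the client.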

    SAN - Storage Area Network

    A SAN is a dedicated high-speed network that interconnects the servers and storage and is not part of the public network. Generally, SANs are fibre-based, which provides higher speed/bandwidth than Ethernet. This enables system administrators to consolidate their storage from multiple servers onto one storage system, which means less configuration, easier management and simpler backup strategies. Historically, however, SAN implementations were a lot more expensive than NAS due to the cost of fibre HBAs, switches, cabling, etc.


    The AX100 mixes SAN/NAS technology and comes in two flavours, iSCSI and fibre, which relate to how the AX100 connects to the network. To deploy a fibre AX100 (fibre = higher speed/bandwidth) you would need to buy fibre HBAs for each server that will use the storage, a fibre switch (not cheap!) and fibre cables. That's without mentioning backup solutions.

    The AX100i uses the iSCSI protocol, which means it can use existing Ethernet networks to attach to the servers. However, the Ethernet switch must be a gigabit one and Cat5e cables must be used. Dell also recommends that a dedicated gigabit switch is purchased and dual NICs are installed in each server, so that the data traffic between the AX100i and the servers is not on the public network, where it could reduce performance. (Though it is possible to use the AX100i on the same network as clients.) This makes the AX100i a cheaper option, as gigabit Ethernet technology is cheaper than fibre. Finally, it is also possible for multiple sites to use the AX100i over a WAN, provided the network configuration allows it.
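
    For a feel of what "attach over existing Ethernet" looks like on the server side, here is a hypothetical sketch using the open-iscsi initiator on Linux; the target IP and IQN are invented for illustration, not real AX100i values:

    ```shell
    # Ask the iSCSI target (e.g. an AX100i on the dedicated gigabit segment)
    # what it exports, then log in to it. Address and IQN are made up.
    sudo iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260
    sudo iscsiadm -m node -T iqn.1992-04.com.example:ax100i.lun0 \
        -p 192.168.10.50:3260 --login

    # After login the LUN shows up as an ordinary block device (e.g. /dev/sdb)
    # that can be partitioned, formatted and mounted like a local disk.
    ```

    This is the practical difference from the fibre flavour: the "HBA" is just a gigabit NIC plus software, which is where the cost saving comes from.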

    There are documents on the AX100 specs located at:
    http://www1.euro.dell.com/content/pr...=uk&l=en&s=pad

    PV745n documents:
    http://www1.euro.dell.com/content/pr...=uk&l=en&s=pad

    Does this help at all? (I think I should have been a sales rep... ;-)) In the end, the main considerations come down to:

    a) How much storage do I need (including a % extra for future proofing for a couple of years)?
    b) What performance level do I need?
    c) Are there any specific business requirements?
    d) How much can I afford?


    If you need a quote for the AX100 for comparison, let me know, because I think a rep would need to issue one as the website doesn't at the moment. If all your data is going to be centralised, it may also be necessary to reconsider your backup solution.

  2. #2
    CyberFed (Orlando, Florida; UCF Computer Engineering, c/o 2005; joined Nov 2001; 1,386 posts)
    Thanks for the good read. I just recently configured a 280 GB, four-drive SCSI Fibre Channel SAN on an IBM server at work... really neat stuff... talk about a workhorse!
    I couldn't begin to list my computer specs here... I work as a systems engineer for NASA; just imagine the toys I get to play with. :)

  3. #3
    Benvanz (Victoria, BC, Canada; joined May 2005; 568 posts)
    Good read, thx!

  4. #4
    Thespis377 (Starkville, MS; joined May 2002; 599 posts)
    SANs are extremely cool. We have several SANs on campus, and I help manage two of them, mirrored systems. We have two Sun StorEdge 9970s. The biggest difference between SAN and NAS is the speed of data transfer and seek time: a NAS is limited by the Ethernet pipe, while a SAN is limited by almost nothing. At 15 GB/s data transfer... it's very fast. Plus, with the large amount of cache in the system, you can get seek times down to 1 ms. We also have several ATABeasts for systems that require large amounts of disk space, and for our backup server to write to disk before writing to tape... kind of like a cache for tape. Very cool stuff. Wish I could afford either of these for my home. 9-)
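
    To put the "Ethernet pipe" limit in rough numbers, here is a back-of-the-envelope sketch; the figures are theoretical line rates I'm assuming for illustration, ignoring protocol overhead, and are not from the post above:

    ```shell
    # Theoretical line-rate ceilings, ignoring protocol overhead.
    # Gigabit Ethernet: 1000 Mbit/s divided by 8 bits per byte.
    echo "GigE:  $((1000 / 8)) MB/s"
    # 2 Gbit Fibre Channel, per link:
    echo "2G FC: $((2000 / 8)) MB/s"
    ```

    So a single NAS client tops out at roughly 125 MB/s on gigabit Ethernet, while even one fibre link doubles that, and SAN arrays aggregate many such links internally.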

  5. #5
    Ebola, Senior Toilet Scrubber (Rosemount, MN; joined Jan 2001; 2,511 posts)
    I've done Windows 2003 (SQL 2000) clustering with an AX100. If you need any info on it, let me know. It's been a while, but I think I could walk you through it. And you're right, it is almost a networking issue. I think I used 10 IPs for all of the hardware in my cluster and virtuals.

    BTW, the AX100 runs Windows XP Embedded. Its web interface is kind of crappy too; your Linksys routers have a better-looking interface.
