Results 1 to 11 of 11
  1. #1
    Member
    Join Date: May 2010
    Location: NYC
    Question on Upgrading RAID

    So I have an Areca 1261 RAID controller with 15x2TB drives in RAID 6 with 1 hot spare.

    I'm sorta thinking ahead to how I will migrate to 3TB (or 4TB) drives. Now, I understand that:

    1) A RAID should use the same size/type of drives
    2) I believe that if I add, say, a 3TB drive as a replacement drive, it will only use 2TB of it

    What I was wondering is: if I swap one drive at a time with a 3TB drive and let the array rebuild each time, once it's complete, will the RAID understand it now has all 3TB drives and allow me to expand it?

    I'm looking for guidance on how to migrate from 15x2TB to 15x3TB, short of building a 2nd server of course. Unless that's the only real way to do it.
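
    For reference, the rough capacity math (a quick sketch; assumes the hot spare is one of the 15 drives, so 14 are active in the RAID 6 set):
    Code:
    # Usable capacity of the RAID 6 set, in the binary TiB that Windows reports.
    TIB = 1024 ** 4

    def raid6_usable_tib(drives_in_set: int, size_tb: float) -> float:
        data_drives = drives_in_set - 2  # RAID 6 spends 2 drives on parity
        return data_drives * size_tb * 1e12 / TIB

    print(raid6_usable_tib(14, 2.0))   # ~21.8 TiB -- matches the current array
    print(raid6_usable_tib(14, 3.0))   # ~32.7 TiB after a move to 3TB drives
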
    Folding User Stats
    CPU: I7 3960X @5000 H2O
    MOBO: ASUS Rampage IV Extreme H2O
    RAM: Corsair Dominator 4x2GB 2133 Cl8 (Hypers)
    Video: EVGA Titan Tri SLI H2O
    Case: Danger Den Custom DoubleWide
    PSU: Silverstone 1500
    LCD: Dell 30" 3007
    SSD: 4x OCZ Vertex 4 120 RAID 0
    H2O: Dual D5 Pumps, EK SLI Serial, BP Pump Top, BP Pump Cover, Frozen Q Reservoir, Lamptron FC2, 3x Black Ice GTX 480's, 2x Black Ice GTX 360's

  2. #2
    thideras
    Destroyer of Empires and User Accounts, El Huginator
    Join Date: May 2006
    Location: South Dakota
    With all the ports populated, a migration is going to be difficult. How much space are you using on the array, and does the controller allow you to reduce the size of the array by removing drives? If you can remove drives, make the array as small as you can, put the new drives in the configuration you want (RAID 6, etc.), and copy the data to the new array. If everything fits on the new array, remove the old one, put all the disks in, and add them to the array.

    If your array is full, you could offload the data elsewhere and do what I mentioned above. Alternatively, you could buy another RAID card or SAS expander.

    The really risky route is to remove a 2TB drive, replace it with a 3TB drive, let the array rebuild, and repeat for every drive. I would highly suggest not doing this, however; with that many rebuilds, there is a high chance of data corruption.
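
    If you do go the copy route, here's a minimal copy-and-verify sketch (hypothetical paths, Python 3.8+; any tool that checksums source against destination before the old array is wiped does the same job):
    Code:
    import hashlib
    import shutil
    from pathlib import Path

    SRC = Path(r"D:\array")      # old RAID 6 volume (assumed mount point)
    DST = Path(r"E:\migration")  # temporary target drives (assumed)

    def sha256(path: Path, chunk: int = 1 << 20) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    shutil.copytree(SRC, DST, dirs_exist_ok=True)

    # Confirm every file hashes identically before touching the old array.
    for src_file in SRC.rglob("*"):
        if src_file.is_file():
            dst_file = DST / src_file.relative_to(SRC)
            if sha256(src_file) != sha256(dst_file):
                print(f"MISMATCH: {src_file}")
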
    Desktop: Gigabyte Z77X-UD5H | 3570K | 32 GB | GTX 770 Classified | 1 TB Samsung Evo & 2 TB HDD | Windows 3.1 | 4x 2560x1440 Monitors
    VM Server 1: Dell R710 | 2x L5630 | 96 GB RAM | 8x 300 GB Savvio | IBM M1015 | 34 TB Raw disk | XenServer
    VM Server 2: Dell R710 | 2x L5630 | 96 GB RAM | 8x 300 GB Savvio | XenServer
    Router: Dell R410 | E5620 | 32 GB RAM | 3x 300 GB | pfsense
    "That's not overkill, or a lot. That's just thiderastic." -txus.palacios
    "Clouds are silent, cold, and wet. Servers are none of these things." -Bobnova

    Current projects: Rackmount Overkill (New) | Little Overkill (New)
    Articles: Rack Mounting 101 | Dell Perc 5/i Throughput Benchmarks
    My Website


    Want to talk directly to all the moderators at once? Call the Mod Hotline!

  3. #3
    Member
    Join Date: May 2010
    Location: NYC
    Quote Originally Posted by thideras
    With all the ports populated, a migration is going to be difficult. How much space are you using on the array, and does the controller allow you to reduce the size of the array by removing drives? If you can remove drives, make the array as small as you can, put the new drives in the configuration you want (RAID 6, etc.), and copy the data to the new array. If everything fits on the new array, remove the old one, put all the disks in, and add them to the array.

    If your array is full, you could offload the data elsewhere and do what I mentioned above. Alternatively, you could buy another RAID card or SAS expander.

    The really risky route is to remove a 2TB drive, replace it with a 3TB drive, let the array rebuild, and repeat for every drive. I would highly suggest not doing this, however; with that many rebuilds, there is a high chance of data corruption.
    I'm using 12.8TB out of 21.8TB. With 1 hot spare, I'm pretty sure I can remove the hot spare and a few drives (need to confirm). Another option is to just add a few drives to my desktop in RAID 0, copy everything over, rebuild the new array, and copy it back. As long as I have no disasters in the 24-36 hours of building the new array and moving the data back, that may be the safest path.

  4. #4
    thideras
    Destroyer of Empires and User Accounts, El Huginator
    Join Date: May 2006
    Location: South Dakota
    If you are going to put some drives in your desktop, I'd avoid RAID. Simply copy files to fill up the drive and move to the next one. Don't complicate the switch any more than you need to. You are moving a lot of data.

    Since the array is half full, I would give it a look through to see if there is anything you can delete, resize the partition as small as you can, and remove as many disks as you can. You should be able to do this with just two resizes (shrink the existing array, expand the new one). Keeping the data transfer local is going to make this go really fast: over a gigabit network, one copy of 12.8 TB of data is going to take about 35 hours, or roughly 71 hours if you have to move it both ways. If your array can do 300 MB/sec locally, one copy would take around 13 hours.
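
    The rough math behind those numbers (a sketch; the 110 MB/s real-world gigabit rate is an assumption, not a measurement):
    Code:
    # Back-of-envelope copy times for 12.8 TB (binary TiB, as Windows reports it).
    TIB = 1024 ** 4
    data_bytes = 12.8 * TIB

    for label, mb_per_s in [("gigabit network, ~110 MB/s", 110),
                            ("local array, 300 MB/s", 300)]:
        hours = data_bytes / (mb_per_s * 1e6) / 3600
        print(f"{label}: {hours:.1f} h one way, {hours * 2:.1f} h both ways")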

    What operating system is the server running?

  5. #5
    Member jmdixon85
    Join Date: Oct 2008
    Location: Cumbria (UK)
    Quote Originally Posted by thideras
    If you are going to put some drives in your desktop, I'd avoid RAID. Simply copy files to fill up the drive and move to the next one. Don't complicate the switch any more than you need to. You are moving a lot of data.

    Since the array is half full, I would give it a look through to see if there is anything you can delete, resize the partition as small as you can, and remove as many disks as you can. You should be able to do this with just two resizes (shrink the existing array, expand the new one). Keeping the data transfer local is going to make this go really fast: over a gigabit network, one copy of 12.8 TB of data is going to take about 35 hours, or roughly 71 hours if you have to move it both ways. If your array can do 300 MB/sec locally, one copy would take around 13 hours.

    What operating system is the server running?
    Makes me wonder: with 1Gb/s having been around for so long, surely we should see 10Gb/s onboard LAN soon.

    Anyway, I don't mean to take the thread off topic. But if anyone can help you, Thiddy is our local storage/RAID guru around here.
    Main System/File Server/IP-Fire Router
    CPU: I2500K @ 4.7Ghz (H2o cooled)//AMD Athlon 215 x2/Celeron 1.1Ghz
    Mainboard: AsRock Z77 Extreme 4/Gigabyte 78LMT-USB3/Gigabyte GA-C847N
    RAM: 8GB DDR3 1600Mhz/12GB DDR3 1333Mhz//2GB DDR2
    Video: 2x nVidia GTX660 3GB sli/Radeon 3000//iGPU
    HDD: 256GB SSD (System)/WD VR 300GB (Games)/80GB HDD (TV Recordings)/WD VRaptor 160GB (System), Maxtor 80GB (Sys Backup), 5x 1TB (RAID5 Data)/3TB (Backup), Highpoint RR 3520 RAID Card/160GB
    PSU: Corsair TX-M 750W/Enermax 495W//FSP 300W
    Case: Coolermaster HAF932/CM Cosmos S/
    Audio: Toslink -> Pioneer VSX-820-K (Dolby TrueHD/DTS Master Audio)
    i7/i5/i3 Overclocking Guide/Intel Core2 overclocking guide/AMD AM2 overclocking Guide

  6. #6
    Member
    Join Date: May 2010
    Location: NYC
    Quote Originally Posted by thideras
    If you are going to put some drives in your desktop, I'd avoid RAID. Simply copy files to fill up the drive and move to the next one. Don't complicate the switch any more than you need to. You are moving a lot of data.

    Since the array is half full, I would give it a look through to see if there is anything you can delete, resize the partition as small as you can, and remove as many disks as you can. You should be able to do this with just two resizes (shrink the existing array, expand the new one). Keeping the data transfer local is going to make this go really fast: over a gigabit network, one copy of 12.8 TB of data is going to take about 35 hours, or roughly 71 hours if you have to move it both ways. If your array can do 300 MB/sec locally, one copy would take around 13 hours.

    What operating system is the server running?
    The server is running 2008 R2. Bandwidth-wise, I'm more limited by the gig network than by the drives; I get 100% network utilization when doing copies across the network. I figured RAID 0 would just speed things up and let me avoid breaking up the folder structure.

  7. #7
    thideras
    Destroyer of Empires and User Accounts, El Huginator
    Join Date: May 2006
    Location: South Dakota
    Why not just do everything locally, though? It is going to be faster.

    A recent hard drive should be able to saturate a network connection by itself. There is no need to run RAID.
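
    (For rough numbers, as an assumption rather than a measurement: gigabit Ethernet delivers around 115-118 MB/s of payload after protocol overhead, while a current 7,200 RPM drive sustains roughly 130-180 MB/s on sequential transfers, so one disk can keep the link full.)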

  8. #8
    Member
    Join Date: May 2010
    Location: NYC
    Quote Originally Posted by thideras
    Why not just do everything locally, though? It is going to be faster.

    A recent hard drive should be able to saturate a network connection by itself. There is no need to run RAID.
    Well, I'm out of power connectors on the PSU and have no real room to mount the hard drives; I could sorta just mount them outside the box. I was trying to figure out if I can use the "blue" 8-pin power ports on the Corsair 1000HX with some sort of adapter.


    PS: Do you know of any decent GPUs that I could put into a PCI slot so I can use the PCIe slot for a 2nd RAID controller?

  9. #9
    thideras
    Destroyer of Empires and User Accounts, El Huginator
    Join Date: May 2006
    Location: South Dakota
    I don't understand how you wouldn't have enough power connectors for them. If you have 16 drives and remove some, doesn't that leave room for the new ones?

  10. #10
    Member
    Join Date: May 2010
    Location: NYC
    Quote Originally Posted by thideras
    I don't understand how you wouldn't have enough power connectors for them. If you have 16 drives and remove some, doesn't that leave room for the new ones?
    Ohh, you're saying to shrink the current array first, remove those drives from the RAID, and use their spots. I'm sorta scared to mess with the current array without having it copied over first, so I was thinking I could use onboard SATA; I just need to get power to the drives.

    A side project is to add a 2nd RAID controller with a bunch of Vertex 3 Max IOPS drives for my VMs, so I still need to find some way to add more power.

  11. #11
    Member ziggo0
    Join Date: Apr 2004
    Location: La Porte, Indiana
    It might be a small investment, but you could purchase another PSU/CPU/mobo/RAM/SATA controller and use it as a temporary rig to migrate your data to the new RAID array. If you have any spare rigs in the house that a wife/kid could do without for a few days, you could use one of those. You can turn around and sell the hardware here or elsewhere to make up the loss... how important is your data?
