
Areca RAID Expansion/Upgrade Help


thobel
I currently have an Areca-1261 with 15 2TB drives attached in RAID 6 with 1 hot standby drive. I'm looking to replace those with 15 4TB drives, but I'm trying to figure out the best way to do it.

My plan, v1.0 alpha: get the 15 drives, then:

Step 1: Add 5 of them to one of my desktops and copy the data over the network.

Step 2: Remove all the 2TB drives and replace them with the other 10 4TB drives.

Step 3: Set up RAID 6 with 1 hot standby.

Step 4: Copy the data back to the new array.

Step 5: Add the last 5 4TB drives and "expand" the array.

Step 6: Done.

I would really like to know if anyone sees a problem with this. Will it work? I'm 99% sure expanding the Areca array will work fine, but I sort of recall it showing up as 2 volumes in Windows Disk Manager?
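Rough capacity check on my own numbers: with 10 of the 4TB drives installed and 1 kept as hot standby, a 9-drive RAID 6 gives about (9-2) x 4 = 28 TB usable, while the current array tops out around 24-26 TB usable (depending on whether the spare counts in the 15), so the copy back in step 4 should fit. The 5 drives in the desktop only hold 20 TB raw, though, so step 1 only works if there's no more than about 20 TB of actual data on the array right now.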
 
15 drives in RAID 6 is pushing it. I normally suggest no more than 5 drives in RAID 5, and I wouldn't suggest more than 10 in RAID 6. The problem comes when a drive fails and the array is rebuilding. Fifteen 4 TB drives is 60 TB of data to read through, and those drives likely have an unrecoverable read error rate of roughly one error per 12 TB read, so parity is the only thing saving you from corruption. While the array is rebuilding after a drive failure, you are down to a single disk of parity. On top of that, the increased load on the disks raises the chance of another one dropping out. Making the rebuild longer (or running it with fewer parity disks) is not a good idea.
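To put rough numbers on that: the usual consumer drive spec is one unrecoverable read error per 10^14 bits, which works out to an expected error roughly every 12.5 TB read. Reading through all 60 TB during a rebuild therefore means you'd statistically expect around five unrecoverable read errors per pass, and each one has to be repaired from the single parity disk you have left.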

Regarding your question, you will have to do what I did recently: hook up some of the new drives individually in another system and move the data from the existing array onto them. From here, with the old array freed up, you have two options.

1) You could hook up some of the 4 TB drives, move the data to the temporary small array, then expand it.

2) Alternatively, you could hook up the 2 TB drives elsewhere and move the data onto those, freeing up all the 4 TB drives. However, this is more data transfer and slightly more risky. It will likely be faster than expanding the array, though.

Expanding an array should not show up as two volumes. If it does, that is a Windows thing.

Is this in a server? Have you considered moving to another setup, like ZFS? You really are running too many disks for a single RAID 6 array.
 

It's a custom server, yes. I have had it running for something like 2 years as it is now.

In option 1, "You could hook up some of the 4 TB drives, move the data to the temporary small array, then expand it":

Would I be correct in assuming you mean I move some data off, shrink the array and remove a few 2TB disks, then add a few 4TB disks as a 2nd array on the same RAID card? Then keep moving data and continue to shrink and eject disks?

I had been thinking about doing 2 arrays this time (7x4TB x2). I just very much love:

1) The ease of working with a single volume
2) The speed of the dedicated RAID card

As an old-school IT guy, it's hard to get over hardware RAID vs software RAID.
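(For what it's worth on the 2-array idea: two 7-drive RAID 6 sets of 4TB disks come to roughly 2 x 5 x 4 = 40 TB usable, versus about 48 TB if all 14 drives went into one RAID 6 with the 15th as hot spare, so splitting costs about two drives' worth of capacity.)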
 
Shrinking the array is another option, but I think that adds more work/risk than is needed. You could remove some data, shrink the array, create the initial array with the 4 TB disks, move some data over, and repeat the process. However, this is going to take a long time; shrinking/growing arrays is not a fast process. Going from four to nine 2 TB disks on an LSI 8708EM2 took 88 hours.

I also had the same mindset as you in regard to hardware and software RAID. Comparing traditional software RAID (mdadm) to hardware RAID, I'd go with hardware every time: you get better speed, the extra features the card gives you, and much easier expansion. No question there.

Speed is a bit different, though. If you access your share only through the network, do you really need it to be faster than a gigabit connection? Do you truly saturate your connection as it is?
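To put a number on it: gigabit Ethernet tops out around 110-120 MB/s of real-world throughput, while a RAID 6 array like this can sustain several times that on sequential transfers, so for network access the link is the bottleneck either way.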

---

However, ZFS is a totally different beast when it comes to software RAID, and as with everything there are upsides and downsides. The first is that it doesn't run on Windows (you'd be looking at Linux, FreeBSD, or a Solaris derivative), which is the thing most likely to scare people away. The second is that you don't expand the "pool" with individual drives, but rather with "virtual devices" (sets of arrays is the best way to think of them). This means you have to plan ahead.

The features, though, outweigh the negatives by a substantial amount.

When you add a virtual device (vdev) to a "pool", it adds to the total storage of the pool. You want to add like-devices in the same vdev, but there is no problem or risk with running different disks in the same pool. This would allow you to run your 2 TB and 4 TB drives in the same pool, for example, and have one huge storage folder, seen as a single device. Your RAID card can't do that.

When writing to the array, ZFS creates checksums of the data and checks them periodically to make sure they are valid; this is not the same as parity. It lets the filesystem detect silent corruption in cases that normal RAID will not catch.

Snapshots are built into the file system and can be scheduled automatically. Compression is a feature that can (and should) be enabled.

Sub-pools (datasets) can be created inside the main pool, and they can have different properties from the pool they belong to. For example, you could create a "media" dataset where you store movies and music, with compression and snapshots turned off.

There are many more features, but those are the only ones I can think of off the top of my head.
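To make the pool/vdev/dataset terms concrete, the commands look roughly like this. This is just a sketch (the disk names are placeholders and "tank" is whatever you name the pool), so check the ZFS docs for your version before running anything:

  zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg    # pool "tank" with one 7-disk RAID-Z2 vdev
  zpool add tank raidz2 sdh sdi sdj sdk sdl sdm sdn       # add a second vdev; the pool just grows
  zfs set compression=lz4 tank                            # compression for everything in the pool
  zfs create tank/media                                   # a dataset ("sub-pool") inside the pool
  zfs set compression=off tank/media                      # override the property just for media
  zfs snapshot tank@before-migration                      # a manual snapshot; tools can schedule these
  zpool scrub tank                                        # walk the checksums and repair silent corruption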
 

The last time I used Linux was for the folding team. Within a week I stopped folding and sold a 15k HP server for 50% off in anger :)

I just remembered I have an 8-port ARC-1882 in the server for the OS/VMs on SSD. It has 4 ports left, so I can do 4x4TB RAID 0 as swap space, then add 7 drives back to the 1261 in RAID 6 and make a 2nd array with the last 7. Do you know if the hot spare can cover both arrays?
 
Fair enough, just wanted to make sure that you considered all the options.

I can't speak for all RAID cards, but all the ones I've used have had the option of a global hot spare that covers multiple arrays. I'd be surprised if a high-end Areca did not.
 

Thank you very much, sir. All the info has been very helpful.
 