  1. #1 - cw823 (Honeybadger Moderator)

    MDADM RAID Guide (via Ubuntu)

    Installing mdadm and related utilities
    Install Ubuntu (desktop version).
    Login and go to System, Administration, Update Manager. Install all updates and reboot.
    Open Firefox and go to www.webmin.com. Click the Downloads tab. Click the blue link under “Debian package suitable for……”. When the download box pops up, choose the “Save File” option. We will use this a little later on.
    Open a Terminal window (Applications, Accessories, Terminal) and type (without the quotes):
    “sudo apt-get install gparted” (you will have to hit enter, and then will get prompted for your password)
    “sudo apt-get install samba” (you may have to type in a “Y” at some point)
    “sudo apt-get install mdadm” (Type a Y. Another screen will pop-up, use “tab” button to select “Ok” and use up arrow on next screen to select “No Configuration”, and “tab” button to “Ok”)
    “sudo apt-get install perl libnet-ssleay-perl openssl libauthen-pam-perl libpam-runtime libio-pty-perl apt-show-versions” (hit enter and type in a “Y”)
    "cd Downloads" (this is case sensitive, one important thing to remember in linux)
    "dir" (this will let you see that file we downloaded, “webmin_1.540_all.deb”
    "sudo dpkg --install webmin_1.570_all.deb" (hit enter, may have to type yes at some point)

    Installing a later version of mdadm (than the one that comes in the repository) - edit: I think Ubuntu 11 includes this later version, 12/2011
    We are going to download mdadm 3.2.2 for this example. You can browse all available versions at http://mirror.nexcess.net/kernel.org...ls/raid/mdadm/
    "cd /tmp"
    "wget http://mirror.nexcess.net/kernel.org/linux/utils/raid/mdadm/mdadm-3.2.2.tar.gz"
    "tar -xzvf mdadm-3.2.2.tar.gz"
    "cd mdadm-3.2.2"
    "make && make install" (note the double ampersand - a single "&" would run make in the background instead of chaining the two commands)
    ***No reboot necessary but might be a good idea if you have some impending mdadm tasks***
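    Two hedged notes on the source build (my additions, not in the original steps): compiling needs gcc and make, and you can confirm which mdadm you are now running:
    "sudo apt-get install build-essential" (only needed if make complains that gcc or make is missing)
    "mdadm --version" (should now report v3.2.2)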

    Creating a new RAID array
    Go to System, Administration, and click on GParted. This is a partition mgmt. tool. Also go to System, Administration, and click on “Disk Utility”.
    All of your hard drives should be listed on the Disk Utility screen. Click on a drive, and you will see a “Device: /dev/sd-“ entry. Note which drives are which. For this setup I have 4x 4Gb drives, listed as /dev/sdb /dev/sdc /dev/sdd /dev/sde (with /dev/sda being my boot drive). Close the disk utility
    Now using the GParted utility that we opened, choose your first drive from the drop-down box at upper right of the utility window. Most likely when you run the program it will default to /dev/sda (drive) and show partition /dev/sda1 (partition). I’m going to change it to /dev/sdb as that is the first of my raid drives.
    If the drive is empty it should show as unallocated. That’s fine. If there is already a partition on the drive, however, go to Partition, and delete that partition. Now go to “Device”, and then “Create Partition Table…” Click the “+” sign next to “Advanced” and select “gpt” from the drop-down box. Then click on “Apply”. You can do this for all the drives at the same time. Then click on the green “checkmark” and hit “Apply”. Hit “Close” when it completes.
    Now click on the partition (it should be a grey window with a yellow/white border once selected). Go to “Partition”, “New”. The only thing we’ll change here is the File system. I am now using ext4, so that’s what I would recommend changing it to (from ext2 to ext4). You can type in a label if you want to, but you don’t need to. Click “Add”.
    Follow those same instructions for each of the other raid drives that we will be setting up – creating the partition table and creating the partition.
    Close GParted
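    If you prefer the terminal over GParted, here is a rough equivalent using parted - a sketch assuming your first raid drive is /dev/sdb as in this example (double-check the device name first, this wipes the drive):
    "sudo parted -s /dev/sdb mklabel gpt" (creates the gpt partition table)
    "sudo parted -s -a optimal /dev/sdb mkpart primary ext4 0% 100%" (creates one partition spanning the drive)
    Repeat for each raid drive (/dev/sdc, /dev/sdd, /dev/sde).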

    Time to create the array. Open a Terminal Window – Applications, Accessories, Terminal. Type in the following (again without the quotes):
    Original source: http://ubuntuforums.org/showthread.php?t=408461
    "sudo –sH" (hit enter and type in your password. – this is case sensitive)
    “mdadm --create /dev/md0 --chunk=X --level=Y --raid-devices=Z devices"

    1. /dev/md0 - we are essentially building a new drive, so we have to specify what we want it to be named: md0, md1, md2, etc., depending on how many arrays you build. I would recommend one array; then you can just expand it when you add drives.
    2. X – chunk size in kilobytes. Default is 64 and is probably ok, so type “--chunk=64”
    3. Y - raid level (“0” - RAID0; “1” - RAID1; “5” - RAID5; “6” - RAID6) - there are other raid level options, but for a setup like this there is no reason to consider them
    4. Z – raid devices – if you have 4 raid drives, you would type “--raid-devices=4”
    5. Devices = this is where we will specify which drives. See below:

    So for a 4 drive raid6 with the first drive being /dev/sdb, here is what you would type: “mdadm --create /dev/md0 --chunk=64 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1” (the “1” follows each drive because we already created the partitions)
    Hit enter
    It will tell you there is already a file system. Type a “Y” and hit enter. It should say “array /dev/md0 started”.
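    Before formatting, you can confirm the array came up the way you intended (standard status commands, not an extra step from the original source):
    "mdadm --detail /dev/md0" (shows the raid level, chunk size, and member drives)
    "cat /proc/mdstat" (shows the initial build/resync progress)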
    “mkfs.ext4 /dev/md0” – this will format the file system on our new RAID drive /dev/md0 to ext4.
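    Optional, and an assumption on my part rather than part of the original guide: mkfs.ext4 can be told the raid geometry, which can help performance. With a 64K chunk and 4K blocks, stride = 64/4 = 16; a 4-drive RAID6 has 2 data drives, so stripe-width = 16 x 2 = 32:
    "mkfs.ext4 -E stride=16,stripe-width=32 /dev/md0"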
    Open up Firefox and type in this address: https://localhost:10000 and hit enter
    Click “I understand the Risks” and then “Add Exception”. Another window will pop-up, give it a few seconds then click “Confirm Security Exception”
    You should be at a login screen. Go up to Edit, Preferences, and click the “Use Current Page” button. Then click “Close”
    Type in your username and password. You can tell it to remember your login permanently, you can also have Firefox save your login credentials. Click “Login”
    Once on your network, you can access this from any other workstation. Just type in https://serveripaddress:10000 and hit enter.
    To look at our raid array, click on “Hardware”, and “Linux RAID”. If it’s not listed then click on “un-used modules” and find “Linux RAID” in that expanded list.
    Now you should see /dev/md0. It should be active, listed as the appropriate raid level. Click on the blue /dev/md0
    It will say “Active but not mounted”; we will mount it later. If a drive had failed, it would be listed here. Anytime your array is rebuilding or resyncing it will be listed here with a percentage progress indicator. This is the easiest way to see what is going on with your raid array.
    Close Firefox
    Use the same Terminal window we used before if the process is done; if not, just open another Terminal window. You can open as many as you like at once, but I would stick with two or fewer or it gets confusing.
    If you open a new Terminal window, then type “sudo -sH” and hit enter. If you don’t do that (this step gives us elevated rights the entire time the terminal window is open), then you have to type “sudo” before every command. You’ll know you don’t have elevated privileges if you get an access denied.
    Now type “nano -w /etc/mdadm/mdadm.conf” and hit enter. This is the configuration file for mdadm (the RAID mgmt. software on linux).
    For some reason, creating the array fails to update this file. If your array isn’t listed here, it won’t start on the next reboot. If you scroll down you should see a line “# definitions of existing MD arrays”. If you don’t see your RAID drive listed there, do a Ctrl/X to close and go back to terminal. If it is already listed in there, skip this next step.
    “mdadm -Es | grep md0 >>/etc/mdadm/mdadm.conf” - this step looks at the array we’ve created and adds it to mdadm.conf. Now do another “nano -w /etc/mdadm/mdadm.conf”. You’ll see the new line at the end of the file; I normally copy and paste it directly under “# definitions of existing MD arrays”, but you don’t have to. If you do that, expand your Terminal window to be sure you have the whole “UUID” - it’s a long number. If you didn’t make changes, Ctrl/X to exit. If you did make changes, then do a Ctrl/O and hit enter, which will save it, then Ctrl/X to exit.
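    For reference, the appended line should look roughly like this - a sketch only, since the exact fields vary with your mdadm version and metadata format, and the UUID will be your own:
    "ARRAY /dev/md0 level=raid6 num-devices=4 UUID=..."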
    Now our array will start automatically after reboot. Now we can mount the array so we can write to it.
    “mkdir /mnt/md0” (that’s /mnt/md’zero’)
    “nano -w /etc/fstab”. Another file full of things. Use your down arrow to go all the way to the bottom. Type in the following:
    “/dev/md0 /mnt/md0 ext4 defaults 0 0”
    First is our device name (/dev/md0), tab, mount point (/mnt/md0), tab, file system (ext4), tab, defaults, tab, 0, tab, 0 - this will mount our array anytime the server is rebooted.
    Ctrl/O and hit enter to save, Ctrl/X to exit.
    “mount /dev/md0”
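    To confirm the mount worked (standard checks, my addition):
    "df -h /mnt/md0" (shows the size and free space of the mounted array)
    "mount | grep md0" (should show /dev/md0 on /mnt/md0 type ext4)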
    Reboot. Login.
    Open Firefox – it will come up to the webmin page. You may have to login again.
    On the main screen you should now see that your “local disk space” has increased to show the additional space provided by your array.
    Click on System, and “Disk and Network Filesystems”. You should now see a listing for /mnt/md0 ext4 file system, RAID Device 0. It will tell you how much of the drive is used. There is a small percentage that is automatically used by the FS, that’s ok
    Close Firefox
    Click On “Places”, and “Computer”. Double-click “File System” and the “mnt” folder. You should see a folder called “md0”, that’s where your raid array is mounted. There will be a lost+found folder inside md0, that’s ok.

    Tweaking build/rebuild times
    Original source: http://www.cyberciti.biz/tips/linux-...ild-speed.html
    This tweak will increase the minimum and maximum rebuild speed as the defaults are quite low; this step is IMHO absolutely necessary
    Open a terminal window, elevate privileges via “sudo -sH”
    “sysctl dev.raid.speed_limit_min” and/or “sysctl dev.raid.speed_limit_max” will give you the current minimum and maximum rebuild speed in K/sec
    To set the parameters manually:
    “echo 50000 > /proc/sys/dev/raid/speed_limit_min” – Increases minimum to 50,000K/sec
    “echo 200000 > /proc/sys/dev/raid/speed_limit_max” – increases maximum to 200,000K/sec
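    Equivalently (an alternative I'd suggest, not from the original source), sysctl can set the same values without echoing into /proc:
    "sysctl -w dev.raid.speed_limit_min=50000"
    "sysctl -w dev.raid.speed_limit_max=200000"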
    To make changes permanent:
    “nano -w /etc/sysctl.conf”
    Scroll all the way to the bottom and add the following three lines (single-spaced):
    “# RAID rebuild speed parameter tweak”
    “dev.raid.speed_limit_min = 50000”
    “dev.raid.speed_limit_max = 200000”

    Ctrl/O and hit enter to save, Ctrl/X to exit. Reboot.
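    If you'd rather not wait for a reboot, the saved file can be applied immediately (again an alternative, not an original step):
    "sysctl -p" (re-reads /etc/sysctl.conf and applies the values now)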

    Tweaking “read ahead cache” and “stripe_cache_size”
    Original source: http://www.stress-free.co.nz/tuning_..._software_raid
    This tweak will increase array performance by adjusting the read ahead cache and stripe_cache_size, as linux defaults to very conservative settings
    Open a terminal window, elevate privileges via “sudo -sH”
    “echo 8192 > /sys/block/md0/md/stripe_cache_size”
    “blockdev --setra 4096 /dev/md0”
    Both settings are specific per array. If you have three arrays (md0, md1, md2), you would need to enter each command three times, substituting the correct array name.
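    You can check the values before and after tweaking to confirm they took (standard commands, my addition):
    "cat /sys/block/md0/md/stripe_cache_size" (prints the current stripe cache setting)
    "blockdev --getra /dev/md0" (prints the current read-ahead, in 512-byte sectors)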
    To make changes permanent:
    “nano -w /etc/rc.local”
    Scroll all the way to the bottom and add the following lines (single-spaced); be sure to add these lines BEFORE the “exit 0” entry:
    “# RAID cache tuning”
    “echo 8192 > /sys/block/md0/md/stripe_cache_size” - add a separate line for each raid array, substituting the correct “mdX”
    “blockdev --setra 4096 /dev/md0” - add a separate line for each raid array, substituting the correct “mdX”
    If uncertain, navigate to the link above to see a sample copy of /etc/rc.local
    Ctrl/O and hit enter to save, Ctrl/X to exit. Reboot

    ***At any time you can open a terminal window and do a “watch cat /proc/mdstat” to see the current status of your arrays; Ctrl/C closes the watch and goes back to terminal. You may need to “sudo” prior to typing that command. Normally I like to use webmin to check status on rebuild progress.***

  2. #2 - cw823 (Honeybadger Moderator)
    Resizing the array

    Adding a single drive to an array
    I am going to add one drive to the RAID6 that I have created in previous steps. The new drive will be drive /dev/sdf with partition /dev/sdf1. I am going to simplify the steps as they will refer to previous sections in this tutorial.
    Original source: http://scotgate.org/2006/07/03/growi...5-array-mdadm/
    Partition the drive via gparted.
    Open a terminal window, elevate privileges via “sudo -sH”
    “mdadm --add /dev/md0 /dev/sdf1” (the variables for this command are your raid device name “/dev/mdX” and your new partition name “/dev/sdX1”; my example uses raid device /dev/md0 and drive device partition /dev/sdf1)
    “mdadm --detail /dev/md0” – this checks to see that it was added as a spare
    “mdadm --grow --raid-devices=5 /dev/md0” (I am growing my raid device /dev/md0 by one device, from 4 drives to 5.)
    “watch cat /proc/mdstat” will let you monitor progress on the rebuild, or you can use webmin Linux Raid tab to monitor progress as well. Ctrl/C to close the cat /proc/mdstat window and go back to terminal.
    We need to update mdadm.conf with the new drive info:
    “nano -w /etc/mdadm/mdadm.conf” - if your ARRAY line includes a device-count entry (num-devices=x), change it to include your new drive (for my example I changed it from 4 to 5). Ctrl/O and hit enter to save, Ctrl/X to exit
    Once the rebuild completes we need to expand the file system to include the space added by the additional drive:
    “umount /dev/md0” – unmounts raid device /dev/md0
    “fsck.ext4 /dev/md0” since my file system is ext4 and my raid device is /dev/md0, the command is fsck.ext4 /dev/md0
    “resize2fs /dev/md0” resizes the file system of my raid array /dev/md0
    It may balk at this step and want you to run e2fsck, at which point the command is “e2fsck -f /dev/md0”
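    Pulling the whole single-drive grow together, a compact sketch using this example's device names (/dev/md0 and /dev/sdf1 are the assumptions from above):
    "mdadm --add /dev/md0 /dev/sdf1"
    "mdadm --grow --raid-devices=5 /dev/md0"
    (wait for the reshape in /proc/mdstat to finish)
    "umount /dev/md0"
    "e2fsck -f /dev/md0"
    "resize2fs /dev/md0"
    "mount /dev/md0"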
    Reboot

    Adding multiple drives to an array
    I am going to add three drives to the RAID6 that I have created in previous steps. The new drives will be /dev/sdg /dev/sdh /dev/sdi with partitions /dev/sdg1 /dev/sdh1 /dev/sdi1. I am going to again simplify the steps as they refer to previous sections in this tutorial
    Original source: http://scotgate.org/2006/07/03/growi...5-array-mdadm/
    Partition the drives via gparted.
    Open a terminal window, elevate privileges via “sudo -sH”
    “mdadm --add /dev/md0 /dev/sdg1 /dev/sdh1 /dev/sdi1” (the variables for this command are your raid device name “/dev/mdX” and your new partition names “/dev/sdX1 /dev/sdX1 etc…”; my example uses raid device /dev/md0 and we are adding device partitions /dev/sdg1, /dev/sdh1, and /dev/sdi1)
    “mdadm --detail /dev/md0” – this checks to see that the drives were added as spares
    “mdadm --grow --raid-devices=8 /dev/md0” (I am growing my raid device /dev/md0 by three devices, from 5 drives to 8.)
    “watch cat /proc/mdstat” will let you monitor progress on the rebuild, or you can use webmin Linux Raid tab to monitor progress as well. Ctrl/C to close the cat /proc/mdstat window and go back to terminal.
    We need to update mdadm.conf with the new drive info:
    “nano -w /etc/mdadm/mdadm.conf” - change the device-count entry (num-devices=x) to include your new drives (for my example I changed it from 5 to 8). Ctrl/O and hit enter to save, Ctrl/X to exit
    Once the rebuild completes we need to expand the file system to include the space added by the additional drives:
    “umount /dev/md0” – unmounts raid device /dev/md0
    “fsck.ext4 /dev/md0” since my file system is ext4 and my raid device is /dev/md0, the command is fsck.ext4 /dev/md0
    “resize2fs /dev/md0” resizes the file system of my raid array /dev/md0
    It may balk at this step and want you to run e2fsck, at which point the command is “e2fsck -f /dev/md0”. Once this completes, run the resize2fs command again
    Reboot

    Replacing a failed drive from a RAID array
    For this example I have failed drive/partition /dev/sdg1 (via command “mdadm /dev/md0 --fail /dev/sdg1”)
    “cat /proc/mdstat” shows my array is online, and has an “(F)” next to sdg1 meaning that it has failed.
    “mdadm /dev/md0 --remove /dev/sdg1” removes the drive from the raid array.
    Shutdown your server. Remove the failed drive (/dev/sdg). If you’re not sure which drive is which, open Disk Utility prior to shutting down - it will give you the serial number of the drive in question, just to be safe.
    Install the replacement drive. Power server back on.
    Follow previous instructions to add a drive. It will automatically begin the rebuild of your raid array.
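    As a shortcut, mdadm lets you chain the fail and remove into one command (valid syntax, though the guide does it in two steps):
    "mdadm /dev/md0 --fail /dev/sdg1 --remove /dev/sdg1"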

    Growing your RAID array
    This example is going to use a single drive replacement method – I will be replacing all (8) 4Gb drives with (8) 8Gb drives, one at a time.
    For this example I will be replacing all drives /dev/sdb1-/dev/sdi1 with drives /dev/sdj1-/dev/sdq1. There are at least two methods that can be used here, I have detailed both. METHOD2 requires less user input.
    ***Begin METHOD1***
    “mdadm /dev/md0 --fail /dev/sdb1” – this will fail our first 4Gb drive, you could do them in any order you like.
    “mdadm /dev/md0 --remove /dev/sdb1” - this will remove our first 4Gb drive. At this point you would shut down the server, remove the first drive, put a new drive in its place (in our case 8Gb), and boot the server. If you had enough ports you could have already connected all of the 8Gb drives, which I have done for this tutorial
    “mdadm /dev/md0 --add /dev/sdj1” – this will add our first 8Gb drive
    “watch cat /proc/mdstat”, then a Ctrl/C when it’s completed.
    *** If you try to do a resize2fs now, it will report “The filesystem is already xxxxxx blocks long. Nothing to do!” – this is because it is only using the capacity of the disk that you replaced, it is not yet using the full capacity of the new drive.***
    Complete the above steps as needed for each drive that you will remove/replace.
    ***In theory with a RAID6 you could do two devices at once, but I would advise against that in case you suffer a drive failure at some point during the rebuild. Having tried this, it seems to rebuild each drive separately anyway: the array recovered once, then started over and recovered a second time.***
    Skip METHOD2 section
    ***Begin METHOD2***
    This method will depend on you having enough ports to connect all of your old drives, and all of your new drives at the same time. It will also be a little safer and allow you to replace two drives at once, but I would still advise against that.
    “mdadm /dev/md0 --add /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1” – this will add all 8 new drives as hotspares.
    “mdadm /dev/md0 --fail /dev/sdb1” – this will fail our first 4Gb drive, and then linux will automatically use a hotspare device to rebuild the array. Oddly enough when I did this it rebuilt with my last hotspare drive (/dev/sdq1) as opposed to my first hotspare drive (/dev/sdj1). This is fine, just thought it was worth noting.
    Follow the above two steps for all drives that are being removed/replaced.
    Once METHOD1 and/or METHOD2 have completed and you are done adding/replacing drives:
    Now we need to resize the array, and the volume.
    I had some issues with when I could run the array resize command, so go ahead and reboot now.
    “umount /dev/md0”
    “mdadm --grow /dev/md0 --size=max” - this resizes the array to the maximum size the attached drives allow. The array is limited by its smallest member: if you had 1x 4Gb drive and 3x 8Gb drives, it would still calculate based on the 4Gb drive. If all drives are now 8Gb, it resizes appropriately (our 4-drive RAID6 - 16Gb).
    If you use webmin to look at the Linux Raid section, you’ll see your new RAID drive size listed.
    “e2fsck -f /dev/md0” - drive checker
    “resize2fs /dev/md0” – this will expand the volume to (for our example) 16Gb
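    To confirm both the array and the file system picked up the new capacity (standard checks, my addition):
    "mdadm --detail /dev/md0 | grep 'Array Size'"
    "df -h /mnt/md0"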

    Shrinking a RAID array
    For this example I have removed two drives from /dev/md0. It is a RAID6 of 8 drives, so it is now 6 drives with no redundancy. We are going to shrink the partition and array and make it a RAID6 of 6 drives, so it will again have redundancy. This is definitely a do-at-your-own-risk operation, but I HAVE successfully done this with no loss of data.
    *** please note that you MUST have a newer version of mdadm than the default that is installed when using apt-get install mdadm, I used mdadm 3.1.4***
    umount /dev/md0
    e2fsck -f /dev/md0 - internet guides will tell you to run it without the -f, but that won’t work
    resize2fs /dev/md0 5800G - we are resizing our file system to 5.8tb
    mdadm --grow /dev/md0 --array-size 6400000000 - this will make the array 6.4tb (6.4 billion KiB; mdadm sizes default to kibibytes)
    mdadm -G -n 6 --backup-file=/filepath/file.bak /dev/md0 - (I did a mkdir /mnt/tmp, then used /mnt/tmp/file.bak for the file path; it won’t work without this backup file)
    resize2fs /dev/md0 - this will resize the file system to the size of the array
    e2fsck -f /dev/md0 - just to be safe.
    ***You can now fail/remove one or two drives and still have all of your data. Obviously if you have 8tb of data you don’t want to resize the file system or array to less than 8tb***
    Remember to nano -w /etc/mdadm/mdadm.conf and change the number of drives (the num-devices entry, if present). For the above example we’d just change the 8 to a 6, so after reboot mdadm knows how to start the raid array correctly.
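    A rough sizing sketch, since the ordering matters (my numbers, assuming 8x 2Tb drives shrinking to a 6-drive RAID6, roughly matching this example):
    6-drive RAID6 usable space = (6 - 2) x 2Tb = 8Tb
    Shrink the file system first, then clip the array, then reshape, keeping: file system size (5.8tb) < clipped array size (6.4tb) < final usable space (8Tb)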

  3. #3 - cw823 (Honeybadger Moderator)
    I have created a RAID5 with only two drives (building the array degraded, with a missing third member). This leaves you with no parity/redundancy, but lets you have access to the total size of both drives if migration and the size of the data you already have is an issue.
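    For anyone wondering how that works: mdadm won't give you the capacity of both drives in a true 2-drive RAID5, so my reading is that the array is created degraded, with a placeholder for the third member - a sketch, device names assumed:
    "mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing"
    Once you've migrated your data and freed a drive, "mdadm --add" the third drive and the array rebuilds to full redundancy.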

  4. #4 - cw823 (Honeybadger Moderator)
    Input?

    It's easier than you think.

  5. #5 - New Member

    Kernel compilation on raid vs. ramdisk

    Thanks for this quick howto. I'm just trying to set up RAID0 with 4 partitions. I will compare the speed of compiling a kernel on the raid versus a ramdisk and see how much it speeds things up.

  6. #6
    man /dev/goat \dev\goat's Avatar
    Join Date
    May 2003
    Location
    South Carolina
    Posts
    1,128
    Why not sticky this? I occasionally have to google "cw823 mdadm" when I need to copy/paste your tweak section for new installs. I'm sure some others would like to see this stickied as well.

  7. #7
    Destroyer of Empires and User Accounts, El Huginator
    Premium Member #3
    First Responders
    thideras's Avatar
    Join Date
    May 2006
    Location
    South Dakota
    Posts
    31,378

  8. #8 - moz_21
    Is a partition necessary? I've done this using raw drives and all via cmd line. I'm not sure which is better. Currently I have MBR on each drive with 1 partition all marked as "Linux raid autodetect". This will be relevant when I upgrade next time around!

  9. #9
    Destroyer of Empires and User Accounts, El Huginator
    Premium Member #3
    First Responders
    thideras's Avatar
    Join Date
    May 2006
    Location
    South Dakota
    Posts
    31,378
    Quote Originally Posted by moz_21:
    Is a partition necessary? I've done this using raw drives and all via cmd line. I'm not sure which is better. Currently I have MBR on each drive with 1 partition all marked as "Linux raid autodetect". This will be relevant when I upgrade next time around!
    Is it required? No. You can RAID the raw drive itself. The downside is that you can't do much for management since there are no partitions. This isn't really a downside unless you want to do multiple RAID arrays per drive (such as RAID 0 and RAID 5 using the same disks).
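    For example (a sketch, device names assumed), an array built on whole disks rather than partitions looks like:
    "mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd"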

  10. #10
    man /dev/goat \dev\goat's Avatar
    Join Date
    May 2003
    Location
    South Carolina
    Posts
    1,128
    Partitioning them before you create the array lets you install the OS to the array too. One of the partitions has to hold /boot or it won’t work. I did this yesterday with Sabayon.
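    One hedged caveat from me: bootloaders of this era generally can't read RAID5/6, so /boot usually goes on a RAID1 (or a plain partition), ideally with the older 0.90/1.0 metadata so the superblock sits at the end and the partition looks like a normal file system to GRUB. A sketch, device names assumed:
    "mdadm --create /dev/md1 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1"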
