
MDADM Superblock Recovery

I think I may have dug my own grave when I ran that --create and gave up on --assemble. Perhaps the last resort is to force an fsck on md0? Or start looking for ways to recover a few files...
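
If it does come to fsck, the safer first move is a read-only pass; just a sketch, assuming the filesystem on md0 is ext3:
Code:
$ sudo fsck.ext3 -n /dev/md0    # -n answers "no" to every prompt, so it reports problems without writing anything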
 
I'm not sure you are in a recoverable state. I'm not an expert on this by a long shot, but from what I'm gathering, if it's not already unrecoverable, running fsck on md0 would put you there. (I've read about MDADM, but haven't got the gumption to implement it yet.)

You did back up your RAID, right?
 
You did back up your RAID, right?

Yeah, I have most of the important documents off-site, but I basically lost 4 TB of multimedia files if this is donezo. Which means my internet connection is going to be maxed out for the next 6 months trying to re-download what I can. ::sad::

WTF, how could this happen with a single power cycle? I thought software RAID was rock solid. The ultimate irony is that this occurred during a shutdown while I plugged my system into a new battery backup. Sigh.
 
I'm out of ideas at this point. If there were a way to rebuild/fix the file system without the superblocks, that is the path you would need to go down. We are now into territory where I have no experience.

I think the failure of the RAID array has an underlying cause. I would check drive health and monitor the drives closely in the future; RAID arrays don't just go away for no reason.
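
A quick health check is worth running on each member drive; a sketch, assuming smartmontools is installed:
Code:
$ sudo smartctl -H /dev/sda    # overall SMART health verdict
$ sudo smartctl -A /dev/sda    # full attribute table: watch Reallocated_Sector_Ct and Current_Pending_Sector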
 
Well, thanks for your help, thideras. You got me halfway there, at least. I installed magicrescue and am trying to recover some rar files.

Code:
$ sudo magicrescue -d Rescue/ -r /usr/share/magicrescue/recipes/rar /dev/md0
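
The same approach should work for other file types; the recipe names below are only examples, and whatever actually sits in /usr/share/magicrescue/recipes is what's usable:
Code:
$ ls /usr/share/magicrescue/recipes
$ sudo magicrescue -d Rescue/ -r jpeg-jfif -r avi /dev/md0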
 
You're welcome. Let me know if any other issues arise; I, or someone else, might end up in the same position in the future.
 
I just went through this..


STOP STOP STOP STOP STOP

su

mdadm --stop /dev/md0
mdadm --stop /dev/md127
mdadm --stop --scan

cat /proc/mdstat

fdisk -l /dev/sd[a-e]

does it show all your drives if you do the above?

cat /etc/mdadm/mdadm.conf (assuming Ubuntu based distro)
what shows if you read the .conf file?

mdadm --stop --scan
mdadm --examine -v /dev/sd[a-e]
mdadm --assemble -v -f --config=/etc/mdadm/mdadm.conf /dev/md0 /dev/sd[a-e]
cat /proc/mdstat


gedit /etc/mdadm/mdadm.conf
 
mdadm --stop /dev/md0
mdadm --stop /dev/md127
mdadm --stop --scan
cat /proc/mdstat

Code:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
unused devices: <none>

fdisk -l /dev/sd[a-e]
does it show all your drives if you do the above? Yes

Code:
$ sudo fdisk -l /dev/sd[a-e]

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x353cf669

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1              63   476327249   238163593+  83  Linux
/dev/sdb2       476327250   488392064     6032407+   5  Extended
/dev/sdb5       476327313   488392064     6032376   82  Linux swap / Solaris

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sde doesn't contain a valid partition table

cat /etc/mdadm/mdadm.conf (assuming Ubuntu based distro)
what shows if you read the .conf file?

new one based on the mdadm --detail --scan command:
Code:
$ cat /etc/mdadm/mdadm.conf
DEVICE partitions
ARRAY /dev/md/0 metadata=1.2 name=Servbot:0 UUID=de69ab93:0182738d:988ac187:7cb59e32
MAILADDR root
old one pre-crash:
Code:
$ cat /etc/mdadm/mdadm.conf.2011precrash 
DEVICE partitions
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=0.90 UUID=fd522a0f:2de72d76:f2afdfe9:5e3c9df1
MAILADDR root
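
For reference, that new ARRAY line is just what mdadm prints when the array is assembled:
Code:
$ sudo mdadm --detail --scan
ARRAY /dev/md/0 metadata=1.2 name=Servbot:0 UUID=de69ab93:0182738d:988ac187:7cb59e32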

mdadm --stop --scan
mdadm --examine -v /dev/sd[a-e]
Code:
$ sudo mdadm --examine -v /dev/sd[a-e]
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de69ab93:0182738d:988ac187:7cb59e32
           Name : Servbot:0  (local to host Servbot)
  Creation Time : Wed Oct 19 10:44:00 2011
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3907025007 (1863.01 GiB 2000.40 GB)
     Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : aacf9954:bdf30262:e443344d:534de453

    Update Time : Wed Oct 19 21:36:27 2011
       Checksum : fb8c4f7b - correct
         Events : 17

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAA ('A' == active, '.' == missing)
mdadm: No md superblock detected on /dev/sdb.
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de69ab93:0182738d:988ac187:7cb59e32
           Name : Servbot:0  (local to host Servbot)
  Creation Time : Wed Oct 19 10:44:00 2011
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3907025007 (1863.01 GiB 2000.40 GB)
     Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 44573ca9:eb49401f:398af633:a743e1da

    Update Time : Wed Oct 19 21:36:27 2011
       Checksum : 7b2b69ee - correct
         Events : 17

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAAA ('A' == active, '.' == missing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de69ab93:0182738d:988ac187:7cb59e32
           Name : Servbot:0  (local to host Servbot)
  Creation Time : Wed Oct 19 10:44:00 2011
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
     Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 345e5fc4:a3906119:2ce416a7:1e5c5de4

    Update Time : Wed Oct 19 21:36:27 2011
       Checksum : d0c3244 - correct
         Events : 17

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AAAA ('A' == active, '.' == missing)
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : de69ab93:0182738d:988ac187:7cb59e32
           Name : Servbot:0  (local to host Servbot)
  Creation Time : Wed Oct 19 10:44:00 2011
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3907025007 (1863.01 GiB 2000.40 GB)
     Array Size : 11721071616 (5589.04 GiB 6001.19 GB)
  Used Dev Size : 3907023872 (1863.01 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 21ea1581:fa2cbf7c:626f9ec0:aea2c2fd

    Update Time : Wed Oct 19 21:36:27 2011
       Checksum : 600d240c - correct
         Events : 17

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing)

mdadm --assemble -v -f --config=/etc/mdadm/mdadm.conf /dev/md0 /dev/sd[a-e]

Code:
$ sudo mdadm --assemble -v -f --config=/etc/mdadm/mdadm.conf.2011precrash /dev/md0 /dev/sda /dev/sdc /dev/sdd /dev/sde
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 2.
mdadm: added /dev/sdc to /dev/md0 as 1
mdadm: added /dev/sde to /dev/md0 as 2
mdadm: added /dev/sdd to /dev/md0 as 3
mdadm: added /dev/sda to /dev/md0 as 0
mdadm: /dev/md0 has been started with 4 drives.
cat /proc/mdstat
Code:
$ cat /proc/mdstat 
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid5 sda[0] sdd[3] sde[2] sdc[1]
      5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      
unused devices: <none>


gedit /etc/mdadm/mdadm.conf
?

But when I try to mount it, it gives me the same thing:
Code:
$ sudo mount -t ext3 /dev/md0 /var/media
mount: wrong fs type, bad option, bad superblock on /dev/md0,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
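
One thing that can still be tried at this point is ext3's backup superblocks; just a sketch, and mke2fs -n with default options may not match the original layout, so treat the listed locations as a guess:
Code:
$ sudo mke2fs -n /dev/md0            # dry run only: prints where backup superblocks would live, writes nothing
$ sudo e2fsck -n -b 32768 /dev/md0   # read-only check against a backup superblock (32768 is typical for 4K blocks)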

Did you have something else in mind?
 
mdadm --stop /dev/md0
mdadm --stop /dev/md127

mdadm --assemble -f -v /dev/md0 /dev/sd[a-e]

What happens if you do that? Does it come back online?

I am looking now for the one command I am missing. You should be able to get the array back online even with a different UUID IF all the drives are in the correct order.

The /dev/md127 is coming from the name=Servbot:0 that you have.
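
A quick way to double-check that order before forcing anything is to pull the slot numbers out of --examine (sdb is your OS disk here, so it's skipped):
Code:
for d in /dev/sda /dev/sdc /dev/sdd /dev/sde; do
    echo -n "$d  "
    sudo mdadm --examine "$d" | grep 'Device Role'
done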
 
mdadm --stop /dev/md0
mdadm --stop /dev/md127

mdadm --assemble -f -v /dev/md0 /dev/sd[a-e]

What happens if you do that? Does it come back online?

I am looking now for the one command I am missing. You should be able to get the array back online even with a different UUID IF all the drives are in the correct order.

The /dev/md127 is coming from the name=Servbot:0 that you have.

I literally posted the output of this in the reply above. The array assembles but cannot be mounted, and I can't find any superblocks.
 
Don't worry about that...

I had the same thing and had to force mine online. You need to make sure that you have ALL the /dev/md### stopped before you try to force it online.

If you can get it online, have storage online and available to offload to, preferably through eSATA or SATA, as the rebuild/resync will probably have it drop offline afterwards.
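
A minimal sketch of that offload step (the destination path here is hypothetical):
Code:
# once the array mounts, copy everything off before doing anything else to it
rsync -avh --progress /media/testing/ /mnt/external-backup/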

It shows that the array is alive ("mdadm: /dev/md0 has been started with 4 drives"), BUT you are trying to mount it with the wrong FS or a similar issue.

Try just doing:

su
mkdir /media/testing
mount /dev/md0 /media/testing

and see if you can access the data

Suggestion: as root, look at running apt-get install gnome-disk-tools and see if that helps some as well.

Currently waiting to find out WTF just happened to my own system. A 20-drive (18 + 2 spare) RAID 6 array is rebuilding right now. The system is merrily rebuilding it... I think. It currently doesn't want to let me log in: UN/PW entered and then nothing...

Bah, worst case, I have to replace two HDDs tonight.
 
I have gone over this in my head a few times now. My massive mistake was following somebody's suggestion to do a --create on my broken array when assembly didn't work. It resynced and destroyed everything. That was the end of 4 years of files.

That's the thing about using Linux: live and learn and carry on.
 
I have gone over this in my head a few times now. My massive mistake was following somebody's suggestion to do a --create on my broken array when assembly didn't work. It resynced and destroyed everything. That was the end of 4 years of files.

That's the thing about using Linux: live and learn and carry on.

You MIGHT be able to recover SOME of it...

http://kevin.deldycke.com/2007/03/how-to-recover-a-raid-array-after-having-zero-ized-superblocks/

He recreated the array...
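
The core of that approach is re-running --create with --assume-clean so nothing gets resynced. This is only a sketch using the geometry from the --examine output earlier in the thread, and getting the device order, chunk size, metadata version, or data offset wrong just scrambles things further:
Code:
$ sudo mdadm --stop /dev/md0
$ sudo mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=4 \
       --metadata=1.2 --chunk=512 --layout=left-symmetric \
       /dev/sda /dev/sdc /dev/sde /dev/sdd    # slot order 0,1,2,3 per --examine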
 