
Can't boot Ubuntu


hop5uk

Registered
Joined
Aug 24, 2012
I am running Ubuntu 12.04 server edition with the OS located on a pair of RAID 1 disks and it won't boot. I can boot off the live CD and can also get to the grub rescue prompt, but boot-repair does not work because of the RAID setup.
Help desperately needed: a lot of data is at stake!
 
So the first thing we are going to need is more information. What happens when you try to boot?

Have you used the live CD to view logs on the system? If so, what do the logs say?
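If it helps, getting at those logs from the live session would look roughly like this (the md device and mount point are examples, yours may differ):

Code:
sudo apt-get install -y mdadm        # mdadm is needed before the live session can see the RAID 1 pair
sudo mdadm --assemble --scan         # assemble whatever arrays it can find from their superblocks
cat /proc/mdstat                     # check which md device holds the root filesystem
sudo mount -o ro /dev/md1 /mnt       # mount the root array read-only
less /mnt/var/log/syslog             # then read the installed system's logs
less /mnt/var/log/boot.log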
 

I can easily boot to the live USB stick (desktop) and get into the terminal. I followed the various boot-repair posts that I found on the Ubuntu forum but I ran into difficulties. My operating system is on a pair of RAID 1 disks, and the first thing that came up was "RAID detected. You may want to retry after installing the [mdadm] packages (sudo apt-get install -y --force-yes mdadm --no-install-recommends)". I typed this command and it allowed me to proceed. At this point I got an icon in the toolbar for the RAID 1 array as well as one of my RAID 5 arrays that I use for storage. It is important to note that these two arrays are connected via the SATA ports on the motherboard. I also have 2 more RAID 5 arrays that are connected to an IBM M1015 controller card, which I am desperate to get access to but cannot see now.
I continued waiting for the process to complete and then I got another pop-up box which says "GPT detected. Please create a BIOS-Boot partition (>1MB, unformatted filesystem, bios_grub flag). This can be performed via tools such as GParted. Then try again."
I managed to get GParted installed but it won't really let me create anything and, to be honest, I am a little nervous about pushing it further.
I also used the boot-repair programme to create the URL for diagnostics.
It is http://paste.ubuntu.com/8963156
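From what I have read, creating that BIOS-Boot partition from the terminal would be something along these lines, assuming /dev/sdX is the GPT boot disk and there is a little free space on it (I have not actually run this yet, for fear of making things worse):

Code:
sudo parted /dev/sdX print                       # check the existing layout and free space first
sudo parted /dev/sdX mkpart biosgrub 1MiB 3MiB   # a tiny unformatted partition for grub to embed into
sudo parted /dev/sdX set 5 bios_grub on          # flag it; replace 5 with whatever number the new partition gets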

After a lot of forum searching, I had a few people mention that boot-repair might not work when the boot folder is on a RAID array, but to be honest I have had a lot of opinions since I started looking. At the end of the day, I just want my data back.
I have tried one or two things with reinstalling grub, and right now I am at the point where, if I go into the BIOS and set one of the disks as the boot priority, the boot sequence only gets as far as the grub rescue prompt. If I choose the other, it ends with an error referencing the file '/boot/grub/i386-pc/raid.mod'.

Thanks for the quick reply Stratus_ss. I appreciate any help
 
Well you have a couple of options

1) Try to retrieve the RAID 5 data from a live environment. If this is the most important data, concentrate on this first.
2) Get another small hard drive, or use the USB drive you have, and install the Linux of your choice on it. Then use it to boot, mount your drives, and recover your data (see the sketch at the end of this post).

Ultimately I would use option 2, particularly if I had somewhere to dump my data to. There are many guides online about how to use a USB hard drive as a persistent Linux OS. I would say for the time being, don't worry about repairing grub on the current system if you need to get at your data.

Additionally, the logs I was talking about are system logs. Usually grub doesn't just give up on you; there has to be some event that triggered the failure. Using the logs to determine this may actually lead you to a fix.
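To give you an idea, once the new install (or persistent USB) is booted, the recovery for option 2 boils down to something like this; device names, mount points and the backup destination are just placeholders:

Code:
sudo apt-get install -y mdadm rsync                    # mdadm to assemble the arrays, rsync to copy data off
sudo mdadm --assemble --scan                           # bring up every array it can find
cat /proc/mdstat                                       # see what came up and in what state
sudo mkdir -p /mnt/data
sudo mount -o ro /dev/md3 /mnt/data                    # mount a data array read-only so nothing touches it
sudo rsync -aHAX --progress /mnt/data/ /media/backup/  # copy everything off to another drive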
 

If the solution is as simple as option 2), that is definitely the option I will use. I was planning on building a new server anyway. The only thing I was concerned about in doing this is: is all the RAID configuration data held on the RAID disks themselves? Will it simply be a case of installing a new OS disk and then mounting the original arrays, or will I have to keep the original OS disks connected for the arrays to work?
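From what I have read, mdadm writes a superblock onto each member disk, so the arrays ought to be discoverable without the old OS disks; I guess something like this would confirm it (sdX1 standing in for one of the members):

Code:
sudo mdadm --examine /dev/sdX1     # print the on-disk superblock of a member (read-only query)
sudo mdadm --examine --scan        # list every array found purely from the superblocks on the disks
cat /etc/mdadm/mdadm.conf          # the config on the OS disk, which mostly just mirrors that information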
 

Having thought about this a bit more: is this basically the same as using the live CD, or are there more things you can achieve with a full installation? The reason I ask is because I have used the live CD and installed mdadm. It only gave me access to the arrays that were connected to the SATA ports on the motherboard and no access to the ones connected to the controller card. I assumed that was because the drivers for the controller chipset were not included on the live CD, whereas they would be on a full installation.
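One thing I suppose I could check from the live CD is whether the card is even seen and whether a driver gets loaded for it, something like this (the module names are my guess for an LSI-based M1015):

Code:
lspci | grep -i -e lsi -e sas                # is the M1015 visible on the PCI bus at all?
lsmod | grep -e mpt2sas -e megaraid_sas      # is a driver for it loaded? (guessed names)
dmesg | grep -i -e mpt2sas -e megaraid       # any driver messages during boot?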
 

The reason I suggested not using a live CD is because it's not persistent... if you need to reboot, or you lose power or whatever, you have to go through everything all over again. If you need to insert a kmod, or drivers, etc., it's much better to have a persistent environment rather than redoing it each time you need to boot.
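For example, if the controller needs a module that isn't loaded automatically, on a persistent install you only have to do something like this once (module name is just an example):

Code:
sudo modprobe mpt2sas                    # load the module now (example name, use whatever your card needs)
echo mpt2sas | sudo tee -a /etc/modules  # and have it loaded automatically on every boot from then on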
 

OK. I have a spare SSD disk so I will try to install Ubuntu onto it. I am hoping that once I install mdadm, I will be able to see all my arrays again. Thanks again for the help. I'll let you know how I get on.
 
I had a bit of progress over the weekend. I downloaded 14.04 Server edition and used a USB stick to load it onto a spare SSD disk that I had. It loaded correctly and then booted. When I went to install mdadm it was already installed, which I found strange, but nevertheless cat /proc/mdstat revealed this:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md4 : inactive sdd1[0](S) sdf1[3](S) sde1[1](S)
5860147414 blocks super 1.2

md2 : inactive sdb1[7](S) sda1[6](S) sdc[5](S)
8790404199 blocks super 1.2

md1 : active raid1 sdi2[1] sdh2[2]
46864256 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdi1[1] sdh1[2]
15615872 blocks super 1.2 [2/2] [UU]

md3 : active raid5 sdm1[4] sdk1[5] sdj1[0] sdl1[3]
4395018240 blocks super 1.2 level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

md0 and md1 are the original RAID 1 arrays where Ubuntu was stored. md2, md3 and md4 are my 3 RAID 5 arrays where all my data is stored. I started to look at reassembling the arrays that have become inactive and configuring the various config files, and then I realised that something was not right. When I looked closer I realised that my new home directory only has one directory in it, which is "hostname". There is absolutely nothing else there. I am confused, as it looks as though it might have installed Ubuntu someplace else, but I am now stuck on how to proceed. Can you offer any assistance?
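For the inactive arrays, what I was planning to try (based on what I have read, so please correct me if it is wrong) is along these lines, using md4 as the example:

Code:
sudo mdadm --stop /dev/md4                                            # stop the half-assembled, inactive array
sudo mdadm --assemble --force /dev/md4 /dev/sdd1 /dev/sde1 /dev/sdf1  # reassemble it from its listed members
cat /proc/mdstat                                                      # check whether it now shows as active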
 
I am not quite clear as to what happened.

It sounds like you installed Ubuntu somewhere, saw your RAID disks, started to assemble them, and something went wrong?

Can you expand on the problem? I wouldn't worry about rescuing your install at this point; concentrate on the data. If you ran into problems attempting to assemble the boot partition, get rid of that to reduce complexity and focus on the data, which is what is important.
 

The weirdest thing happened, but a good thing. When I was trying to mount the disks I made a directory (as I could not find the /mnt directory) to mount my arrays in. I deleted that directory and, all of a sudden, all the standard directories appeared.
I do not have any real interest in rescuing the original install, but I thought that I had better leave the original disks connected as I was unsure whether or not they contained any RAID configuration data that might be required.
I made some good progress last night. I stopped md3 and md4 and then used the assemble command, and I now have two of the three arrays functioning! So now I can concentrate on my third array.
I tried to use the same method to get it 'active' but it failed:
richard@hserv:/$ sudo mdadm --assemble --force /dev/md2 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm: cannot open device /dev/sdc1: No such file or directory
mdadm: /dev/sdc1 has no superblock - assembly aborted.
I am going to start some research on this, but it was always the array that I was most worried about. Just before the original booting problem this was a 4-disk RAID 5 array. One disk failed and I tried to replace it with a new one; then I could not boot. At the moment I have the new disk disconnected, so the three that you can see are the originals.
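One thing I notice looking back at the /proc/mdstat output is that md2 lists sdc (the whole disk) rather than sdc1, so perhaps the superblock for that member is on the raw device instead of a partition. If so, something like this might be worth a try:

Code:
sudo mdadm --examine /dev/sdc                                        # does the whole-disk device carry a superblock?
sudo mdadm --examine /dev/sda1 /dev/sdb1                             # compare with the other two members
sudo mdadm --stop /dev/md2                                           # stop the inactive array first
sudo mdadm --assemble --force /dev/md2 /dev/sda1 /dev/sdb1 /dev/sdc  # try assembling with sdc rather than sdc1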
Will start forum searching today. Thanks for your help so far
 
The problem with ZFS or RAID and using /dev/sdXX is that these device addresses can and do change. You are better off assembling with UUIDs instead of device letters.
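For example, rather than naming /dev/sdX devices by hand, you can record the arrays by the UUIDs stored in their superblocks and let mdadm match on those, roughly:

Code:
sudo mdadm --examine --scan                                       # prints ARRAY lines containing each array's UUID
# e.g. ARRAY /dev/md/2 metadata=1.2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx name=hserv:2
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf   # record them by UUID in the config
sudo mdadm --assemble --scan                                      # assemble everything in the config, matched by UUID
sudo update-initramfs -u                                          # rebuild the initramfs so boot finds them the same way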
 

I will give that some serious thought when I start building my new server. I thought that I had a really solid design, but I've learnt a few things. The main one is that putting the Ubuntu OS on a pair of RAID disks is pointless; if you put it on one disk and it fails, you just slot in a new disk and rebuild.
Secondly, give some serious thought to your storage disks. I have been re-using my 1TB and 2TB disks with no failures for years. I bought 4 Seagate 3TB disks because they were on sale and within 22 months, 2 of them have failed. I have returned them under warranty (just!) but when I get them back, I don't particularly want to use them in my new build.
I have my eye on this. What do you think?
http://www.newegg.com/Product/Product.aspx?Item=N82E16811219033
I have found a few server-grade motherboards with SAS connectors so that I can do away with my controller card.
 
It all depends on your use case. Myself, I went with NAS drives; they spin slower, produce less heat and have a longer MTTF (mean time to failure), though the slower spin means less raw speed. For what I have, though, I am using them in ZFS Z-1 (the RAID 5 equivalent for ZFS), and when they are all going at the same time it's plenty fast enough for me.

By way of comparison, my 4TB 7200 RPM drive can write a 10 GB file:

Code:
10485760000 bytes (10 GB) copied, 65.4346 s, 160 MB/s

On my ZFS array:

Code:
10486808576 bytes (10 GB) copied, 35.5214 s, 295 MB/s

295 MB/s is more than sufficient to saturate my network and cover my local needs.
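For reference, numbers like those come from a plain dd write test, something along these lines (the path is a placeholder, and conv=fdatasync stops the page cache from flattering the result):

Code:
dd if=/dev/zero of=/path/to/testfile bs=1M count=10000 conv=fdatasync
# prints something like: 10485760000 bytes (10 GB) copied, 65.4346 s, 160 MB/s
rm /path/to/testfile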
 