
Now won't boot (was: Re: Squeeze assembles one RAID array at boot, but not the other)



On Mon, 07 Jan 2013 00:14:31 -0700, Bob Proulx wrote:

> Hendrik Boom wrote:
>> I have two RAID arrays on my Debian squeeze system.  The old one, which 
>> still works, and has worked for years, is on a pair of partitions on two 
>> 750GB disks.  The new one is not recognized at boot.
> 
> Does 'at boot time' mean at initrd/initramfs time?
> 
>> boot is *not* on any of these RAIDs; my system boots properly. 
> 
> Good.
> 
>> The new one, which I built today, resides on similar (but larger) 
>> partitions on two 3TB disks.  I partitioned these drives today, using 
>> gparted for gpt partitioning, then created a RAID1 from two 2.3TB 
>> partitions on these disks, set up LVM2 on the RAID device, created an LVM 
>> partition, put an ext4 file system on it and filled it with lots of 
>> data.  The partition definitely exists. 
>> 
>> But it is not recognized at boot.  The dmesg output tells me all about 
>> finding the old RAID, but it doesn't even notice the new one, not even to 
>> complain about it.
> 
> Did you update /etc/mdadm/mdadm.conf with the new data for the new
> array that you just now built?
> 
> See the output of --detail --scan and edit it into that file.
> 
>   mdadm --detail --scan

I had to run mdadm --assemble /dev/md1 /dev/sdd2 /dev/sdb2 first.
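
For the record, the sequence I ended up with was roughly this (the
device names are particular to my setup, and the --brief form is just
one convenient way to get a single config-file-style ARRAY line):

    # assemble the new array by hand, since it wasn't started at boot
    mdadm --assemble /dev/md1 /dev/sdd2 /dev/sdb2
    # append its ARRAY line so the array is known from now on
    mdadm --detail --brief /dev/md1 >> /etc/mdadm/mdadm.conf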

> 
> Did you rebuild the initrd images for the booting kernel after having
> done this?
> 
> Example:
> 
>   dpkg-reconfigure linux-image-3.2.0-4-amd64

    dpkg-reconfigure linux-image-2.6.32-5-amd64

Different kernel version (this is squeeze, after all),
but it seemed to work.
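
(If I understand the tooling right, an alternative would have been to
rebuild the initrds for every installed kernel in one go:

    # regenerate the initramfs for all installed kernels
    update-initramfs -u -k all

but dpkg-reconfigure of the one image appeared to do the job.)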

Next to reboot.

OOPS

Won't boot.  It gets stuck at the initramfs prompt after complaining 
that it can't run /sbin/init.

I look with ls and discover that /sbin exists but is totally
empty.  That seems a good reason to be unable to run /sbin/init.

And ls / clearly shows me a root directory which is *not* my
system's root directory. Presumably it's at the stage where it
hasn't gotten around to mounting the real root partition yet.
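
For anyone else stranded at this prompt: assuming mdadm and lvm
actually made it into the initrd, the sort of thing one can try is
(vg-root is only a placeholder for whatever the root LV is called):

    # assemble any arrays the initrd's mdadm.conf knows about
    mdadm --assemble --scan
    # activate the volume groups sitting on top of them
    vgchange -a y
    # mount the real root where the boot scripts expect it, then resume
    mount /dev/mapper/vg-root /root
    exit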

It looks to me as though dpkg-reconfigure linux-image-2.6.32-5-amd64
has somehow built me an unusable initrd.  Puzzling, because doesn't
it do that every time it upgrades the kernel anyway?  And this is 
Debian stable I'm running.

I tried booting with an old kernel, but this made no difference.
Puzzling, because why would dpkg-reconfigure linux-image-2.6.32-5-amd64
mess with the initrds of other kernels?
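
One way to check, I suppose, would be to list what actually went into
an initrd and read the copy of mdadm.conf embedded in it; something
like this (which I have not yet tried myself):

    # list the initrd contents
    zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -it | grep -E 'mdadm|lvm'
    # unpack it somewhere harmless and read the embedded config
    mkdir /tmp/initrd && cd /tmp/initrd
    zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -id
    cat etc/mdadm/mdadm.conf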

Maybe that's not the problem.

Nor did it make a difference whether I booted with grub2 or lilo.
Annoying, since I maintain these two independent boot methods just
in case.


Yes, the initramfs has recognised both RAIDs.

At one point I had to give the initramfs shell

    vgchange -a y

to make sure it saw the logical volumes inside my RAIDs.
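
As I understand it, the lvm2 package ships a boot script that is meant
to run that vgchange automatically from the initramfs, so it is
presumably worth confirming the pieces are in place before rebuilding
again (the script path is what I believe it to be on squeeze, not
something I have verified):

    # check the packages and the early-boot LVM activation script
    dpkg -l mdadm lvm2
    ls /usr/share/initramfs-tools/scripts/local-top/lvm2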

Help!

> 
>> Any ideas where to look?  Or how to work around the problem?
> 
> At one time in the past Debian (and some other distros too) would look
> at the partition type and see 0xFD as an AUTORAID partition and
> automatically mount it.  This was reported as a bug because if someone
> were trying to recover a disk problem and attached a random disk to a
> system then at boot time the init scripts would try to automatically
> attach to it.  That was undesirable.
> 
> Due to that complaint the system was changed so that raid partitions
> must be explicitly specified in the mdadm.conf file.  And since the
> root partition must be mounted at early boot time, this assembly is
> pushed into the initrd, so that if the root partition is on
> raid it can be done early enough.
> 
> Also I know that RHEL/CentOS at least also moved from an autoraid of
> 0xFD to an explicitly mounted system too for the same reasons.  But
> they do it by specifying the UUIDs on the kernel command line from
> grub.  It makes for some very long command lines.  I like the Debian
> choice better.
> 
> In summary:
> 
> In Debian after creating a new raid add the new raid info to
> /etc/mdadm/mdadm.conf and then dpkg-reconfigure linux-image-$(uname -r).
> 
> In CentOS after creating a new raid edit the grub config and add the
> new rd_MD_UUID values to the grub boot command line.  Or use rd_NO_DM.
> 
> Bob

And let me apologise in advance for my future slow responses on this 
mailing list.  Without my server I have to go to a local coffee shop 
to read and post.

-- hendrik 

