
Re: install-mbr on amd64?



Hello,

I have now tested RAID1 together with grub by unplugging the
1st disk (and, to be sure, I repeated the test with the 2nd as well).

As you can read further up in this thread, I ran
'grub-install --no-floppy /dev/sdb' to install
the boot loader in the MBR of the 2nd disk, /dev/sdb.
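
For reference, a sketch rather than a verbatim transcript: making both
disks bootable amounts to running grub-install against each of them
(the 1st disk normally already got grub from the installer):
# BEGIN-CLI
# sketch: install the grub boot loader into the MBR of both disks;
# only the /dev/sdb invocation is the one actually quoted above
deb64a:~# grub-install --no-floppy /dev/sda
deb64a:~# grub-install --no-floppy /dev/sdb
# END-CLI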

Then I halted the computer and unplugged the disk
connected to the SATA0 port.
The system booted without problems, and /proc/mdstat
showed only one active device in each RAID1 array
(in the [2/1] [_U] output below, the '_' marks the missing mirror half):
# BEGIN-CLI
deb64a:~$ cat tmp/mdstat_unplug0.txt 
Personalities : [raid1] 
md3 : active raid1 sda6[1]
      106896384 blocks [2/1] [_U]

md2 : active raid1 sda5[1]
      46877568 blocks [2/1] [_U]

md1 : active raid1 sda2[1]
      1951808 blocks [2/1] [_U]

md0 : active raid1 sda1[1]
      64128 blocks [2/1] [_U]

unused devices: <none>
# END-CLI

After halting the system I plugged the SATA0 disk back in
and booted.
I had to hot-add the sda? partitions back into the RAID1 arrays with mdadm:
# BEGIN-CLI
deb64a:~# mdadm /dev/md0 -a /dev/sda1
mdadm: hot added /dev/sda1
deb64a:~# mdadm /dev/md1 -a /dev/sda2
mdadm: hot added /dev/sda2
deb64a:~# mdadm /dev/md2 -a /dev/sda5
mdadm: hot added /dev/sda5
deb64a:~# mdadm /dev/md3 -a /dev/sda6
mdadm: hot added /dev/sda6
# END-CLI
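
The four invocations follow an obvious pattern, so they could also be
scripted; a sketch, assuming a bash root shell and the md-to-partition
mapping shown above:
# BEGIN-CLI
# sketch: hot-add each sda partition back into its array
deb64a:~# for pair in md0:sda1 md1:sda2 md2:sda5 md3:sda6; do
> mdadm /dev/${pair%%:*} -a /dev/${pair#*:}
> done
# END-CLI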

It takes about 40 minutes to rebuild the 100 GB /dev/md3
(the progress can be watched; see the sketch after the listing).
After that, /proc/mdstat shows a clean RAID1:
# BEGIN-CLI
deb64a:~# cat tmp/mdstat_ok.txt 
Personalities : [raid1] 
md3 : active raid1 sda6[0] sdb6[1]
      106896384 blocks [2/2] [UU]

md2 : active raid1 sda5[0] sdb5[1]
      46877568 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      1951808 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      64128 blocks [2/2] [UU]

unused devices: <none>
# END-CLI
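
While the rebuild runs, its progress and an ETA show up in
/proc/mdstat; a minimal sketch for watching it (the 10 second
interval is an arbitrary choice):
# BEGIN-CLI
deb64a:~$ watch -n 10 cat /proc/mdstat
# END-CLI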

Then I repeated the same test, but this time I unplugged the disk
connected to the SATA1 port. The system booted without problems.

Comparing the output of dmesg after unplugging SATA0 vs. SATA1
shows that a different disk was unplugged each time:
# BEGIN-CLI
deb64a:~$ diff tmp/dmesg_unplug[01].txt | grep 'sd[ab]' 
< SCSI device sda: 312581808 512-byte hdwr sectors (160042 MB)
> SCSI device sda: 312579695 512-byte hdwr sectors (160041 MB)
< SCSI device sda: 312581808 512-byte hdwr sectors (160042 MB)
> SCSI device sda: 312579695 512-byte hdwr sectors (160041 MB)
< sd 1:0:0:0: Attached scsi disk sda
> sd 0:0:0:0: Attached scsi disk sda
# END-CLI

Another proof is the output of 'mdadm -D /dev/md0' after the reboot
with the re-added disk, but before doing the hot-add:
# BEGIN-CLI
deb64a:~# cat tmp/mdadm_replug0.txt | tail -n 3
    Number   Major   Minor   RaidDevice State
       0       0        0        -      removed
       1       8       17        1      active  sync   /dev/sdb1
deb64a:~# cat tmp/mdadm_replug1.txt | tail -n 3
    Number   Major   Minor   RaidDevice State
       0       8        1        0      active  sync   /dev/sda1
       1       0        0        -      removed
# END-CLI

As written before, the MBRs on /dev/sda and /dev/sdb are not identical:
# BEGIN-CLI
deb64a:~# dd if=/dev/sda bs=512 count=1 | od -v | head -n 8
1+0 records in
1+0 records out
512 bytes (512 B) copied, 2.1e-05 seconds, 24.4 MB/s
0000000 044353 150220 000274 175574 003520 017520 137374 076033
0000020 015677 050006 134527 000745 122363 136713 003676 002261
0000040 067070 076000 072411 101423 010305 172342 014315 172613
0000060 143203 044420 014564 026070 173164 132640 132007 001003
0000100 000377 020000 000001 000000 001000 110372 173220 100302
0000120 001165 100262 054752 000174 030400 107300 107330 136320
0000140 020000 120373 076100 177474 001164 141210 137122 076577
0000160 032350 173001 100302 052164 040664 125273 146525 055023
deb64a:~# dd if=/dev/sdb bs=512 count=1 | od -v | head -n 8
1+0 records in
1+0 records out
512 bytes (512 B) copied, 2.2e-05 seconds, 23.3 MB/s
0000000 044353 010220 150216 000274 134260 000000 154216 140216
0000020 137373 076000 000277 134406 001000 122363 020752 000006
0000040 137000 003676 002070 005565 143203 100420 177376 072407
0000060 165763 132026 130002 135401 076000 100262 072212 001003
0000100 000377 020000 000001 000000 001000 110372 173220 100302
0000120 001165 100262 054752 000174 030400 107300 107330 136320
0000140 020000 120373 076100 177474 001164 141210 137122 076577
0000160 032350 173001 100302 052164 040664 125273 146525 055023
# END-CLI
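
If one only wants to know up to which byte the two sectors agree, cmp
is quicker than eyeballing the octal dumps; a sketch, assuming a bash
root shell for the process substitution:
# BEGIN-CLI
deb64a:~# cmp <(dd if=/dev/sda bs=512 count=1 2>/dev/null) \
              <(dd if=/dev/sdb bs=512 count=1 2>/dev/null)
# END-CLI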


On Sun, May 21, 2006 at 05:21:17PM +0200, Goswin von Brederlow wrote:
> Alexander Sieck <alexander.sieck@web.de> writes:
> 
> > That is, also after changing sda to sdb in device.map, the
> > two MBRs are not identical.
> >
> > Maybe Goswin, or somebody else who enabled booting from both
> > disks with RAID1 and grub, can give the output of
> > 'dd if=xxx[ab] bs=512 count=1 | od' on their system.
> 
> I just dded the mbr from sda to all raid devices making them identical
> the last time I installed.
> 
Now that we know that both disks are bootable with non-identical
MBRs, it would be interesting to see whether dd'ing
the MBR of the 1st disk to the other disks also works.
I leave this test to somebody else :-)
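
For whoever tries it: the command would presumably look like the
sketch below. Copying only the first 446 bytes transfers just the
boot code and leaves the partition table and signature of the target
disk untouched; copying all 512 bytes, as Goswin describes above,
should work too when the partition tables are identical anyway.
# BEGIN-CLI
# untested sketch: copy the 446-byte boot-code area of the MBR from
# the 1st disk to the 2nd, leaving sdb's partition table intact
deb64a:~# dd if=/dev/sda of=/dev/sdb bs=446 count=1
# END-CLI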

For the sake of completeness: the Software-RAID HOWTO at
http://www.tldp.org/HOWTO/html_single/Software-RAID-HOWTO/
also states that RAID1 works together with grub.
# BEGIN-QUOTE
7.3 Booting on RAID
...
If you are using grub instead of LILO, then just start grub and
configure it to use the second (or third, or fourth...) disk in the
RAID-1 array you want to boot off as its root device and run setup. And
that's all.

For example, on an array consisting of /dev/hda1 and /dev/hdc1 where
both partitions should be bootable you should just do this:

grub
grub>device (hd0) /dev/hdc
grub>root (hd0,0)       
grub>setup (hd0)
...
# END-QUOTE
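
Translated to the sd device names on a box like mine, that recipe
would presumably read as follows (untested here; the device line just
tells grub which disk to treat as (hd0)):
# BEGIN-CLI
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
# END-CLI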

Alexander


