
Re: extending LVM on SW RAID



Marek Podmaka wrote:
> Hello all,
> 
> We have 2 160GB disks in software RAID1 with 3 partitions - so we have
> md0 (root fs), md1 (swap) and md2 (PV for LVM). One disk has failed,
> and because we will need more space in the near future I'm thinking of
> buying 2 bigger disks. The question is how to resize the RAID & LVM.
> 
> In some FAQ I found that I can resize the last partition in fdisk
> and then run mdadm --grow on md2. There is a pvresize command for
> LVM, but is it safe to use? And the last question is where to get
> this command, because it is not in LVM 2.01 in sarge, only in etch.
> 
> Another possibility would be to create another partition & md3 on the
> free space of the bigger disks and use it as a second PV in LVM. But as
> far as I know, the kernel's read optimization for RAID1 (choosing which
> disk to read which data from) assumes only one md device per disk, so
> that would hurt performance even more than my current 3 md devices do
> (although md0 and md1 are not used much).

There's another strategy that doesn't involve pvresize (or mdadm --grow,
for that matter).

Let's consider the following steps (a rough command sketch follows the
list):
1. add one of the new disks to all existing arrays, with its third
partition sized to cover all remaining space (for convenience later on),
2. replace the surviving 160GB disk with the second new disk,
3. on the second new disk, add the first two partitions to the existing
md0/md1 arrays and create a new md3 array occupying all remaining space,
initially degraded (with its second member marked as missing),
4. now use the standard LVM procedure for replacing a PV:
pvcreate /dev/md3 => vgextend your_vg /dev/md3 => pvmove /dev/md2
/dev/md3 => vgreduce your_vg /dev/md2,
5. finally, stop md2 and add the freed partition to the md3 array as its
second member.
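In rough shell terms the whole thing might look like the sketch below.
The device names (/dev/sda and /dev/sdb for the new disks, partitions
1-3 on each) and the volume group name (your_vg) are only placeholders,
so adjust them to your actual layout, and wait for each resync to finish
(watch /proc/mdstat) before moving on to the next step:

  # step 1: first new disk (here /dev/sda) joins the existing arrays
  mdadm /dev/md0 --add /dev/sda1
  mdadm /dev/md1 --add /dev/sda2
  mdadm /dev/md2 --add /dev/sda3     # md2 keeps its old 160GB size

  # step 2: swap the old 160GB disk for the second new disk (here
  # /dev/sdb) and partition it the same way as /dev/sda

  # step 3: second new disk joins md0/md1; md3 starts degraded
  mdadm /dev/md0 --add /dev/sdb1
  mdadm /dev/md1 --add /dev/sdb2
  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdb3 missing

  # step 4: move the LVM data from md2 to md3
  pvcreate /dev/md3
  vgextend your_vg /dev/md3
  pvmove /dev/md2 /dev/md3
  vgreduce your_vg /dev/md2

  # step 5: retire md2 and reuse its partition as md3's second member
  pvremove /dev/md2
  mdadm --stop /dev/md2
  mdadm --zero-superblock /dev/sda3
  mdadm /dev/md3 --add /dev/sda3

Once md3 has resynced, the extra space shows up as free extents in
your_vg and can be handed to any LV with lvextend.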

The only minor (mostly aesthetic ;)) problem with this procedure is that
you end up with an md3 and no md2 at the end of the day.

Regards,
  Robert Tasarz


