2020-07-16
Time to grow the disk space for the home server
There were some ideas for one or more new virtual machines on the home server conway (2017) and the current volume group is almost full. Time to order some new disk space, also because there are some upcoming Devuan upgrades where I'd like to keep a snapshot of the 'before' situation so I can go back if everything breaks. So I ordered two 960 GB SSDs; they will run in a mirror anyway. I was wondering whether to add them to the current volume group or to take the two 256 GB SSDs out of the volume group. I decided to take those two out: there will be enough space after the upgrade and it saves some power. This does mean the new SSDs will also have to be made bootable and I will have to move the volume group. The order of changes:
- Shut down machine
- Install 2 new disks
- Boot up machine
- Partition 2 new disks with boot partition, make bootable with UEFI
- Test boot from new disk
- Make raid-1 device from the rest of the space on both disks
- Add new raid-1 to volume group
- Move volume group away from old raid-1
- Remove old raid-1 from volume group
- Unlink old raid-1
- Shut down machine
- Remove 2 old disks
- Boot up again
Quite a number of steps, this will take some time.

Update 2020-07-19: In the end I did the steps in a somewhat different order, because I took some time to understand and update the boot menu configuration with efibootmgr, and because I did the hardware work in a different location than where the server normally stands, so there was some moving of the heavy case too. With moving the hardware and waiting for synchronization processes the whole job took over 3 hours. I did log everything I did. The order of changes in the end (with some command sketches after the list):
- Shut down machine
- Install 2 new disks
- Boot up machine
- Partition the 2 new disks with an EFI partition, a boot partition and a raid partition (see the partitioning sketch after this list)
- Clone the EFI and boot partitions from the old disks (see the cloning sketch after this list)
- Make a raid-1 from the raid partitions on the new disks (this starts a long synchronization process, see the mdadm sketch after this list)
- Add the new raid-1 to the volume group (see the LVM sketch after this list)
- Move the volume group away from the old raid-1 (this starts another long process, moving all allocated extents to the new raid-1)
- Update the EFI boot configuration to boot from either one of the new disks (I found out the old setup would probably have failed to boot if the original default disk had been missing, see the efibootmgr sketch after this list)
- Wait for all the I/O processes to end
- Remove the old raid-1 from the volume group
- Remove the LVM labels from the old raid-1
- Shut down machine
- Disconnect old disks
- Test boot: worked fine
- Shut down machine and remove old disks
- Put new disks in the right disk trays
- Boot again to full production
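To make the steps above a bit more concrete, here are some command sketches. First the partitioning of the new disks: this is roughly what such a layout looks like with sgdisk, but the device name, partition sizes and partition names are just examples and not necessarily what I used on conway.

  # example layout on one new disk, repeat for the second new disk
  sgdisk --new=1:0:+512M --typecode=1:ef00 --change-name=1:"EFI system" /dev/sdc
  sgdisk --new=2:0:+1G   --typecode=2:8300 --change-name=2:"boot" /dev/sdc
  sgdisk --new=3:0:0     --typecode=3:fd00 --change-name=3:"raid" /dev/sdc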
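Cloning the EFI and boot partitions from one of the old disks to a new disk is a plain copy; again the device names are examples.

  # copy the EFI system partition and the boot partition to the new disk
  dd if=/dev/sda1 of=/dev/sdc1 bs=4M status=progress
  dd if=/dev/sda2 of=/dev/sdc2 bs=4M status=progress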
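Creating the new raid-1 from the raid partitions on both new disks is what starts the long synchronization process. A sketch, with example device names:

  # create the new mirror from the raid partitions on the two new disks
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3
  # keep an eye on the synchronization
  cat /proc/mdstat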
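The LVM part, adding the new raid-1 to the volume group, moving everything over and cleaning up the old raid-1, comes down to a few commands. The volume group name vg0 and the md device names are examples.

  # make the new raid-1 a physical volume and add it to the volume group
  pvcreate /dev/md1
  vgextend vg0 /dev/md1
  # move all allocated extents off the old raid-1 (the long-running part)
  pvmove /dev/md0
  # after pvmove is done: take the old raid-1 out of the volume group
  vgreduce vg0 /dev/md0
  # remove the LVM label from the old raid-1
  pvremove /dev/md0
  # and stop the old, now unused array
  mdadm --stop /dev/md0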
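And the efibootmgr part to make the machine bootable from either new disk. The labels, devices and loader path are examples, the loader path depends on the installed bootloader, and the boot order numbers come from the efibootmgr output.

  # show the current boot entries and boot order
  efibootmgr -v
  # add a boot entry pointing at the EFI partition on each new disk
  efibootmgr --create --disk /dev/sdc --part 1 --label "linux disk 1" --loader '\EFI\debian\grubx64.efi'
  efibootmgr --create --disk /dev/sdd --part 1 --label "linux disk 2" --loader '\EFI\debian\grubx64.efi'
  # put both entries in the boot order so the machine falls back to the
  # other disk if the default one is missing
  efibootmgr --bootorder 0001,0002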