Sometimes it is useful to install an Ubuntu system manually – either from an existing running system, or from a live installation media environment. Fortunately, this process is rather straightforward.
Motivation
Approximately once per month, I find myself in need of reinstalling an Ubuntu server. These systems can be spread across the world, with no means of physical access and sometimes limited IPMI KVM capabilities.
Obtaining KVM access can be a time-consuming process, and when I finally get to the interface, the virtual media facility sometimes doesn't work correctly.
What’s the solution? Well, assuming the system is not completely borked, we still have SSH access! Servers typically have at least two drives, which also plays an important role in crafting alternative solutions.
(sidenote: This process is also helpful when subiquity crashes and we want to carry on without a long reboot by just dropping into the shell)
Prerequisites
We will need to have at least one empty disk where we will put the new installation. If the server has multiple drives but they are all in a redundant RAID configuration, one option is simply to kick one drive out (mark failed, remove) and zero the superblock to make sure the array metadata is cleared out.
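As a sketch, assuming the drive to free up is a member of an array called /dev/md0 and the member partition is /dev/sdb1 (both placeholders – adjust to your layout), removing it and clearing its metadata might look like this:

```shell
# Mark the member as failed, then remove it from the array
# (/dev/md0 and /dev/sdb1 are placeholders - adjust to your layout)
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# Zero the RAID superblock so the old array metadata is not
# auto-assembled or detected on the freed drive later
mdadm --zero-superblock /dev/sdb1
```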
Manual installation
In the steps outlined below, I'm working with a system with two blank drives, sdb and sdc. If only one drive is available, one option is to skip the RAID creation altogether and work directly with the disk.
Alternatively, you can create a degraded array with one drive missing. Once the new system is fully installed and booted into, the new array can simply be expanded with the remaining drives from the old array.
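For example, mdadm accepts the keyword missing in place of a real member, which creates the mirror in a degraded state (partition names here are placeholders matching the EFI layout used later):

```shell
# Create a RAID1 with only one real member for now;
# "missing" reserves the second slot in the mirror
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 missing
# Later, once the old system is no longer needed, add the
# remaining drive and let the array resync onto it
mdadm --manage /dev/md0 --add /dev/sdc2
```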
There are two ways to install Ubuntu, either with EFI or Legacy/BIOS booting. Steps that differ for each one are marked with “EFI ONLY” or “BIOS BOOT ONLY” accordingly.
Preparing the system and storage
# Install dependencies required to bootstrap the system
apt install debootstrap arch-install-scripts mdadm xfsprogs
### EFI ONLY ###
# Create the EFI partitions
fdisk /dev/sdb # (repeat for /dev/sdc)
# ➝ [n]ew partition, 512MB
# ➝ [t] ➝ "ef"
#
mkfs.fat -F 32 /dev/sdb1
mkfs.fat -F 32 /dev/sdc1
### END EFI ONLY ###
# Prepare the root filesystem block devices
fdisk /dev/sdb # (repeat for /dev/sdc)
# ➝ [n]ew partition
# ➝ [w]
# Create a root filesystem RAID1 device
# ! Only one option needs to be selected here !
# We do NOT want to create a RAID over the EFI partitions
### BIOS BOOT ONLY ###
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
### END BIOS BOOT ONLY ###
### EFI ONLY ###
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2
### END EFI ONLY ###
# Create a filesystem and mount it to /mnt
mkfs.xfs -K /dev/md0
mount /dev/md0 /mnt
Bootstrapping Ubuntu
In this case we are installing Focal, or in other words, Ubuntu 20.04 LTS.
# Debootstrap Ubuntu Focal
debootstrap focal /mnt http://us.archive.ubuntu.com/ubuntu/
# Generate an fstab for the target system,
# getting rid of the auto-created swap in the process
genfstab -U /mnt | grep -iv swap >> /mnt/etc/fstab
# Before we can chroot into the target system,
# we need to mount special filesystems to the paths
# they should be present in
mkdir -p /mnt/{proc,sys,dev,dev/pts}
mount --bind /proc /mnt/proc
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /sys /mnt/sys
# Chroot into the newly installed target
chroot /mnt
Basic configuration
What we have now is a completely barebones system that will not boot by itself and won’t be overly useful either. Let’s add some basic configuration.
# Enable typical APT sources - restricted, universe, multiverse, backports
cat <<EOF > /etc/apt/sources.list
deb http://us.archive.ubuntu.com/ubuntu/ focal main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates main restricted
deb http://us.archive.ubuntu.com/ubuntu/ focal universe
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates universe
deb http://us.archive.ubuntu.com/ubuntu/ focal multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-updates multiverse
deb http://us.archive.ubuntu.com/ubuntu/ focal-backports main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu focal-security main restricted
deb http://security.ubuntu.com/ubuntu focal-security universe
deb http://security.ubuntu.com/ubuntu focal-security multiverse
EOF
# Install minimal utilities
apt update
apt -y upgrade
apt -y install curl nano mdadm initramfs-tools \
    ubuntu-minimal openssh-server xfsprogs xfsdump parted gdisk
# Install a kernel - pick ONE of the two options below
# NORMAL:
# apt -y install linux-image-generic
# HWE:
# apt -y install --install-recommends linux-generic-hwe-20.04
# Generate a locale configuration
locale-gen en_US.UTF-8
locale-gen en_GB.UTF-8
update-locale LANG=en_US.UTF-8
# Enable root to SSH into the machine
sed -i 's/^#PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
# Set the root password
passwd root
# Set the hostname to something distinctive, so it is immediately
# apparent which system the server booted into
echo "newsystem" > /etc/hostname
# Create a persistent mdadm configuration
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
# Update the initramfs
update-initramfs -u
# Generate host keys for SSH
ssh-keygen -A
Installing the bootloader
The system is almost configured; it's time to install GRUB so we can actually boot into it.
### EFI ONLY ###
# Mount both EFI partitions
mkdir /boot/efi
mkdir /boot/efi2
mount /dev/sdb1 /boot/efi
mount /dev/sdc1 /boot/efi2
# Mount EFI vars
mount -t efivarfs none /sys/firmware/efi/efivars
# ... and install grub
apt install grub-efi-amd64
dpkg-reconfigure grub-efi-amd64
### END EFI ONLY ###
# Install grub on the target drives
grub-install /dev/sdb
grub-install /dev/sdc
# Just in case
update-grub
Network configuration
Now it's time to create a network configuration so our new server can reach the internet. If we're running this on a server that already has working network connectivity, great – we barely need to do anything.
If not, we need to recreate it from scratch.
# Exit the chroot
exit
# Copy the existing network configuration if present
cp /etc/netplan/* /mnt/etc/netplan/
Now, there is one thing I highly suggest. While Linux in theory has persistent device naming and interface names should remain the same across reboots, in practice this has proven unreliable on some platforms.
The simplest fix to make sure we don’t get locked out of the reinstalled server by having a non-working network is to simply add matching based on interface MAC addresses into the Netplan configuration.
So for example, the resulting Netplan file might look like this:
network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      match:
        macaddress: ac:1f:6b:1a:1b:1c
      addresses: [ 192.168.1.100/24 ]
      gateway4: 192.168.1.1
      nameservers:
        addresses:
          - "8.8.8.8"
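If you're unsure of the MAC addresses to match on, they can be read straight off the running system – each interface exposes its address in sysfs:

```shell
# Print each interface name alongside its MAC address
# (the loopback interface will show all zeros)
for iface in /sys/class/net/*; do
    printf '%s %s\n' "$(basename "$iface")" "$(cat "$iface/address")"
done
```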
Closing thoughts
We can now reboot the server and, if all goes well (fingers crossed!), it will boot into the new install. If it does not, it might be booting from the wrong drive, for example. That can be remediated by overwriting GRUB on that drive (grub-install /dev/sda) or by changing the boot order (efibootmgr).
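On EFI systems, inspecting and changing the boot order might look like the sketch below; the entry numbers are placeholders that should be taken from your own efibootmgr output:

```shell
# List the current boot entries and boot order (verbose)
efibootmgr -v
# Put entry 0003 first in the boot order
# (0003,0001,0002 are placeholder entry numbers)
efibootmgr -o 0003,0001,0002
```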