How to reset an Ubuntu VM root password? Cannot access the early boot stage :(

Somehow the root-user password for one of my VMs does not work any more, so I have to reset it.

On a physical machine that is not a big issue: just reboot the machine, press Escape, and go to the advanced options for Ubuntu.

However, it is a VM. I can access the console screen, but not in the early boot phase needed to reach 'advanced options'.

I tried SPICE and I tried a serial shell, but I did not manage. I just cannot access the VM early enough in the startup process.

Is there a way to get access to the machine in the early boot process?

You could add an Ubuntu live ISO as a CD-ROM device, boot from that, mount the root filesystem on the virtual hard drive, and edit …/etc/shadow, deleting the password hash or inserting a prepared hash of a known password.
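A minimal sketch of that flow from inside the live session. The device name (`/dev/vdb3`) and mount point are assumptions; check what your VM actually presents with `lsblk` first.

```shell
# Mount the broken VM's root filesystem (device name is an example!)
sudo mkdir -p /mnt/rescue
sudo mount /dev/vdb3 /mnt/rescue

# Option 1: chroot in and set a new password interactively
sudo chroot /mnt/rescue passwd root

# Option 2: edit /mnt/rescue/etc/shadow by hand; the hash is the field
# between the first and second colon on the user's line

sudo umount /mnt/rescue
```

Note that if the root filesystem sits on LVM (the Ubuntu installer default), it will not be directly mountable as a partition; see the LVM steps discussed below in the thread.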

I did create a 'rescue VM-zvol' based on Ubuntu Desktop LTS (ubuntu-24.04.4-desktop-amd64.iso), and added GParted and SSH.

After doing so, I added that 'rescue-zvol' as an extra disk to the VM that needs a password change for the admin. I gave the 'rescue-zvol' the highest boot priority.

So when booting up the problematic VM, it is in fact the 'rescue-zvol' that boots.

So far so good. But now I have to modify the shadow file on the disk of the ‘VM to repair’.

The first step is to start GParted. There I can see the main disk of the VM to repair.

Then I started a terminal and entered sudo -i to get an administrator shell.

sudo mkdir /mnt/vmname
sudo mount /dev/vdc3 /mnt/vmname => "unknown file system" => did not work

That was not possible because the mount failed; however, my intention for the next steps was:

cd /mnt/vmname/etc
sudo nano shadow and modify the line related to the admin user
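If you would rather insert a known password hash than delete it, you can generate a SHA-512 crypt hash in the rescue shell and paste it into the second field of the admin user's line in the shadow file. The password and salt below are examples only.

```shell
# Produce a SHA-512 crypt hash ($6$...) suitable for /etc/shadow.
# Replace the salt and password with your own values.
openssl passwd -6 -salt examplesalt 'NewPassword123'
```

Deleting the hash instead (leaving the field empty) allows console login without a password, which is fine for a one-time recovery but should be fixed with `passwd` immediately afterwards.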

So it did not work; I am making a mistake somewhere.

I suppose /dev/vdc3 is the base of the Ubuntu system to repair, but I am not 100% sure of that.
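One way to check what a partition actually contains before trying to mount it is to inspect it from the rescue system. If `/dev/vdc3` is part of an LVM setup, the FSTYPE column will show `LVM2_member` rather than a mountable filesystem.

```shell
# List all block devices with their detected filesystem types
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINTS

# Show detailed type information for the partition in question
sudo blkid /dev/vdc3
```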

/dev/vdc3 is not a file system but an LVM volume group. Hope I used the right term. You need to activate lvm2 (possibly it already is) and then mount one of the devices presented by the volume manager. I don't know the exact commands off the top of my head, but that should be easy to find on Stack Overflow and friends: search for "LVM2 mount volume".

Good luck,
Patrick


Close enough. It’s a physical volume (PV).

PVs are like “vdevs”.
VGs are like “pools”.
LVs are like “datasets”.

PVs are combined (or used singly) to create a VG, which can then offer multiple LVs. Total capacity depends on the PVs included together. Unlike ZFS, parity and redundancy are optionally handled at a lower level with Linux's "MD software RAID" (mdadm).

Ever since using ZFS and Btrfs, I have found LVM unnecessary and redundant. It's outdated compared to what ZFS and Btrfs offer.


@louis that seems like a complex setup. Using LVM on top of ZFS? Why not a simple Ext4 or XFS filesystem on a standard partition, without the need for an extra layer?

You can use pvscan, vgscan, and lvscan to automatically find and list PVs, VGs, and LVs.

You can use vgchange -ay to activate all VGs. From here, you can mount individual filesystems that exist on the LVs. Use lvscan to know their names, and then mount them in this format:

mount /dev/<vgname>/<lvname> /mnt/ubunturoot
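Putting the whole sequence together, from activating the VG to resetting the password. The VG/LV names below (`ubuntu-vg/ubuntu-lv`) are the Ubuntu installer defaults and are assumptions; substitute whatever `lvscan` reports on your system.

```shell
sudo vgchange -ay                  # activate all detected volume groups
sudo lvscan                        # list logical volumes and their device paths

sudo mkdir -p /mnt/ubunturoot
sudo mount /dev/ubuntu-vg/ubuntu-lv /mnt/ubunturoot

# Set a new password from inside the mounted system
sudo chroot /mnt/ubunturoot passwd root

sudo umount /mnt/ubunturoot
sudo vgchange -an                  # deactivate the VG before detaching the disk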

I am considering another option

What if I could delay the startup of the VM that has the password problem?

So I discovered that GRUB is installed on the second partition of the VM to repair, /dev/vdb2 for example. I can mount the GRUB partition of the VM to repair.

However, I have no idea how to force that GRUB to wait, let's say, 10 seconds before the boot continues. I do not know if that is a real option.
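It is a real option. The proper way is to set the timeout on the installed system and regenerate the config; a quicker hack is to edit the generated `grub.cfg` on the mounted GRUB partition directly. Both are sketched below; paths assume you run the first part from inside a chroot into the broken VM's root filesystem.

```shell
# Proper fix (run inside a chroot of the VM's root filesystem):
# show the menu and wait 10 seconds before booting.
sudo sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=10/' /etc/default/grub
sudo sed -i 's/^GRUB_TIMEOUT_STYLE=.*/GRUB_TIMEOUT_STYLE=menu/' /etc/default/grub
sudo update-grub

# Quick-and-dirty alternative: on the mounted GRUB partition, edit
# grub.cfg and change "set timeout=0" to "set timeout=10".
# This only survives until the next update-grub run.
```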

The answer related to the 'complex data structure setup' is quite simple: I am not a Linux expert. When installing Linux I get a couple of questions during setup, and the one related to the disk structure is the one I hardly dare to touch.

So I take the default. On the next page of the Ubuntu installer I have the option to modify the default config. Looking at that config, there are two things I do not like:

  • the VG is only using half the disk, so I change that to the whole disk
  • the default format option is ext4, which looks old. I have two (perhaps sometimes three) other options there:
    • btrfs
    • xfs
    • and for the desktop version, I think, also ZFS

I do not choose:

  • xfs, since I once experienced that I could not change the 'partition' size => NO GO
  • I do not choose ext4 since that is old
  • I often choose btrfs since that is a decent filesystem
  • ZFS is only recent and still experimental, but probably the best filesystem. For the recovery VM based on the desktop OS version, I did choose ZFS

As simple as that.

So the main reason for the 'partition structure' is …… that I do not have enough knowledge to change it. And the above is the reason why I choose a certain filesystem.

A better understanding of the Linux 'partition layout' would probably be a good thing. But with my current knowledge I do not dare to touch the base structure.

For filesystems, old doesn’t mean bad. Ext4 is a tested and stable filesystem.


So you have a ZFS pool with a zvol that is used as a virtual disk for your Ubuntu VM, which is then used as a PV for an LVM setup, and the LV for the Ubuntu root filesystem is Btrfs or ZFS? A zvol in a CoW filesystem (ZFS), abstracted through LVM, with another CoW filesystem (Btrfs, ZFS) formatted in the logical volume? :face_with_spiral_eyes:


Were you able to mount the LV? After activating the VG, does the filesystem on the LV show up with lsblk?

lsblk -o NAME,TYPE,SIZE,PARTTYPENAME,FSTYPE    

And I would recommend using XFS by default inside a virtual machine if the backing store is a CoW filesystem like ZFS.

I have come to use XFS more frequently in situations where I explicitly do not want ZFS, after reading this series of articles (the articles are in English; the blog as a whole is mixed English/German):

Btrfs … yeah … meh. Don’t see a need.


I do not choose:

  • I do not choose ext4 since that is old

So are ZFS, NTFS, UFS, etc.
The C programming language is old, yet it is used by virtually every OS and compiler.
Old does not equal bad.

  • I often choose btrfs since that is a decent filesystem

It's a newer filesystem than ZFS, yet inferior and less stable. It still has yet to solve the RAID 5/6 write-hole problem, which is likely why you're not using it for your RAID array and are using TrueNAS with ZFS instead.

  • ZFS is only recent and still experimental, but probably the best filesystem. For the recovery VM based on the desktop OS version, I did choose ZFS

ZFS is NOT new, nor is it experimental; it's older than Btrfs by quite a number of years.