Move TrueNAS Core (with disks) from Proxmox VM to bare metal

I am running TrueNAS Core as a VM in Proxmox. The VM has three 4 TB HDDs attached via pass-through, plus a 32 GB virtual disk for the OS.

Proxmox runs on a Supermicro X10SLM+-F, and I have bought another motherboard, an X11SSH-F.
I have installed Proxmox 8 on the X11SSH-F and will move the VMs over, then use the X10SLM+-F for TrueNAS Core on bare metal. I will use a 480 GB SSD for that; a bit big maybe, but it is what I have.

Can I just back up the TrueNAS config, remove the disks, install TrueNAS Core (replacing Proxmox), restore the config, and add the disks again to get my pool back?
Anything else to think about?

I found this thread, which seems to be pretty much the same thing, but I am just making sure.

Edit: Corrected some factual errors.

First question. How did you pass the disks to the TrueNAS VM? With an HBA?


No, they are connected directly to the motherboard, no raid.

Then each one is passed like /dev/disk/by-id/ata...

Then what you wrote should be all you need to do. I’m just not 100% sure if the pool will properly be shown given the way you passed them through to the VM. It is always recommended to pass through an HBA. I’ve never played with passing the individual disks beyond just a testing environment.
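For reference, passing individual disks into a Proxmox VM usually looks roughly like this on the host (a sketch only; the VM ID, device names, and serial are illustrative placeholders, not taken from this thread):

```shell
# List the stable by-id paths so the right physical disk is picked:
ls -l /dev/disk/by-id/ | grep 'ata-'

# Attach a physical disk to VM 100 as a SCSI device; setting serial=
# exposes the drive's real serial number to the guest, which helps
# TrueNAS/ZFS identify the disk consistently:
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE1,backup=0,serial=EXAMPLE1
```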


Ok, thanks.

I downsized an earlier server but wanted to use the 4 TB disks with TrueNAS instead of the 6 TB ones, just for future replacement: if a 4 TB breaks I can replace it with a 6 TB, but not the other way around.

I can use the 6 TB to back up the data on TrueNAS, then try what I wrote above.

Let us know if you succeed. I’m wondering about this myself.


I did a similar, albeit testing-only, exercise with my setup while evaluating some hardware before putting it into production. I had a VM set up with my SATA controller passed through to a TrueNAS VM, and moved to the same VM with the drives passed through individually via QEMU, adding the serial numbers for each disk to the configuration. I had no issues at all with this migration.

I then rebuilt that VM, changing nothing on the hardware side but upgrading the OS to the new TrueNAS RC. Then, for fun, I tried moving the system back to SATA controller pass-through instead of individual disks on the new VM, and also had no issues.

While I only did this with this particular system and only eight storage drives, it seems you can do it either way without issues, though this was not a long-term test and I did not experiment with any DR situations.


Nice.

(I do have the serial numbers passed through, if that makes any difference.)

That’s good news. That’s how the drives get recognized with the HBA.

Btw, if you pass through a physical boot disk and add it as a mirror to the existing virtual boot disk, you will then be able to boot off that disk on metal.

When booting from the hypervisor, ZFS will resilver the virtual boot disk with any changes made on the physical one and restore redundancy.

I used this method back when I virtualized TrueNAS, to enable switching between virtual and bare-metal TrueNAS on the same hardware.
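From a TrueNAS Core (FreeBSD) shell the attach step would look roughly like this. This is a sketch under assumptions: the pool name (boot-pool) and device names (ada0p2, ada1p2) are illustrative and must be checked against your own system first, and the bootloader also has to be written to the new disk (attaching via the TrueNAS UI's boot-pool screen handles that part for you):

```shell
# Sketch: mirroring the boot pool onto a passed-through physical disk.
# Verify the real pool and partition names with these before anything else:
zpool status boot-pool            # current boot pool layout
gpart show ada1                   # partition table of the new physical disk

# Attach the new disk's ZFS partition as a mirror of the existing one;
# ZFS then resilvers it automatically:
zpool attach boot-pool ada0p2 ada1p2

# Watch the resilver finish before relying on the new disk:
zpool status boot-pool
```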

@Stux - That is a neat and smart idea; it could work for converting between one and the other as well. Not sure why I never thought of it. Though I guess the only downside would be that the physical and virtual disks would have to match. I typically assign 10 to 20 GB to the virtual boot disks of my TrueNAS systems.

The physical disk and the virtual disk sizes do not have to match.

But the second disk attached must be >= the size of the first.

This might be a little off topic, @Stux, but could that in theory allow you to, say, install TrueNAS using a 20 GB virtual disk, then add a second disk via passthrough, say a 120 GB disk, and use the rest of that disk for something else, such as a data partition in TrueNAS? If that worked, you could take it a step further and use this method to move the setup to bare metal: add a second 120 GB disk and have it replace the original 20 GB virtual disk, and now you have redundant 20 GB OS partitions plus two 100 GB partitions you could use for, say, apps or other data within TrueNAS.

I might try such an experiment, minus moving it to bare metal, playing around in my testing environment to see if this is an option. It would allow higher-capacity boot devices without losing all the space and without the overly complex install methods I have seen before.

You can do the above with less convolution by hacking the installer script, but then you’re off the reservation and all bets are off, not a supported configuration, your mileage may vary, etc.

Ok, so this seems to be working.
I’ll give it a couple of days to see if any problems emerge, but as of now the disks are found by TrueNAS and the shared data is accessible from other clients, just like before.

What took most of the time was backing up the data on TrueNAS. I have backups of the important stuff, and also a backup of the Proxmox server disk.

But all the VMs are moved to the new server, all except TrueNAS of course.
I downloaded the config file from TrueNAS, shut down, disconnected the three disks, and installed TrueNAS Core from a USB stick onto the 480 GB SSD.
When TrueNAS booted up I restored the config file and let it reboot; it then showed the three disks as missing. After shutting down, reconnecting the three disks, and booting again, they appeared in the interface just like before.
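Had the pool not come back on its own after the reinstall, checking from a TrueNAS Core shell would look roughly like this (a sketch; the pool name "tank" is an illustrative placeholder):

```shell
zpool import          # lists pools that are available but not yet imported
zpool import tank     # manual fallback; the restored config normally does this
zpool status tank     # all three disks should show as ONLINE
zpool scrub tank      # optional: verify data integrity after the move
```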

As I mentioned before, the serial numbers of the disks were previously added to the VM config:
/etc/pve/qemu-server/100.conf

scsi0: local-lvm:vm-100-disk-0,size=32G
scsi1: /dev/disk/by-id/ata-xxx,backup=0,serial=1234,size=12345
scsi2: /dev/disk/by-id/ata-xxx,backup=0,serial=2345,size=12345
scsi3: /dev/disk/by-id/ata-xxx,backup=0,serial=3456,size=12345

So TrueNAS knew the serial numbers, which might have helped.
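Those serials can be confirmed from inside the TrueNAS Core (FreeBSD) guest roughly like this (the device name is an illustrative placeholder):

```shell
camcontrol devlist        # lists the attached ATA/SCSI devices
geom disk list da1        # the "ident" field shows the serial passed
                          # in via serial= in the VM config
```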

Anyways, thanks for the support.

Glad to hear that. Hope it keeps working.
