Assistance Required with VM Configuration Export/Import in TrueNAS SCALE

Hello TrueNAS Community,

I’m currently working on a project with my TrueNAS SCALE system (version: Dragonfish-24.04.0) on a PowerEdge R730xd and facing a bit of a challenge regarding virtual machine (VM) configurations. I’m planning to destroy and then recreate pools that currently store several VMs. I’ve already replicated the VM images to another pool for backup, but I’m unsure about the best way to handle the VM configurations.

Here are the specific issues I’m facing:

  1. Exporting VM Configurations: I need to export the VM configurations before I destroy the pools. Is there a straightforward method within TrueNAS SCALE to do this? I couldn’t find an option in the UI to directly export VM configurations.
  2. Importing VM Configurations: Once I recreate the pools and replicate the VM images back, how can I import the VM configurations? Are there specific steps or precautions I should follow to ensure everything is restored properly and the VMs function as intended?
  3. Best Practices and Tools: If there are any best practices, community scripts, or tools for managing and restoring VM configurations in TrueNAS SCALE, I would greatly appreciate learning about them.

Any advice, instructional links, or guidance on how to proceed would be extremely helpful. I want to ensure that I manage and restore these configurations correctly to avoid any potential issues.

Thank you for your assistance!

Hi Adam, welcome to the forums.

Unfortunately we do not have any methods in place to facilitate what you are looking to do at this time. We have investigated a number of virtual machine improvements, including some import/export options, but those improvements do not have an estimated inclusion date at this time.

I am unaware of any community scripts or tools to facilitate the process. Our best suggestion currently would be to carefully record the settings of the current VMs and recreate them after the pool recreation. We have some folks internally who have gone through this manual process and completed it successfully.

Stay tuned, we know the VM functionality is lacking in some ways and hope to give it the love it deserves in future releases.

Hello, and thank you for the reply. A little more context:

The primary reason I’m considering dismantling the VM and the associated pool is an error I made in adding an HDD to a pool originally configured with SSDs. This decision was initially driven by the need to expand storage capacity to accommodate snapshot data, as snapshots could not be directed to a separate drive. Unfortunately, integrating an HDD into an SSD pool has led to a significant slowdown in read/write speeds because HDDs are inherently slower than SSDs. My goal is to reconfigure the HDDs into either a RAID 0 array or a standard ZFS RAID setup to improve their speed, leveraging SSDs for caching in the new setup. The plan also includes replicating this data to another pool.

As I navigate through these technical challenges, I’m learning that any misstep in managing this hypervisor often requires starting from scratch, which is a tough but valuable learning experience. This situation underscores the importance of careful planning and execution in system administration, particularly in environments as unforgiving as this one.

Thanks for the update either way. It’s unfortunate to hear that there’s no straightforward method in TrueNAS SCALE for exporting and importing VM configurations, similar to what’s possible in VMware with .vmx/.vmdk files. However, there may still be a couple of approaches we can explore.

  1. Manual VM Configuration Backup and Recreation:
  • Since TrueNAS SCALE currently doesn’t support direct VM configuration exports like VMware, the primary method remains manually recording all VM settings (CPU, memory, network settings, etc.) and then recreating them after your pool operations. This process is admittedly tedious but ensures that you can accurately restore each VM.
  2. Using Linux Virtualization Tools:
  • Although TrueNAS itself does not offer a built-in export feature for VM configurations, if you’re technically inclined, you might consider using virsh, a tool for managing VMs under the KVM hypervisor (which TrueNAS uses under the hood). The virsh dumpxml command can export a VM’s configuration to an XML file, which you can edit as needed and later re-register with virsh define (Server Fault).
  3. Disk Imaging:
  • For the VM’s disks, you might consider using disk imaging tools to create copies of the VM disks before destroying the pool. Tools like dd or qemu-img could be useful here to create a complete backup of the disk states, which you can restore onto the new pool setup (Server Fault). A rough command sketch follows this list.
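Something along these lines is what I have in mind. This is untested on my end, and the VM, pool, and path names (myvm, tank/vms/myvm-disk0, /mnt/backup) are only placeholders. As I understand it, the SCALE middleware keeps its own record of VMs, so I would treat the dumped XML mainly as a reference for recreating the VM in the UI rather than something to blindly re-define:

    # List the VMs libvirt knows about (you may need to point virsh at the
    # socket SCALE uses; check where libvirtd is listening on your system).
    virsh list --all

    # Dump a VM's configuration to XML for safekeeping.
    virsh dumpxml myvm > /mnt/backup/myvm.xml

    # Later, after the pools are rebuilt and the zvols replicated back,
    # edit the disk paths in the XML if they changed and re-register it.
    virsh define /mnt/backup/myvm.xml

    # Disk-level copies of a zvol, either raw with dd or via qemu-img.
    dd if=/dev/zvol/tank/vms/myvm-disk0 of=/mnt/backup/myvm-disk0.raw bs=1M status=progress
    qemu-img convert -O qcow2 /dev/zvol/tank/vms/myvm-disk0 /mnt/backup/myvm-disk0.qcow2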

Would those options work?

I mean, I can remake the vol from scratch, but I’ll need to mount pools the VMs are actively using. Is it feasible to mount a pool that is currently assigned to another VM if that VM is powered off? This would allow me to accurately replicate the setup during a rebuild, ensuring that everything is configured correctly. Additionally, if a VM is detached from a ZVol, can another VM simply connect to that ZVol and access all its data? Or is it necessary to reformat everything, including ZVols that contain large amounts of data?

Reading forums in other places I’ve found:

  • Mounting a Pool Used by Another VM: Yes, you can mount a pool that is used by another VM, provided that the VM is shut down. This would allow you to access the data and ensure configurations match when recreating VMs.
  • ZVol Data Persistence: When a VM is removed, the ZVol itself remains intact with all its data. You can attach it to another VM without needing to format or lose data. This flexibility should help in managing VM disk storage without data loss.
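If that’s right, something like this should confirm the zvols are intact before and after the rebuild (the dataset names here are only examples):

    # Zvols are ZFS datasets of type "volume"; they persist after a VM is
    # deleted unless you explicitly destroy them.
    zfs list -t volume -o name,volsize,used,referenced

    # A snapshot before detaching is a cheap safety net.
    zfs snapshot tank/vms/myvm-disk0@pre-rebuild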

As for my scenario of mixing HDDs and SSDs in a pool, it should be clear why I want to adjust that setup. Reconfiguring the pools as planned, using SSDs for performance-critical tasks while relegating HDDs to less speed-sensitive storage (like backups or less frequently accessed data), seems like the right move.

A few things to be aware of for future pool creation:

  • For block storage scenarios, mirrors are always recommended over RAIDZ. Performance will be far greater with mirrors, whether HDD or SSD. (A quick command-line sketch follows this list.)

  • Using SSDs for caching often does not have the expected impact on performance. For synchronous writes a SLOG can be beneficial, but the vast majority of uses will result in asynchronous writes. Also, ensure a properly power-protected device is used; otherwise you are leaving yourself more exposed to potential data loss. It is often suggested to analyze the ARC hit rate before adding an L2ARC device, as most users will not benefit from an L2ARC, and it may even decrease performance by consuming RAM to hold L2ARC metadata.

  • Pools are not directly relevant to VMs. VMs use zvols. There should be no issues importing an existing zvol from a previous VM to a new VM as long as the VM configuration matches.
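For illustration only, the command-line equivalent of that layout looks roughly like the following. On TrueNAS you would normally build the pool through the UI so the middleware tracks it, and the device names here are placeholders:

    # Mirrored pool of two disks (use the UI in practice; by-id paths are
    # preferable to sdX names).
    zpool create tank mirror /dev/sda /dev/sdb

    # A SLOG only helps synchronous writes and should be a power-loss-
    # protected device; roughly 16GB is all that gets used.
    zpool add tank log /dev/nvme0n1p1

    # Verify the resulting layout.
    zpool status tank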

We have not tested any of the potential migration options listed. Users, including internal users, have had success manually restoring VM configurations.


Thank you for the guidance, iXChris.

In addition to your previous advice, I have a few follow-up questions that would help me optimize my setup further:

  1. Analyzing ARC Hit Rate: Could you provide some direction on how to analyze the ARC hit rate before considering the addition of an L2ARC? Understanding the specifics of this analysis would help me make an informed decision about whether L2ARC would be beneficial for my system.
  2. Utilizing a 2TB SSD: Considering I have a 2TB SSD available, how would you recommend utilizing it in my setup? Should it be dedicated to logging, especially for the volume and dataset holding the VMs, or would there be a more effective use for it?
  3. Alternative Solutions: Regarding the suggestion to recreate VM configurations from scratch, which I understand is often the best solution, what alternative strategies would you recommend if I were to explore different approaches? I’m open to suggestions that might not be the typical route but could still provide a reliable outcome.

Another user shared their experience with migrating virtual machines from ESXi to TrueNAS, which involved cloning the VMs and copying them into TrueNAS seamlessly. They described creating a ZVOL large enough to accommodate the VM, converting the ESXi virtual disk to RAW format using qemu-img, and then transferring the RAW image into the ZVOL with dd (using scp for the transfer). They suggested that this process could be reversed to transfer VMs out of TrueNAS.
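If I understand their approach, it boils down to something like this (untested by me; the file names, size, and dataset paths are only placeholders):

    # Convert the source virtual disk to a raw image.
    qemu-img convert -O raw source-vm.vmdk source-vm.raw

    # Copy the raw image to TrueNAS, e.g. over SSH.
    scp source-vm.raw root@truenas:/mnt/tank/staging/

    # On TrueNAS: create a zvol at least as large as the raw image,
    # then write the image into it.
    zfs create -V 60G tank/vms/source-vm-disk0
    dd if=/mnt/tank/staging/source-vm.raw of=/dev/zvol/tank/vms/source-vm-disk0 bs=1M status=progress

    # The reverse direction is the same idea: dd the zvol out to a raw file
    # and qemu-img convert it to whatever format the target hypervisor wants.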

Thanks again for your expertise. Your insights are invaluable as I navigate these adjustments.

  • arc_summary from the shell is the most useful tool to analyze ARC, specifically ARC total accesses. Below is an example of a system that would not benefit from, and would likely suffer from, an L2ARC addition. There isn’t a hard rule for when L2ARC is beneficial, but I wouldn’t consider it on anything with greater than a 95% hit rate. (An example invocation follows this list.)
    [Screenshot: arc_summary output from 2024-05-10 showing ARC hit rate]

  • A 2TB SSD is likely only going to be practical in a mirrored pool with another 2TB SSD. It likely does not have power protection and does not have the endurance for a SLOG device; 16GB is all that is needed for a SLOG. Users often suggest something like a high-endurance, low-latency Intel Optane device for SLOG purposes.

  • We have not explored any alternatives at this point. Community users have, so their guidance will likely be better than anything I could suggest.
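To check the hit rate mentioned above, something like the following works from the SCALE shell (the grep pattern is just a convenience, since the exact section wording varies by version):

    # Full ARC report; the "ARC total accesses" / hit-ratio section is the
    # part to look at before considering an L2ARC.
    arc_summary

    # Narrow the output down to the relevant lines.
    arc_summary | grep -i -A 3 "total accesses"

    # Live hit/miss counters sampled every 5 seconds.
    arcstat 5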

Thank you, iXChris, for the detailed explanation and recommendations regarding the use of ARC and L2ARC in my setup.

From your description, it seems clear that adding an L2ARC to a system with a high ARC hit rate like mine would not be beneficial and might even detract from performance. I understand now why an ARC hit rate above 95% indicates that most data requests are being successfully served from the ARC, diminishing the utility of an L2ARC.

Regarding the use of the 2TB SSD, your advice to use it in a mirrored pool with another SSD makes sense, especially considering its lack of power protection and endurance for SLOG purposes. However, I’ve utilized all available space in my server, so I only have the 2x 10TB HDDs and the 1x 2TB SSD to work with. Initially, I expanded storage capacity to accommodate additional snapshots, which are crucial for my VM operations. I’m not overly concerned about drive failures since the snapshots and the data on these drives are replicated to an off-site pool via TrueNAS SCALE’s data replication feature. Additionally, the Docker containers running on the VMs have their data backed up from their respective /opt folders.

Nino J (another user outside this community) suggested a sequence of actions to manage data and VMs more effectively:

  1. System Configuration Backup: Start by backing up the TrueNAS system configuration.
  2. Preparation of New Disks: Introduce new disks that can accommodate all the current data.
  3. Graceful Shutdown of VMs: Shut down the VMs from the guest OS gracefully to ensure data integrity.
  4. Snapshot and Replication: Make a snapshot of the existing datasets and replicate them to a new “backup dataset” on the new disks (sketched in commands below).
  5. Pool Reconfiguration: Destroy the original pool and reorganize the disks into the desired vdevs/pool configuration.
  6. Data Restoration: Execute a replication task to transfer data back from the backup disks to the newly created pools.

He also emphasized the importance of ensuring that the disks are connected via an HBA card without any RAID logic to avoid data corruption.
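In command-line terms, I believe the replication steps would look roughly like this. In practice the UI’s replication tasks do the same thing, and the pool/dataset names here are only examples:

    # 1. Recursive snapshot of the datasets/zvols to preserve.
    zfs snapshot -r tank/vms@migrate

    # 2. Replicate everything to the backup pool on the new disks.
    zfs send -R tank/vms@migrate | zfs recv -F backup/vms

    # 3. After destroying and recreating the original pool (via the UI),
    #    send the data back the same way.
    zfs send -R backup/vms@migrate | zfs recv -F tank/vms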

Given these insights and the limited space for adding more drives, my primary goal is to enhance speed rather than redundancy, as the increased I/O from running multiple VMs necessitates better performance. Would striping the HDDs (RAID 0) and utilizing the SSD as cache be a practical approach to achieving this? I’m open to suggestions on the best RAID configuration to use in this scenario to optimize speed without compromising data integrity.

Thanks again for your help and guidance.