Windows VMs on TrueNAS. Performance?

I am running a TrueNAS Scale server with the following specs:
CPU: Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz (4 cores, 8 threads)
RAM: 64GB ECC DDR3
VM pool: 6x Samsung 250GB 2.5" SSDs (3 mirrored vdevs)
Storage pool: 1 vdev, 6x 4TB WD Red drives
OS: TrueNAS Scale ElectricEel-24.10.2
NIC: onboard 1Gb Intel NIC

I have been running a couple of Ubuntu VMs for a long time, and it's been very hassle-free. Now, for the sake of learning, I have started setting up Windows VMs as well. There are currently four of them:
Domain controller 1 (dc1)
Domain controller 2 (dc2)
Issuing CA server (ica01)
Jump host (jump01)

There is also a Root CA server, but that's turned off, so it's probably not part of this problem.

The thing is that the interfaces of the VMs are kind of sluggish, and the CPU usage seems very high.

In the Device specs on the Windows VMs, they have this for a CPU:
Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz 3.69 GHz

With all four servers running for over 48 hours, and no RDP sessions open, the CPU on the TrueNAS dashboard shows a usage of 22-30%.
Memory on the dashboard says: Free 2.8GB, Cache 30.3GB, Services 29.7GB.
If I then start an RDP session to jump01, it goes up to 80% for a short while, and then it calms down to about 30%.
When I start Chrome on jump01, the CPU jumps to 99%, both in Task Manager on the VM and on the TrueNAS dashboard.

My first guess was that the CPU had to deal with all of the graphics for the Windows desktop, so I installed a second GPU (AMD FirePro W2100). Nothing fancy, just what I had lying around. It is isolated and used only with jump01. The desktop feels a little smoother after that, but the CPU usage remains at the same levels.

After I shut all my Windows VMs down, just for comparison, the CPU usage drops to 2-7%.

I am wondering about the best CPU settings for my hardware. I have been trying a lot of different settings, selecting and deselecting things like “Ensure Display Device”. Now I have all the VMs configured the same, regarding CPU:

cpu: 1
cores: 2
threads: 2
ram: 4gb

With the exception of jump01, which has 8gb of RAM.
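
To sanity-check what the guest actually sees with these settings, I run this inside the Windows VM (with 1 socket x 2 cores x 2 threads it should report 2 cores and 4 logical processors):

    # PowerShell inside the Windows guest: cores/threads the VM actually sees
    Get-CimInstance Win32_Processor | Select-Object NumberOfCores, NumberOfLogicalProcessors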
Also, the VMs seem slow in networking. Doing an online speed test from jump01, I get about 20 Mbit download and 120 Mbit upload. This also ramps the CPU usage up to 100%.
Doing the same test from my laptop on the same network, I get about 300 Mbit in both directions. The NIC in each VM says it is 10 Gbps; my physical NIC is 1 Gbps. Is that normal?
I am using VirtIO for both networking and disks on all of the VMs.
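
As for the 10 Gbps reading: from what I've read, that is just the nominal link speed virtio-net reports regardless of the physical NIC, so I assume that part is normal. This is the quick check I use inside a guest to confirm the adapter really is the VirtIO one:

    # PowerShell inside the Windows guest: adapter driver and reported link speed
    Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed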

What haven't I thought of? Or is my hardware simply not good enough?

Interested in seeing the responses. I will be doing similar testing soon, but I'm curious what others have seen/are seeing.

I have five Windows Server 2019 VMs running (3 domain controller/DHCP/DNS servers, an app server, and a SQL server) on my Scale server. There are also about a dozen Docker containers, but they all essentially idle at 0%, so I'll ignore them. I don't see what you're experiencing when I fire up a remote desktop.

Physical hardware is a SuperMicro X13SAE-F, i5-13600K, 128GB DDR5 ECC

All machines are set up with a single vCPU and have Hyper-V enlightenments configured. No GPU has been assigned, and they all show up with the “Microsoft Basic Display Adapter”, or however it's worded in Device Manager.

  • dc1, dc2, dc3 - 2 cores, 4GB memory
  • app - 8 cores, 32GB
  • sql - 8 cores, 16GB
    All their storage runs on an NVMe RAIDZ1 pool, with the exception of an additional data drive for the SQL server, which is on a separate spinning-rust RAIDZ2 pool.
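
If you want to compare the enlightenment settings on your side, the VM's libvirt XML is visible from the TrueNAS host shell. A rough sketch (domain names differ per system, and on some SCALE versions plain virsh may not connect because TrueNAS runs libvirt on a non-default socket):

    # On the TrueNAS host: list libvirt domains, then inspect one for Hyper-V flags
    sudo virsh list --all
    # substitute DOMAIN with the exact name shown by the list command
    sudo virsh dumpxml DOMAIN | grep -A 8 "<hyperv"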

None of the servers have a production load; they're all for development or “home lab” use. Speedtests from any of them can fully utilize my gigabit internet connection, or at least come within acceptable norms.

Opening an RDP connection to any of them, firing up Chrome, and playing a YouTube video will momentarily cause a spike, but it immediately settles back down to 5% or less CPU utilization. If the RDP connection is just sitting there, the CPU idles at 1%.

I have moved some Windows VMs from Hyper-V to TrueNAS, and they work correctly.

Most of the speed issue is related to the way the VM accesses its data (the driver used to emulate the hard drive); there are two options in TrueNAS KVM. My VMs are using AHCI, which works slowly regardless of the type of disk it runs from (this comes from statements in forum posts).

Some articles tell you that you should use VirtIO, which is way faster, but I have not managed to make my imported VMs use it, so I am stuck with AHCI. If anyone has an idea of how to change the disk driver on an already-installed VM, I would be interested in trying it (my attempts so far have failed).
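
In the meantime, you can at least confirm which bus each disk is currently on from the TrueNAS host shell (a sketch; substitute DOMAIN with the name shown by virsh list --all):

    # On the TrueNAS host: show the disk definitions for one VM
    sudo virsh dumpxml DOMAIN | grep -B 2 -A 4 "<disk"
    # AHCI disks show up as target bus='sata', VirtIO disks as bus='virtio'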

Here is a link talking about these performance differences (it's about Proxmox, but I guess it is the same or similar with TrueNAS):

https://www.reddit.com/r/Proxmox/comments/wvq8ht/perfomance_benchmarking_ide_vs_sata_vs_virtio_vs/?rdt=45169

So the challenge is to make your Windows VMs use the VirtIO drivers.

All my Windows VMs are already on VirtIO, both for network and disks, so no improvement to be made there, right?
I read somewhere it could be related to block size. On my dataset(s) the block size is 16K, while Windows seems to use 4K. Could that be an issue?
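
For reference, this is how I compared the two (the zvol path is just an example from my setup):

    # On the TrueNAS host: block size of the zvol backing the VM disk
    zfs get volblocksize tank/vms/jump01
    # Inside the Windows guest: NTFS cluster size ("Bytes Per Cluster", usually 4096)
    fsutil fsinfo ntfsinfo C: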

@LipsumIpsum I guess your hardware has way more push than mine.

Any adjustments I can make to improve this? I also noticed my CPU temp is very high, at around 60°C. Any tips on how to lower that? Extra fans, I guess? I built my NAS out of an old HP Z420 years ago.

I also imported some Windows VMs to a TrueNAS system. I was able to get the VirtIO disk drivers working.

It involved a few trips to safe mode to get the drivers installed for the boot drive.

Don’t have my notes with me, but just wanted to confirm that it could be done.


Do the following at your own risk. Make sure you have a good backup. Take a snapshot too. Maybe also a backup of the snapshot and a snapshot of the backup. Blah blah blah.
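
For the snapshot, something like this from the TrueNAS host shell works (the zvol path is just an example; use the one backing your VM's boot disk):

    # On the TrueNAS host: snapshot the VM's zvol before touching any drivers
    sudo zfs snapshot tank/vms/myvm@pre-virtio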

Note: Step 2 may delete your network settings and/or switch your DHCP address. You may need to connect to the VM via the display console through TrueNAS Scale to reconfigure the settings if you are connecting via RDP.

  1. Download virtio-win-gt-x64.msi
  2. Install it on the VM.
  3. Shut down the VM.
  4. Create a dummy disk if this is the first time through these steps. Size doesn’t matter. Skip this step if a dummy drive was already created.
  5. Add dummy disk to the VM as a VirtIO disk.
  6. Boot the VM. Log in.
  7. Bring the drive online, initialize, format, assign a drive letter (as needed)
  8. Open an admin command prompt. Run the following command:
    bcdedit /set "{current}" safeboot minimal
  9. Reboot the VM. Log in. It should be running in safe mode.
  10. Open an admin command prompt again. Run the following command:
    bcdedit /deletevalue "{current}" safeboot
  11. Shut down the VM.
  12. Edit boot drive device and change mode to VirtIO. Save.
  13. Remove the dummy drive device. Check the box to delete the zvol device if no additional drives will be switched, otherwise leave it and reuse it.
  14. Boot VM. Log in. It should no longer be running in safe mode.
  15. Windows will probably give an alert that it needs to reboot to finish setting up the device. Reboot. Log in.
  16. Verify the drive was successfully switched to VirtIO by checking Device Manager → Disk Drives. The drive should appear with VirtIO in the name.
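
As an extra check from an admin command prompt, the VirtIO storage driver ("viostor" is the Red Hat driver name) should show up as running:

    rem Inside the guest: confirm the VirtIO storage driver is loaded
    driverquery /v | findstr /i viostor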