Hi, I have a single box running Proxmox with a TrueNAS VM (with HBA pass-through) presenting storage back to other VMs running on the host. The main storage is a zvol on a 5-disk raidz1 pool of 5x 8TB SAS spinning-rust drives.
One of the VMs is a Windows VM with GPU pass-through that my kids use for gaming. It has a “D” drive backed by an 8TB iSCSI block device presented from TrueNAS, currently connected using the Microsoft iSCSI initiator inside the Windows VM.
If I detach that existing 8TB block storage from the VM, then go into TrueNAS and present the same blob through the TrueNAS NVMe/TCP layer, and then run the StarWind NVMe/TCP initiator for Windows, will that improve the performance of the Windows “D” drive compared to iSCSI? And would I likely be able to do this while retaining the contents of the existing block storage (formatted as NTFS inside the Windows VM itself)?
Just to answer my own question here: I installed the StarWind NVMe/TCP initiator in the VM and was able to mount a 1TB test drive presented to the VM, but there was no appreciable difference in performance between iSCSI and NVMe/TCP connectivity from within the VM, with both the iSCSI and NVMe/TCP presented blob stores coming off the same zvol.
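For anyone wanting a quick, repeatable check from inside the VM, here is a crude sequential-throughput probe (a sketch only; fio or CrystalDiskMark are far better tools, and note the read pass may be served from the OS page cache rather than the drive):

```python
import os
import time
import tempfile

def seq_throughput(path, size_mb=256, block_kb=1024):
    """Write then read a test file under `path`; return (write, read) MB/s."""
    blk = os.urandom(block_kb * 1024)
    fname = os.path.join(path, "throughput_test.bin")

    # Sequential write, fsync'd so the data actually reaches the device.
    t0 = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(blk)
        f.flush()
        os.fsync(f.fileno())
    write_s = time.perf_counter() - t0

    # Sequential read back (may be cached by the OS).
    t0 = time.perf_counter()
    with open(fname, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_s = time.perf_counter() - t0

    os.remove(fname)
    return size_mb / write_s, size_mb / read_s

if __name__ == "__main__":
    # Point `path` at the drive under test, e.g. "D:\\" inside the VM.
    w, r = seq_throughput(tempfile.gettempdir(), size_mb=64)
    print(f"write ~{w:.0f} MB/s, read ~{r:.0f} MB/s")
```

Running it against the iSCSI-backed and NVMe/TCP-backed drives in turn gives a rough like-for-like comparison.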
I’m sure that, given some time, it would be possible to improve the performance. But because there appears to be no Windows NVMe/TCP initiator apart from StarWind’s, and the StarWind one isn’t really “free” (it requires a corporate email address to get access to the “free” licence rather than being open source), in my case it’s not worth pursuing this unless things change in future.
Drive C comes from storage attached directly to the VM by the Proxmox host: a Samsung NVMe drive backing an LVM datastore. (left picture)
Both the D and Z drives are block devices presented off the same zvol from the TrueNAS VM.
Drive D is an 8TB block device presented via iSCSI. (middle picture)
Drive Z is a 1TB block device presented via NVMe/TCP. (right picture)
Getting the StarWind initiator working within Windows 11 required an older version, as the current version failed to discover the TrueNAS-presented storage. Through some searching of the StarWind forums I found a link to “StarWind NVMe-oF Initiator.1.9.0.596(rev 598).Setup.462.exe”:
StarWind forum post 40585
Once installed, I was able to run the following from an administrative command prompt (from within the StarWind installation directory):
StarNVMeoF_Ctrl.exe discovery_tcp <target IP>:4420
StarNVMeoF_Ctrl.exe insert_all_tcp <target IP>:4420 <IP of client>
What is staggering to me, though, is the sheer performance of native NVMe drives relative to a 5-disk raidz1 volume (5x 8TB SAS disks).
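The gap is easy to rationalise with some back-of-envelope numbers (the per-device rates below are typical assumed figures, not measurements from my setup):

```python
# A 5-wide raidz1 stripes data across 4 data disks plus 1 parity disk.
# A 7200rpm SAS disk streams very roughly 200 MB/s, so best-case
# sequential reads top out somewhere near 4 * 200 = 800 MB/s, while a
# single consumer PCIe 3.0 x4 NVMe drive streams ~3000 MB/s or more.
data_disks = 5 - 1          # raidz1: one disk's worth of parity
disk_mb_s = 200             # assumed per-disk streaming rate
raidz1_seq = data_disks * disk_mb_s
nvme_seq = 3000             # assumed single-NVMe streaming rate

print(raidz1_seq, "MB/s vs", nvme_seq, "MB/s")
```

And that's the flattering case for the pool: small random I/O on spinning rust is orders of magnitude worse again.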
Sadly, it appears that even though I’ve been using TrueNAS for years, I’m not worthy of pasting a URL or screenshot here, so:
Screenshot: https://postimg.cc/0rKJYGWk
StarWind forum post with the NVMe initiator version that works: https://forums.starwindsoftware.com/viewtopic.php?p=40585#p40585
Thanks, yeah. The reason I ran this test is that I’m effectively running this as a hyperconverged unit (the Windows VM and TrueNAS VM are on the same physical host), so all inter-VM traffic goes over the KVM loopback. I had hoped that would give some better performance, but nope.
Sadly, the cost of NVMe drives in a RAID config will never be cost-effective compared to rust, and there just aren’t enough PCIe lanes on consumer CPUs to handle the bandwidth without using PCIe switches.
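To put rough numbers on the lane problem (typical assumed figures, not any specific CPU):

```python
# A typical consumer desktop CPU exposes around 20 usable PCIe lanes;
# a GPU wants an x16 link and each NVMe SSD wants x4. Once the GPU has
# its lanes, the CPU can host very few full-speed NVMe drives directly;
# everything else hangs off the chipset or a PCIe switch and shares
# the CPU-to-chipset uplink.
cpu_lanes = 20              # assumed usable CPU lanes
gpu_lanes = 16              # one GPU at x16
nvme_lanes = 4              # per NVMe drive

lanes_left = cpu_lanes - gpu_lanes
print(lanes_left // nvme_lanes, "full-speed NVMe drive(s) on CPU lanes")
```

Which is why multi-drive NVMe pools really want HEDT or server platforms with far larger lane counts.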