My all-flash pool with eight PCIe Gen4 7.68TB enterprise SSDs has been running flawlessly for over a year, and the autotrim feature works very well on it.
I recently installed TrueNAS SCALE on Hyper-V under Windows 10 and assigned a physical drive, a Samsung 990 Pro 4TB, to the TrueNAS VM. While this isn’t PCIe passthrough, it’s similar to ESXi RDM disk mapping, so TrueNAS recognized the 990 Pro as a traditional SCSI device.
I formatted the 990 Pro as ZFS and enabled the autotrim feature using zpool set autotrim=on POOL. However, Hyper-V physical drive passthrough doesn’t support the TRIM command.
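For anyone who wants to reproduce the check, this is roughly what I ran (POOL stands in for the actual pool name):

    # Enable automatic trimming of freed blocks
    zpool set autotrim=on POOL

    # Request a one-time manual trim of the whole pool; this fails
    # outright if no device in the pool supports trim
    zpool trim POOL

    # Show per-device trim state; devices that cannot trim are
    # flagged as trim-unsupported in the output
    zpool status -t POOL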
Windows can read the SMART info, but Windows doesn’t know what to trim because Windows 10 knows nothing about ZFS;
TrueNAS can’t send TRIM commands because the physical drive mapping doesn’t support them.
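You can see this from inside the VM, too: the virtual disk doesn’t advertise discard support to the Linux kernel at all. A quick check, assuming the passed-through disk shows up as /dev/sda (substitute your device):

    # DISC-GRAN / DISC-MAX columns of 0 mean the kernel sees
    # no discard (trim) capability on this device
    lsblk --discard /dev/sda

    # Same information straight from sysfs; 0 means no discard
    cat /sys/block/sda/queue/discard_max_bytes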
Does anyone know how to save me from this dilemma?
The patient says, “Doctor, it hurts when I do this.”
The doctor says, “Then don’t do that!”
You’ll probably get scolded for attempting this stack, but I’m curious. Why?
As you can see, it isn’t a real passthrough. Hyper-V pass-through drives aren’t recommended very often any more. Is controller passthrough not an option?
If this is for testing or playing, you’re probably better off just using a VHDX.
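If you do go the VHDX route, a dynamically expanding VHDX passes unmap through the stack, so trim from the guest actually reclaims space in the backing file. A rough PowerShell sketch, with the VM name and path as placeholders:

    # Create a dynamically expanding VHDX (path and size are examples)
    New-VHD -Path "D:\VMs\truenas-data.vhdx" -SizeBytes 4TB -Dynamic

    # Attach it to the VM's virtual SCSI controller
    Add-VMHardDiskDrive -VMName "TrueNAS" -ControllerType SCSI -Path "D:\VMs\truenas-data.vhdx"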
With trim, the SSD can maintain a larger set of erased cells to write to.
Without trim, the SSD doesn’t erase cells that are now fully unused, so it can only draw on the permanently unused areas of the disk plus the normal over-provisioning that comes with the disk.
The impacts of this are:
Non-bulk writes - little if any impact; as new cells are used, the ones they replace are erased ready for reuse.
Bulk writes - if the SSD runs out of erased cells, writes have to pause until cells are erased, which makes them significantly slower.
Wear - decent (enterprise-grade) SSD firmware has more over-provisioning and tracks how many times each cell has been erased, substituting a heavily worn cell with one that has light wear and hasn’t been rewritten for a long time (on the reasonable assumption that it won’t need rewriting for a long time in the future either). Poor-quality (consumer-grade) SSD firmware has less over-provisioning and doesn’t track and substitute to even out the wear, so the SSD reaches end of life a little sooner.
(My USB SSD doesn’t support trim, or more precisely the USB-to-SATA/NVMe bridge doesn’t support it, so I needed to set autotrim off.)
I have no monitor, so I can’t install PVE or TrueNAS SCALE directly on my machine. Even if I had a monitor, I’d still be skeptical about compatibility between the AMD 7840HS and the Linux kernel.
I access my Windows using Remote Desktop.
My monitor is being shipped from China, which takes around a month from when the order was placed.
If all these attempts fail, I’ll have to switch to an Intel platform.
While perhaps ridiculous, it’s also understandable.
In the extremely distant past I found myself with multiple servers but only one working PCI video card. I couldn’t disable the requirement for video in the BIOS.
So I would boot each machine with the card installed, and then yank it out (hot unplug, completely unsupported by the graphics card or motherboards) while the machine was running.
It generated some errors in the logs, but worked just fine.
Anyway, the NVMe trim command (Deallocate) is different from the SATA one (TRIM), which is different from the SAS one (UNMAP).
The issue is most likely that Hyper-V’s virtual SCSI driver isn’t mapping SCSI UNMAP to the underlying NVMe drive… OR it’s not exposing the drive as a SolidStateDrive (can you tell the RPM in the VM? That’s normally how the SSD/HDD difference is communicated).
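From inside a Linux guest you can check both halves of that. A quick sketch, assuming the virtual disk shows up as /dev/sda (sg_vpd comes from the sg3_utils package):

    # 0 = non-rotational (SSD), 1 = reported as a spinning disk
    cat /sys/block/sda/queue/rotational

    # Block Device Characteristics VPD page reports the nominal
    # rotation rate; a value of 1 means non-rotating (an SSD)
    sg_vpd --page=bdc /dev/sda

    # Logical Block Provisioning VPD page shows whether the virtual
    # SCSI device claims UNMAP support (the LBPU bit)
    sg_vpd --page=lbpv /dev/sda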