How do I enable the automatic TRIM feature on a pool?

Hi there,

My all-flash pool with eight PCIe Gen4 7.68TB enterprise SSDs has been functioning flawlessly for over a year. The autotrim feature works very well on this pool.

I recently installed TrueNAS SCALE on Windows 10’s Hyper-V and assigned a physical drive, a Samsung 990Pro 4TB, to TrueNAS. While this isn’t PCIe passthrough, it’s similar to ESXi RDM disk mapping, so TrueNAS recognized the 990Pro as a traditional SCSI device.

I formatted the 990Pro as a ZFS pool and enabled auto-trim with zpool set autotrim=on POOL. However, Hyper-V physical drive passthrough doesn’t support the TRIM command:

  • Windows can read the SMART info, but it can’t issue the right TRIMs because Windows 10 knows nothing about ZFS;
  • TrueNAS can’t send TRIM commands because the physical drive mapping doesn’t pass them through.
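
For anyone who wants to see this for themselves, roughly the following should show whether TRIM ever reaches the disk from inside the TrueNAS VM (the device name /dev/sdb and the pool name POOL are just placeholders; adjust for your system):

```
# Does the virtual disk expose discard/TRIM to the guest at all?
# DISC-GRAN and DISC-MAX of 0 mean the Hyper-V mapping strips it out.
lsblk --discard /dev/sdb

# Confirm the pool property that was set:
zpool get autotrim POOL

# Per-vdev TRIM state; devices that can't be trimmed are flagged as
# "(trim unsupported)" in the status output:
zpool status -t POOL
```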

Does anyone know how to save me from this dilemma?

The patient says, “Doctor, it hurts when I do this.”
The doctor says, “Then don’t do that!”

You’ll probably get scolded for attempting this stack, but I’m curious. Why?

As you can see, it isn’t a real passthrough. Hyper-V pass-through drives aren’t recommended very often any more. Is controller passthrough not an option?

If this is for testing or playing, you’re probably better off just using a VHDX.


Use PCIe passthrough to pass the drive through.

Whilst trim is beneficial, it is not essential.

With trim the SSD can maintain a larger set of erased cells to write to.

Without trim the SSD doesn’t erase the cells that are now fully unused, and it can only use the cells of permanently unused areas of the disk plus the normal over-provisioning that comes with the disk.

The impacts of this are:

  1. Non-bulk writes - little if any impact: as new cells are used, the ones they replace are erased ready for reuse.

  2. Bulk writes - if the SSD runs out of erased cells, then writes need to be paused until a new cell is erased, and this causes significantly slower writes.

  3. Wear - decent (enterprise-grade) SSD firmware has more over-provisioning and tracks how many times each cell has been erased, substituting a heavily worn cell with one that has light wear and has not been rewritten for a long time (on the reasonable assumption that it won’t need to be rewritten for a long time in the future either). Poor-quality (consumer-grade) SSD firmware has less over-provisioning and doesn’t track and substitute to even out the wear, so the SSD reaches end of life a little sooner.
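
If the devices in a pool do accept TRIM, you also don’t have to rely on autotrim at all; an occasional manual trim achieves the same thing. A minimal sketch, with tank as an example pool name:

```
# Start a manual TRIM of all devices in the pool (only useful where the
# devices actually accept the command).
zpool trim tank

# Watch progress and per-vdev state; unsupported devices show "(trim unsupported)".
zpool status -t tank

# Example cron entry to run the trim weekly, early on Sunday morning:
# 0 3 * * 0  /sbin/zpool trim tank
```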

(My USB SSD doesn’t support trim, or more specifically the USB → SATA/NVMe bridge doesn’t support it, so I needed to set autotrim off.)
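
For anyone wanting to confirm the same thing on their own enclosure, something along these lines should do it (sdX and POOL are placeholders):

```
# 0 here means the bridge does not pass discard/TRIM through to the drive:
cat /sys/block/sdX/queue/discard_max_bytes

# Then simply turn the feature off for the pool and confirm:
zpool set autotrim=off POOL
zpool get autotrim POOL
```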

It’s my understanding that everything even remotely modern includes wear leveling.

USB memory sticks might only do remap-on-failure, but you have to go back to the dawn of consumer SSDs (or hybrids) to find devices without leveling.

I’d be curious to learn if this isn’t correct: are there any bargain-basement controllers that don’t perform at least dynamic leveling?

At this point I would even assume they all do static leveling, but I wouldn’t bet as much money on that.

The reason is very ridiculous:

  1. I have no monitor, so I cannot install PVE or TrueNAS SCALE directly on my machine. Even if I had a monitor, I would still be skeptical about the compatibility between the AMD 7840HS and the Linux kernel.
  2. I access my Windows using Remote Desktop.

My monitor is being shipped from China, which takes around a month from when the order was placed.

If all these attempts fail, I’ll have to switch to an Intel platform instead.

The AMD 7840HS only supports Windows 10/11, and Hyper-V PCIe passthrough only works on Windows Server. I’m disappointed with AMD’s SoC.

While perhaps ridiculous, it’s also understandable.

In the extremely distant past I found myself with multiple servers but only one working PCI video card. I couldn’t disable the requirement for video in the BIOS.

So I would boot each machine with the card installed, and then yank it out (hot unplug, completely unsupported by the graphics card or motherboards) while the machine was running.

Generated some errors in the logs, worked just fine.


Anyway, the NVMe trim command (Deallocate) is different from the SATA one (TRIM), which is different from the SAS one (UNMAP).

The issue is most likely that Hyper-V’s virtual SCSI driver is not mapping SAS TRIM/UNMAP to the underlying NVMe drive… or it’s not exposing the drive as a solid-state drive (can you see the reported RPM in the VM? That’s normally how the SSD/HDD difference is communicated).
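
A quick way to see how the drive is being presented from inside the VM, assuming it shows up as /dev/sdX (placeholder):

```
# smartctl reports "Rotation Rate: Solid State Device" when the SSD hint
# makes it through the virtual controller:
smartctl -i /dev/sdX

# The kernel's view: 1 = rotational (treated as a spinning disk), 0 = SSD:
cat /sys/block/sdX/queue/rotational
```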

I don’t want to use Hyper-V anymore. I will install a separate Debian 12 or TrueNAS on this machine tomorrow. Have you ever tried ZfsBootMenu?

Now I have three choices:

  • Debian 12 with ZfsBootMenu
  • Proxmox VE
  • TrueNAS SCALE

My compute resources are limited, so I have to make the best choice.

Yes, this problem is due to Hyper-V.