TerraMaster F8 SSD Plus - TrueNAS Install Log

Hi guys,
I think I found a solution for the VT-d problem. If you add the boot option pci=nommconf to GRUB, you can boot every OS, including TrueNAS Scale.
I tried it with TrueNAS, Ubuntu, Debian and Proxmox.
I was not sure if it would be stable, so I have been testing it for about 3 weeks now, and I can report that so far everything works perfectly.
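For reference, this is roughly how the option is added on a Debian-based install (a sketch; keep whatever options are already in your GRUB_CMDLINE_LINUX_DEFAULT and just append pci=nommconf):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nommconf"

# regenerate the GRUB config and reboot
update-grub
reboot

Note that a Proxmox install on ZFS with UEFI typically boots via systemd-boot instead; there the options go into /etc/kernel/cmdline, followed by proxmox-boot-tool refresh.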

I am now running Proxmox with a TrueNAS VM on top. The NVMe drives are handed over to TrueNAS via PCIe passthrough (VT-d, IOMMU), so TrueNAS has access to the whole disks and not just virtual disks, which also makes SMART checks possible.
By the way, this solution is not limited to 4 M.2 NVMe drives; I run it fully populated with 8.
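As a quick sanity check (a hedged example, your device names will differ), you can confirm from the TrueNAS shell that the VM sees the real controllers rather than virtual disks:

# inside the TrueNAS VM
nvme list
smartctl -a /dev/nvme0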

This really opened up the device for me to use it as the home server/NAS I had planned. I was even able to pass through the GPU to Jellyfin, Immich and Nextcloud for transcoding and basic AI workloads.

It took me a long time to find and solve this issue, and I hope it helps some of you!

I have an F8 SSD, followed this helpful thread, and got TrueNAS Scale installed. But then the drives started to overheat, and I could not figure out fan control. Has anyone solved this on the F8? If so, could you share the steps? Thanks.

Interested to know how you made it work!

I have Proxmox 8.3 installed. With the pci=nommconf option it crashes the whole system, and without it I get an error when trying to start the TrueNAS VM:

kvm: …/hw/pci/pci.c:1654: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.

Could you please add more info on the VM config you use on Proxmox, the version of TrueNAS you are running, and the Proxmox installation type (ZFS, ext4, etc.)? Thanks!

Interesting. If I install Proxmox over ZFS and enable pci=nommconf, it doesn't crash, but I get a similar error:

kvm: ../hw/pci/pci.c:1654: pci_irq_handler: Assertion `0 <= irq_num && irq_num < PCI_NUM_PINS' failed.

Attaching my VM config and PCI devices (screenshots).

Here are the settings that I changed for the TrueNAS VM:

  • Guest OS version: 6.x-2.6 Kernel
  • System Machine: q35
  • System Bios: OVMF (UEFI)
  • Root Disk SSD Emulation: True
  • Root Disk Cache: No cache
  • Root Disk Discard: True
  • CPU Type: host

Adding the NVMe drives as PCI devices:

  • All Functions: True
  • PCI-Express: True
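If you prefer the CLI, the equivalent would look roughly like this (a sketch; the VM ID 201 and the PCI address are from my setup, check yours with lspci):

# find the PCI addresses of the NVMe controllers
lspci -nn | grep -i 'non-volatile'

# attach one controller to the VM (repeat with hostpci1, hostpci2, … for the others)
qm set 201 --hostpci0 0000:06:00,pcie=1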

Here is my config file:

agent: 1
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: host
cpulimit: 4
efidisk0: local-lvm:vm-201-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:06:00,pcie=1
hostpci1: 0000:07:00,pcie=1
hostpci2: 0000:08:00,pcie=1
hostpci3: 0000:09:00,pcie=1
hostpci4: 0000:0a:00,pcie=1
hostpci5: 0000:0b:00,pcie=1
hostpci6: 0000:03:00,pcie=1
machine: q35
memory: 16384
meta: creation-qemu=9.0.2,ctime=1737936268
name: TrueNAS
net0: virtio=D1:31:11:88:FF:B5,bridge=vmbr0,firewall=1,tag=10
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-201-disk-1,discard=on,iothread=1,size=32G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=af099bae-0127-4bc4-98c1-35e32a21cb1f
sockets: 1
startup: order=1

TrueNAS version: 24.10.2
Proxmox is installed with LVM on one NVMe

I never had the error you mentioned, but I hope these settings will help you.

I have another idea of what it could be…
Do you have dedicated IOMMU groups for your NVMe devices?
Check this out: https://pve.proxmox.com/wiki/PCI_Passthrough#Verify_IOMMU_isolation
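For reference, a quick way to list the groups from a Proxmox shell (a small sketch using sysfs; each passed-through NVMe should ideally end up in its own group):

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done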

Thanks! I do have IOMMU enabled, but do you have any other IOMMU-related options in the BIOS or GRUB? I've checked the groups, and the devices are all in different groups; this is very strange.

Here’s the output

root@terraprox:~# dmesg | grep -e DMAR -e IOMMU
[    0.009596] ACPI: DMAR 0x0000000072259000 000088 (v02 INTEL  EDK2     00000002      01000013)
[    0.009632] ACPI: Reserving DMAR table memory at [mem 0x72259000-0x72259087]
[    0.097112] DMAR: Host address width 39
[    0.097114] DMAR: DRHD base: 0x000000fed90000 flags: 0x0
[    0.097125] DMAR: dmar0: reg_base_addr fed90000 ver 4:0 cap 1c0000c40660462 ecap 29a00f0505e
[    0.097129] DMAR: DRHD base: 0x000000fed91000 flags: 0x1
[    0.097134] DMAR: dmar1: reg_base_addr fed91000 ver 5:0 cap d2008c40660462 ecap f050da
[    0.097138] DMAR: RMRR base: 0x0000007c000000 end: 0x000000803fffff
[    0.097143] DMAR-IR: IOAPIC id 2 under DRHD base  0xfed91000 IOMMU 1
[    0.097146] DMAR-IR: HPET id 0 under DRHD base 0xfed91000
[    0.097148] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.098236] DMAR-IR: Enabled IRQ remapping in x2apic mode
[    0.306045] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
[    0.372522] DMAR: No ATSR found
[    0.372523] DMAR: No SATC found
[    0.372525] DMAR: IOMMU feature fl1gp_support inconsistent
[    0.372526] DMAR: IOMMU feature pgsel_inv inconsistent
[    0.372528] DMAR: IOMMU feature nwfs inconsistent
[    0.372530] DMAR: IOMMU feature dit inconsistent
[    0.372531] DMAR: IOMMU feature sc_support inconsistent
[    0.372533] DMAR: IOMMU feature dev_iotlb_support inconsistent
[    0.372535] DMAR: dmar0: Using Queued invalidation
[    0.372540] DMAR: dmar1: Using Queued invalidation
[    0.374826] DMAR: Intel(R) Virtualization Technology for Directed I/O

And interrupt remapping:

root@terraprox:~# dmesg | grep 'remapping'
[    0.097148] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[    0.098236] DMAR-IR: Enabled IRQ remapping in x2apic mode

And the IOMMU groups also look OK.

Nothing, can’t make it work.

  • I also tried with kernel 6.11, no luck.
  • Also tried with ZFS installation, no luck.
  • Even tried restoring BIOS defaults, no luck.

Either I'm doing something differently, or it's because of the SSDs, or we have different board revisions with some internal changes.

I'm currently on holiday, so I cannot really check much this week.
I have no special settings in the BIOS besides having VT-d enabled.

Np, enjoy!

Here are my details

This is the F8 Plus

Sorry for the late reply…
Indeed I have; the following options are set for IOMMU to work: pcie_acs_override=downstream intel_iommu=on iommu=pt
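Combined with the pci=nommconf option from my first post, the full kernel command line looks roughly like this (a sketch; keep whatever other options your install already has):

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream pci=nommconf"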

You seem to have a newer BIOS version than me. But I am not sure if this could really be the reason.

I will try to find out how to perform a BIOS upgrade myself, though I am not sure where to get the update from. Maybe at some point the PCI boot option will not even be necessary anymore, if they fix the ACPI tables or whatever is causing these issues.

To resolve the overheating issue, I disconnected the PWM pin from the fan connectors, so the fans now run at their maximum speed (~2500 RPM) continuously.

The pin can be removed by pulling on the cable, while pushing down the pin’s latch on the side of the connector.

After removing the pin, just isolate it, so it won’t short-circuit anything by accident.

The process is easily reversible by just putting the pin back in.

It should look something like this:
https://ibb.co/5WwVtg7T

Take the tutorial from @TrueNAS-Bot .

This worked for me too.

I put directions on how to make this work in another thread:

Can you please explain why lack of IBECC is such a big deal? Thanks!

For the project I want to use this in, I'm willing to sacrifice some RAM performance to improve data integrity. IBECC (in-band ECC) is built into this SoC, but TerraMaster have decided not to allow it to be turned on, for some reason.
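If you want to check whether IBECC is actually active on a given unit, one hedged way is to look in the kernel log for the in-band ECC EDAC driver (igen6 on these Intel N-series SoCs, assuming your kernel ships it); no matching lines usually means the firmware left it disabled:

dmesg | grep -iE 'edac|ibecc|igen6'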

Thanks @Wolfizen, so I won't order one from Prime Day…