TrueNAS VM - Flapping SMB transfer speed between 1Gbit/s and zero

Hello everyone,

First of all, I am still relatively new to the topic of TrueNAS. My TrueNAS runs as a VM on a Proxmox VE host.

I previously had a TrueNAS Core VM running and migrated to TrueNAS Scale yesterday.

TrueNAS Scale VM hardware configuration:

  • 8 cores of the Proxmox host's Intel i5-1235U CPU
  • 16 GB RAM
  • 32 GB boot disk (physical boot disk = WD_BLACK SN770 2TB)

Storage configuration:
1 x 1280 GB QEMU SCSI disk (partition from the boot disk)
1 x Intenso SATA 2 TB SSD, integrated into TrueNAS via PCI passthrough
1 x SanDisk Ultra 3D SSD 1 TB (external enclosure), integrated into TrueNAS via USB port passthrough
1 x WD Red Plus 3.5" HDD 6 TB (external enclosure), integrated into TrueNAS via USB port passthrough

I have created a separate pool with its own file system for each of the drives above, so a total of four single-disk stripe pools with no other options, each with one dataset.

See screenshot:

In the pools/datasets I turned Sync off and set the record size to 1M, based on what I found in other threads and videos.
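For reference, those dataset options can also be set from the TrueNAS shell with standard ZFS commands. This is only a sketch with a placeholder dataset name (`TrueNAS-Pool-2/data`), so adjust it to your actual layout:

```shell
# Sketch: set sync and recordsize on a dataset (names are placeholders).
# Note: sync=disabled trades safety for speed; writes are acknowledged
# before they reach stable storage, so a power loss can drop in-flight data.
zfs set sync=disabled TrueNAS-Pool-2/data
zfs set recordsize=1M TrueNAS-Pool-2/data

# Verify the active values:
zfs get sync,recordsize TrueNAS-Pool-2/data
```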

My Proxmox host is connected via a 1 Gbit/s Ethernet cable, and my Mac mini M2 is also connected via cable.

Now I want to move my data back onto the new TrueNAS instance. First I wanted to fill the pool shown in the screenshot, which is the Intenso SATA SSD with the SATA PCI passthrough.

As you can see, the write speed flaps between the full 1 Gbit/s and zero. It is very strange, because transferring data to the WD Red Plus 3.5" HDD (TrueNAS-Pool-4), which is connected to the Proxmox host via USB, doesn't have this issue and gets a constant 1 Gbit/s (around 115 MB/s).
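If you want to watch the flapping from the TrueNAS side rather than from the Mac, a per-second `zpool iostat` makes the burst-then-stall pattern easy to see. The pool name below is a guess based on the screenshot; substitute your own:

```shell
# Sketch: print per-second per-device write bandwidth for the affected pool
# (replace TrueNAS-Pool-2 with your actual pool name).
zpool iostat -v TrueNAS-Pool-2 1

# In parallel, watch the raw block devices; if the SSD stays ~100% busy
# while the network transfer stalls, the drive itself is the bottleneck:
iostat -xm 1
```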

The files I copied from my Mac to the TrueNAS-Pool-2 are around 1GB each, in total 200GB.

This is the disk statistics of the WD Red HDD in Pool-4:

I hope one of you can help me with this issue :slight_smile:


I am suspicious about the USB, either the passthrough or the module itself.

Hi, the two disks connected via USB work perfectly fine. The SATA SSD inside my thin client is the one with this strange flapping.

Little idea then; maybe it's an issue with the SSD… but you shouldn't be experiencing this only after you switched to SCALE.

Which version of SCALE are you running?

I installed the latest SCALE version, just downloaded from the website.

And you didn't have this with CORE?
Could be a caching problem of the SSD…

I assume this is a messing around only setup, right? :slightly_smiling_face:

I don't know if the same happened in Core. In Core I didn't do the USB and PCI passthrough; I did disk passthrough in Proxmox. If you pass a disk through via SCSI, TrueNAS doesn't see the real hardware and cannot do things like S.M.A.R.T., TRIM, and scrubs as it should.
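One quick way to confirm that the controller passthrough really exposes the raw disks (unlike the virtual SCSI disks before) is to query SMART from inside TrueNAS. The device name here is an example; check `lsblk` for yours:

```shell
# Sketch: with a real passed-through controller, smartctl should report
# the actual vendor/model and full SMART attributes.
smartctl -i /dev/sda    # identity: should show the Intenso model, not QEMU
smartctl -A /dev/sda    # SMART attributes (wear level, reallocated sectors)

# A QEMU virtual disk would instead show something like
# "Device Model: QEMU HARDDISK" with limited or absent SMART data.
```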

I also think that performance is better with passing through the real hardware.

The SSD that has this issue is also not a special "NAS" SSD. It is called Intenso TOP SSD SATA III. I don't know if its built-in cache is enough for NAS use. The WD Red Plus has a dedicated cache of 256 MB and, as I said, it works perfectly with a constant write speed of around 100-116 MB/s.

And my Proxmox is for home usage only, so it's not just for messing around.

I want to set everything up as good as possible

This being a virtualized instance does not help troubleshooting, but that write-wait-repeat pattern is usually a symptom of a drive not being able to properly flush data, OR something off with the flushing of ZFS transaction groups from RAM.
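The burst-then-stall shape fits the transaction-group theory: ZFS accepts async writes into RAM at line rate until it hits its dirty-data limit, then stalls the sender while it flushes to the SSD. A rough back-of-envelope, assuming the OpenZFS default of `zfs_dirty_data_max` = 10% of RAM:

```shell
# Back-of-envelope: how long 1GbE writes can run before ZFS must flush.
ram_mib=16384                       # VM RAM
dirty_max_mib=$((ram_mib / 10))     # default dirty-data cap: 10% of RAM
net_mibs=115                        # ~1 Gbit/s line rate in MiB/s

echo "Dirty data cap: ${dirty_max_mib} MiB"
echo "Seconds of line-rate writes buffered: $((dirty_max_mib / net_mibs))"
```

So if the Intenso SSD can't sustain ~115 MB/s once its own internal cache fills, the sender would see roughly 14-second bursts at full speed followed by stalls while the backlog drains, which matches the flapping you describe.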

If I were to place a bet, it would be on the virtualization side of things, but really, I have very little clue about where to start looking… WAIT, how are you PCIe passthrough-ing a SATA device? Smelling an adapter here.

Since it's an i5-1235U, I assume it's whatever controller that mini PC or laptop has.
I'm impressed that it could even be isolated…

I used the built-in Proxmox option "Add PCI device" and selected the Alder Lake SATA controller.
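For what it's worth, the GUI "Add PCI device" step corresponds to this Proxmox CLI. The VM ID and PCI address below are examples (Intel SATA controllers often sit at 00:17.0, but verify with `lspci`):

```shell
# Sketch: find the SATA controller's PCI address on the Proxmox host...
lspci | grep -i sata

# ...and attach it to the VM (VM ID 100 and address 00:17.0 are examples):
qm set 100 --hostpci0 0000:00:17.0
```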

It's an HP Pro Mini 260 G9 thin client with the 12th Gen i5-1235U.

I like that HP uses a notebook CPU for this thin client to save some energy, because it's a 24/7 device.

I've never worked with Proxmox, but try to pin only 2 cores to the TrueNAS VM. 8 cores is too much.

No adapters then, well… not much else in mind. I will pass the ball to more experienced users.

Why only 2 cores? Do you really think that's enough?

Start with 2 and add more as needed.

Your CPU only has 10 cores total: 2 performance cores and 8 efficiency cores. What else do you run on Proxmox? You can over-allocate CPU cycles between VMs. Always start small and add more if you need it.
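Shrinking the VM is also a one-liner from the Proxmox host (VM ID 100 is a placeholder again), and you can grow it back later if 2 vCPUs turn out to be too few:

```shell
# Sketch: reduce the TrueNAS VM to 2 vCPUs (takes effect after a VM restart).
qm set 100 --cores 2

# Confirm the current allocation:
qm config 100 | grep cores
```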

There are varying schools of thought here. Overcommitting CPU resources is usually fine, unless the server is extremely busy.

The hypervisor threads all of these out into their own PIDs, and the Linux kernel in the hypervisor will load-balance the threads across cores pretty neatly.

Workloads and use cases all vary, but given that your likely-not-very-busy home NAS is on a likely-not-very-busy hypervisor, the ability to "spike up" by giving it plenty of cores is a good thing.

I have no experience with P/E cores in Linux or virtualization, so I'm not sure how it decides what goes where. It sounds like newer kernels are pretty good at it, though.

Could this be a Realtek ethernet issue?
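An easy way to rule that in or out is to check which NIC and driver the Proxmox host actually uses. These are generic Linux commands; the interface name is an example:

```shell
# Sketch: identify the Ethernet hardware and the kernel driver bound to it.
lspci -nnk | grep -A3 -i ethernet

# Driver and firmware details for a specific interface (name varies, e.g. eno1):
ethtool -i eno1
```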

Could be. IMO a big part of the "Realtek is awful" mantra was at least partly the FreeBSD drivers. I think the software end just didn't get much love because the hardware is crap and no one really cared enough.

@joeschmuck or jgreco may have posted something in the old forums about this. There were FreeBSD driver notes for a Realtek NIC in the early 2000s. The dev wrote some not so kind words about the hardware.

As far as I can tell, more homelabbers are actually using them now with other OSes. There was a VMware Fling for USB Realtek NICs that actually worked. It seems they're not as bad in Linux (in this case Proxmox's network stack).

But, then, there are issues still. lol.