Storage configuration:
1 x 1280GB QEMU SCSI (partition from the boot disk)
1 x Intenso SATA 2TB SSD integrated in TrueNAS via PCI passthrough
1 x SanDisk Ultra 3D SSD 1TB (external enclosure) integrated in TrueNAS via USB port passthrough
1 x WD Red Plus 3.5″ HDD 6TB (external enclosure) integrated in TrueNAS via USB port passthrough
I have created a separate pool with a separate file system for each of the drives above, so a total of 4 pools, each a single-disk stripe without other options and with 1 dataset.
On the pools/datasets I turned sync OFF and set the record size to 1M; that’s what I found recommended in some other threads and videos.
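To double-check that those settings actually took effect on every dataset, something like this quick sketch works from the TrueNAS shell (it just wraps the standard `zfs get` command; the dataset names below are only placeholders for my pools):

```python
#!/usr/bin/env python3
"""Minimal sketch: verify sync/recordsize per dataset via the zfs CLI.
The dataset names are placeholders -- adjust them to your actual layout."""
import subprocess

DATASETS = [
    "TrueNAS-Pool-1/data",   # hypothetical names, one dataset per pool
    "TrueNAS-Pool-2/data",
    "TrueNAS-Pool-3/data",
    "TrueNAS-Pool-4/data",
]

for ds in DATASETS:
    # `-H -o property,value` gives machine-readable "property<TAB>value" lines
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "property,value", "sync,recordsize", ds],
        capture_output=True, text=True, check=True,
    ).stdout
    props = dict(line.split("\t") for line in out.strip().splitlines())
    print(f"{ds}: sync={props.get('sync')}, recordsize={props.get('recordsize')}")
```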
My Proxmox host is connected via a 1 Gbit Ethernet cable, and my Mac mini M2 is also connected via cable.
Now I want to move my data back to the new TrueNAS instance. First I wanted to fill the pool shown in the screenshot, which is the Intenso SATA SSD with the SATA PCI passthrough.
As you can see, the write speed keeps flapping between the full 1 Gbit and zero. It is very strange, because when I transfer data to the WD Red Plus 3.5″ HDD (TrueNAS-Pool-4), which is connected via USB to the Proxmox host, I don’t have this issue and get a constant 1 Gbit speed (around 115 MB/s).
The files I copied from my Mac to TrueNAS-Pool-2 are around 1GB each, about 200GB in total.
I don’t know if the same happened in Core. In Core I didn’t do USB or PCI passthrough; I did HDD passthrough in Proxmox. If you do HDD passthrough via virtual SCSI, TrueNAS doesn’t get the real hardware and cannot do things like S.M.A.R.T., TRIM and scrubs as it should.
I also think that performance is better when passing through the real hardware.
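For example, with real passthrough I can check from the TrueNAS shell that the drives show up as real hardware (model/serial via smartctl). A rough sketch, with purely hypothetical device names:

```python
#!/usr/bin/env python3
"""Sketch: confirm TrueNAS sees the real drives after passthrough by asking
smartctl for the identity block. Device names are placeholders."""
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]   # hypothetical device nodes inside the VM

for dev in DEVICES:
    # `smartctl -i` prints model/serial/firmware; with a virtual-disk passthrough
    # you would see a QEMU device here instead of the real SSD/HDD.
    # (USB-attached drives may additionally need "-d sat".)
    result = subprocess.run(["smartctl", "-i", dev], capture_output=True, text=True)
    print(f"--- {dev} ---")
    print(result.stdout)
```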
The SSD which has this issue is also not a dedicated “NAS” SSD. It is an Intenso TOP SSD SATA III, and I don’t know if its built-in cache is enough for NAS use. The WD Red Plus has a dedicated 256MB cache and, as I said, it works perfectly and has a constant write speed of around 100–116 MB/s.
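To rule out the SSD’s own cache, I could also run a rough local write test directly in the TrueNAS shell, something like the sketch below (the mountpoint is just a guess at my layout). If the throughput collapses after some tens of GB even locally, the drive itself is the bottleneck and not the network or the VM:

```python
#!/usr/bin/env python3
"""Rough local write test: stream incompressible data to a file on the pool and
print per-chunk throughput. The mountpoint is a placeholder for my layout."""
import os, time

TARGET = "/mnt/TrueNAS-Pool-2/writetest.bin"   # hypothetical dataset mountpoint
CHUNK = os.urandom(64 * 1024 * 1024)           # 64 MiB of incompressible data
TOTAL_GIB = 32                                 # write well more than the VM's RAM

with open(TARGET, "wb") as f:
    written = 0
    while written < TOTAL_GIB * 1024**3:
        t0 = time.monotonic()
        f.write(CHUNK)
        # With sync=disabled ZFS may ignore fsync, so early chunks can land in RAM;
        # writing more data than the VM has RAM still exposes the sustained rate.
        os.fsync(f.fileno())
        dt = time.monotonic() - t0
        written += len(CHUNK)
        print(f"{written / 1024**3:6.1f} GiB  {len(CHUNK) / dt / 1024**2:7.1f} MiB/s")

os.remove(TARGET)
```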
This being a virtualized instance does not help troubleshooting, but that write-wait-repeat pattern is usually a symptom of a drive that cannot flush the data fast enough, OR something off with how the transaction groups are flushed from RAM.
If I were to place a bet it would be on the virtualization side of things, but really, I have very little clue about where to start looking… WAIT, how are you PCIe passthrough-ing a SATA device? Smelling an adapter here.
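One way to narrow it down: watch the pool with `zpool iostat` while a copy is running, e.g. with a quick sketch like this (the pool name is a placeholder). If the disk keeps writing hard during the “zero” phases on the network graph, that points at transaction group flushing rather than the wire:

```python
#!/usr/bin/env python3
"""Sketch: stream per-second pool I/O stats while a transfer is running.
The pool name is a placeholder."""
import subprocess

POOL = "TrueNAS-Pool-2"   # hypothetical pool name

# `zpool iostat <pool> 1` prints one stats line per second; the last two
# columns are read/write bandwidth. Ctrl-C to stop.
proc = subprocess.Popen(["zpool", "iostat", POOL, "1"],
                        stdout=subprocess.PIPE, text=True)
try:
    for line in proc.stdout:
        print(line, end="")
except KeyboardInterrupt:
    proc.terminate()
```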
Your CPU only has 10 cores in total: 2 performance cores and 8 efficiency cores. What else do you run on Proxmox? You can over-allocate CPU cycles between VMs. Always start small and add more if you need it.
There are varying schools of thought here. Overcommitting CPU resources is usually fine unless the server is extremely busy.
The hypervisor runs all of these as their own threads (PIDs), and the Linux kernel on the hypervisor will load-balance those threads across cores pretty neatly.
Workloads and use cases vary, but given that your likely-not-very-busy home NAS sits on a likely-not-very-busy hypervisor, the ability to “spike up” by giving it plenty of cores is a good thing.
I have no experience with P/E cores in Linux or virtualization, so I’m not sure how it decides what goes where. Sounds like newer kernels are pretty good at it, though.
Could be. IMO a big part of the “Realtek is awful” mantra was the FreeBSD drivers. I think the software end just didn’t get much love because the hardware is crap and no one really cared enough.
@joeschmuck or jgreco may have posted something about this in the old forums. There were FreeBSD driver notes for a Realtek NIC from the early 2000s, and the dev wrote some not-so-kind words about the hardware.
As far as I can tell, more homelabbers are actually using them now with other OSes. There was a VMware Fling for USB Realtek NICs that actually worked. It seems it’s not as bad in Linux (in this case it’s Proxmox’s network stack).