No matter what I do, iSCSI speed is about 30-200MB/s.
I tried different block sizes for the LUN and different “Logical Block Sizes”.
I even tried PCI passthrough of one of the physical ports to the VM and established a direct wired connection between my PC (Win11) and that port.
No luck; it seems the TrueNAS iSCSI implementation just doesn’t work well in some cases.
From the same TrueNAS instance I get a stable 1.12 GB/s over SMB for a single-file (20GB) transfer (no direct connection, the traffic goes through a router, and it’s still that fast).
I also have a Synology DS923+ with a 10Gb NIC and 4 spinning rust drives in RAID 5.
DS923+ delivers stable 450-500 MB/s via iSCSI (even without direct connection).
Interesting thing though: while testing, I decided to virtualize Win11 to see what performance I would get with both TrueNAS and Win11 placed within Proxmox.
In this case, both virtual machines have a network connection with a throughput of about 30 Gbps.
For the test purpose, I used single VDEV with 4x drives in RAIDz1.
And the results are really strange - sometimes I saw iSCSI write speeds of about 1350MB/s, sometimes about 500, and sometimes really low.
I believe the iSCSI speed issue has something to do with how TrueNAS handles the network card.
could you describe exactly what your configuration is? from reading your post it seems like you have proxmox installed on top of truenas and have passed the controller through?
or is truenas on top of proxmox? or are they on separate machines?
this is the way proxmox handles the transfer: it knows that the file is on the same disk and is essentially reassigning it from one VM to another. this is very common in virtualization environments.
I updated my first post.
Proxmox is a hypervisor and TrueNAS is a virtual machine.
Not the case. Proxmox does not own the SATA controller because of passthrough; the TrueNAS storage (4x SATA SSD) is not visible to Proxmox in any way.
The Win11 test VM was installed on a Proxmox virtual disk (a file on a separate NVMe disk).
The observed 30Gbit network connection (confirmed by iperf) can be explained by the Intel NIC loopback feature:
“Provides loopback functionality, where data transfer between the virtual machines within the same physical server need not go out to the wire and come back in, improving throughput and CPU usage.”
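For reference, a minimal sketch of how such a throughput measurement can be reproduced with iperf3, assuming it is available on both VMs (the address is a placeholder):

# On the TrueNAS VM (server side):
iperf3 -s

# On the Windows VM (client side; placeholder address), 4 parallel streams for 30 seconds:
iperf3 -c 192.168.10.10 -P 4 -t 30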
thank you for updating the post. now that i understand your setup better, hopefully i can be of more assistance. so we know the truenas can get 1.12GB/s, as we have a system that is able to achieve that, so there does not appear to be any issue on the networking side.
that leaves the iSCSI configuration. I have not tried iSCSI on win11 myself so cannot speak to any differences the latest OS may have, but it is possible that there is a setting that needs to be adjusted somewhere.
keeping in mind that iSCSI is handed out as a block device, the filesystem is handled by the initiator. when connecting to the iscsi target, is the block device eagerly zeroed or lazily zeroed?
it is possible you are seeing additional overhead caused by the win11 system having to format the drive as it is being written.
there may also be some settings on the truenas side, such as the sync/async setting mentioned by stux.
any compression, dedup or encryption may also be affecting it if these settings were left at their defaults (mileage may vary here, depending on your default settings).
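if it helps narrow things down, the relevant zvol properties can be read in one command from the TrueNAS shell; the zvol path below is just a placeholder. a sparse (thin-provisioned) zvol will also show “used” well below “volsize” until blocks are actually written, which ties into the zeroing question above.

# placeholder path for the zvol backing the iSCSI extent:
zfs get volsize,used,sync,compression,dedup,encryption,volblocksize tank/iscsi/lun0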
I updated first post with more details.
Sync is disabled.
If that were the case, the TrueNAS developers would be the first to know about it, and recommendations would appear in the documentation.
I also use Synology iSCSI from the same win11 machine - transfer speed is in the expected range (450-500MB/s) and quite stable.
I use “quick format” in Windows as usual. So it is lazily zeroed.
With the Synology LUN, quick format does not cause any visible slowdown.
If this were the source of the TrueNAS iSCSI performance issue, it should recover on its own during testing, since I moved a 20GB file onto a 50GB LUN multiple times (without exceeding the 50GB of space).
It looks like the problem is somewhere outside of user exposed settings.
So, my plan is to install TrueNAS on the same server (bare metal) and take a look.
Also, the longer-term plan is to install an Intel X550-T2 card and test again.
It seems your iSCSI speed issue might be related to network configuration or TrueNAS SCALE’s handling of the NIC. Since SMB speeds are fine but iSCSI fluctuates, try switching the virtual NIC model (for example from VirtIO to Intel e1000) to see whether the paravirtualized driver is the bottleneck. Ensure MTU 9000 is consistent across all network devices, including switches and initiators. Adjust the iSCSI queue depth and session parameters on both TrueNAS and the initiator (Windows). Additionally, test whether PCI passthrough is causing instability by using the onboard NIC for comparison. These steps may help stabilize iSCSI performance.
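On the MTU point, one quick way to confirm that 9000 actually survives the whole path is to send non-fragmentable jumbo pings from the TrueNAS shell; the interface name and address below are placeholders:

# Confirm the interface itself is set to 9000 (interface name is a placeholder):
ip link show enp3s0f0 | grep mtu

# 8972 bytes of ICMP payload + 28 bytes of IP/ICMP headers = 9000; -M do forbids
# fragmentation, so this only succeeds if every hop accepts jumbo frames:
ping -M do -s 8972 -c 4 192.168.10.20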
The onboard NIC is only 1Gb, so testing possibilities are limited.
And without PCI passthrough, I would be able to test only with virtual disks (hosted on the NVME SSD).
But anyway, it’s worth trying all the options.
You’re right that with a 1Gb onboard NIC, testing options are limited. Without PCI passthrough, you can only test with virtual disks (hosted on the NVMe SSD). However, it’s still worth trying all the available options, as it may still provide useful results.
I decided to test on bare metal, so now the host machine is TrueNAS.
No Proxmox, no virtualization or passthrough, all 64 GB of RAM available.
I just installed TrueNAS and restored settings from the settings backup.
iSCSI:
SMB:
So, now I will try to reset all TrueNAS settings and perform a clean TrueNAS setup.
Going to set up a direct wired connection.
Later I will try an Intel X550-T2 NIC as well, just in case, and the Intel X550-T2 → X520-DA2 combination.
If this doesn’t help, then the issue is in TrueNAS itself.
I have read many discussions about TrueNAS iSCSI performance; everyone advises increasing RAM and trying different settings, but maybe TrueNAS is just not well suited for iSCSI due to the ZFS design, the same way it’s said not to be good for SSDs.
This is a test I did on my 1.5 TiB iSCSI share on my SSD pool.
I had to manually set the allocation unit size in Windows Disk Management. It was giving me very poor performance with it set to default.
It is slower over SMB than iSCSI.
Why? A dual mirror should provide 2x IOPS, and with 64GB RAM, ARC can cache the entire file, shouldn’t it?
Interesting, because “default” is equal to the “iSCSI Extent Logical Block Size” - essentially, 512 or 4K.
When formatting with the “default” allocation unit size, Windows uses the exact value provided by TrueNAS.
So, can you please specify your (there is also a quick way to read most of these from the TrueNAS shell, sketched after this list):
Main dataset recordsize (128K?)
iSCSI Zvol Volblocksize (16K?)
iSCSI Extent Logical Block Size (4K?)
Selected Windows Allocation unit size - during format (4K?)
TrueNAS network interface MTU value and Windows NIC MTU value.
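Something like the following, run on the TrueNAS side, reads the dataset and zvol values in one go; the names are placeholders (the extent logical block size lives in the iSCSI extent settings, and the allocation unit size is chosen in Windows during format):

# Placeholder dataset/zvol names; substitute your own.
zfs get recordsize tank/share            # main dataset recordsize
zfs get volblocksize tank/iscsi/lun0     # zvol volblocksize
ip link show | grep mtu                  # MTU on the TrueNAS interfaces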
It seems OK.
for disk in /dev/sd?; do hdparm -W $disk; done
/dev/sda:
write-caching = 1 (on)
/dev/sdb:
write-caching = 1 (on)
/dev/sdc:
write-caching = 1 (on)
/dev/sdd:
write-caching = 1 (on)
When you get back to testing, I’d suggest doing two things:
Increasing the zvol block size to 128K (if bandwidth is your primary goal)
Testing with fio or some other tool that lets you control queue depth (see the sketch after this list)
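Something along these lines could be a starting point; the pool/zvol names are placeholders, and note that volblocksize can only be set when the zvol is created:

# Scratch zvol with a 128K volblocksize (sparse; placeholder pool name):
zfs create -s -V 50G -o volblocksize=128K tank/fio-test

# Sequential 1M writes at queue depth 16, straight to the zvol (bypasses the network path):
fio --name=seqwrite --filename=/dev/zvol/tank/fio-test \
    --rw=write --bs=1M --iodepth=16 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting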
As a side note: a NAS with 4 slow HDDs can’t write at 500MB/s and be reliable (what protection is used?). If your test writes less data than your NAS has RAM, you can get artificially high numbers.
It’s up to 500; as the graph shows, it fluctuates between 300 and 500 MB/s.
Anyway, it’s a great result for 4x 5200rpm HDDs in RAID 5.
The NAS only has 4GB of RAM compared to the 64GB TrueNAS test setup.
Anyway, I’m fine with the NAS using the HDD cache or speeding up transfers with the memory cache. Each device is connected to a UPS, and the chances of the DS923 freezing are low (but just in case, I use cloud backups for critical data).
I tried using TrueNAS with sync=disabled because there’s no way I’m writing data twice to the SSDs (LOG + actual write); I know that I can lose the last 5-6 seconds of data.
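For completeness, the same setting can be checked or changed per zvol from the shell; the path is a placeholder, and “standard”, “always” and “disabled” are the valid values:

# Placeholder zvol path backing the LUN:
zfs get sync tank/iscsi/lun0
zfs set sync=disabled tank/iscsi/lun0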