Getting Started with NVMe over TCP

With the upcoming 25.10 TrueNAS release, I noticed that NVMe over TCP is now available in RC. It’s an interesting technology, so I decided to test it in my small homelab.

After some research, reading the documentation, torturing chatgpt and experimenting, I found that the basic setup is actually straightforward. However, securing it properly is more complex. I organized my notes and sample configurations here and also published them on GitHub, which may be more convenient to use than copying from the forum.

I hope this post will eventually be useful to someone. Feedback or corrections from more experienced users are very welcome.

Please also check the official documentation at https://www.truenas.com/docs/scale/25.10/scaletutorials/shares/nvme-of/. It contains more details and UI screenshots for the setup on the NAS side.

Disclaimer

  • This guide is provided without any guarantees, especially regarding security options.
  • The author is new to NVMe over Fabrics, VLANs, security and network namespaces and is exploring this field.

Software and Package Installation

The configuration was tested with the following software versions:

  • TrueNAS: 25.10 RC1 (first release with NVMe over TCP support)
  • Linux kernel: 6.14.0-33-generic (Ubuntu 24.04.1)
  • nvme-cli: 2.8 (libnvme 1.8)

Older versions of nvme-cli and libnvme may also work, but compatibility is not guaranteed.
Install the required packages on the client host:

sudo apt-get update
sudo apt-get install -y nvme-cli iproute2 net-tools
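
To double-check the installed nvme-cli version and that the kernel ships the nvme-tcp module (exact output varies between releases):

nvme version
modinfo nvme-tcp | head -n 3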

Basic Setup

This unsecured configuration is quick and simple to deploy. It is suitable for testing performance or use within a fully trusted, isolated network. For any production or less controlled environment, consider security measures and check the “Secured Setup” section.

  1. Ensure TrueNAS has a static IP. If unsure, check System → Network.
  2. Create a Zvol with the desired size and properties.
  3. Navigate to System → Services and enable the NVMe-oF service.
  4. Go to Sharing → NVMe-oF Subsystems → Add in the TrueNAS UI.
  5. In the wizard:
    • Provide a name.
    • Under Namespace, add the previously created Zvol, then save.
    • Click Next, Add, and select Create New.
    • Select your static IP.
    • Finalize the Add Subsystem wizard by clicking Save.
  6. Copy the NQN of the created subsystem.
  7. Create an .env config file on the client host at /etc/nvme/nvme-basic.env and fill in the NAS IP and NQN from the previous step:
NVME_NAS_IP=
NVME_NAS_PORT=4420
NVME_NAS_NQN=

(You can also clone the repository and run sudo cp nvme-basic.env /etc/nvme/, then edit the file afterward.)
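
(Optional) Before creating the service, you can check that the NAS announces the subsystem; this sources the values from the .env file created above:

source /etc/nvme/nvme-basic.env
sudo modprobe nvme-tcp
sudo nvme discover -t tcp -a $NVME_NAS_IP -s $NVME_NAS_PORT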

  8. Create a systemd service at /etc/systemd/system/nvme-basic.service:
[Unit]
Description=Connect NVMe over TCP
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
EnvironmentFile=/etc/nvme/nvme-basic.env
ExecStartPre=/usr/bin/modprobe nvme-tcp
ExecStart=/usr/sbin/nvme connect -t tcp -a ${NVME_NAS_IP} -s ${NVME_NAS_PORT} -n ${NVME_NAS_NQN}
ExecStop=/usr/sbin/nvme disconnect -n ${NVME_NAS_NQN}
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

(Alternatively, copy the file using sudo cp nvme-basic.service /etc/systemd/system/.)
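
After creating or copying the unit file, tell systemd to pick it up (harmless to run even if not strictly needed):

sudo systemctl daemon-reload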

  9. Start the service and verify NVMe connectivity:
sudo systemctl start nvme-basic
sudo nvme list
sudo nvme list-subsys /dev/{device_name}
  10. Enable the service for persistence after reboot:
sudo systemctl enable nvme-basic

That’s all! You now have an NVMe device that behaves like a local one.
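
To detach the device cleanly later (for example before NAS maintenance), stop the service; its ExecStop runs the nvme disconnect:

sudo systemctl stop nvme-basic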

Secured Setup

Key points:

  • TrueNAS provides DH-CHAP authentication keys for NVMe-oF, which are used here to restrict access to the specific client host.
  • All communication between the NAS and the client is isolated within a dedicated VLAN.
  • To prevent non-root processes from accessing this network traffic, the VLAN interface on the client is placed inside a dedicated network namespace.

Client Host Configuration and Service File

Generate the client NQN and install the provided service file, auxiliary bash script, and .env file containing variables and secrets. The three files are listed first, followed by the installation commands:

nvme-connect.service

[Unit]
Description=Connect NVMe over TCP in isolated namespace
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
EnvironmentFile=/etc/nvme/nvme-secret.env
ExecStartPre=/usr/local/sbin/nvme-netns-setup.sh
ExecStart=/usr/bin/ip netns exec ${NVME_NS} /usr/sbin/nvme connect \
    -t tcp -a ${NVME_NAS_IP} -s ${NVME_NAS_PORT} -n ${NVME_NAS_NQN} \
    --dhchap-secret=${NVME_CLIENT_KEY} \
    --dhchap-ctrl-secret=${NVME_NAS_KEY}
ExecStop=/usr/bin/ip netns exec ${NVME_NS} /usr/sbin/nvme disconnect -n ${NVME_NAS_NQN}
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

nvme-netns-setup.sh

#!/bin/bash
set -euo pipefail

/usr/bin/modprobe nvme-tcp

CONF=/etc/nvme/nvme-secret.env
[ -f "$CONF" ] && source "$CONF"

# Sanity checks
: "${NVME_NS:?Missing NVME_NS}"
: "${NVME_IF:?Missing NVME_IF}"
: "${NVME_CLIENT_IP:?Missing NVME_CLIENT_IP}"
: "${NVME_GW:?Missing NVME_GW}"

# Create namespace if not exists
if ! ip netns list | grep -q "^${NVME_NS}\b"; then
    ip netns add "$NVME_NS"
fi

# Move the VLAN interface into the namespace if it is still visible in the root namespace
if ip link show "$NVME_IF" &>/dev/null; then
    ip link set "$NVME_IF" netns "$NVME_NS"
fi

# Configure inside namespace
ip netns exec "$NVME_NS" bash <<EOF
set -e
ip link set lo up
ip link set "$NVME_IF" up
ip addr show "$NVME_IF" | grep -q "$NVME_CLIENT_IP" || ip addr add "$NVME_CLIENT_IP"/24 dev "$NVME_IF"
ip route show default | grep -q "$NVME_GW" || ip route add default via "$NVME_GW"
EOF

nvme-secret.env

NVME_NS=nvme-network-namespace
NVME_IF=eth0.50
NVME_VLAN_ID=50
NVME_GW=192.168.50.1
NVME_CLIENT_IP=192.168.50.20
NVME_NAS_IP=192.168.50.10
NVME_NAS_PORT=4420

NVME_NAS_NQN=<GENERATED VALUE FROM TRUENAS>
NVME_CLIENT_KEY=<KEY FROM TRUENAS FOR CLIENT AS PRESENTED>
NVME_NAS_KEY=<KEY FROM TRUENAS IDENTIFYING IT AS PRESENTED>

Installation commands (run as root):

nvme gen-hostnqn > /etc/nvme/hostnqn

cp ./nvme-netns-setup.sh /usr/local/sbin/nvme-netns-setup.sh
cp ./nvme-secret.env /etc/nvme/nvme-secret.env
cp ./nvme-connect.service /etc/systemd/system/nvme-connect.service

chmod 700 /usr/local/sbin/nvme-netns-setup.sh
chmod 600 /etc/nvme/nvme-secret.env
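
The generated host NQN in /etc/nvme/hostnqn is the value you will paste into the TrueNAS Allowed Hosts dialog later, so keep it at hand:

cat /etc/nvme/hostnqn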

Adjust the network configuration (IP addresses, VLAN ID, etc.) in /etc/nvme/nvme-secret.env according to your network setup. The NAS NQN and security keys should be filled in after completing the TrueNAS configuration.

Client Host Network Setup

Stable Network Interface Name

Modern Linux systems use interface names based on firmware or hardware properties, which can differ from traditional ethX naming. To maintain consistency for configuration, explicitly assign a persistent name to the physical interface (e.g., eth0).
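
The current name and MAC address of the interface can be listed with, for example:

ip -brief link show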

Create a udev rule file with the MAC address of the interface:

sudo vim /etc/udev/rules.d/70-persistent-net.rules

Add a line like the following, using the interface's MAC address and the desired name:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:dd:ee:00", NAME="eth0"

Then reload the udev rules and reboot:

sudo udevadm control --reload-rules
sudo reboot

VLAN Interface Creation

First, we need to create a VLAN interface.
In my case NetworkManager manages the networking, so I used it to create the new interface:

nmcli connection add type vlan con-name "vlan${NVME_VLAN_ID}" dev eth0 id ${NVME_VLAN_ID} ip4 ${NVME_CLIENT_IP}/24

Otherwise, please consult your distribution's documentation.
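
For reference, a minimal non-persistent sketch with plain iproute2 (assuming eth0 and VLAN ID 50 from the example env file; an interface created this way disappears on reboot, and the client IP itself is assigned later by nvme-netns-setup.sh inside the namespace):

sudo ip link add link eth0 name eth0.50 type vlan id 50
sudo ip link set eth0.50 up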

TrueNAS NVMe-oF Setup

  1. Navigate to System → Network to configure VLAN and assign a static IP.
  2. Create a Zvol with the desired size and properties.
  3. Navigate to Sharing → NVMe-oF Subsystems → Add in the TrueNAS UI.
  4. In the wizard:
    • Provide a name for the NVMe share.
    • Under Namespace, add the previously created Zvol.
    • In Access Settings, untick “Allow any host to connect”.
    • To restrict access to the client host:
      1. Click Allowed Hosts → Add → Create New.
      2. Enter the host NQN (from /etc/nvme/hostnqn).
      3. Tick Require Host Authentication.
      4. Generate both keys and save them to the client /etc/nvme/nvme-secret.env.
    • To limit access to the dedicated VLAN, add the VLAN port.
  5. Save the configuration.
  6. If the NVMe-oF service is not running, enable it when prompted.
  7. Note the NQN generated for the subsystem in the TrueNAS view and store it as NVME_NAS_NQN in /etc/nvme/nvme-secret.env.

[Optional] Check the setup halfway

See that the network interface is inside the dedicated namespace (source /etc/nvme/nvme-secret.env first so the variables are set):

sudo ip netns exec $NVME_NS ip a

Discover the announced NVMe subsystems:

sudo ip netns exec $NVME_NS nvme discover -t tcp -a $NVME_NAS_IP -s $NVME_NAS_PORT

Enable NVMe Connection Service

Start the service manually to test the connection and verify the NVMe device:

sudo systemctl start nvme-connect
sudo systemctl status nvme-connect
nvme list

Enable the service to start automatically at boot:

sudo systemctl enable nvme-connect
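
If the connection does not come up, the service log and kernel messages usually show why (wrong key, unreachable IP, host NQN not in the allowed list):

sudo journalctl -u nvme-connect --no-pager -n 20
sudo dmesg | grep -i nvme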

Filesystem Creation and Mounting

A newly connected NVMe device can be used like a local device in various ways. Here it is used for a single ext4 filesystem mounted via an fstab entry.

  1. Identify the newly connected NVMe device using nvme list.
  2. Create an ext4 filesystem:
sudo mkfs.ext4 -E lazy_itable_init=0 /dev/{nvme dev name}
  3. Disable journaling, relying on ZFS on the NAS:
sudo tune2fs -O ^has_journal /dev/{nvme dev name}
  4. Create a mount point for the desired user:
mkdir -p ~/nvme_mount
  5. Retrieve the filesystem UUID:
blkid | grep {nvme dev name}
  6. Add an entry to /etc/fstab using the UUID (sudo vim /etc/fstab):
UUID={new fs uuid} /home/{user}/nvme_mount ext4 defaults,nofail,_netdev,user,auto,noatime,nodiratime,barrier=0 0 2
  7. Mount and check as the user:
mount ~/nvme_mount

Finally, try a reboot; the NVMe device should reconnect and the filesystem should be mounted automatically.
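
A quick way to confirm everything came back after the reboot (use nvme-basic instead of nvme-connect if you followed the basic setup):

systemctl status nvme-connect --no-pager
sudo nvme list
findmnt ~/nvme_mount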


Quick, but likely stupid questions.

Is NVMe over TCP just another share protocol like iSCSI, NFS or SMB?
Or specific only to NVMe devices without ZFS on TrueNAS?

(Well, more like iSCSI because NVMe over TCP seems to be for block devices…)


Your github repository is unavailable. Is it perhaps private?

My bad! Changed to public.

Yes, it's more similar to iSCSI, since it exposes a block device.
It should be faster and provide lower latency, but I haven't used iSCSI before to compare.


Thanks.

Next question:

Is NVMe over TCP able to multi-path, similar to iSCSI?

Bit confused, are we discussing NVMe-oF or NVMe-TCP?

Before we jump into it, one should also consider the minimum network bandwidth that is necessary for using this over TCP (say, with a Gen 3 x4 M.2 drive, as they are very common nowadays). Isn't it?

Fixed the title.
As I understand it, NVMe-TCP is part of the NVMe-oF family of standards, and the Community Edition of TrueNAS currently provides only the TCP transport.

Yes. Most documentation suggests using high-speed networking (25 Gbps or higher) and a fast storage pool to see real benefits compared to iSCSI.
In my case, it was more about curiosity and finding a way to move a lot of VM images off my desktop storage. It also provides an easy method for daily VM backups using ZFS snapshot and replication tasks for the zvol.
And it seems to work fine even on a measly 2.5 Gb wire. Still, I would not even try to do it over wireless.

Thank you for the quick clarifications and update.

It would be interesting to know whether you are seeing any significant improvement over the 2.5 GbE LAN using this setup. Have you tried any speed tests on this particular disk over the network?

It's far out of my scope (a small homelab playground), but Google suggests it should be possible. And I'm not sure whether multipath is supported by the current TrueNAS implementation.
From the documentation, I suspect it might be an Enterprise edition feature.

I haven’t used iSCSI before, but now I’m curious to compare it. I’ll try setting it up as well and run some tests. Could you suggest a good way to measure performance?

I've only tried the following and compared it with Samba:

fio --name=randread --filename=testfile --ioengine=libaio --direct=1 --bs=4k --rw=randread --numjobs=1 --size=1G --runtime=10 --time_based --group_reporting

nvme over tcp

randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=10.9MiB/s][r=2795 IOPS][eta 00m:00s]
randread: (groupid=0, jobs=1): err= 0: pid=34846: Thu Oct 16 11:48:18 2025
  read: IOPS=2764, BW=10.8MiB/s (11.3MB/s)(108MiB/10001msec)
    slat (usec): min=7, max=222, avg= 9.59, stdev= 2.46
    clat (usec): min=108, max=4034, avg=351.31, stdev=152.20
     lat (usec): min=119, max=4044, avg=360.90, stdev=152.24
    clat percentiles (usec):
     |  1.00th=[  124],  5.00th=[  128], 10.00th=[  178], 20.00th=[  194],
     | 30.00th=[  322], 40.00th=[  355], 50.00th=[  371], 60.00th=[  388],
     | 70.00th=[  400], 80.00th=[  412], 90.00th=[  537], 95.00th=[  545],
     | 99.00th=[  594], 99.50th=[  611], 99.90th=[ 2180], 99.95th=[ 2671],
     | 99.99th=[ 3687]
   bw (  KiB/s): min=10616, max=11576, per=100.00%, avg=11093.05, stdev=206.60, samples=19
   iops        : min= 2654, max= 2894, avg=2773.26, stdev=51.60, samples=19
  lat (usec)   : 250=25.30%, 500=61.83%, 750=12.56%, 1000=0.07%
  lat (msec)   : 2=0.12%, 4=0.12%, 10=0.01%
  cpu          : usr=0.44%, sys=3.45%, ctx=27745, majf=0, minf=11
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=27650,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=10.8MiB/s (11.3MB/s), 10.8MiB/s-10.8MiB/s (11.3MB/s-11.3MB/s), io=108MiB (113MB), run=10001-10001msec

Disk stats (read/write):
  nvme2n1: ios=27415/0, sectors=219320/0, merge=0/0, ticks=9519/0, in_queue=9519, util=95.39%

smb

randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.36
Starting 1 process
randread: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [r(1)][100.0%][r=6464KiB/s][r=1616 IOPS][eta 00m:00s]
randread: (groupid=0, jobs=1): err= 0: pid=38128: Thu Oct 16 11:53:34 2025
  read: IOPS=1646, BW=6585KiB/s (6743kB/s)(64.3MiB/10001msec)
    slat (nsec): min=6720, max=85409, avg=11352.82, stdev=2470.40
    clat (usec): min=141, max=33597, avg=595.42, stdev=403.79
     lat (usec): min=152, max=33608, avg=606.77, stdev=403.76
    clat percentiles (usec):
     |  1.00th=[  182],  5.00th=[  330], 10.00th=[  367], 20.00th=[  529],
     | 30.00th=[  570], 40.00th=[  578], 50.00th=[  578], 60.00th=[  586],
     | 70.00th=[  586], 80.00th=[  603], 90.00th=[  668], 95.00th=[ 1106],
     | 99.00th=[ 1369], 99.50th=[ 1500], 99.90th=[ 2966], 99.95th=[ 5276],
     | 99.99th=[20579]
   bw (  KiB/s): min= 4944, max= 7368, per=99.83%, avg=6574.74, stdev=691.64, samples=19
   iops        : min= 1236, max= 1842, avg=1643.68, stdev=172.91, samples=19
  lat (usec)   : 250=1.34%, 500=16.70%, 750=75.04%, 1000=0.90%
  lat (msec)   : 2=5.75%, 4=0.22%, 10=0.04%, 20=0.01%, 50=0.01%
  cpu          : usr=0.18%, sys=2.77%, ctx=16517, majf=0, minf=10
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=16464,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
   READ: bw=6585KiB/s (6743kB/s), 6585KiB/s-6585KiB/s (6743kB/s-6743kB/s), io=64.3MiB (67.4MB), run=10001-10001msec

One quick way to test would be to map it with iSCSI and then use CrystalDiskMark to check the speed (on Windows).

Well, since I only use Windows for Steam these days, it wasn’t a quick task for me :).
But now I’ve got iSCSI running on both Linux and Windows. Unfortunately, NVMe over TCP seems to be limited to Windows Server or requires some non-trivial third-party software, so there’s no proper way to compare it on Windows for me currently. After spending a few hours, I decided to pause until upgrading to a faster network (probably 10G next year) and setting up at least a mirrored pair of fast NVMe drives as the test base.

I'd hide the test results under a spoiler, since I'm not confident they accurately reflect differences in the technologies. Instead, I'd summarize my observations, specific to my setup and a 2.5 Gbps network:

  • On Linux, both NVMe/TCP and iSCSI, as well as Windows iSCSI, can saturate the link under a suitable load.
  • On Linux, NVMe/TCP performs visibly better than iSCSI on writes.
  • On Linux, iSCSI shows slightly worse write performance compared to its reads, and similarly, Windows iSCSI writes are weaker than reads.
  • Increasing I/O depth and/or concurrent jobs generally improves throughput.
  • While CrystalDiskMark reported great numbers, I couldn’t reproduce those results when copying a 50 GB VMDK file to the iSCSI-mounted volume.
The results table is below, but take it with a big grain of salt.

READ

Test Type      NVMe/TCP          iSCSI             iSCSI W11 CryMrk
               MB/s     IOPS     MB/s     IOPS     MB/s     IOPS
SEQ1M Q8T1     294      280      294      280      294      281
SEQ1M Q1T1     216      205      233      222      268      256
RND4K Q32T1    275      67k      233      56.9k    262      64k
RND4K Q1T1     9.8      2338     11.8     3033     26       6290
RND4K Q8T4     271      66k      267      64k      260      64k

WRITE

Test Type      NVMe/TCP          iSCSI             iSCSI W11 CryMrk
               MB/s     IOPS     MB/s     IOPS     MB/s     IOPS
SEQ1M Q8T1     290      276      188      179      292      279
SEQ1M Q1T1     216      205      139      143      173      165
RND4K Q32T1    27       6606     13       3183     47       11.5k
RND4K Q1T1     11       2748     3        689      4.7      1146
RND4K Q8T4     91       22k      20       5252     52       12.7k
… 4k zvl rs    246      60k      200      49k
fio commands for reference
# SEQ1M Q8T1
fio  --name=seqread --rw=read  --filename=$NVMEoF --bs=1024k --numjobs=1 --iodepth=8 --ioengine=libaio --direct=1 --size=32G --group_reporting
fio  --name=seqread --rw=read  --filename=$ISCSI  --bs=1024k --numjobs=1 --iodepth=8 --ioengine=libaio --direct=1 --size=32G --group_reporting

# SEQ1M Q1T1
fio  --name=seqread --rw=read  --filename=$NVMEoF --bs=1024k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=32G --group_reporting
fio  --name=seqread --rw=read  --filename=$ISCSI  --bs=1024k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=32G --group_reporting

# RND4K Q32T1
fio  --name=randread --rw=randread  --filename=$NVMEoF --bs=4k --numjobs=1 --iodepth=32 --ioengine=libaio --direct=1 --size=32G --group_reporting
fio  --name=randread --rw=randread  --filename=$ISCSI  --bs=4k --numjobs=1 --iodepth=32 --ioengine=libaio --direct=1 --size=32G --group_reporting

# RND4K Q1T1 (just 2G)
fio  --name=randread --rw=randread  --filename=$NVMEoF --bs=4k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=2G --group_reporting
fio  --name=randread --rw=randread  --filename=$ISCSI  --bs=4k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=2G --group_reporting

fio  --name=randread --rw=randread  --filename=$NVMEoF4k --bs=4k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=2G --group_reporting

# RND4K Q8T4
fio  --name=randread --rw=randread  --filename=$NVMEoF --bs=4k --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 --size=8G --group_reporting
fio  --name=randread --rw=randread  --filename=$ISCSI  --bs=4k --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 --size=8G --group_reporting

## write
# SEQ1M Q8T1
fio  --name=seqwrite --rw=write  --filename=$NVMEoF --bs=1024k --numjobs=1 --iodepth=8 --ioengine=libaio --direct=1 --size=32G --group_reporting
fio  --name=seqwrite --rw=write  --filename=$ISCSI  --bs=1024k --numjobs=1 --iodepth=8 --ioengine=libaio --direct=1 --size=32G --group_reporting

# SEQ1M Q1T1
fio  --name=seqwrite --rw=write  --filename=$NVMEoF --bs=1024k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=32G --group_reporting
fio  --name=seqwrite --rw=write  --filename=$ISCSI  --bs=1024k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=32G --group_reporting

# RND4K Q32T1 (just 8G)
fio  --name=randwrite --rw=randwrite  --filename=$NVMEoF --bs=4k --numjobs=1 --iodepth=32 --ioengine=libaio --direct=1 --size=8G --group_reporting
fio  --name=randwrite --rw=randwrite  --filename=$ISCSI  --bs=4k --numjobs=1 --iodepth=32 --ioengine=libaio --direct=1 --size=8G --group_reporting
fio  --name=randwrite --rw=randwrite  --filename=$NVMEoF4k --bs=4k --numjobs=1 --iodepth=32 --ioengine=libaio --direct=1 --size=8G --group_reporting 

# RND4K Q1T1 (just 2G)
fio  --name=randwrite --rw=randwrite  --filename=$NVMEoF --bs=4k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=2G --group_reporting
fio  --name=randwrite --rw=randwrite  --filename=$ISCSI  --bs=4k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=2G --group_reporting

fio  --name=randwrite --rw=randwrite  --filename=$NVMEoF4k --bs=4k --numjobs=1 --iodepth=1 --ioengine=libaio --direct=1 --size=2G --group_reporting

# RND4K Q8T4 (just 2G)
fio  --name=randwrite --rw=randwrite  --filename=$NVMEoF --bs=4k --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 --size=2G --group_reporting
fio  --name=randwrite --rw=randwrite  --filename=$ISCSI  --bs=4k --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 --size=2G --group_reporting
fio  --name=randwrite --rw=randwrite  --filename=$NVMEoF4k --bs=4k --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 --size=2G --group_reporting
fio  --name=randwrite --rw=randwrite  --filename=$ISCSI4k  --bs=4k --numjobs=4 --iodepth=8 --ioengine=libaio --direct=1 --size=2G --group_reporting

Thank you for the effort and for summarizing the results. Even though our initial expectations were not met, the learnings from this were definitely worth it. I appreciate you taking the time for that, and doing it for all of us.

Just a suggestion from my experience: if you decide to go 10 GbE in the future, stick with the fiber version if possible and skip copper Ethernet (especially any USB4/Thunderbolt adapters) to avoid heat-related issues.

… and power consumption, with EU electricity costs :frowning:
Realtek's new 8127 looks promising at 2 W for RJ45, but there are no SFP+ options yet, and it is currently only available through AliExpress.