25.10 drive performance has dropped significantly

I have two locations where I have upgraded TN 25.04.2.3 to 25.10. In both instances the drive performance has dropped through the floor, in some tests by as much as 98%. I had this same issue during the beta and reported it then.

One machine is bare metal and one is a virtual machine (XCP-ng); the virtual one uses a passed-through LSI controller. Both have worked great since version 22xxxx.

Shown below are the test results from fio (Flexible I/O Tester).

The drives below are all SATA hard drives, but the same holds true with SSDs.

Any ideas?

Thanks!

Computer #1
Processor Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz (bare metal)
RAM 62.8 GB
VDEV type RAIDZ2
SLOG SSD
TESTS
Random write test
test: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before WRITE: bw=1321MiB/s (1385MB/s), 1321MiB/s-1321MiB/s (1385MB/s-1385MB/s), io=4096MiB (4295MB), run=3101-3101msec
After WRITE: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=1728MiB (1812MB), run=22698-22698msec
Decrease -94.0%

Random Read test
test: (g=0): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before READ: bw=1531MiB/s (1606MB/s), 1531MiB/s-1531MiB/s (1606MB/s-1606MB/s), io=4096MiB (4295MB), run=2675-2675msec
After READ: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=573MiB (601MB), run=11466-11466msec
Decrease -97.2%

Mixed Random Workload test
test: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before READ: bw=1170MiB/s (1227MB/s), 1170MiB/s-1170MiB/s (1227MB/s-1227MB/s), io=1992MiB (2089MB), run=1703-1703msec
After READ: bw=50.0MiB/s (52.4MB/s), 50.0MiB/s-50.0MiB/s (52.4MB/s-52.4MB/s), io=573MiB (601MB), run=11466-11466msec
Decrease -95.7%

Before WRITE: bw=1235MiB/s (1295MB/s), 1235MiB/s-1235MiB/s (1295MB/s-1295MB/s), io=2104MiB (2206MB), run=1703-1703msec
After WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=616MiB (646MB), run=11466-11466msec
Decrease -95.7%

Sequential write test
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before WRITE: bw=1547MiB/s (1622MB/s), 1547MiB/s-1547MiB/s (1622MB/s-1622MB/s), io=4096MiB (4295MB), run=2648-2648msec
After WRITE: bw=98.1MiB/s (103MB/s), 98.1MiB/s-98.1MiB/s (103MB/s-103MB/s), io=290MiB (304MB), run=2955-2955msec
Decrease -93.7%

Sequential Read test
test: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before READ: bw=1610MiB/s (1688MB/s), 1610MiB/s-1610MiB/s (1688MB/s-1688MB/s), io=4096MiB (4295MB), run=2544-2544msec
After READ: bw=244MiB/s (255MB/s), 244MiB/s-244MiB/s (255MB/s-255MB/s), io=4096MiB (4295MB), run=16811-16811msec
Decrease -84.8%

Computer #2
Processor Intel(R) Xeon(R) CPU E5-2450 0 @ 2.10GHz (Virtual)
RAM 16 GB
VDEV type RAIDZ1
SLOG NVME

TESTS
Random write test
test: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before WRITE: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=4096MiB (4295MB), run=17884-17884msec
After WRITE: bw=189MiB/s (198MB/s), 189MiB/s-189MiB/s (198MB/s-198MB/s), io=4096MiB (4295MB), run=21641-21641msec
Decrease -17.5%

Random Read test
test: (g=0): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before READ: bw=1341MiB/s (1406MB/s), 1341MiB/s-1341MiB/s (1406MB/s-1406MB/s), io=4096MiB (4295MB), run=3055-3055msec
After READ: bw=64.2MiB/s (67.3MB/s), 64.2MiB/s-64.2MiB/s (67.3MB/s-67.3MB/s), io=2403MiB (2520MB), run=37417-37417msec
Decrease -95.2%

Mixed Random Workload test
test: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before READ: bw=440MiB/s (461MB/s), 440MiB/s-440MiB/s (461MB/s-461MB/s), io=1992MiB (2089MB), run=4532-4532msec
After READ: bw=70.5MiB/s (74.0MB/s), 70.5MiB/s-70.5MiB/s (74.0MB/s-74.0MB/s), io=1992MiB (2089MB), run=28242-28242msec
Decrease -84.00%

Before WRITE: bw=464MiB/s (487MB/s), 464MiB/s-464MiB/s (487MB/s-487MB/s), io=2104MiB (2206MB), run=4532-4532msec
After WRITE: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=2104MiB (2206MB), run=28242-28242msec
Decrease -83.90%

Sequential write test
test: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before WRITE: bw=243MiB/s (254MB/s), 243MiB/s-243MiB/s (254MB/s-254MB/s), io=4096MiB (4295MB), run=16889-16889msec
After WRITE: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=4096MiB (4295MB), run=22086-22086msec
Decrease -23.9%

Sequential Read test
test: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=32
Before READ: bw=1375MiB/s (1442MB/s), 1375MiB/s-1375MiB/s (1442MB/s-1442MB/s), io=4096MiB (4295MB), run=2979-2979msec
After READ: bw=405MiB/s (425MB/s), 405MiB/s-405MiB/s (425MB/s-425MB/s), io=4096MiB (4295MB), run=10109-10109msec
Decrease -70.5%

I’ve now checked six different installations; the reduced speed shows up in all of them.

For the heck of it, and wondering if the issue was with upgrading from one version to another, last night I set up two brand-new installs: one with 25.04.2.6 and one with 25.10. Both have identical specs and both were installed with the defaults.

25.10 data storage speeds are massively lower than 25.04.x.

I posted a bug report, but they closed it saying it was something to do with “Disk utilization not being reported properly” and “Excessive reported disk throughput” (neither of which seems to make sense to me).

Hey @ArchatParks I found your earlier thread (TN 25.10 terrible drive performance) and I’m going to poke at this.

It’s definitely the exception, though; we’ve been seeing huge performance numbers out of 25.10.

Do you have the raw fio command handy? I can infer it from the results but just want to be sure I’m comparing like-to-like here. I’m grabbing the debug as well.

You said these are all SATA drives in this system, correct?

Can you get to a root prompt (sudo -s will do) and run

for disk in /dev/sd?; do hdparm -W $disk; done

I’m suspecting your PERC decided to do a silly and shut off write-caching. I just found out that my LSI did the same to me on a few of my Intel DC drives between reboots here. :thinking:

0000:03:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS-3 3108 [Invader] (rev 02)
	DeviceName: Integrated RAID
	Subsystem: Dell PERC H730P Mini
	Kernel driver in use: megaraid_sas
	Kernel modules: megaraid_sas
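
If any of those come back as write-caching = 0 (off), the drive’s own cache can usually be flipped back on per disk with hdparm. A minimal sketch, assuming the disks show up as plain /dev/sdX devices (HBA/passthrough mode, not hidden behind a RAID virtual disk):

for disk in /dev/sd?; do hdparm -W1 "$disk"; done

Note this only toggles the drive’s volatile write cache; it doesn’t touch the controller’s own cache policy.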

This is how I ran the tests:

SEQIODEPTH=32
SEQIOSIZE=1m
let TARGETRNDIODEPTH=192
RNDIOSIZE=4k
DOSEQUENTIALWRITE=true
DOSEQUENTIALREAD=true
DORANDOMREAD=true
DORANDOMWRITE=true
DOONLYLATENCY=false

Running Random write test for IOPS
fio --randrepeat=1 --ioengine=libaio --iodepth=$SEQIODEPTH --direct=1 --name=test --filename=$DEVICE/test --bs=$SEQIOSIZE --size=$SIZE --readwrite=randwrite --ramp_time=30 --group_reporting --append-terse --output="$output"

Running Random Read test for IOPS
fio --randrepeat=1 --ioengine=libaio --iodepth=$SEQIODEPTH --direct=1 --name=test --filename=$DEVICE/test --bs=$SEQIOSIZE --size=$SIZE --readwrite=randread --ramp_time=30 --group_reporting --append-terse --output="$output"

Running Mixed Random Workload test
fio --randrepeat=1 --ioengine=libaio --iodepth=$SEQIODEPTH --direct=1 --name=test --filename=$DEVICE/test --bs=$SEQIOSIZE --size=$SIZE --readwrite=rw --ramp_time=30 --group_reporting --append-terse --output="$output"

Running Sequential Read test for throughput
fio --randrepeat=1 --ioengine=libaio --iodepth=$SEQIODEPTH --direct=1 --name=test --filename=$DEVICE/test --bs=$SEQIOSIZE --size=$SIZE --readwrite=read --ramp_time=30 --group_reporting --append-terse --output="$output"
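
For reference, a rough sketch of how those individual commands can be wrapped into a single loop; the $DEVICE, $SIZE, and log file names below are placeholders, not the exact values from my script:

#!/bin/bash
# Rough sketch: run the fio tests in sequence against a test dataset.
# DEVICE, SIZE, and the log names are placeholders; point them at the pool under test.
DEVICE=/mnt/tank/fio-test    # dataset/directory that will hold the test file (placeholder)
SIZE=4G                      # per-test file size (placeholder)
SEQIODEPTH=32
SEQIOSIZE=1m

for mode in randwrite randread rw write read; do
    output="fio-${mode}.log"
    fio --randrepeat=1 --ioengine=libaio --iodepth=$SEQIODEPTH --direct=1 \
        --name=test --filename=$DEVICE/test --bs=$SEQIOSIZE --size=$SIZE \
        --readwrite=$mode --ramp_time=30 --group_reporting \
        --append-terse --output="$output"
done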

I’ve done it on six different machines; all but one were SATA hard drives, and one was SSD-based.

This is from machine #1
/dev/sda:
write-caching = 1 (on)

/dev/sdb:
write-caching = 1 (on)

/dev/sdc:
write-caching = 1 (on)

/dev/sdd:
write-caching = 1 (on)

/dev/sde:
write-caching = 1 (on)

/dev/sdf:
write-caching = 1 (on)

/dev/sdg:
write-caching = 1 (on)

/dev/sdh:
write-caching = 1 (on)

/dev/sdi:
write-caching = 1 (on)

/dev/sdj:
write-caching = 1 (on)

/dev/sdk:
write-caching = 1 (on)

This is from machine #2

/dev/sda:
write-caching = 1 (on)

/dev/sdb:
write-caching = 1 (on)

/dev/sdc:
write-caching = 1 (on)

/dev/sdd:
write-caching = 1 (on)