TrueNAS throughput problem

TrueNAS 25.04.2.6

10 × 7.28 TB SATA 7200 RPM disks

and 2 SSD cache disks

RAIDZ1

From setup until about two weeks ago, I was getting around 800–900 MB/s read/write.

Now I'm down to 200–300 MB/s max.

I can't seem to find the reason. Any ideas on how to debug?

Thank you

How do you access the storage? How do you measure the speed?

I use it via iSCSI from Windows.

Tested from Windows, and also tested inside the TrueNAS shell with this:

fio --name=seqread --rw=write --direct=0 --iodepth=32 --bs=128k --numjobs=1 --size=16G

Not sure if it's the correct way to check, but this is what I found.

So, it's block storage. I have very limited experience with zvols… But experts say that block storage on RAIDZ is a huge no-no[1].

AIUI, fio benchmarks file storage. I'm not an expert, but with --iodepth=32 --bs=128k it looks more like a random workload. IMO, to test sequential throughput you should use something like --iodepth=2 --bs=1M.
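For reference, a sequential-throughput job along those lines could look something like this (just a sketch; the job name and directory path are placeholders you would adapt, and it assumes fio with the libaio engine on Linux). Note direct=1: the --direct=0 in your test means buffered I/O, so ARC/RAM caching can skew the numbers.

```ini
; sketch of a sequential-throughput fio job; names and paths are placeholders
[seqwrite]
rw=write
bs=1M
iodepth=2
numjobs=1
size=16G
; libaio is the usual async engine on Linux
ioengine=libaio
; bypass caching so ARC/RAM doesn't inflate the result
direct=1
; point this at a dataset on the pool under test
directory=/mnt/tank/fio-test
```

Saved as e.g. seqwrite.fio and run with `fio seqwrite.fio`; change rw=write to rw=read to test the read side.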

However, even if fio showed you the desired values, this workload is very different from using zvols. For one, zvols have a 16K volblocksize by default. Is there any reason why you decided to use iSCSI instead of SMB on Windows?


  1. Resource - The path to success for block storage | TrueNAS Community ↩︎


How are these drives physically connected and interfacing with your server?

Also, what NIC / switch ports are these connected to on both sides? Are they definitely 1G+ ports, etc.?

If, for example, you're using a SATA splitter (port multiplier) for this, then I wouldn't be surprised by this sort of speed: most commonly they take one or two SATA connections and literally split them across all these drives, while each drive expects full bandwidth.

Also, can you confirm what you get with SMB sustained tests? E.g. copy a large file (10 GB+) to the server, then a lot of small files (10 MB or less), and see if the speeds drop hard or differ from the block-storage speeds.

Since performance has dropped over time, my guess is that free-space fragmentation is killing it.
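One quick way to check this theory (a suggestion, run from the TrueNAS shell; output columns vary a bit by version):

```shell
# FRAG is free-space fragmentation, not file fragmentation; a high FRAG
# value together with a fairly full pool (high CAP) often shows up as
# exactly this kind of gradual write-throughput decline
zpool list -v
```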

In general I would advise against RAIDZ for block storage, especially 10 disks in a RAIDZ1.

  • RAIDZ1 is pretty dangerous, especially with such huge, slow disks that will take a long time to resilver
  • if you use the default 16K volblocksize, you only get ~66% storage efficiency instead of the 90% you'd expect
  • if you use a non-default 64K, you do get at least ~88.9%, but read/write amplification will kill performance
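Those percentages fall out of how RAIDZ allocates a zvol block: parity sectors are added per stripe, then the allocation is padded up to a multiple of parity + 1. A back-of-envelope sketch (a simplified model that assumes ashift=12, i.e. 4 KiB sectors, and ignores compression and metadata; the helper name is made up):

```shell
# Rough RAIDZ allocation math for one zvol block.
# usage: raidz_alloc <vdev_width> <parity_disks> <volblocksize_bytes>
raidz_alloc() {
    w=$1; p=$2; vbs=$3
    d=$(( vbs / 4096 ))                      # data sectors in the block
    stripe=$(( w - p ))                      # data sectors per full stripe
    ps=$(( (d + stripe - 1) / stripe * p ))  # parity sectors (ceiling)
    t=$(( d + ps ))
    pad=$(( ((p + 1) - t % (p + 1)) % (p + 1) ))  # pad to multiple of p+1
    echo "$d data sectors stored in $(( t + pad )) allocated sectors"
}

raidz_alloc 10 1 16384   # 16K volblocksize: 4 in 6, ~66% efficient
raidz_alloc 10 1 65536   # 64K volblocksize: 16 in 18, ~89% efficient
```

So on a 10-wide RAIDZ1 the default 16K volblocksize wastes a third of the raw space, which is where the 66% figure comes from.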

If you are interested in the nitty gritty details: opinions_about_tech_stuff/ZFS/The problem with RAIDZ.md at main · jameskimmel/opinions_about_tech_stuff · GitHub

But I think you should first describe your use case to us and what you are trying to achieve.

General advice: don't use block storage for files. If you do need block storage, use it on mirrors.


It's because I use a backup system, and its preference is a "drive" rather than a network share.

The strangest thing is that it suddenly changed; it was working just fine before.

Thank you

What do you recommend for my use case? Getting as much read and write performance as possible, even at a large capacity cost.

The network ports are 10 Gbps SFP.

Also, I did this test and copied a large file into the volume, and it seems to work a lot faster. It's not stable and fluctuates a little, but all in all the throughput is high: a 10 GB file copies in 5–10 seconds.

I don't know your use case, nor what you are trying to achieve. You only said that it is a "backup system": not what software, how the storage is mounted, what files you will write, at what sizes, or whether only write speeds are important, or maybe read speeds matter even more because you need fast restores…

If possible, I would attach the storage via NFS instead of iSCSI.

Welp, you can just map a network drive (your SMB share). I think it would work OK in most cases, unless your backup software really requires block storage.

AFAIK, SMB is a first-class citizen on Windows (compared to NFS).