TrueNAS hardware:
Platform: Inspur NF5466M5
CPU: Gold 6230
Memory: 96GB ECC
Hard drives: MG06SCA10TA x12 in RAIDZ3
Only one data vdev, no other vdevs.
Client:
OS: Windows Server 2022
Platform: R730XD
CPU: E5-2680V3
Memory: 32GB ECC
Hard drive: Netlist N1951 7.68TB
Both machines have built-in 10G network cards, and iperf3 testing between them runs at full line speed. I previously used two S3610 SSDs in a striped pool for a speed test; with a dataset created from the default SMB preset, TrueNAS could saturate the 10G network. The 12-disk RAIDZ3 pool, however, only reaches about 4Gbps. The two servers are connected through an 8-port switch with 10G optical ports.
During SMB reads and writes each hard disk sits at around 50MB/s, but during a scrub each disk reaches 200MB/s, so the disks, CPU, and network are not the bottleneck. Is this a RAIDZ3 configuration problem or something else? Is there any way to increase the speed without adding other vdevs?
The drop in speed after a few seconds is expected.
With async writes, ZFS first stores the data in RAM and then flushes it to the pool on a schedule, by default every 5 seconds. RAM is quick, which is likely where your initial speed comes from, but then reality sets in as the write speed settles at what your pool can sustain.
Since reads don't go through the same temporary buffering in RAM, they don't show the same swings in transfer speed. You may see higher read speeds if you read the same data over and over again, so that it ends up cached in RAM (the ARC) or in the L2ARC if you have such a device set up properly.
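If you want to see those knobs, the flush interval and the dirty-data ceiling are module parameters (the paths below assume TrueNAS SCALE / Linux OpenZFS; on CORE the equivalents are vfs.zfs sysctls):
cat /sys/module/zfs/parameters/zfs_txg_timeout     # seconds between transaction-group flushes (default 5)
cat /sys/module/zfs/parameters/zfs_dirty_data_max  # ceiling on buffered (dirty) write data, in bytes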
What storage controller is in your machine? There are several RAID-based options listed in the datasheet for that system and only a single proper HBA.
What size of files are you using during the writes? Because of your wide RAIDZ geometry you will need to use fairly large files to make optimal use of the disks.
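Related to that geometry: with 12 disks in RAIDZ3 only 9 carry data per stripe, so a default 128K record is split into roughly 14K per disk. For large sequential files it can be worth checking, and possibly raising, the recordsize on the SMB dataset (tank/backup below is just a placeholder name):
zfs get recordsize tank/backup        # 128K by default
zfs set recordsize=1M tank/backup     # only affects files written after the change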
Depending on your storage controller, it might be disabling drive write cache when it sees SATA drives. From a shell on the TrueNAS machine, enter the following line:
for disk in /dev/sd?; do hdparm -W $disk; done
Each drive should respond with "write-caching = 1 (on)".
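If the MG06 drives show up as SAS rather than SATA, hdparm may not report anything useful; a rough alternative (assuming sdparm is installed) is to read the WCE bit directly:
for disk in /dev/sd?; do sdparm --get=WCE $disk; done   # WCE 1 means the drive's write cache is on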
No, this is not expected. The slowdown happens too quickly (within seconds), and RAM is nowhere near full at that point. It takes about a minute after the speed drops to 4Gbps for the ZFS write cache to fill up RAM.
I use an OEM LSI 9362-8i to connect all the hard drives, but I don't use hardware RAID; all the disks are in JBOD mode.
The file size is basically above 20GB.
I think the slow write speed has nothing to do with whether the cache is enabled, because when I bought these second-hand hard drives I ran a full-disk write test, and the sustained write speed beyond the cache was still close to 200MB/s.
This doesn't seem to help much. Although my disks are second-hand, I ran a full-disk write test before putting them into use. There is no fragmentation problem, and each disk manages well over 100MB/s. In theory a 12-disk RAIDZ3 should do more than 1GB/s, which should be enough to saturate a 10Gbps network.
How did you conduct your testing? The results that I referred to in post #2, from Calomel, are about right for what you are reporting on network speeds.
@HoneyBadger, do you have any feedback on the Calomel testing I linked? Is the performance about right or is that data outdated?
@rewq43211234, is your LSI 9362-8i in IT mode or JBOD? Can you run sas3flash and check? You can copy and paste the results back here using Preformatted Text mode for easier reading. It looks like </> on the toolbar or (Ctrl+E).
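For reference, the adapter listing would look something like this; note that sas3flash only detects SAS3 HBAs running IT/IR firmware, so a MegaRAID 9362-8i may need storcli instead (the binary name and controller index below are assumptions):
sas3flash -listall          # lists SAS3 HBAs and their firmware type, if any are detected
storcli64 /c0 show          # MegaRAID-side view of the same card, if storcli is installed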
I tested using Hard Disk Sentinel. Post #2 doesn't quite match my situation; the read speeds differ too much. The 9362-8i does not have IT-mode firmware, at least I haven't found any; it is currently in JBOD mode.
According to the article, enabling or disabling LZ4 compression and the amount of memory available can impact speed to some extent. However, I tested both enabling and disabling LZ4, and it had no noticeable effect on read and write speeds. I also tried increasing the RAM to 768GB (though I believe it’s completely unnecessary), but there was no change in performance.
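For completeness, compression is a per-dataset property, and the compressratio property shows how much LZ4 actually achieves on the stored data (tank/backup is a placeholder dataset name):
zfs set compression=lz4 tank/backup
zfs get compression,compressratio tank/backup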
Since I have very high requirements for data security, I don't plan to improve speed by adding ARC, as I don't have a power-loss protection mechanism in place. I still believe it is unexpected for a 12-disk RAIDZ3 to fall short of 10Gbps.
Here's another thread I found, although it's a bit old and FreeBSD / TrueNAS related; probably just look at the first post. Is there a write cache enabled on your LSI?
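One way to check that (assuming storcli is available on the TrueNAS machine; the binary is often storcli64 and the controller index may not be /c0) is to dump the controller configuration and look at the cache-related lines:
storcli64 /c0 show all | grep -i cache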
I'm waiting to see if anyone else posts with experience on RAIDZ3. That is usually more enterprise-level.
I used Hard Disk Sentinel’s write test on Windows Server 2022 to test all the disks simultaneously. After confirming that there were no issues, I installed TrueNAS. I didn’t perform any additional disk tests directly in TrueNAS; I only checked the disk I/O in the reporting section. When using SMB for read and write operations, each disk achieves around 50MB/s. However, during a scrub, each disk reaches speeds above 200MB/s.
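For what it's worth, the same per-disk numbers can be watched live from the TrueNAS shell while an SMB copy is running (the pool name tank is assumed here):
zpool iostat -v tank 5      # per-disk read/write bandwidth, refreshed every 5 seconds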
You already have an ARC: it's the RAM, and it's pretty much required by ZFS.
As for L2ARC, I can only wonder where you got the idea it required "a power-loss protection mechanism". (PLP is a requirement for SLOG.)
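A quick way to see the ARC that's already there is arc_summary, which ships with OpenZFS and reports the current ARC size and hit rate:
arc_summary | head -n 30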
I had a rough understanding of ARC a long time ago and may have confused it with the write-back cache of the RAID card. I vaguely remember reading that increasing the ARC size limit could cache more dirty data, which might improve initial write speeds. I might have mixed up ARC with SLOG.
However, after further research, I realized that ARC doesn’t impact write speeds. Adding an SSD or NVMe as SLOG wasn’t part of my plan. For this machine, which is used for incremental backups, adding a SLOG isn’t ideal. Since most of the writes are infrequent and the machine is often powered off after a period of heavy writes, there isn’t enough time for the SLOG to effectively write to the disk.
Yes, that's what I asked at the beginning: is there any way to increase the speed without adding other vdevs, such as detailed settings for SMB or RAIDZ? I'm just getting started with ZFS and don't know how to set it up appropriately to get more performance out of RAIDZ.
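As a rough starting point, and without adding any vdevs, these are the dataset properties usually worth checking first for large sequential SMB transfers on a wide RAIDZ (tank/backup is a placeholder; note that sync=disabled trades safety for speed, which probably conflicts with the data-security requirement above):
zfs get recordsize,compression,sync,atime,xattr tank/backup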