Slow read and write on RAIDZ1 - TrueNAS SCALE

TrueNAS server

TrueNAS version: Dragonfish-24.04.0

HP ML310e Gen8
Dynamic Smart Array B120i
Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10GHz
RAM 32GB

Storage:

RAIDZ1 = 3x WD Red 3TB SATA (2x WDC_WD30EFRX + 1x WDC_WD30EFAX)
Log vdev = 1x NVMe 1TB
Cache vdev = 1x NVMe 500GB

Network:
10Gbps MTU 9000

Services:
iSCSI and NFS

Proxmox server

Version: pve-manager/7.4-17/513c62be

24 x Intel(R) Xeon(R) CPU X5675 @ 3.07GHz (2 Sockets)
RAM 188GB

Network:
Port1 (STORAGE): 10Gbps MTU 9000
Port2 (DATA): 10Gbps MTU 1500

iSCSI connection:

An iSCSI storage with a portal, plus an LVM storage using the iSCSI storage as its base.
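
On the Proxmox side this corresponds to two entries in /etc/pve/storage.cfg, roughly like the following (storage IDs, portal IP, IQN, base volume, and VG name are all placeholders):

    iscsi: truenas-iscsi
        portal 10.0.0.10
        target iqn.2005-10.org.freenas.ctl:proxmox
        content none

    lvm: truenas-lvm
        vgname vg-truenas
        base truenas-iscsi:0.0.0.scsi-truenas-lun0
        content images
        shared 1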

Topology

TrueNAS server <-> 10Gbps switch <-> Proxmox server

Tested with iperf3: ~9 Gbps
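
For reference, the throughput test was a plain iperf3 run along these lines (10.0.0.10 is a placeholder for the TrueNAS storage IP):

    # on the TrueNAS side
    iperf3 -s

    # on the Proxmox side, 30-second run
    iperf3 -c 10.0.0.10 -t 30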

Situation:

VMs and desktops read at around 150 MB/s and write at an average of 30 MB/s. Is this normal?

My test:
A Windows or Linux VM copying 11 GB (5 files of ~2.2 GB each): the copy starts at 150~200 MB/s and ends at 15~30 MB/s.

Copying about 300 MB of 4 KB files: write speed is a constant ~15 MB/s.

If I connect iSCSI directly from Windows or Linux (in a VM or on a desktop), the result is the same: it starts at 150~200 MB/s and ends at 15~30 MB/s.

But if I test from the TrueNAS shell with fio, the write graph is a constant ~60 MB/s.
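
A sequential-write fio run along these lines shows that steady ~60 MB/s (the dataset path and flags are illustrative, not necessarily the exact ones used):

    fio --name=seqwrite --directory=/mnt/tank/fio-test \
        --rw=write --bs=1M --size=10G \
        --ioengine=posixaio --end_fsync=1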

I tested over NFS and got the same result.



I didn’t check the rest of your post in detail, but this appears to be an SMR drive, which is not suitable for ZFS and comes with bad performance.


This appears to be an SMR drive, and SMR drives are known to perform really badly with ZFS, up to the point of data loss.

I suspect this is unsuitable for ZFS as well; it appears to be a RAID controller. Again, you risk data loss.

I didn't completely understand your setup: do you run TrueNAS on bare metal, or virtualized?

I might be wrong, but as far as I know, mixed MTUs on the same network are a no-go. Try 1500 MTU everywhere.
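
One quick way to check whether jumbo frames actually work end-to-end (the address is a placeholder; 8972 = 9000 minus 28 bytes of IP/ICMP headers):

    # send an unfragmentable 9000-byte frame across the storage path
    ping -M do -s 8972 10.0.0.10

If this fails while MTU 9000 is configured somewhere on the path, the mismatch will silently hurt throughput.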

iSCSI on raidz1 is another no-go.

And 32 GB of RAM is not enough to support a 500 GB L2ARC.
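
Rough back-of-envelope, assuming the 16 KiB default zvol volblocksize and the commonly cited ~80-90 bytes of ARC header per L2ARC record:

    500 GB / 16 KiB ≈ 31 million L2ARC records
    31 million records × ~88 B ≈ 2.7 GB of RAM spent on headers alone

That RAM comes straight out of the ARC that would otherwise be caching data.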


From the WD Red web page:

Designed for personal and home office NAS systems, our unique algorithm balances performance and reliability in NAS and RAID environments. WD Red™ drives are optimized for environments where idle time is available to perform necessary background operations. To ensure optimal performance, always check compatibility with your system. WD Red™ drives may not be suitable for higher workload environments. For ZFS file systems and overall NAS system compatibility, we highly recommend WD Red™ Plus drives, which are optimized for higher workloads.

I think that the explanation for the highlighted text is as follows (taken from Reddit):

Because it then has to write a lot of data, which can fill the CMR “cache” of the SMR drive. After the “cache” is filled, the drive needs time to move data from the CMR to the SMR part, and that's where the problem starts: additional writes are now really slow.

If you think it is bad now, wait until you need to resilver your array after a drive fails.

The reason WD recommends the Red Plus (or Pro) drives is that they are CMR drives rather than SMR.
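
To see which variant a drive is, reading the model number is enough; EFRX is CMR, EFAX is SMR (the device path is a placeholder):

    smartctl -i /dev/sda | grep -i "device model"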

Also, if disk performance is important, use mirrors rather than RAIDZ1.
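
TrueNAS builds pools through the UI, but for illustration the equivalent striped-mirror layout in plain ZFS terms would be (pool and device names are placeholders):

    # two 2-way mirror vdevs striped together
    zpool create tank mirror sda sdb mirror sdc sdd

Each mirror vdev adds IOPS, whereas a RAIDZ1 vdev delivers roughly the IOPS of a single drive.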


What L2ARC size would be recommended with 32 GB of RAM?

Do you not recommend iSCSI on TrueNAS with RAIDZ1?

Can you explain further?

@Farout @chuck32

I set up a test environment with the following drives:

Dynamic Smart Array B120i → AHCI (I believe non-RAID)
ML310e server running TrueNAS on bare metal

HDD WDC_WD30EFRX (CMR)
HDD WDC_WD30EFAX (SMR)
NVMe WD Red SN700 1000GB
NVMe SanDisk SSD Plus 500GB A3N

All configured as RAID0 (stripe)

The lab test was:
Copy 5 files totaling 7 GB
Copy 1 GB of 4 KB files
Copy 5 GB of 4 MB files
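
For anyone reproducing the small-file case, the 1 GB of 4 KB files can be generated with something like this (the directory name is a placeholder; 262144 × 4 KiB = 1 GiB):

    # create 262144 random 4 KiB files (slow, but matches the test pattern)
    mkdir -p testfiles
    for i in $(seq 1 262144); do
        dd if=/dev/urandom of=testfiles/f$i bs=4K count=1 status=none
    done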

Results:
HDD (SMR), model WD30EFAX:
7 GB → 11~25 MB/s
1 GB/4K → 11~23 MB/s
5 GB/4M → 29~59 MB/s

HDD (CMR), model WD30EFRX:
7 GB → 35~60 MB/s
1 GB/4K → ~23 MB/s
5 GB/4M → 52~73 MB/s

NVMe 1TB (WD Red SN700):
7 GB → ~130 MB/s
1 GB/4K → ~21 MB/s
5 GB/4M → ~125 MB/s

NVMe 500GB (SanDisk SSD Plus):
7 GB → ~150 MB/s
1 GB/4K → ~23 MB/s
5 GB/4M → ~130 MB/s

Do you think these speeds can be improved further?
Do you recommend the WD30EFRX model?

I am considering RAIDZ1 for its reliability, capacity, and acceptable performance.

I will soon run more tests with RAIDZ1 on CMR disks.

Thanks

Not an expert on controllers, but it was deemed unsuitable here:

You would need an HBA in IT mode, like an LSI. But I think you will not be able to fit the drives in your case then? Does it use a backplane? Maybe someone else has a recommendation here.

Way too expensive in $/GB terms. 8 TB+ drives are much more cost-efficient.

Even your NVMe speeds are slow; are you copying from a single HDD?

The recommendation for iSCSI is all mirrors and a minimum of 64 GB RAM. Please read:

Incidentally, 64 GB is also the recommended minimum before even considering L2ARC.

SUMMARY: For a newcomer to TrueNAS ZFS-based storage, selecting trouble-free hardware can be a minefield if you don’t research it properly.

Even what would appear to be decent hardware from reputable manufacturers can be problematic:

  • Disk drives that are completely unsuitable for ZFS (or RAID in general) - because they are impossible to resilver.
  • Disk controllers that are incompatible with ZFS or need very specific configurations to be suitable
  • When to use mirrors vs. RAIDZx.
  • When to use L2ARC
  • When to use SLOG
  • How to configure SLOG when you use it (i.e. mirrors)
  • When to use a metadata vDev
  • How to configure a metadata vDev when you use it (i.e. mirrors)

P.S. @dan and I are trying to consolidate the accumulated wisdom of the TrueNAS community into Uncle Fester’s TrueNAS Beginner’s Guide (with full attribution) in the hope that there will be a single place for newcomers to go to get a comprehensive guide to planning, installing and running a small-ish TrueNAS server.


None; 32 GB RAM is not enough to support any L2ARC.

No, as you’ve already been told, and given a link to a resource that explains it thoroughly.

People will tell you that you should avoid RAIDZ because performance will probably suck due to I/O amplification.

I for one think the more important part is that you won't get a huge storage advantage over mirrors.
You will not get the storage efficiency you think you will get in most cases; see this table.
For your 3-wide pool, you will actually get 66% (with the default volblocksize).
But that is only 16 percentage points more than 50%. Why accept huge performance penalties for such a small storage advantage?
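
The arithmetic behind that 66%, assuming ashift=12 (4 KiB sectors) and the 16 KiB default volblocksize:

    16 KiB volblock = 4 data sectors
    3-wide RAIDZ1 stripes 2 data + 1 parity sector per row
    4 data sectors → 2 rows → 2 parity sectors
    efficiency = 4 / (4 + 2) ≈ 66%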

It does provide a 100% increase in usable capacity.

(going from a 2-way mirror to a 3-wide RAIDZ1)

iSCSI is block storage. Block storage uses small blocks. Small blocks work better with mirrors.

Yeah, because you're comparing a two-drive system with a three-drive system :slight_smile:
Efficiency is still only 16 percentage points better.

Using 4 drives in mirrors will get you exactly the same usable storage as a 3-wide RAIDZ1 :grin:
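
With the 3 TB drives from this thread:

    4 × 3 TB in two 2-way mirrors: 2 × 3 TB = 6 TB usable
    3 × 3 TB in RAIDZ1 at 66%:     9 TB × 2/3 = 6 TB usable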