ZFS: quantifying performance gain

I have a few NAS servers running TrueNAS (Core and Scale) and a few non-ZFS NAS systems, plus virtual NAS systems (on VirtualBox and PVE), across different hardware configurations and a 1Gbps network.

  1. I have not noticed any performance difference between the ZFS-based and non-ZFS NAS systems - the network is my binding constraint.
  2. I have also varied my hardware configurations, with RAM from 4GB to 128GB. No difference for ZFS or non-ZFS systems.
  3. I have also played with the hardware settings for the NAS VMs on VirtualBox/PVE. No difference there either.
  4. I have noticed that some NAS systems (fnOS, for example) do seem to use RAM as a buffer, but without any noticeable performance gain.

So, has anyone done, or is anyone aware of, an analysis showing that big RAM helps materially for ZFS?

First, ZFS is not the fastest file system, logical volume manager, or RAID solution. Its original design goal was to replace Solaris UFS with something that solved other problems: long file system checks at boot time, for example, or improved data integrity.

Having more RAM helps in specific cases, like repeated access to files or directory metadata. If you are not reading the same files or directories over and over, the ARC (aka the RAM cache, or L1ARC) does not contain many useful entries. In fact, it may never populate much at all.
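A quick way to see whether the ARC is actually doing anything for your workload is to compute the hit ratio from the OpenZFS kstats. A sketch, assuming OpenZFS on Linux (the kstat path and its name/type/value column layout are standard there; the fallback sample data is purely illustrative so the snippet runs anywhere):

```shell
#!/bin/sh
# Compute the ARC hit ratio from OpenZFS kstats (Linux path shown).
STATS=${STATS:-/proc/spl/kstat/zfs/arcstats}
if [ ! -r "$STATS" ]; then
    # Illustrative fallback mimicking the kstat format (name, type, value)
    # so the script runs on systems without ZFS loaded.
    STATS=$(mktemp)
    printf 'hits 4 900\nmisses 4 100\n' > "$STATS"
fi
# A one-pass workload (streaming, backup) will show a low ratio:
# blocks are read once and never requested again.
awk '$1 == "hits"   { h = $3 }
     $1 == "misses" { m = $3 }
     END { printf "ARC hit ratio: %.1f%%\n", 100 * h / (h + m) }' "$STATS"
```

`arc_summary` (shipped with OpenZFS) prints the same numbers with more context, if you prefer a ready-made report.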

However, if you are already network bound, then that is the area to improve.

Once you have a faster network, you can consider a metadata-only L2ARC device, or possibly a Special Allocation vDev for metadata and small blocks. But the average home user generally does not need those, and they complicate ZFS pool management.
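For reference, both options look roughly like this. The pool name `tank` and the device paths are placeholders; the `zpool`/`zfs` commands and properties are standard OpenZFS, but read the man pages before touching a real pool, since a special vdev generally cannot be removed again from a raidz pool:

```shell
# Option 1: add an L2ARC (cache) device and restrict it to metadata.
# "tank" and the device path are placeholders for your pool and SSD.
zpool add tank cache /dev/disk/by-id/nvme-EXAMPLE
zfs set secondarycache=metadata tank

# Option 2: a Special Allocation vdev for metadata and small blocks.
# Mirror it -- losing the special vdev loses the whole pool.
zpool add tank special mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B
zfs set special_small_blocks=16K tank
```

The asymmetry is the point: an L2ARC is a disposable cache you can remove at any time, while a special vdev holds the only copy of the pool's metadata, which is why it raises the management stakes.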


makes sense.

for my implementation, ram (or zfs) doesn’t seem to make a difference.

but i’m wondering if, for people with faster networks, more ram / zfs makes a difference.

i did some experiments using fnOS, btrfs vs. zfs, transferring several GBs of data.

  1. for large files, the transfer speed is the same: ~110MB/s (over a 1gbps network).

  2. for small files, the transfer speed over zfs is about 70 - 80% of that over btrfs.

so this seems to confirm that zfs isn’t here for performance but for data integrity / reliability.

for speed, a large ram buffer + btrfs is actually very good.

Testing this on a 1 Gbit network is pointless. 1 HDD can saturate that.

Yes.

That said, the big boys (aka Enterprise Data Centers) can get better performance. They throw money at the bottleneck. It simply depends on what is desired.


On the subject of BTRFS, development in some areas appears to have stalled. For example, even after 15 or so years, RAID-5/6 is still marked unstable:

RAID56: unstable (see below)

While I wish BTRFS had been farther along, it seems the “next gen FS” for Linux will be bcachefs. (Though that has its own can of worms…) Unless you are using Red Hat, in which case it will be a combination of XFS, LVM & Stratis.