There have been many changes to ZFS over the decades. And yes, it is now 19 years since ZFS was released by Sun Microsystems. (ZFS was in development for several years before then.)
Some ZFS information has been distorted or changed over the years, or was never a hard and fast rule to begin with. Here is some of the more common misinformation:
Long ago it was suggested to have 1GByte of memory per 1TByte of ZFS disk storage. But this was never a hard and fast rule, and it does not apply to casual users, nor to applications where data is read once and not read again for a long time, like media files.
Ram to HDD TB size question | TrueNAS Community
One old rule involved ZFS RAID-Zx vDev widths, which supposedly had optimal widths per parity level (1-3 disks' worth of parity). However, with data compression, that "rule" is not applicable today. Here is the reference:
How I Learned to Stop Worrying and Love RAIDZ
In the past, you could not expand a RAID-Zx vDev. Today you can, though with some caveats:
- Pre-existing data keeps its old data-to-parity ratio. A simple copy (perhaps through a re-balance script) will solve that; see the sketch after this list.
- Free space reporting seems to reflect the old data-to-parity ratio.
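For reference, here is a minimal sketch of an expansion on OpenZFS 2.3 or later. The pool name (tank), vDev name (raidz1-0), and disk path are placeholders; check zpool status for your actual names:

```
# Find the name of the RAID-Zx vDev to expand (e.g. raidz1-0).
zpool status tank

# Attach one new disk to the existing RAID-Zx vDev. The expansion
# runs in the background and the pool stays online throughout.
# Repeat for each additional disk; expansion adds one disk at a time.
zpool attach tank raidz1-0 /dev/da4

# Afterwards, rewriting old data (e.g. via a re-balance script or a
# local copy) restores the new data-to-parity ratio for that data.
```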
For quite some time now, ZFS has supported a full hybrid HDD/SSD pool using Special Allocation vDev(s). It is possible to select what is stored on the Special vDevs: small files, metadata only, De-Dup tables, etc.
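As a minimal sketch (pool, dataset, and device names are placeholders), adding a mirrored Special vDev and opting a dataset in to small-file storage might look like this:

```
# Add a mirrored special vdev. Mirror it, because losing the
# special vdev loses the whole pool.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Per dataset, send blocks at or below this size to the special
# vdev instead of the HDDs (0 = metadata only, the default).
zfs set special_small_blocks=32K tank/mydata
```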
It is possible to remove top-level Mirror or Stripe data vDevs from a pool. However, if the pool contains a RAID-Zx vDev, that removal is not possible.
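A minimal sketch, assuming a pool named tank built only from mirrors:

```
# Identify the top-level vdev to remove (e.g. mirror-1).
zpool status tank

# Evacuate and remove it; data is copied to the remaining
# vdevs in the background.
zpool remove tank mirror-1
```

Note that removal leaves a small in-memory indirection table behind, so it is best treated as an escape hatch rather than a routine operation.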
Redundancy exists at the vDev level, not the pool level. One non-redundant data vDev puts the whole pool at risk.
Note that SLOG / LOG, L2ARC and Hot Spare(s) are not data vDevs. However, a Special Allocation vDev IS critical to a pool.
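To illustrate the difference (device names are hypothetical):

```
# LOG and Hot Spare devices are auxiliary: they can be added and
# removed at any time without touching pool data.
zpool add tank log /dev/da6
zpool add tank spare /dev/da7
zpool remove tank /dev/da6

# A Special Allocation vDev is a real data vDev: it holds the
# pool's metadata, should be mirrored, and cannot be removed from
# a pool that contains a RAID-Zx vDev.
```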
The ZFS Scrub of Death without ECC memory IS real! However, unless you have really bad non-ECC memory, you are so unlikely to experience problems that you can stop worrying about it. And if you insist on worrying about the ZFS Scrub of Death, get ECC memory and quit worrying so much. (You'll live longer too.)
Here are references:
Will ZFS and non-ECC RAM kill your data? – JRS Systems: the blog
ECC memory vs Non-ECC memory - Poll! - #65 by DAVe3283
Recent changes to ZFS's L2ARC handling mean that larger L2ARC devices are now usable with smaller memory sizes. It is still recommended to add memory before adding L2ARC, as memory will always be faster. Users must test viability for themselves.
The old rule about not adding an L2ARC with less than 64GBs of memory is no longer true. However, low-memory servers (8GBs without Apps / VMs, or 16GBs with some Apps / VMs) may still not be suitable for adding an L2ARC.
Further, monitoring your ARC hit rate and size is still important: if your ARC is small but its hit rate is already very high, an L2ARC won't help. (See the monitoring sketch after the list below.)
- The old "rule" that L2ARC should be about 5 times RAM, and no more than 10 times RAM, is no longer applicable.
- The individual record overhead for L2ARC in RAM was reduced from 180 bytes to 96 bytes.
ZFS Cache / L2ARC adding as reduced size | TrueNAS Community
- More recent versions of ZFS have improved the eviction logic from ARC to L2ARC
- Persistent L2ARC improves its usability, even for lower-memory servers
- Maintaining records in their original compressed form in L2ARC has improved how much it can store
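As a starting point for that monitoring, both of these tools ship with TrueNAS (output details vary by version):

```
# One-time summary of ARC size, target size, and hit ratios.
arc_summary

# Live view, refreshed every 5 seconds: ARC accesses, hit
# percentage, and current ARC size.
arcstat 5
```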
On the other hand, the old rule of not adding an L2ARC unless you have a known use still applies. For example, a media server that serves up a media file once and not again for a long time would not receive any real benefit from an L2ARC. However, a weekly backup server might benefit from one.
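If monitoring does show a likely benefit, adding an L2ARC is low-risk because it only holds copies of pool data (device name is a placeholder):

```
# Add an SSD as L2ARC; if it fails, reads just fall back to the pool.
zpool add tank cache /dev/nvme0n1

# It can be removed again at any time.
zpool remove tank /dev/nvme0n1
```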
If you have any to add, reply with the information and I will review. If applicable, then I will update the first post.