Unraid to TrueNAS Scale help

Hello everyone,

I’m in the process of moving from Unraid to TNS, but I’m having a hard time wrapping my brain around some items. I’m looking for some advice or help setting up my “last server build ever”.

What I need clarity on is ZIL/ZLOG and METADATA vdevs. I’ll share my hardware, and perhaps someone can guide me on what to do.

ISP 3Gbps simultaneous
256GB DDR4 ECC 2400MHz
Supermicro h11ssl-i rev 2.0
LSI 9305-24i (NAS and Media pools only)
Dual 10Gbps NIC
Fractal Define 7XL

2x 240GB SSD’s for OS (mirror)
10x 12TB HDD’s Raid Z2 or Z3
12x 1.92TB SSD’s (2x IcyDock enclosures, 6 bays each) RaidZ3
2x 110GB Optane / 2x 1TB NVME (Asus M.2 card 16x)
2x 1.92TB SSD mirror for APPS
2x 1.92TB SSD Stripe for Cache

My goal is to have media on HDD’s and NAS data on SSD’s. Seeing as my NAS would be internal only and not see as much usage, I figure having it on SSD’s makes more sense. The media library would be on HDD’s and accessed externally, and some media will be in higher demand, so I’m thinking of using the two 1.92TB SSD’s as a cache. The media will be fetched on the server itself, so I’m thinking of a ZLOG and Metadata on the NVMe’s, both in Mirror.

My issue is, I don’t understand if the ZLOG would be used for writes from the apps pool to the media pool. I tried a test, but it does not seem to work as I thought it would. I’m reading a lot of documents, and people’s comments sometimes contradict each other. Is there any benefit to me using a ZLOG, or am I just lost in all this new lingo and complicating my life? The Metadata vdev I would use on my media pool would be for all the images and small files, to help speed up Jellyfin and other apps reading files from the pool.

Any help is greatly appreciated.

Thank you,

Welcome to TrueNAS forums.

There is no such thing as a ZLOG; it’s SLOG, or Log device. It is always helpful to use the proper terminology. ZFS does not have a write cache, and the SLOG is not a write cache. All pools have a ZIL for synchronous writes, which lives inside the data vDevs.

Further, if you have to ask about SLOG, Special Metadata vDev or L2ARC / Cache devices, you probably don’t need them. They can always be added after.

In general, L2ARC / Cache device(s) should only be 5 to 10 times your RAM size. So with 256GB of RAM, using 2 x 1.92TeraByte SSDs for L2ARC / Cache is a bit too much. This is because the pointers to the data in the L2ARC / Cache live in RAM; too much L2ARC / Cache means too much RAM is consumed by those pointers.
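As a rough sketch of that sizing rule (the 5–10x multiplier is a rule of thumb, not a hard limit, and the per-block pointer overhead is approximate):

```python
# Rule-of-thumb L2ARC sizing: total L2ARC should stay within roughly
# 5-10x the system's RAM, because every block cached in L2ARC needs a
# header kept in RAM (on the order of ~70 bytes per cached block).

def l2arc_ceiling_gb(ram_gb: float, multiplier: int = 10) -> float:
    """Upper bound on a sensible L2ARC size for a given RAM amount."""
    return ram_gb * multiplier

ram_gb = 256
proposed_l2arc_gb = 2 * 1920  # two 1.92 TB SSDs striped = 3840 GB

print(l2arc_ceiling_gb(ram_gb))   # 2560
print(proposed_l2arc_gb <= l2arc_ceiling_gb(ram_gb))  # False: 3.84 TB exceeds even the 10x rule
```

So with 256GB of RAM, even the generous 10x rule tops out around 2.5TB of L2ARC; the proposed 3.84TB stripe goes past it.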

I highly suggest reading about ZFS and its pool configuration options. I don’t have a list of Resources or docs to read, but you can find some in the Resources section of this forum. (See left hand side, Resources’ Categories.)


Thank you for correcting my typo. I am aware that pools have a ZIL by default; I was under the impression that it was better to make it external for quicker disk access, so that the disks are not writing the data twice (ZIL > Pool).

I could drop one SSD, or mirror the cache, since I already have the disks. With kid shows and popular videos, I do think the cache would help with energy savings.

I think I’ll stick to the cache but wait on the SLOG and Meta vdevs until I understand more about them.

Thanks again for the quick response.

It is not possible to Mirror the L2ARC / Cache devices. They are considered non-critical and can fail at any time without data loss. Of course any data that was cached on it / them would then have to be gotten directly from the pool if it / they failed.

SLOG, (and the in pool version ZIL), are only used for synchronous writes. Unless you use NFS or iSCSI, (or a database), the average user does not get any advantage from a SLOG.

Now in regards to the Special Metadata vDev, it should have the same level of redundancy as the pool’s data vDev(s). You list RAID-Z3, which can sustain 3 disks of failure without data loss. So any Special Metadata vDev on a RAID-Z3 pool should consist of 4 / FOUR disks in a Mirror. That is so it can sustain the same number of disks lost before data loss.
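As a sketch of that layout (pool and device names here are placeholders; substitute your own paths), the matching redundancy would look something like:

```shell
# Hypothetical names; substitute your own pool name and device paths.
# 10-wide RAID-Z3 data vDev: survives 3 disk failures.
# 4-way mirror Special Metadata vDev: also survives 3 device failures.
zpool create media \
  raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj \
  special mirror nvme0n1 nvme1n1 nvme2n1 nvme3n1
```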

It must be noted that loss of a Special Metadata vDev means TOTAL loss of the pool. This is unlike SLOG or L2ARC / Cache devices. If the SLOG is lost before pool export / power loss, then no data is lost. (The in-pool ZIL takes over…) And even if the SLOG is lost on crash / power fail, the amount of data loss is minimal.

Loss of a L2ARC / Cache device is a non-event, in regards to pool data integrity. (Of course, any data that was cached now has to come from the pool’s data vDevs, which are likely slower.)

Now one trick if you have lots of small files, is to use a persistent L2ARC / Cache device on SSD or NVMe for Metadata only. Because the L2ARC / Cache is not your only copy, (unlike a Special Metadata vDev), any failure of the L2ARC / Cache simply slows down metadata access as those requests go back to the RAID-Z3 data vDev.
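That trick is set per dataset with the `secondarycache` property (dataset name below is a placeholder), and persistent L2ARC is governed by a module parameter on OpenZFS 2.0+:

```shell
# Cache only metadata (not file data) from this dataset in the L2ARC:
zfs set secondarycache=metadata media

# Persistent L2ARC (contents survive a reboot) is controlled by a
# module parameter; it defaults to enabled (1) on OpenZFS 2.0+:
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
```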

Anyway, there is some learning to do so that you can have a healthy NAS for many years.


I’ll go ahead and take the “boos!” from the crowd.

One could rationalize less redundancy for their (fast) Special vDev.

  1. The NVMe devices are less prone to the mechanical wear-and-tear of spinning HDDs.

  2. Much, much faster devices, and much less total data. To resilver an NVMe mirror vdev is substantially faster, with an extremely small window of failure, compared to resilvering an HDD in a RAIDZn vdev.

Just do this thought experiment:

How would you feel if you got an alert to replace / resilver a 12-TiB HDD in a RAIDZ3 vdev?

Compare that to how you would feel to get an alert to replace / resilver a 1-TiB NVMe in a three-way mirror vdev?



Yeah. I’d use a mirrored SSD metadata vdev with a RAIDZ2 rusty pool.

4 disks in a mirror means I can only lose 2 disks? One per mirror, correct? Or am I not understanding what you are talking about ← Very likely at this time of day lol

A four-way mirror allows the loss of three devices. (All four devices are mirrored.)

You might be thinking of two two-way mirrors.

Assuming 4 disks in a stripe of 2-disk mirrors: 2 at best. If you lose the wrong two, you lose your pool.
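A quick enumeration illustrates the “wrong two” point for a stripe of two 2-way mirrors (disk labels here are arbitrary):

```python
from itertools import combinations

# Pool layout: two 2-way mirrors striped together.
mirrors = [("A1", "A2"), ("B1", "B2")]

def pool_survives(failed):
    # The pool dies if any one mirror loses ALL of its disks.
    return all(any(d not in failed for d in m) for m in mirrors)

disks = [d for m in mirrors for d in m]
fatal = [c for c in combinations(disks, 2) if not pool_survives(set(c))]
print(fatal)  # [('A1', 'A2'), ('B1', 'B2')] -> 2 of the 6 two-disk losses are fatal
```

So any single failure is fine, but of the six possible two-disk failures, two of them (both halves of the same mirror) take the pool with them.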

But you should have a backup anyway.

I plan on backing up my NAS to my Media pool and to the cloud via a service like Backblaze. The Media pool I don’t REALLY care about, as it’s all unimportant data; it’s only movies, shows, music, etc…

With the knowledge I have found today, it seems like perhaps the metadata vdev is not critical, as there are really no small files for media except cover art or lyric files, and even then it’s not thousands in a directory. The SLOG is also not critical, as the only writing to that pool happens once the content is finished downloading. So in reality, would I just benefit from an L2ARC for my repeated playback of content?

Anything scanning the contents of a directory will likely benefit from a special VDEV. They can decrease the load times of large directories and speed up particular operations.

For instance, Plex will scan your media directories periodically to look for new or removed media. This operation can take some time and would likely be shortened if you had a special VDEV. But if you aren’t radically changing things often, you might not really care at all, and even if you did, the $$ spent on those fast SSDs may be better spent elsewhere.

Your assertion that there are no small files is only one part of the equation. Metadata is metadata, small files are small files. Special vdevs can accelerate both.

If you are hosting VMs or Apps on SCALE, I’d encourage you to enable sync writes on their respective datasets and zvols, and include the SLOG. Not for speed reasons (async will always be faster) but for data consistency reasons. The sync write pathway gives you peace of mind and can be used for services hosted locally in k3s/kvm on the TrueNAS, not just for clients connecting over the network.
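Concretely, that means something like the following (pool and dataset names are examples, not anything from this thread):

```shell
# Force synchronous semantics for app / VM datasets and zvols:
zfs set sync=always tank/apps
zfs set sync=always tank/vm-zvols

# With sync=always every write goes through the ZIL; a fast mirrored
# SLOG device keeps that from hammering the data vDevs:
zpool add tank log mirror nvme0n1 nvme1n1
```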

Whether or not you’d benefit from L2ARC is really TBD. You’d have to have a lot of concurrent access to your Rick and Morty episodes or whatever for those huge files to stay in cache and not get evicted.