Adding a single NVMe drive as a cache drive for two pools

Hey! I have a home setup that is primarily my media server, and I have two pools (1x2 10 TB, 1x3 10 TB). I also have a 1 TB NVMe drive hooked up to the system that is not getting used at all. I would like to use it as a cache drive for both pools. Is it possible to partition it and split the drive so both pools use it as a cache drive?

I’m using Fangtooth, and my apps consist of Plex, Cloudflare, Overseerr and Tautulli.

Thanks in advance!

Yes, it's possible - nearly anything is possible.

However, it cannot be done through the GUI. It's also unsupported.

Lastly, my view is that if you don’t know how to do this, then I (hopefully we) won’t tell you how. But it is possible.


I think that if it’s unsupported then it’s probably not something I should do. Maybe in the future!

Just wanted to explore options. As you said, if I don't know how to do it, I probably shouldn't, since it could do harm. But then again, I didn't know how to create my own server with apps and networking and all, but I read and learned and now I (somewhat) know!

Why do you think you would profit from a cache drive (assuming you mean L2ARC)? How much memory do you have? What are your ARC stats?

I have 32 GB at the moment, and the reason I was thinking of a cache drive was because I sometimes transfer a bulk of media at once, sometimes from one source to another, and sometimes from two sources to two other places. I thought that because of the parallel transfers it might help, and also give a nudge to transfer rates.

I have no idea what my stats are, but currently the system reports ARC at 25.5 GB.

Try the command arc_summary and look for the “total accesses” section:

ARC total accesses:                                               352.8M
        Total hits:                                    99.7 %     351.7M
        Total I/O hits:                               < 0.1 %      15.8k
        Total misses:                                   0.3 %       1.0M

A hit rate above 99% is not uncommon and in this case you do not need an L2ARC device. It will probably even slow down your system.
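If you just want that one section out of the (fairly long) arc_summary output, something like this works on a TrueNAS SCALE / Linux shell:

    # Print the "ARC total accesses" block (hit/miss counters)
    arc_summary | grep -A 4 "ARC total accesses"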

Kind regards,
Patrick


Rule of thumb is that you might, with specific workloads, benefit from a separate “read cache” device once you have a minimum of 64 GB of RAM and you can't put more RAM in.

That's because your RAM (= ARC) is your cache, and only data that gets regularly evicted from RAM can end up in the level 2 ARC device.

Setting an L2ARC device to “metadata only” mode might be beneficial if you need to browse through lots of small files in lots of folders.
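If you ever want to try that, the knob is the secondarycache property; "tank" below is just a placeholder pool name:

    # Only keep metadata (not file data) in L2ARC for this pool
    zfs set secondarycache=metadata tank

    # Check the current setting
    zfs get secondarycache tank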

There is no harm in experimenting, because the L2ARC device is not critical to the pool: you can add and remove it at will, and it can fail without data loss.
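As a sketch of what adding and removing a cache device looks like from the shell (pool and device names are placeholders):

    # Attach an NVMe partition to the pool "tank" as L2ARC
    zpool add tank cache /dev/nvme0n1p1

    # Detach it again at any time; the pool keeps running
    zpool remove tank /dev/nvme0n1p1

    # Confirm the layout
    zpool status tank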

There is no separate “write cache” in ZFS. Again, data is first collected in RAM (by default a transaction group is flushed roughly every 5 seconds), then written out to the pool.
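On Linux you can see that interval in the zfs_txg_timeout module parameter (default is 5 seconds):

    # Transaction group flush interval in seconds
    cat /sys/module/zfs/parameters/zfs_txg_timeout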

A SLOG device is not a write cache. It's a separate intent log device.

It is only beneficial if you use sync writes (NFS, databases, VM storage, ...).

Again, the SLOG is not critical for pool operation, so you can also experiment with it.
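For completeness, a SLOG is added and removed much like a cache device, and the sync property shows how a given dataset handles sync writes (pool, partition and dataset names below are placeholders):

    # Add a separate log device to the pool "tank"
    zpool add tank log /dev/nvme0n1p2

    # Remove it again without harming the pool
    zpool remove tank /dev/nvme0n1p2

    # See how sync writes are handled for a dataset
    zfs get sync tank/vms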

CAVEAT

Metadata and dedup vdevs are pool-critical: if they fail, your data is gone.


Yep, seeing 99.7% just like your example. I appreciate it!