I’ve done a lot of googling and reading before posting, and this certainly seems like a hot topic, so I wanted to provide my specific scenario.
Here’s my Hardware:
Supermicro X10SRL-F Motherboard
Xeon E5-2697A v4
128GB DDR4 2400 ECC REG RAM
Intel ARC A380 GPU
Mellanox CX314A 40Gbps NIC
Supermicro AOC-S3008L-L8E SAS HBA
2x HP S750 256GB SATA3 2.5” SSD (mirrored for the OS)
8x Seagate 8TB 7,200rpm SAS HDD
2x Samsung PM953 960GB NVMe M.2 in PCIe adapters
I plan on setting up 2x 4-disk RAIDZ1 vdevs for my storage pool from the SAS HDDs.
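For reference, a rough CLI sketch of that layout (TrueNAS builds pools through the GUI; “tank” and the disk names below are just placeholders):

# Two 4-wide RAIDZ1 vdevs, striped together into one pool
zpool create tank \
  raidz1 disk1 disk2 disk3 disk4 \
  raidz1 disk5 disk6 disk7 disk8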
My current plan for those NVMe drives is to use them in some supporting role, be it L2ARC, SLOG, or a special metadata vdev.
My use case is mixed…
a centralized data store for backups of my and my wife’s laptops (Apple Time Machine),
centralized storage of personal media - photos and video I’ve taken of the family,
centralized storage and serving of a Plex library of Blu-rays and downloaded TV shows and movies, streamed to TVs and portable devices in the house and also downloaded to iPhones and iPads for remote use (the Intel ARC is meant for Plex transcoding workloads), and
direct-access photo, video, and audio editing from my PC workstation. I have a 40Gbps network connection to the workstation, and 1Gbps to the rest of my network (Plex clients, laptops, the internet, etc.).
Given all of these parameters and my use cases, what is the best use for the two NVMe drives? I was initially thinking a mirrored SLOG might be best, but I’m not sure.
Funnily enough, I asked ChatGPT this question, using the 3.5 model, the 4 model and the 4o model. Each one gave a different answer:
3.5: a mirrored SLOG was the primary recommendation
4: one drive for L2ARC and one for a metadata vdev
4o: one drive for L2ARC and one for SLOG
What are your recommendations, given my specs above?
Nothing in your stated use case sounds like there’d be any call for SLOG, and I doubt you’d have any need for L2ARC with that much RAM. But I don’t know enough about the metadata vdevs to know how beneficial they’d be.
No need for a SLOG. (Time Machine will actually make sync writes, but this is a background process for which performance is irrelevant.)
L2ARC: Test and see… (likely not)
I would rather put these 8 drives in a single raidz2 vdev.
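For comparison, a sketch of that alternative (again, pool and disk names are placeholders, and TrueNAS would normally do this via the GUI):

# One 8-wide RAIDZ2 vdev: any two drives can fail without losing the pool
zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8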
I have no idea why any sane person would go to ChatGPT for… anything but laughs.
For the record: mirroring a SLOG is hardly useful except in enterprise settings chasing “five nines”. And a single-drive special vdev is downright dangerous! So one piece of “advice” is poor, and another is harmful. The last one is sane enough, in the sense that it is not dangerous (almost: do these PM953s have PLP?), but it is not likely to bring actual benefits in the stated use case.
A clear example why you should NOT rely on AI for recommendations.
Using an un-mirrored Metadata vDev is a VERY VERY VERY BAD IDEA. Bad ChatGPT 4, bad ChatGPT.
Assuming a SLOG were needed, you would need to decide whether it would be acceptable to lose the most recent sync writes if a single SLOG device failed (in combination with a crash). For your stated use case, I would personally say a single SLOG device would be acceptable; but for e.g. database transactions on databases too large to hold on SSD themselves, it would really need to be mirrored.
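For illustration only (the consensus here is that you don’t need one at all): at the CLI, the difference between the two is just the mirror keyword; names are placeholders.

# Single SLOG: a device failure combined with a crash can lose the last few seconds of sync writes
zpool add tank log nvme0n1
# Mirrored SLOG: survives a single device failure, for when even that loss is unacceptable
zpool add tank log mirror nvme0n1 nvme1n1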
I am less certain whether a SLOG would be useful. I agree that it will not particularly benefit Time Machine; however, if your PC workstation is running Linux or is a Mac, then it might be beneficial for saving video files (if the OS writes them synchronously - Windows SMB is always asynchronous). (It would also depend on how the video editor works, i.e. how it loads and saves files and holds data in memory.)
Personally I doubt that an L2ARC would be beneficial for your use case - it might help with repetitive loading of the same large video file for editing over several days, but I suspect not.
Using the NVMe drives as a mirrored metadata vdev might be the best choice, though you would need to accept that this adds complexity as and when recovery actions are needed - put simply, there would be another layer that can go wrong.
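If you did go that route, the key point is that the special vdev must be redundant, because losing it loses the whole pool. A sketch (placeholder names):

# Mirrored special (metadata) vdev - never add one un-mirrored
zpool add tank special mirror nvme0n1 nvme1n1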
<sarcasm>
The AI Chat Bot Arwen recommends using the largest, cheapest USB Flash drive for SLOG or Metadata vDev. Un-Mirrored of course, AND on USB 2.0 to reduce power consumption.
</sarcasm>
Back to reality and actual helpful information.
Looking at both the original post and the replies, I agree:
Special metadata vdevs add too much complexity for little gain (and too much risk),
SLOG does not seem appropriate, especially not a mirrored SLOG (which enterprise customers are more likely to need), and
even a general-purpose L2ARC is not warranted given the amount of memory.
So, why did I respond?
I actually have a helpful suggestion: Metadata only L2ARC.
Using one of the NVMe drives for a metadata-only L2ARC means that metadata which overflows your RAM will spill into the L2ARC. This might not help, depending on workload, but it can improve backup speed: once the metadata is in RAM or L2ARC, it can be referenced much faster than from a RAID-Zx pool of HDDs.
There is even an option to make L2ARC persistent across reboots.
Since everyone has basically explained why you shouldn’t use SLOG/L2ARC/metadata vdevs - why not mirror these drives and use them for apps/VMs? Running apps/VMs would be subpar on your spinning rust, and boot pools are for boot exclusively.
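A sketch of what that could look like from the shell (pool and device names are placeholders; in practice you would create the pool in the GUI and point apps/VM storage at it):

# Separate mirrored NVMe pool for apps and VM zvols
zpool create apps mirror nvme0n1 nvme1n1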
Thanks Arwen - I like the idea of the Metadata only L2ARC and persisting across reboots. I’ve done a bunch of googling trying to self-serve but I cannot for the life of me figure out how I set this up. Can you help set me straight?
TBH, I think the reason ChatGPT is failing here is a reflection on us as a community. In trawling endless forum posts on this subject, I couldn’t find any that reached a clear consensus, regardless of the workload the original poster described - just lots of strong opinions and, shall we say, VIBRANT discussion. ChatGPT can only do so much when its source material is largely these discussions.
I’ll raise you one: metadata-only, persistent L2ARC. That way you don’t lose the contents of the L2ARC every time your NAS reboots.
L2ARC brings most of the benefit of an SSD for directory traversal, without the risk of losing the pool if something goes wrong with the L2ARC (its contents are just copies of data already on the pool). A sVDEV is really cool - it speeds up small files and metadata immensely - but it carries serious risks too. See my resource article on sVDEVs for more info.
Edit: never mind, you mentioned persistent. Missed that, apologies!!! But agree that persistent, metadata-only L2ARC is really good for entry level systems, especially for processes that do a lot of directory traversal. (Ie rsync and the like).
Once the L2ARC device is added to the pool, you can force metadata-only caching from the command line with sudo zfs set secondarycache=metadata YourPoolName, or optionally YourPoolName/YourDatasetName if you only want the rule to apply there. By default, L2ARC caches all data.
And in CORE, persistent L2ARC has to be enabled via
Go to System > Tunables and click ADD. For the Variable, enter vfs.zfs.l2arc.rebuild_enabled. Set the Value to 1 and the Type to sysctl. We recommend noting in the Description that this is the persistent L2ARC activation. Make sure Enabled is selected and click SUBMIT.
as described here. Enabling Metadata-only for a L2ARC is the same for both SCALE and CORE (thank you, @HoneyBadger!).
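For anyone who prefers to check from the shell, a quick sketch (pool and device names are placeholders):

# Add the NVMe as an L2ARC (cache) device, if not done via the GUI
zpool add tank cache nvme0n1
# Confirm the metadata-only setting took effect
zfs get secondarycache tank
# On SCALE (Linux), persistent L2ARC is controlled by a module parameter;
# on recent OpenZFS it defaults to 1 (enabled)
cat /sys/module/zfs/parameters/l2arc_rebuild_enabled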
Nah - this is exactly the problem that Google faced in its early days and that currently faces the media in various worldwide democracies, which is this…
Not all sources are of equal merit.
There is even a rule of thumb about this: Kepple’s Rule of Order (“he who shouts loudest, gets”). The right answer is not always (or perhaps not often) the one that is quoted by the most people - and fundamentally that is how AI pattern matching works.
Anthony Fauci’s statements on COVID should be given more weight by Google and by the media and by people (and by ChatGPT) because he is an acknowledged world expert. A contrary statement by Mrs Trellis of North Wales, along with 1 million other ISIHAC cultists who repost her words, should not be accorded the same weighting or validity.
Ditto for flat earth theories, whether the 2020 US presidential election was stolen or not, whether Area 51 has alien tech, whether injecting bleach can cure covid etc. and of course for whether you should use a single drive as a ZFS Metadata vDev.
As for your questions, the layout seems fine. You understand that a striped pool is vulnerable, and thus “back up the work”.
However, in some cases video editing is better done local to the client: copy the file to the client, edit, then copy it back when done. Otherwise you want low latency to the data, which implies you probably want a faster network, like 10Gbit/s Ethernet.
Start a new thread, (making a linked reference to your post above), with more details of video file size and your network configuration.
I agree re: using local scratch disks for data-intensive work versus the expense of trying to replicate that low latency over the network. It can be done, but why bear the expense of doing it on a server when the same data protection, performance, etc. can be achieved locally at much lower cost?