4K Video Editing TrueNAS Core Setup

Hi Everyone

My question: what are the best pool settings for a large group video-editing environment? I'm torn between 6 groups of 7 disks in RAIDZ2 or 21 groups of 2 disks mirrored. Storage space isn't an issue.

I will have 10-15 editors working on 4K image sequences and ProRes files.

Server Specs:
SuperMicro EPYC 7443
512GB Memory
M.2 Boot
LSI 9500-16e HBA Card
Mellanox ConnectX-6 100GbE Network Card
44 Disk SuperMicro JBOD Single Expander
44x Seagate X20 20TB SAS3 Drives

The biggest issue I'm running into when testing different pool setups is seeing RX pause frames on the switch port the server is connected to, during any type of performance testing. From my understanding, that signals the server can't keep up with the write requests. Would the performance I want warrant filling the server's front 12 bay slots with SLOG devices?
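As a diagnostic aside, pause-frame behavior can also be confirmed from the server side rather than just the switch. A sketch (the interface name `enp1s0f0` is a placeholder, and exact counter names vary by driver):

```shell
# Show whether flow control (pause) is negotiated on the NIC
ethtool -a enp1s0f0

# Dump NIC statistics and look for pause-frame counters;
# the mlx5 driver used by ConnectX cards exposes several rx/tx pause counters
ethtool -S enp1s0f0 | grep -i pause
```

If the server-side RX pause counters climb during a write test, the NIC itself is asking the switch to slow down, which points at the storage backend rather than the network.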

I see no use for sync writes in video editing, and therefore no need for a SLOG.
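Worth confirming: SMB writes are asynchronous by default, so a SLOG would sit idle for this workload anyway. A quick check (pool/dataset names are placeholders):

```shell
# 'standard' means only explicitly requested sync writes hit the ZIL,
# and SMB clients normally don't request them
zfs get sync tank/editing

# Forcing sync on every write is the only way a SLOG would be exercised
# here, and it would reduce throughput rather than improve it:
# zfs set sync=always tank/editing   # not recommended for this workload
```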

But if “the performance [you] want” is “line rate out of the NIC”, you will NOT achieve it without ditching the SAS drives along with the 9500 HBA, and going full NVMe (directly on EPYC lanes, not through Tri-Mode).

What about a “Fusion Pool”/metadata vdev? I haven't seen much written about it or real-world use cases. But I'm thinking about which parts of the write request I can take off the spinning disks and place somewhere faster.

So you want to build a sVDEV? Look no further, I started a write-up. It’s made a tremendous difference in my use case.

An sVDEV has risks associated with it, but the benefits far outweigh them. A single pool can host datasets of varying types, some of which can reside entirely on SSDs for fast performance while archival-oriented datasets can reside almost entirely on HDDs to save $$$. Yet all of them will benefit from super-fast metadata and the small file performance can also be boosted significantly.
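For reference, adding a special vdev and steering small blocks to it looks roughly like this (pool, dataset, and device names are placeholders; note the special vdev must be redundant, e.g. mirrored, because losing it loses the whole pool):

```shell
# Add a mirrored special (metadata) vdev to an existing pool
zpool add tank special mirror /dev/sdx /dev/sdy

# Optionally route blocks at or below 64K on a dataset to the SSDs
# as well, not just metadata
zfs set special_small_blocks=64K tank/projects
```

Setting `special_small_blocks` per dataset is what allows the mixed layout described above: small-file datasets live almost entirely on flash while bulk video data stays on the HDDs.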

Whether all that makes sense in a video-editing setup with that many disks is a different question. I'm glad your card is 100GbE! I imagine you'd want to go with 21 two-drive mirrors for the highest IOPS possible. Whether a sVDEV adds much on top of that is another question.

Lastly, a 21-vdev pool consisting of mere mirrors will have a higher risk profile than a smaller collection of Z2 vdevs. I presume you're using this mostly for scratch and not archival purposes? If so, all will be well. Otherwise, figure out how to safeguard the data: 21 opportunities for two drives to hose the pool is a significantly higher risk than I can live with (I use Z3, but my use case is a quasi-WORM pool).
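For comparison, the two layouts under discussion would be created along these lines (device names abbreviated; both consume 42 of the 44 drives, leaving spares):

```shell
# Option A: 6 RAIDZ2 vdevs of 7 disks each (any vdev tolerates 2 failures)
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 \
  raidz2 da7 da8 da9 da10 da11 da12 da13
  # ...and so on for the remaining four RAIDZ2 vdevs

# Option B: 21 two-way mirrors (highest IOPS, but losing both disks
# in any single mirror loses the entire pool)
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3
  # ...and so on for the remaining nineteen mirrors
```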


Another odd question… but what about switching from CORE 13.0-U6.1 to SCALE 24.04?

I wouldn't do any fancy Docker containers or VMs. Strictly storage for SMB.

A few notes that catch my attention:

  • multichannel SMB (potentially a LAG of two 40GbE links)
  • ZFS version differences
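On the multichannel point: with the Samba version shipped in SCALE, multichannel can be switched on via an auxiliary parameter. A sketch (treat as an assumption to verify on your version; note that multichannel generally works best with separate IPs per port rather than over a single LACP LAG):

```ini
[global]
    server multichannel support = yes
```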

Great question and I do not have a good answer. For sure, SCALE will get all the new toys and features first.