Drive Recommendations for Network Editing Setup

Goal: reasonably performant, dynamic-access video editing (scrubbing timelines) from 10-60TB of network storage for 2-3 simultaneous, mostly Mac, users.

I’m new to TrueNAS/ZFS and looking for drive recommendations and novice-friendly guides on what to purchase and how best to configure drives for a new TrueNAS (Scale?) server for my small (two-person) VFX/video editing filmmaking studio.

I was doing a lot of searching and eventually found this amazing build thread by Stux. My needs are different and maybe less elaborate, so I bought some used components off eBay (listed below) but haven’t purchased any spinning drives, NVMe, or SSDs yet.

I have (and use and love) a separate UnRAID setup that I will continue to use for cold storage (and Docker and VMs). I love the way that UnRAID lets me just add whatever drives later to expand my storage (provided they’re no larger than the parity drive). We’ve just been working off external project hard drives, shuttling those back and forth, and backing up to the UnRAID, but the stack of those little drives is getting as precarious as our backup hygiene. So I’m focused on building something new with some real read/write/scrub performance for pushing around 4K(-8K) RED and ProRes 444 footage.

I purchased:

  • SuperMicro X10SRi-F
  • w/ Xeon E5-1650v3
  • 10(!)x 32GB DDR4 PC4-19200 2400MHz DIMM ECC Registered
  • HBA LSI 9211-8I flashed IT mode
  • Intel X520-DA2 SFP+ NIC (Optimistic purchase, but it was cheap)

I went with an old Xeon because this is really just going to be a NAS and I don’t anticipate wanting to do too much outside of serving up and storing a lot of footage. Also, RAM is cheap and I was trying to save our budget (we’re filmmakers!) for the drives.

I’d love to have 10-60TB of reasonably quick storage, which I understand will be some mix of (small but fast) SSD/NVMe storage and (large but slow) spinning drives. In an ideal world this would be a thing that could grow with us. What I’d love to avoid is having to keep 4 or 5 tiny drives in service once we outgrow 12TB or whatever it is, and then needing to make another big purchase of 4-5 slightly larger drives, and now we also need a server rack because the mini-tower Silverstone case doesn’t have enough bays.

[image: IMG_9889]

ZFS and TrueNAS are much more intense than UnRAID and have vdevs and pools and metadata(?) and can use RAM for cache/network (but maybe only half the RAM?) and it’s all very confusing. Let’s assume that optimistically I have ~$1200-2000 for now to spend to build this the rest of the way out.

In that regard, I’d buy a 24-drive case. The system you’re putting together can run all of those drives, or however many you start with. And put in an HBA card that can drive them all.
I think that this way will be simpler in the long run.


I’d totally be OK investing in a new case. But obviously I’d like to know what I need to put into it first. Because if you’re telling me that 24 2TB 2.5” SATA SSDs is the only way I’m going to make it happen, or if it needs to be 24 NVMe caddies, or 24 20TB drives, well, any of those is going to have an effect on the landscape (and budget!)

hmm, you have this image:
[image: IMG_9889]

and in regards to the case I recommend something like this:

…and given that you’re choosing an 8i HBA, my advice is to get a 24i for future-proofing your server. You can add up to 16 more drives (if you start with 8 drives) over the years (or months). I’d also advise using RAIDZ2 instead of RAIDZ1. And 8 drives is the sweet spot; don’t go over 12 HDDs in a single vdev.

In regards to the NVMe drives, use one of those internal x16 cards straight to the CPU if you can bifurcate 4x4x4x4.
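
For a sense of scale, here’s a rough lane-math sketch (assuming ~985 MB/s usable per PCIe 3.0 lane; purely illustrative, not a benchmark):

```python
# Rough bandwidth picture for a bifurcated x16 slot (PCIe 3.0, 4x4x4x4)
# versus a 10 GbE SFP+ link. All figures are approximations.
pcie3_lane_mb_s = 985                 # usable throughput per PCIe 3.0 lane (approx.)
lanes_per_drive = 4                   # each NVMe gets x4 after bifurcation
drives = 4

per_drive_mb_s = lanes_per_drive * pcie3_lane_mb_s     # ~3.9 GB/s ceiling per NVMe
slot_total_mb_s = drives * per_drive_mb_s              # ~15.8 GB/s for the whole card
link_mb_s = 10_000 / 8                                 # 10 GbE ~ 1250 MB/s

print(f"per drive ~{per_drive_mb_s/1000:.1f} GB/s, "
      f"slot ~{slot_total_mb_s/1000:.1f} GB/s, "
      f"network ~{link_mb_s/1000:.2f} GB/s")
```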

I never said to use those cages to plug in NVMe drives; …wrong technology to start with 🙂

And you don’t have to populate the whole box. Use what you have planned; the extra space can remain empty until you need more drives, and doing video production, that’s just a matter of time.


I can’t give you advice on that due to my lack of personal experience.
An NVMe drive will go much faster in a local PC than over the network when those fast drives are in the server.
Then again, you’re going to be working with someone else and may need to work out of shared storage.

4 SSDs in a stripe would saturate the SFP+ NIC (4 drives x 4 TB each = 16 TB).
If you have the money, 8 drives in mirrored stripes (same capacity as the 4-drive stripe) would be safer.
A nightly ZFS replication job (I’ve never done it) that backs up the SSD pool to the HDD pool, just in case. Then, once you’re done with a project, move the data from the SSD pool to the HDD pool for storage (8 HDDs in RAIDZ2, 8 x 20TB each = about 100 TB usable space), as you may need the raw footage in the future. B-roll?
But all this you’ll have to read up on. Again, I don’t have the experience.
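
A rough back-of-the-envelope on those numbers (assuming ~550 MB/s per SATA SSD; usable-space figures ignore ZFS overhead):

```python
# Does a 4-SSD stripe saturate a 10 GbE (SFP+) link, and what does
# an 8 x 20 TB RAIDZ2 HDD pool roughly give you? Illustrative figures only.
link_mb_s = 10_000 / 8            # 10 GbE ~ 1250 MB/s ceiling
ssd_mb_s = 550                    # per SATA SSD, assumed sequential read
print(f"4-SSD stripe ~{4 * ssd_mb_s} MB/s vs link ~{link_mb_s:.0f} MB/s")

ssd_stripe_tb = 4 * 4             # 4 x 4 TB striped -> 16 TB
hdd_raidz2_tb = (8 - 2) * 20      # 8 x 20 TB RAIDZ2 -> 6 data drives -> 120 TB raw,
                                  # "about 100 TB" once TiB conversion and overhead bite
print(f"SSD stripe: {ssd_stripe_tb} TB, HDD RAIDZ2: ~{hdd_raidz2_tb} TB raw data")
```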

On drives alone, that much money is kind of short for what you envision 🙁


Note to self: 100 Gig Networking in your Home Lab


ZFS is different. The pool, which consists of vdevs, which in turn consist of the physical drives, will be limited in performance by its slowest member.

A pool made up of a single 6-wide RAIDZ2 vdev can have a read speed of approximately 4 drives, and a write speed as well as IOPS of approximately a single drive.

3 people editing 3 different large video files on the server simultaneously will, depending on the exact type of workload, probably run into a low-IOPS issue.

The large amount of RAM will help you, though, as the second time you edit the video it will already be cached in RAM. Maybe you would also benefit from an L2ARC.

As for the pool layout, I would choose pairs of mirrors.
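
A rough rule-of-thumb sketch of how the two layouts scale (per-drive figures are assumptions for a 7200 rpm HDD, not measurements):

```python
# Rule-of-thumb scaling: one wide RAIDZ2 vdev vs. a pool of striped mirror pairs.
DRIVE_SEQ_MB_S = 200     # sequential throughput per drive (assumed)
DRIVE_IOPS = 150         # random IOPS per drive (assumed)

def raidz2(width):
    """Single RAIDZ2 vdev: reads scale with data drives; writes/IOPS ~ one drive."""
    data_drives = width - 2
    return {"read_MB_s": data_drives * DRIVE_SEQ_MB_S,
            "write_MB_s": DRIVE_SEQ_MB_S,     # per the rule of thumb above
            "iops": DRIVE_IOPS}               # ~1 drive's worth per vdev

def mirror_pairs(pairs):
    """Striped mirrors: each pair is its own vdev, so IOPS scale with the pair count."""
    return {"read_MB_s": pairs * 2 * DRIVE_SEQ_MB_S,   # both halves can serve reads
            "write_MB_s": pairs * DRIVE_SEQ_MB_S,
            "iops": pairs * DRIVE_IOPS}

print("6-wide RAIDZ2:  ", raidz2(6))
print("3 mirror pairs: ", mirror_pairs(3))
```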


Didn’t even know they came in 24i. Wow! OK. I’ll definitely be getting one of these! Awesome future expansion! I’ve also started looking at cases, but will hold off until I feel like I’ve got a handle on drives.

Great! Z2 sounds good from what I’ve read and (barely) understood. 🙂 I really am very ignorant about all this. But I mean, I like having terms to go look up! But so: 8 drives per vdev max. That’s very good to know.

Is this a thing that would do the bifurcation (quadfurcation?)? I saw a lot of PCIe 4.0 options, but this one says it will work with 4.0 or 3.0 (which my SuperMicro X10SRi-F is). Am I correct in understanding this would use my x16 PCIe slot? Saturating the SFP+ NIC is probably a good thing. I would seem to have quite a lot of SATA bandwidth (particularly with a 24i card!). I know NVMe tends to be much faster than SATA SSD (less sure with the 4x4x4x4 setup), but might the 8 mirror/stripe across (ideally cheaper) SATA SSDs be safer here? Or is SATA just not going to be up to doing this?

Been poking around some with my ears open. Seems like maybe there isn’t a cool hybrid solution that is half-solid state, half-rust which is somehow psychic and knows which clips to pull into a speedy pool and which ones to leave in the HDD pool. I mean I am keeping my UnRAID and will drop files over there as necessary.

Sane (wealthy) people are just buying a Synology/QNAP and moving on. But even with their out-of-the-box solutions, I hear them running into all sorts of issues. I wanted a little more flexibility/expandability. I think most Synology boxes have 16GB or 32GB of RAM. Plus I wanted a solution where it’s possible for me to dig in and, say, diagnose and perhaps even solve stuttering on 4K 60-120fps multicam playback.

I suspect that editing is an inherently tricky thing for a NAS. I need to have massive video files, but it’s only occasionally that I will want to just hit play and sit back, where buffering would make sense. I’ll spend a lot of time on the same five-minute scene, just tweaking the ins/outs or trying out different takes or sound effects or music. And then just when I’ve made up my mind, BAM! I’m going to want to zoom out and sit back and watch how the previous scene dovetails into the one I’m working on and how it ties into that next scene too. It’s a profoundly non-linear process, until it’s linear.

I worry about this too. I know there are bound to be some compromises, and I anticipate investing more over time. Am I correct in thinking that if I switched to 8x 10/12TB spinning drives (~40-50TB usable space), or even just focused on making the 4-8x SSD pool, that maybe now I’m in the neighborhood of that price range?

This is very helpful. I’m assuming this is agnostic of the drive type. So if we’re talking about spinning drives, I assume with a pool made of a single 6-wide RAIDZ2 vdev, I could expect to read at 4x what a single one of those drives would be capable of, and then write and IOPS would be the same as a single drive, which is to say, kinda crap. But if instead of spinning drives those were SSDs or NVMe drives, write and IOPS are maybe less crappy.

I see… Hrmm. As a hypothetical: let’s say one (slightly obsessive but ruggedly handsome) editor is just blitzing that random read/seek/scrub thing with dozens of shifts and reads per 5-second interval when they’re not up getting another Red Bull, and another (more sane and more beautiful) person just wants to go through and log the footage, maybe only starting a new clip at a less frantic 1-2 clips in the same 5 seconds. Is that an IOPS thing? And then (less frequently) there’s a sound person or colorist or VFX artist gently looking at maybe separate files from those (sometimes the same), but also occasionally writing rendered clips into the same project folders (but not overwriting). I assume we’re definitely in IOPS red-flag territory there?

I was hoping the biggish RAM would help. I don’t know this for certain, but I suspect there’s not a lot of writing happening as I edit. Maybe some meta-information about where ins and outs happen and weird L-cuts and stuff, but not writing large amounts of data until I render (or tell the NLE to render). It’s mostly reading from a slew of very large, highish-bitrate files, but it’s bursty, if that makes sense. Like I’ll want maybe 200MB/s for ten seconds, or 400MB/s for four seconds and then 200MB/s (from a different spot in the file) for half a second, etc. I imagine it’d be maddening for anything optimized to read sequentially.

I really appreciate your thoughts and wisdom on this, guys. I’m definitely in over my head.

There is too much detail here IMO. At the end of the day it boils down to the following options:

  • Local edit and archive on HDDs (likely RAIDZ2); that’s what I’m doing and probably my recommendation to start with
  • Mirror SSDs on NAS for edit, and archive on HDDs
  • All SSD on NAS

Lastly, forget about bandwidth (i.e. throughput) for your use case. The limiting factor for video editing is IOPS (i.e. latency). The effective speed of HDDs for your use case is at best(!!!) a tenth of the linear speed. More likely it will be something in the range of 1-5 MB/s plus seek times. In short, forget about editing off of HDDs.
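
For intuition, a crude estimate with assumed seek and latency figures (a sketch, not a measurement):

```python
# Crude effective-throughput estimate for scattered reads from a 7200 rpm HDD.
seek_ms = 9.0            # average seek time (assumed)
rotational_ms = 4.2      # ~half a revolution at 7200 rpm
transfer_mb_s = 200      # sequential rate once the head is positioned (assumed)
io_size_kb = 128         # hypothetical random read size from an NLE

per_io_ms = seek_ms + rotational_ms + (io_size_kb / 1024) / transfer_mb_s * 1000
ios_per_s = 1000 / per_io_ms
effective_mb_s = ios_per_s * io_size_kb / 1024
print(f"~{ios_per_s:.0f} IOPS, ~{effective_mb_s:.1f} MB/s effective")
# ~9 MB/s here; with smaller I/Os it drops into the 1-5 MB/s range quoted above.
```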

NVMes tend to be cheaper than SATA SSDs these days.


I think the most practical approach is to have “current” projects on an NVMe pool and then archive projects to an HDD pool.

An ASUS Hyper M.2 should work well in an X10SRi and provides good cooling for the NVMe drives.

Ouch! You’re talking about random access into very large files, much too large to fit into 320 GB of RAM (256 GB, since you only have 8 slots). I think that those doing editing directly on the NAS go for all-NVMe setups.
L2ARC can help with directory traversal and reads (but your working set is likely way too large). A special vdev can further help with metadata writes. But editing the data itself will always go to the pool, and sustained writes will be capped by the slowest (spinning) drives.

The random I/O part requires high IOPS, i.e. multiple vdevs; with mirrors, that implies buying two or three times the capacity you want.
RAIDZ is more space-efficient, but is not good for IOPS.
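
To put numbers on that trade-off, a quick comparison for the same 8 drives (assumed 4 TB each, raw figures ignoring ZFS overhead):

```python
# Usable capacity vs. vdev count for 8 x 4 TB drives (illustrative only).
drives, size_tb = 8, 4

pairs = drives // 2                        # 4 two-way mirrors
mirrors_usable = pairs * size_tb           # 16 TB usable, 4 vdevs' worth of IOPS
raidz2_usable = (drives - 2) * size_tb     # 24 TB usable, 1 vdev's worth of IOPS

print(f"4 mirror pairs:  {mirrors_usable} TB usable, IOPS of ~{pairs} drives")
print(f"8-wide RAIDZ2:   {raidz2_usable} TB usable, IOPS of ~1 drive")
```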

SATA SSDs have no future, so that’s not a good direction to take for a server that is going to evolve. (I have one with 16 x 3.84 TB, but it’s not required to evolve further.)
So, it’s rather:

  • HDDs for massive archival of the footage (copy rushes to a workstation to edit locally, and then send the final result back to the NAS); or
  • full NVMe for remote editing (in that case, I’d go for EPYC or dual Scalable to have more PCIe lanes, quite possibly by buying a whole refurbished server with chassis, motherboard and CPU).

Someone please explain to me if this will destroy the ARC on the server itself.

The reason I ask is because when I loaded an 8-GiB video file from an SMB share, via a Windows client with Avidemux, it absolutely killed my ARC. “Services” RAM went sky-high, and ARC shrank accordingly.

I always thought that the application on the client would use the RAM on the client’s PC itself.

Requesting a video file, to be served up via SMB, I had believed would simply “send” the data to the client, which would be held in the client’s RAM.

This makes me tentative about editing large video files directly via SMB.

You don’t need to. If you go for HDDs, pick a chassis and backplane with SAS expanders to plug into the single HBA, rather than stacking multiple 9305-16i/24i cards.

There’s NO magic psychic solution.


It’s called bifurcation, …to any permutation and count.
Then you’d specify 8x8 or 8x4x4 or 4x4x4x4.
If someone calls it quad, tri, and bi… that’s a linguist, not a tech person. The concept is forking.

As for that link to the “Quad M.2 NVMe SSD to PCIE x16 Adapter Soft RAID Array Card with LED”: all you need is bifurcation. Stay away from anything that says RAID.


X10SRi has one x16 slot, which I believe is bifurcatable.

Then a few more x8 slots.

Using enterprise SAS SSDs is a practical option, as realistically an old platform does not support that many NVMe drives.


Wow! Thank you guys so much! You have probably saved me weeks-months of frustration.

Dammit! 🙂 Sounds like my dreams of fast, massive (precognitive) storage may need to wait. It seems like now I’m focused on building an NVMe RAIDZ2 pool for active collaboration work. That would probably have 4 x 4TB drives (that price jump to 8TB drives, though!). And then separately (optionally) a pool of (spinning, yeah?) drives that will let me offload from that, ideally much faster than backing up to my UnRAID (last full backup took ~18 hours!). Years from now, when 16TB NVMe drives are cheap and plentiful, I could upgrade and it would presumably still work, assuming the standard is backward compatible? Is there a flavor of drive I should be looking at?

Awesome! Purchased. Thanks for the recommendation. And for the inspiration for this project with your X10SRi-F build!

Clearly I am all for recycling older but still viable hardware (possibly owing to my advancing years). Maybe not this exact thing, but something like this lot of 5 800GB SAS SSDs is the sort of thing you’re talking about? I know it’s tricky with SSDs since there’s a limited number of writes. My understanding is that ZFS is pretty great about catching/fixing errors to avoid corrupt files. Are there former-server red flags I should be looking out for?

I don’t really know about L2ARC or metadata special drives, but these are words that Wendell from L1Techs says. Maybe it would make sense to add another (couple?) NVMe drive(s) in an x8 slot to handle that?

More like:

The point is you can fit more SAS SSDs than you can NVMe SSDs, but the capacity is about the same.


You probably don’t need to worry about it; the ARC should take care of it.


You are going to have a problem with a lack of PCIe lanes on that X10SRi. If you want to use old Xeons, then get rid of the X10SRi and pick up an X10DRi and an extra E5 CPU, about $160 on eBay. That gets you 3 Gen3 x16 and 3 Gen3 x8 slots. This would let you set up 2 Hyper M.2 cards with 8 2TB NVMe drives in a 2-vdev RAIDZ2: 11-12 TiB of fast storage for a grand. You would still have slots for 10-gig NICs, a 12Gb SAS controller, etc. You could also use the 10 SATA ports for your slow storage.
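
A quick sanity check on the usable-space math (ignoring ZFS overhead; the ~11-12 TiB figure lines up with a single 8-wide RAIDZ2, while two 4-wide RAIDZ2 vdevs would land closer to 7 TiB):

```python
# Usable space for 8 x 2 TB NVMe under two possible RAIDZ2 layouts (illustrative).
TB_PER_TIB = 1.0995   # 1 TiB ~ 1.0995 TB

def tib(raw_tb):
    return raw_tb / TB_PER_TIB

single_8wide = (8 - 2) * 2        # 6 data drives -> 12 TB
two_4wide    = 2 * (4 - 2) * 2    # 2 + 2 data drives -> 8 TB

print(f"8-wide RAIDZ2:     {single_8wide} TB ~ {tib(single_8wide):.1f} TiB")
print(f"2x 4-wide RAIDZ2:  {two_4wide} TB ~ {tib(two_4wide):.1f} TiB")
```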


Or 2 pairs of mirrors. This has the advantage that you could easily expand the pool by adding 2 more drives if the need arises.

What do you estimate the total data throughput is going to be on these drives, say, per month?


Like, a lot more, it would seem. I’ve never used SAS drives at all, but I really like the idea. Thank you. Can I assume a 4x 4TB NVMe pool (on my shiny ASUS Hyper M.2!) would outperform 4x 3.84TB SAS SSDs on this system? If it were 8 (or more?) 2TB SAS SSDs, would that be any better?

I like the sound of that. I think I read that using mirrors can make for better IOPS (presumably at the cost of capacity).

I don’t really have solid numbers for, like, read/seek sorts of things, but in terms of ingesting new footage and reading that back out to the backup, I’d guess it’s probably something like 8-10TB per month, depending on the project, and maybe a little more for a busy month. It’s not crazy. With the SAS SSDs I was reading how some of them are rated to be rewritten once a day; we’re not doing anything like that!
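
Putting rough numbers on that (a hypothetical pool of 4 x 3.84 TB SAS SSDs rated at 1 DWPD; adjust for the actual drives):

```python
# How far 8-10 TB/month of writes is from a 1-DWPD endurance rating.
drive_tb, drives, dwpd = 3.84, 4, 1.0
rated_tb_per_day = drive_tb * drives * dwpd   # ~15.4 TB/day the pool is rated for
monthly_writes_tb = 10                        # upper end of the estimate above
daily_writes_tb = monthly_writes_tb / 30      # ~0.33 TB/day actual
print(f"~{daily_writes_tb / rated_tb_per_day:.1%} of the rated write budget")  # ~2%
```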


https://www.techtarget.com/searchstorage/feature/NVMe-SSD-speeds-explained

…but you’re still limited by network speed.

I guess that number is no longer the point 😆
