Drive Recommendations for Network Editing Setup

In many ways that diagram is misleading/incorrect.

SATA is 6 Gbps per “port”, i.e. per drive.
SAS is 12 Gbps per “port”.
NVMe uses PCIe, with multiple lanes per drive… but in practice it's limited to x4.

The point is that the diagram above should read 16 GB/s, not 16 Gbps. It's 8x faster.

PCIe3 is 8 GT/s, or about 1 GB/s (bytes, not bits) PER LANE.

So a PCIe3 x4 NVMe drive will do up to 4 GB/s, PCIe4 is twice as fast… PCIe5 is twice as fast again… and PCIe6 doubles it yet again.

Ergo, the max speed you will get out of the x16 PCIe3 slot on the X10SRi is 16 GB/s.
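
If it helps to see the arithmetic, here's a quick back-of-the-envelope sketch in Python (the 128b/130b encoding factor is the only refinement over the round numbers above):

```python
# Usable bandwidth per PCIe lane: line rate (GT/s) x 128/130 encoding / 8 bits per byte.
GT_PER_S = {"PCIe3": 8, "PCIe4": 16, "PCIe5": 32}

def lane_gb_s(gen: str) -> float:
    """Approximate usable GB/s per lane (128b/130b encoding, ignoring protocol overhead)."""
    return GT_PER_S[gen] * (128 / 130) / 8

for gen in GT_PER_S:
    per_lane = lane_gb_s(gen)
    print(f"{gen}: ~{per_lane:.2f} GB/s per lane, "
          f"x4 ~ {4 * per_lane:.1f} GB/s, x16 ~ {16 * per_lane:.1f} GB/s")
# PCIe3: ~0.98 GB/s per lane, x4 ~ 3.9 GB/s, x16 ~ 15.8 GB/s
```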

That holds whether you're going with 4x NVMe drives on a bifurcation card… or with an 8i or 16i 12 Gbps SAS HBA.

24 Gbps SAS is actually a thing… but with PCIe3-generation technology you're really talking 12 Gbps SAS.

ANYWAY, you can attach 16 drives directly to a 16i SAS HBA sitting in the x16 slot.

That gets you 16 GB/s of bandwidth to the CPU… and 16x 12 Gbps to the disks.

12 Gbps is about 1.5 GB/s raw (closer to 1.2 GB/s after 8b/10b encoding), so the disks could in theory move somewhere between ~19 and 24 GB/s in aggregate… but the bottleneck is the 16 GB/s slot.

Using expanders you could connect 24 SAS SSDs (or more), but the most you will ever see out of that slot is 16 GB/s.
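
Same sketch for the HBA case, assuming 8b/10b encoding on the SAS-3 links (so a bit under the raw 1.5 GB/s per lane):

```python
# 16 drives on a 16i SAS-3 HBA: the drive links can move more than the slot can carry.
SAS3_LANE_GB_S = 12 * (8 / 10) / 8          # 12 Gbps with 8b/10b encoding -> ~1.2 GB/s per lane
PCIE3_X16_GB_S = 16 * 8 * (128 / 130) / 8   # ~15.8 GB/s upstream to the CPU

drives = 16
disk_side = drives * SAS3_LANE_GB_S
print(f"disk side: ~{disk_side:.1f} GB/s, slot side: ~{PCIE3_X16_GB_S:.1f} GB/s")
print(f"effective ceiling: ~{min(disk_side, PCIE3_X16_GB_S):.1f} GB/s")
# disk side: ~19.2 GB/s, slot side: ~15.8 GB/s -> the x16 slot is the ceiling
```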

Seems to me, if you like the idea of using SAS SSDs, then up to 16x 12G SAS SSDs connected to a 16i 12G SAS card in the x16 slot would be a good setup.

And you can get chassis that support that too.

BUT

I don't think the CPU is going to keep up with 16 GB/s anyway… and 10 Gbps networking is only about 1.2 GB/s…

Thus you'd start heading toward 25 GbE or better, which would mean you could saturate 10 GbE to at least 2 editors simultaneously (assuming the right switching hardware).

Thus, if capacity is important… you don’t actually need NVMe.

… that’s the point I wanted to get at. No matter how fast the drive(s) are, if you’re at 10 gig, that’s the speed.
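
To put numbers on that (a sketch; the 95% link efficiency is an assumption, real NFS/SMB throughput varies):

```python
# Why the NIC, not the pool, sets the ceiling for each editor.
def link_gb_s(gbps: float, efficiency: float = 0.95) -> float:
    """Approximate usable GB/s of an Ethernet link after protocol overhead (assumed ~95%)."""
    return gbps / 8 * efficiency

POOL_GB_S = 15.8   # roughly what a full x16 PCIe3 slot of flash could push
for nic_gbps in (10, 25, 40):
    per_editor = min(POOL_GB_S, link_gb_s(nic_gbps))
    print(f"{nic_gbps} GbE: each editor sees at most ~{per_editor:.2f} GB/s")
# 10 GbE: ~1.19 GB/s, 25 GbE: ~2.97 GB/s, 40 GbE: ~4.75 GB/s
```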

And just to confirm, that's because it's PCIe3-gen and I only have the one x16 slot, yeah? I would realistically need either something with more slots or PCIe 4/5 to exceed that 16 GB/s bottleneck. (So we're clear: as bottlenecks go, 16 GB/s does not seem too shabby to me!) But it would mean starting over with a new motherboard, CPU(s? as @roberth58 suggests), and RAM. I don't think I'm going there, I'm just trying to understand.

I don't think a 25 Gbps setup would give me much, and I suspect I'm going to have a hard time soaking even the 10 Gbps with our current hardware (other than when I'm backing up).

I really appreciate you guys weighing in. I'm sure it's apparent how ignorant I am about these things, so your wisdom is hugely helpful. I think I will stick with the 4x bifurcated NVMe in my x16 slot and populate it with 4TB drives for now, in either a RAIDZ2 pool or a 2x2 mirror. That will be the active work area where I at least experiment with editing across the network.
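
For my own sanity, the rough capacity math for those two layouts (ignoring ZFS overhead; with four 4TB drives they land in the same place):

```python
# Rough usable space for four 4 TB drives (ignoring ZFS metadata/slop overhead
# and the usual advice to keep some free space).
DRIVES, SIZE_TB = 4, 4

raidz2_usable = (DRIVES - 2) * SIZE_TB    # RAIDZ2 loses two drives' worth to parity
mirror_usable = (DRIVES // 2) * SIZE_TB   # 2x 2-way mirrors lose half the raw space

print(f"RAIDZ2 (4-wide): ~{raidz2_usable} TB usable, survives any two drive failures")
print(f"2x2 mirrors:     ~{mirror_usable} TB usable, survives one failure per mirror,")
print("                 generally better IOPS and easy to grow by adding another pair")
```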

With the x16 slot occupied by that, would it make sense to use an x8 slot, bifurcated x4x4, with two more NVMe drives for the special metadata ZFS pixie dust? Could I get away with SATA SSDs for that? How much capacity would I need? I've heard this can help with IOPS/latency, which @ChrisRJ points out will likely be my bugbear.
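
From what I've read so far, sizing goes roughly like this; the ~0.3% metadata figure is just a rule of thumb I've seen quoted, not something I can vouch for:

```python
# Very rough sizing for a ZFS special (metadata) vdev.
# Assumption: metadata on the order of ~0.3% of pool data; if you also send
# small blocks there (special_small_blocks), budget considerably more.
def special_vdev_gb(pool_data_tb: float, metadata_pct: float = 0.3,
                    headroom: float = 2.0) -> float:
    """Estimate in GB, padded so the special vdev never gets close to full."""
    return pool_data_tb * 1000 * (metadata_pct / 100) * headroom

for pool_tb in (8, 16, 32):
    print(f"{pool_tb} TB of data: ~{special_vdev_gb(pool_tb):.0f} GB special vdev")
# 8 TB: ~48 GB, 16 TB: ~96 GB, 32 TB: ~192 GB -- and it must be mirrored,
# because losing the special vdev loses the whole pool.
```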

Later down the line (because of budget, mostly) I'll look at adding a larger-capacity HDD pool, or possibly some kind of high-capacity U.2 SSD pool (they come in 16TB!), to feed into and out of the speedier pool at less-frustrating speeds.

And 30TB too, but I ain't got that type of money.

No. Given the money thing, I would use an 8-12 HDD Z2 in an x8 slot. Speed-wise that should be enough for a 10 gig NIC. Not for live editing (the latency).
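
Rough numbers behind that, assuming ~180 MB/s sustained sequential per drive (an assumption; real drives vary):

```python
# Why an 8-12 wide RAIDZ2 of HDDs can keep a 10 gig NIC busy for sequential work.
HDD_MB_S = 180                     # assumed sustained sequential rate per 7200 rpm drive
TEN_GBE_MB_S = 10_000 / 8 * 0.95   # ~1190 MB/s usable on a 10 GbE link

for width in (8, 10, 12):
    data_drives = width - 2        # RAIDZ2 keeps two drives' worth of parity
    streaming = data_drives * HDD_MB_S
    print(f"{width}-wide Z2: ~{streaming} MB/s sequential vs ~{TEN_GBE_MB_S:.0f} MB/s NIC")
# Sequential throughput is fine; random, scrub-the-timeline I/O is where
# HDD latency hurts, which is why it's not for live editing.
```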

…in the end it's about what can be done within the money available.
A fast 16TB for projects (unless that's not enough for editing RAW) and, still fast for what it is, HDD storage. You'll have to think about storing backups elsewhere too. ZFS is oops-resistant, but nothing is oops-proof.

Oops = WTF is going on, how can this happen, what am I gonna do. :face_with_raised_eyebrow:
Oops, the power went out, came back up and blew up the refrigerator along with everything plugged in, kind of unexpected thing. (…maybe too much)
Oops, this new guy broke the box. That's not unheard of.
Oops, the power supply (PSU) fried every component in the case. (it can happen :frowning:)

Say you work on a project or two. How much space would you need to work comfortably, based on your experience?

It depends a lot on the project (documentary has a pretty high noise-to-signal shooting ratio). We've been working from 2TB and, more recently, 4TB external SSDs, which works OK for short films, but we've got some larger/longer projects we'd like to work on. So the idea of even 8-12TB of active project space is pretty appealing.

Parts have been arriving all week. The ECC RAM was on my doorstep when I got home tonight.

I have an older Crucial 128GB SATA SSD I was assuming I’d use as a system drive unless you guys tell me different.

To populate the slots on my ASUS Hyper M.2 X16 PCIe 3.0 X4 Expansion Card V2 I'm shopping for some less-expensive-but-still-decent M.2 NVMe drives. I see some Silicon Power drives around $200, or is Patriot better? Opening a can of worms here, but any recommendations for decently long-lived 4TB drives to go into the bifurcator for this?

That is fine. No need to change that.

I have a card like that with 4 NVMe drives in a Storage Spaces (Windows) pool, running VMs for about a year now, using Samsung drives. Not a hint of wear.
I don't have personal experience with any other brand.

…I also have a mirrored NVMe pair here at home:
1x Samsung SSD 990 PRO 4TB
1x WD_BLACK SN850X 4000GB

Well, the WD_BLACK just died. Tried the RMA, and WD is now SanDisk? The whole RMA process is a pain. Got so frustrated that I'll look at it again tomorrow.
So yes, I double down on that Samsung advice.

Sorry to hear about your WD_BLACK. I've also had great luck with Samsung SATA SSDs (and those little T7s have been a godsend).

I did something sort of foolish. I bought a pair of used U.2 SSDs (1.92 TB Samsung) as a test just to see if SAS SSD might be viable for editing. The prices are still pretty steep for higher capacities, but I do like the idea of a mirrored pair of 8TBish (and maybe 16TB when those become more affordable) as responsive larger capacity storage. I’d never even heard of U.2 until I came here. This is just an experiment. I don’t know if it will work. But if it does it’d sure be interesting.

U.2 SSDs are NVMe. Awesome and all, but you’ll need a PCIe or M.2 to U.2 adapter.

Uh oh. Won't that LSI 9211-8i HBA with an IcyDock 5.25” to 4x SATA/SAS “backplane” get me there? I've seen those PCIe cards where the drives sit directly on the card. Will I need one of those?

Yes, one of those will work as long as the PCIe slot supports bifurcation. You can also get a PCIe card with 4 SFF-8643 ports for about $30; it needs an x16 PCIe slot.

Making me want that dual Xeon setup (with three x16 slots!) you suggested more and more! I think the SuperMicro board I've got is pretty good about bifurcation. My single x16 slot will be occupied with the 4x NVMe card that I'll probably turn into a RAIDZ1/Z2 or mirror pool. I do think I'll be adding more (possibly A LOT more) capacity with bigger HDDs in the future. But the U.2 thing is interesting. Those 15+ TB drives are going for ~$1000-1500 used. I don't have enough $$ for them right now, and I'm not 100% sure they'll work, but it could be a neat way to add some pretty dang fast™ storage. So I want to try it with these smaller drives (that I got for a steal).

For the NVMe drives, do I need/want to make sure I get Gen3 NVMe (vs Gen4)? I think they're backwards compatible, and I don't know if it matters for single drives, but might it be a problem with my bifurcation card? The prices per drive are similar, and getting Gen4 would be more forward-looking, but if there's some arcane overhead where the drive is vastly outrunning what the card/my bus is capable of, I don't want to get bit by that.

I know I'm asking a lot of questions, but I did just look up SFP28 NICs and I see some pretty reasonably priced options. Would it make sense to future-proof (ha!) with one of those in place of the SFP+ NIC I have? Would my E5-1650 and X10SRi-F be capable of handling 25 Gbps (even if my drives are struggling to saturate 10 Gbps)? I know it's not Ethernet as such, but SFP is pretty backwards compatible, isn't it?

Get the Gen4s if the price is close and you think you will upgrade to a Gen4 motherboard in the near future. Gen5 is the current new thing, so if you upgrade in a few years your Gen4s will still be old tech.

Do you have a switch that supports SFP28? Ethernet is a protocol; SFP, SFP+, QSFP+, QSFP28 and SFP28 all support Ethernet. SFP backwards compatibility depends on the NIC, the transceiver and the switch. As an example, my Brocade switch's SFP+ ports do not support 2.5 gig, but a MikroTik transceiver will connect at 2.5 and tell the switch it's a 10 gig connection. SFP can be complicated, so do your research before jumping in. However the parts are cheap: the 50 gig QSFP28 mezzanine card in my server cost $15 and happily connects at 40 gig to my main switch with a QSFP+ cable.

@Orlando_Furioso, why are you already buying stuff when so many questions still seem open?

Possibly foolhardy, but I'm eager to start bumping my head into problems. Which questions seem open to you? As you laid out before: HDDs and SATA just aren't going to have the IOPS to let me randomly scrub 4K timelines, so we're stuck with SSDs, and as @etorix stated, there's not much future in SATA SSDs. So for NVMe on this X10SRi-F with only a single x16 slot, we're sort of at the end of that particular road, aren't we?

An x16 slot alone is good for 4 NVMe drives. I'd think U.2/U.3, possibly as refurbished data centre drives, rather than M.2. With a PEX 8747/8749 switch, that could be 8 drives.
Beyond that, you're looking at another platform: EPYC or (dual) Xeon Scalable.
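
A sketch of the trade-off, under the same ~1 GB/s-per-Gen3-lane assumption; the switch multiplies drive connections, not bandwidth:

```python
# 8 NVMe drives behind a PEX 8747-style switch in a single x16 PCIe3 slot:
# each drive still gets x4 to the switch, but they all share the x16 uplink.
LANE_GB_S = 8 * (128 / 130) / 8    # ~0.98 GB/s per PCIe3 lane
drives, lanes_per_drive, uplink_lanes = 8, 4, 16

downstream = drives * lanes_per_drive * LANE_GB_S   # ~31.5 GB/s of drive-side bandwidth
uplink = uplink_lanes * LANE_GB_S                   # ~15.8 GB/s to the CPU
print(f"downstream: ~{downstream:.1f} GB/s, uplink: ~{uplink:.1f} GB/s "
      f"({downstream / uplink:.0f}:1 oversubscribed if every drive streams at once)")
```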

As I said earlier, sell the X10SRi on eBay and get an X10DRi, an E5 CPU and a heatsink. That would be the cheapest upgrade, costing you about $80. It would give you a lot more PCIe lanes, so you could use the cheap ASUS Hyper cards, and it would give you time to research future upgrades.

It may be further down the line, but I certainly am curious about getting something with more PCIe x16 slots. (Seriously thought some of those AMD specs were typos.) To the extent it's possible, it'd be great if the ECC RAM I purchased (10x 32GB 2Rx4 PC4-2400T DDR4-19200 RDIMM ECC) could work in a new setup. Some of the options I saw with the X10DRi were cheaper than the (few) PLX/PEX splitter boards I found (and only on AliExpress).

I went into this sort of assuming I was building a SATA thing with a mix of SATA/SAS SSDs and spinning drives, so my PCIe slots would mostly be for the NIC and HBA.

Had no idea this was a thing. Seems like magic! How stable/reliable is PLX/PEX PCIe switching, specifically for PCIe3 and U.2 NVMe? Getting 8(!) U.2 SSDs into some kind of a pool sure sounds like the dream. But if it's flaky/brittle, I suspect I'd be happier with smaller and solid.

It is stable and reliable, BUT the switch chip comes at a cost, uses power and requires cooling. So, as always, make sure yours has appropriate airflow, especially if you end up fitting a card designed for servers into a consumer-style case.

Maybe a little blower fan. I've never ordered from AliExpress before, but it's the only place I could find anything. Warning: the Ali link is thoroughly obnoxious, but this x16 to 8x NVMe card seems like it might work to give me up to 8 U.2 drives. I watched a video about a beta TrueNAS feature for expanding ZFS pools, so maybe I could start with a more modest allotment and then add more drives as I can afford/need them.