Curious if anyone has recommendations for a smaller TrueNAS build using all SSDs.
My goal is mostly to reduce power consumption, but not at the sacrifice of capability/speed and I/O if we do need it. It will essentially be a proxy storage server, only it would hold all the 10-bit h265 versions of our media, leaving the big 3.5" platters idle until we actually need them. If we need them at all.
The #1 reason I want to use TrueNAS for this is to utilize zfs send for offsite. If it’s a small and fast enough server, we can afford to keep it synced at all times.
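For context, what I have in mind is the usual incremental snapshot-and-send loop (a minimal sketch; the dataset names and SSH target are placeholders, and TrueNAS can do the same thing via a replication task):

```sh
# Snapshot the proxy dataset, then ship only the changes since the last sync.
# "tank/proxies", "backup/proxies" and "offsite" are placeholder names.
zfs snapshot tank/proxies@2024-06-02
zfs send -i tank/proxies@2024-06-01 tank/proxies@2024-06-02 | \
  ssh offsite zfs receive backup/proxies
```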
Originally, I was just going to try something wild, like see if it was possible to just start with a pair of 8TB NVMe mirrors. Or do a 3-way Z1 with really good hardware. But then you get into the lane battle and buying used or questionably sourced drives. And that’s a little out of my depth.
I do have an AMD 7900 on a Supermicro H13SAE-MF with 96GB of RAM up and running.
But I’m a little stumped where to go from here. The motherboard has 4 SATA ports, plus an x8 and an x16 slot; if both slots are populated, the x16 drops to x8/x8. Bifurcation support on slot 6 (PCIe 5.0 x16) is Auto or x4x4x4x4 only in the 2.0a BIOS.
My HBA knowledge/experience is pretty limited. Since it’s a smaller tower, I can’t imagine any way I would use more than 6-10 drives.
Should I keep it simple and just go all SATA SSDs, or are we hitting a point where NVMe options can be utilized for smaller servers as well?
This points to SATA. You can get 3.84 TB and 7.68 TB enterprise SSDs, possibly refurbished.
The catch is that you’ll quickly need an HBA (+10 W), but you have the lanes: x8 for the HBA, x8 for an SFP+/SFP28 NIC, and you’re set.
IcyDock makes pricey but convenient docks for 8x SATA in one 5.25" bay and 16x SATA in two 5.25" bays.
M.2 NVMe drives are limited in capacity. U.2/U.3 NVMe can go very large, but those can have quite high idle power, and it adds up.
I currently have dual 10GbE on the server in the x4 slot, and just 3-4 clients on the switch. No VMs or apps. Just boring storage needs over SMB. Sometimes productions get crazy though, and we bring more clients to the LAN party.
The goal here was to keep it manageable and synced in the cloud, and to not have a rack-mounted jet engine near a desk. Something between 12-16TB at the start would be pretty nice.
Currently in a Fractal Pop Silent with 2 front-facing 5.25" bays. I had originally planned to install 2x 6-bay 2.5" IcyDocks in the front, but work got in the way and I sidelined the project for a pretty long time. The AMD 4004 has since come out.
So I guess that’s the catch. I’m not looking for a big build. I want to keep it small and fast and capable of handling 25GbE and as many clients as I can.
I kept falling back to a SATA SSD solution due to complication and cost. HBA is fine. The motherboard only has 4 SATA ports. But if you start looking at 8TB enterprise SATA SSDs or 8-16TB mirrors of NVMe, there are options and the costs aren’t wildly different.
I could add 4 more lanes with a 4464P on this motherboard, if needed.
Interesting about NVMe and power. I will have to read up on that.
As others have said, there is no point in having light-speed disks if your data is in the networking slow lane. So some questions…
What are your use cases for storing these videos?
Simple storage with occasional access?
For editing (which requires repeated access to various parts of a single large file)?
Streaming to your TV?
What is the access pattern?
Most recent videos most frequently?
Random access by strangers over the internet?
What are your performance goals?
Response times for accessing an individual file?
Throughput, e.g. for bulk network transfers?
What is your network environment?
End-to-end LAN bandwidth?
Internet connection speed?
But largely speaking I would agree with what has already been said.
HDDs and more RAM is the cheapest way to deliver performance. An HDD RAIDZ array may well provide more bandwidth than your network, making the network the limiting link.
So perhaps a small, cheap NVMe for a boot drive, and 4x 6TB drives in RAIDZ1 would suit your needs.
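For reference, a minimal sketch of creating that layout from the shell (the pool name and disk paths are placeholders; the TrueNAS UI does the equivalent when you build the pool):

```sh
# One RAIDZ1 vdev of four disks; "tank" and the device paths are placeholders.
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
```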
Rationale
RAIDZ1 rather than Z2 because there is another copy offsite and resilvering should complete in a reasonable time.
4x 6TB drives in RAIDZ1 would give 18TB usable disk space. (I have no idea whether this is enough once you take into account TiB vs TB, 80% max utilisation, metadata storage, snapshots etc.)
4x SATA in RAIDZ1 would give you c. 1.8GB/s, i.e. c. 14Gb/s of I/O bandwidth, comfortably above a 10Gb link (rough arithmetic after this list).
With 96GB of memory, almost all metadata will be cached, plus a lot of the most active data. I would expect to see c. 99.9% cache hit rates in most cases.
This is probably the cheapest possible solution: 75% storage utilisation on relatively cheap HDDs.
If you want to speed it up further, you can buy an NVMe to use as L2ARC.
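To make the rough arithmetic behind the capacity and bandwidth figures explicit (simple streaming estimates that ignore ZFS metadata, compression and padding overhead):

```latex
% Usable space: 4 drives, 1 parity, 6 TB (decimal) each, converted to TiB
\[ (4-1) \times 6\,\mathrm{TB} = 18\,\mathrm{TB} \approx 18 \times 10^{12} / 2^{40} \approx 16.4\,\mathrm{TiB} \]
% Keeping the pool below ~80% full, as commonly advised for ZFS:
\[ 0.8 \times 16.4\,\mathrm{TiB} \approx 13.1\,\mathrm{TiB}\ \text{of practical space} \]
% Bandwidth: 1.8 GB/s of disk throughput expressed in bits:
\[ 1.8\,\mathrm{GB/s} \times 8 = 14.4\,\mathrm{Gb/s}, \text{ vs a } 10\,\mathrm{Gb/s} \text{ network} \]
```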
Editing video is the only use case that isn’t health and backup/automation. Editing can be both simple and very complex in terms of file access and location. This particular use case is all proxy / h265 media.
Video files in NLE timelines. Footage can come from all over the pool, though.
Nothing concrete. It’s a proxy server, so lots of streams of h265 pushing to edit bays (clients). And I want to keep it synced offsite at all times. 90% of the time 2 clients are editing, 40% of the time 3-4. If we have a larger production, more clients get added.
I currently have all 10GbE at home and work; the internet has nothing to do with it.
HDD is what I am trying to avoid. I have mass storage for video files. This unit sits on a desk and serves the lightweight version of our infrastructure to everybody, so lowering overall resource utilization is my main goal.
Video editing is a specialised use case. My advice (best guess - I am not a ZFS expert) would be…
Monitor your ARC hit ratio when you have 4 users editing (see the sketch after this list). If it is not 99%+, invest in additional RAM. This will get you the best UI responsiveness when editing open videos.
If you want to speed up initial loads of videos, then you might want to store all video on large NVMe SSDs. If your video access is NOT random and covers a subset of videos, you could use a large NVMe L2ARC instead to cache the active subset.
If your clients are Windows via SMB, nothing will speed up writes: they are asynchronous, so they land in RAM first and get flushed to disk as soon as TrueNAS/ZFS is able to write them.
Make sure that your network is fast end-to-end so that it is not the bottleneck.
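A minimal sketch of both of those checks, using the stock OpenZFS tools (the pool name and device path are placeholders for your setup):

```sh
# Inspect ARC size and efficiency; look at the hit-ratio lines in the output.
arc_summary

# If the hit ratio stays low and RAM is already maxed out, attach an NVMe as L2ARC.
# "tank" and /dev/nvme0n1 are placeholders for your pool and cache SSD.
zpool add tank cache /dev/nvme0n1
```

Note that L2ARC headers themselves consume ARC memory, so a cache device only pays off once RAM is already well provisioned.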