I want to build a rather beefy NAS, primarily for storing 4K video. It needs to be able to handle a good 10 streams. I may also run a few iSCSI shares for VMs.
I have the following drives available and just would like some thoughts on the array setup.
2x 400GB SAS SSDs
16x 8TB SAS drives
4x 800GB SATA drives
They are all enterprise drives.
The two 400GB SAS SSDs will be mirrored as boot drives regardless of the storage pool layout.
The two options I see are:
Option 1: 2x raidz2 vdevs, 8 wide
Or
Option 2: 8x mirrored vdevs 2 wide.
Will probably use all 4 800GB SATA drives for metadata and small files unless someone tells me otherwise.
The system has 320GB of DDR4 ECC RAM and dual E5-2698 v4 CPUs (which I may downgrade, as they're overkill).
Which pool layout would you guys and girls recommend?
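For comparison, the usable capacity of the two layouts works out quite differently. A quick sketch (rough figures that ignore ZFS metadata overhead, slop space, and raidz padding):

```python
# Rough usable-capacity comparison of the two proposed layouts.
# Figures ignore ZFS metadata overhead, slop space, and padding.

DRIVE_TB = 8

# Option 1: 2x raidz2 vdevs, 8 drives wide (2 parity drives per vdev)
raidz2_usable = 2 * (8 - 2) * DRIVE_TB   # 96 TB

# Option 2: 8x 2-way mirror vdevs (half of every mirror is redundancy)
mirror_usable = 8 * 1 * DRIVE_TB         # 64 TB

print(f"raidz2 option: {raidz2_usable} TB usable")
print(f"mirror option: {mirror_usable} TB usable")
print(f"capacity given up by mirrors: {raidz2_usable - mirror_usable} TB")
```

Mirrors buy better random-read IOPS and faster resilvers, but for a mostly sequential video workload that 32TB difference is the number that matters.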
I wouldn't bother with separate metadata/small-file special vdevs, especially for primarily video (large-file) storage. It's just added complexity that will give little benefit for your use case. If you decide to do it anyway, don't forget redundancy.
I wouldn't sacrifice capacity by using mirrors to store video. The highest-bitrate movies are going to average 100 Mbps (12.5 MB/s) or less. "Encodes" and web-sourced video generally use a far lower bitrate.
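As a sanity check on that claim, even the worst case of ten clients all pulling full-bitrate remuxes at once is a small sequential load:

```python
# Worst-case aggregate read load: 10 clients each streaming 100 Mbps video.
STREAMS = 10
BITRATE_MBPS = 100                            # megabits per second per stream

aggregate_mbps = STREAMS * BITRATE_MBPS       # 1000 Mbit/s
aggregate_mb_per_s = aggregate_mbps / 8       # 125 MB/s

print(f"aggregate: {aggregate_mbps} Mbit/s = {aggregate_mb_per_s} MB/s")
# A single 8-wide raidz2 vdev of enterprise HDDs can typically sustain
# several hundred MB/s of sequential reads, so either layout has headroom.
```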
CPU performance isn't really important for video streaming or file sharing. I used a Xeon E3-1275 v6 in my previous mATX build; CPU utilization stayed around ~10% with ~40 containers running and a few large pools. The storage will be the bottleneck before the CPU is.
Not sure what you'd need 320GB of RAM for from your post, but you can't have too much.
I didn't see any mention of graphics. Unless you're sure all clients will be able to direct-stream all formats, you'll want a GPU of some sort. My old build used the Xeon's integrated graphics, which was more than sufficient for transcoding high-bitrate 4K HEVC video. My new build has a low-profile Arc A380; I don't know how many simultaneous transcodes it would be able to handle.
I was leaning towards the raidz2 setup for the reasons you noted.
I was considering the special vdevs because I thought they could help when scanning libraries, etc. I also thought they could help with small files like subtitles and thumbnails.
Think I'll downgrade the CPUs and move a bit of RAM to one of my Proxmox servers.
I have a separate server with dual V100s that is used for some AI stuff, so the streaming will be through this server.
All networking is currently 20Gb between the servers.
Sure, a special vdev for metadata would help, and I considered it for my own build, but ultimately decided the small benefit it would give wasn't worth permanently dedicating 2+ drives per pool. ZFS will cache metadata/folder structure in RAM. I don't know how easily it gets evicted, but I believe it's tunable.
As for subtitles and thumbnails/cover art: subtitles are small files that can easily be loaded from an old, slow spinning disk before the first line even needs to be displayed, and thumbnails/cover art are stored in my Plex metadata on flash.
In the end, I just don't think the juice is worth the squeeze on a system that might be used by 10 simultaneous users. In any situation where a VM would benefit, I'd just use a flash pool. Not trying to talk you out of it, just rambling through the reasons I talked myself out of it. If I ever built a dRAID (another thing I've talked myself out of), I'd consider one for small files to reduce wasted space.
It sounds like this will purely be file storage. In that case, CPU is even less important. My old E3-1275 v6's utilization was always low despite hosting a lot of active containers and sharing a large amount of data.
Just started building it out, and it turns out I can also fit 2 more SATA drives: I could wire the rear two bays to the onboard SATA and keep the current dual-NVMe PCIe card, and then I could fit them all. But I really can't be bothered.
On to downgrading the RAM and CPUs. A first for everything, I guess.