Build 12/16/20 disk NAS server - rack or tower? (In Europe)

Hi. I'm completely new to the TrueNAS world. I currently have 11 disks in a Synology and I'm looking for an upgrade.

I have two ideas: invest in Synology (about 2500€) plus a UniFi UNAS Pro (550€) to get room for 19 HDDs, or go TrueNAS.

My needs are rather simple: a single, big storage pool served over NFS and another pool for some backups. The hardware configuration I'm looking for is a simple, energy-efficient file server, or a file server plus Plex server with HW transcoding.

It must be power efficient and semi-silent. I consider my existing Synology DS1621+ with an RX517 expansion unit holding 11 20TB disks completely unnoticeable - I can't hear the fans from the dedicated room. I'm afraid that a rackmount server will be MUCH, MUCH noisier. Is that a correct assumption?

What about building a tower server? The Fractal Define 7 XL holds up to 20 disks. Would it be as loud as a rackmount?

What's the deal with HBA cards? Do I understand correctly that a 12Gb SAS HBA lets me connect 16 SATA disks in a 4x4 configuration? What about the other 4 disks - can TrueNAS handle them on the built-in SATA controller, or should I look for another HBA card?

What about the disk spin-up power peak? Does a rackmount chassis backplane handle staggered start? And what do I do in a tower build - is there any other way than putting in a 1200W PSU? From my research, disks draw about 6W in operation, but spin-up can take up to 25W, and that's the problem. Or am I missing something?
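To show where my worry comes from, here is my rough back-of-the-envelope estimate (the 6W/25W per-disk figures and the 150W system budget are my own assumptions from datasheets):

```python
# Rough PSU estimate if all disks spin up at once (assumed per-disk figures).
DISKS = 20
SPINUP_W = 25     # assumed peak draw per disk during spin-up
IDLE_W = 6        # assumed steady-state draw per disk
SYSTEM_W = 150    # assumed CPU/board/fans/HBA budget

print("all disks spinning up at once:", DISKS * SPINUP_W + SYSTEM_W, "W")  # ~650 W
print("steady state after spin-up:   ", DISKS * IDLE_W + SYSTEM_W, "W")    # ~270 W
```

So without staggered spin-up I seem to need a PSU sized for roughly 2-3x the steady-state draw, which is why I'm asking.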

Why can't I find any E-ATX motherboard with more than 3 PCIe slots? I initially assumed that E-ATX boards would be full of connectors.

Does virtualized TrueNAS work well if I pass through the HBA cards? If so, I would consider a somewhat more powerful server to handle a few VMs.

My best idea so far is a Silverstone 4U rack chassis (about 800€) with an E-ATX mobo (500€), an Intel 14600 (250€, for Intel QSV with AV1 encoding/decoding), 64GB of ECC RAM, an SSD boot disk and HBA card(s) to handle the disks. Powered by something like a 550W PSU?

I'm a DevOps engineer and I've built a lot of computers in my life, yet I feel completely lost in the world of real servers.

Bonus question: is it impossible to change RAIDZ1 to RAIDZ2? And is it possible to at least add more disks to a RAIDZ1 vDev after the initial build?

Thank you for any clarifications.

I’ll go for the bonus.
No, it is not possible to change from raidz1 to raidz2, apart from destroying and rebuilding.

Yes, it is possible to add new drives to a RAIDZ1 vDev. It is a new feature in ElectricEel; it takes quite some time, it is a bit confusing with regard to space reporting, and in many cases you need to run a rebalancing script afterwards.
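For context, the rebalancing is not magic: the scripts floating around the forum essentially rewrite every file in place so that its blocks get re-striped across the widened vDev. A stripped-down sketch of the idea (illustrative only - the real scripts add checksum verification, snapshot checks and dry-run modes):

```python
# Illustrative only: rewrite files in place so ZFS re-stripes them across the
# expanded vDev. Do not run this on datasets with snapshots (space usage will
# balloon) and never without a backup; the hardcoded path is just an example.
import os
import shutil

def rebalance(dataset_path: str) -> None:
    for root, _dirs, files in os.walk(dataset_path):
        for name in files:
            src = os.path.join(root, name)
            tmp = src + ".rebalance.tmp"
            shutil.copy2(src, tmp)   # the copy is written with the new vDev layout
            os.replace(tmp, src)     # atomically swap the copy over the original

rebalance("/mnt/tank/media")  # hypothetical dataset path
```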

Answer to the bonus question:
Yes, AFAIK it is not possible to increase the Z-level. However, with ElectricEel it's possible to expand the vDev with additional disks. This is a new feature, and I personally wouldn't trust it with critical data without a backup yet. But in theory, it should be possible.

Thank you for the clarification about extending RAIDZ. I found a lot of answers, but I can see that things change quickly over time :slight_smile:

Also, I'm not sure whether it's possible to expand a pool's vDevs asymmetrically. So do some research and tests on your own, just to validate that it works as you expect.

Before being able to give detailed advice, I think we need to understand your needs.

  • How much useable disk space do you need?
  • Would running VMs under TrueNAS meet your needs rather than virtualising TrueNAS under e.g. Proxmox?
  • What network sharing will you be doing? By default NFS does synchronous writes, so you might benefit from a small mirrored SLOG SSD/NVMe vDev. And don’t forget that you will also need a small SSD/NVMe (mirrored?) boot drive, and if you are going to run VMs or apps, then you should also be thinking about SSDs to hold their data.

Don’t forget that the recommended maximum number of drives in a RAIDZ2 vDev is 12 i.e. 10x for data, 2x for redundancy. So if you want 20 drives, you are going to need 2x vDevs each 10x drives in RAIDZ2 - so you will get only 16x drives worth of useable disk space.
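To put rough numbers on that (raw data capacity only, before TB/TiB conversion and free-space headroom):

```python
# Raw data capacity of a pool built from RAIDZ2 vDevs (ignores ZFS overhead/padding).
def raidz2_data_tb(vdevs: int, drives_per_vdev: int, drive_tb: int) -> int:
    return vdevs * (drives_per_vdev - 2) * drive_tb   # 2 parity drives per vDev

print(raidz2_data_tb(vdevs=2, drives_per_vdev=10, drive_tb=20))  # 320 TB from 20 drives
print(raidz2_data_tb(vdevs=1, drives_per_vdev=12, drive_tb=20))  # 200 TB from 12 drives
```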

A few rules of thumb (but there will always be exceptions):

  • The more powerful the CPU, the higher the power consumption.
  • Most power consumption will be spinning rust. Fewer disks = less power consumption.
  • Rack mounted server MBs typically have less support for spinning disks down to save power.
  • Rack mounted servers typically have better support for hot-swap disks.
  • Rack mounted servers tend to use more power than tower units (essentially because most corporate data centres are less concerned about power draw because the cooling systems probably use at least as much).
  • As you say, rack mounted servers are louder - in my experience because they have a narrow profile for air in/out and so need more, smaller and more powerful fans to force the air through (and because corporate data centres are less concerned about noise - and because the air-con cooling is also noisy).

  • How much useable disk space do you need?

I need 1 volume of 200TB+ and some smaller volumes

  • Would running VMs under TrueNAS meet your needs rather than virtualising TrueNAS under e.g. Proxmox?

I'm considering two options. One is a super power-efficient build with bare-metal TrueNAS doing NFS + SMB only - just a simple NAS. The second is a more powerful server with multiple virtualized VMs (including a virtualized TrueNAS). In the first case I don't even care about Docker. In the second case I would like to run more workloads than just TrueNAS (VMs for k8s, Docker, HA, a database, etc.).

  • What network sharing will you be doing? By default NFS does synchronous writes, so you might benefit from a small mirrored SLOG SSD/NVMe vDev. And don’t forget that you will also need a small SSD/NVMe (mirrored?) boot drive, and if you are going to run VMs or apps, then you should also be thinking about SSDs to hold their data.

Yes, I'm aware of the boot drive. I assumed I would run one or two SSDs on the motherboard SATA controller. I assume it then won't be possible to pass that controller through to TrueNAS? I must read more about SLOG.

Don’t forget that the recommended maximum number of drives in a RAIDZ2 vDev is 12 i.e. 10x for data, 2x for redundancy. So if you want 20 drives, you are going to need 2x vDevs each 10x drives in RAIDZ2 - so you will get only 16x drives worth of useable disk space.

That's bad news, but I can live with it. 200-220TB per pool/volume is enough for me.

  • Most power consumption will be spinning rust. Fewer disks = less power consumption.

That's why I'm switching from 16TB to 20TB disks :confused:

  • Rack mounted server MBs typically have less support for spinning disks down to save power.

I'm looking mostly at consumer motherboards from Asus, Gigabyte etc. It's tempting to have IPMI and the like, but I can live without it.

  • Rack mounted servers typically have better support for hot-swap disks.

To be honest, I don't need hot swap. But I found 12- and 16-bay chassis from Silverstone that support hot swap.

  • Rack mounted servers tend to use more power than tower units (essentially because most corporate data centres are less concerned about power draw because the cooling systems probably use at least as much).

Initially I was betting on a Dell R740xd2, but I dropped that idea for the reasons you mentioned. That's why I'm looking into more consumer-grade hardware.

  • As you say, rack mounted servers are louder - in my experience because they have a narrow profile for air in/out and so need more, smaller and more powerful fans to force the air through (and because corporate data centres are less concerned about noise - and because the air-con cooling is also noisy).

What about 4U rackmount servers? Are they also that loud? They shouldn't need to generate as much static pressure. The Fractal Define 7 XL is still a viable option for me if it's less noisy, but it doesn't have a backplane and I'm worried about the initial spin-up of the disks.

In TrueNAS you probably don’t want volumes in the traditional sense (because spare space is not shared between them) but rather have a small number of pools which are related to the storage type i.e. RAIDZ vs Mirror, HDD vs. SSD etc. So you might have a HDD RAIDZ pool for bulk data storage with a small mirrored NVMe vDev for SLOG, a mirrored SSD pool for VMs with a small mirrored NVMe vDev for SLOG, plus a mirrored boot-pool.

When you say 200TB, do you mean TB = 200 x 10^12 bytes or TiB = 200 x 2^40 bytes? Because there is c. 10% difference between them.

Then you need to x1.25 because you can only sensibly load a pool to 80% full before having write performance issues.

Then you need to add an allowance for estimating error and growth.

You can run VMs under TrueNAS SCALE - you don’t need e.g. Proxmox to do this. Proxmox has more bells and whistles, but if TrueNAS virtualisation meets your needs, then IMO you would be better off running TrueNAS bare metal and doing TrueNAS virtualisation than running e.g. Proxmox and running TrueNAS as a guest VM.

Docker is a different TrueNAS technology - with TrueNAS you can run Docker apps alongside VMs. Or not run Docker at all. But if you want Docker, then you won’t need to spin up a VM for it.

Yes - if you run TrueNAS as a guest VM, then you need to passthrough the SAS/SATA controllers and prevent e.g. Proxmox from seeing them and mounting ZFS. Once you start to have various technologies e.g. data vDevs on HDDs on an HBA, extra vDevs on the MB SATA/NVMe, this passthrough can get complicated. Another reason to use TrueNAS bare metal and use TrueNAS virtualisation for your VMs.

See my comments about space above. 16x20TB is c. 288TiB, so you can realistically store up to 230TiB on this. So you are going to need 20x20TB to get the space you need.
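Roughly how those figures come out (ZFS overhead and padding will shave a little more off):

```python
# 20x 20TB drives as 2x 10-wide RAIDZ2 -> 16 drives' worth of data space.
TB = 10**12
TIB = 2**40

data_tib = 16 * 20 * TB / TIB      # ~291 TiB of raw data space
usable_tib = data_tib * 0.8        # ~233 TiB before hitting the 80% guideline
print(f"{data_tib:.0f} TiB raw, ~{usable_tib:.0f} TiB comfortably usable")
```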

Also remember that HBAs might not support spin-down, and that consumer hardware doesn’t have the reliability and resilience of server hardware e.g. it is harder to find ECC boards. Servers can also have redundant power supplies.

Oh god, I've read a lot and I'm totally confused. I wrongly assumed it would be easy, but it isn't - at all. ARC/L2ARC/SLOG…

My main use case is to store movies. About 150-200TB of rather big files. 10-150GB/file.

My definition of success is to use as few resources as possible. E.g. set up 12x20TB disks in RAIDZ2 or equivalent, use that storage for the next 10 years and forget about it, simply replacing failed drives from time to time.

There is nothing read/write intensive here. Actually, I would be fine with something like LVM, as long as losing one disk doesn't mean losing all the data - I'm ready to lose 10% of the data in case of a disk failure. Ideally I could also mix disks of different sizes.

Do you think further research into TrueNAS makes any sense for me, or should I go for Synology (which I really don't like)? Does TrueNAS fit my use case? I got interested in Unraid, but it lacks any documentation besides YouTube videos.

Sure, I can add some SSDs/NVMes for cache - but not industrial ones. I definitely don't want to invest in 512GB/1TB of RAM to run this storage; in that case I would just stay with the Synology and its 32GB :slight_smile: I just want to keep the disks from dying too fast.

I would suggest going for a 24 bay 4U rack mount server.

Here’s a link to my build:

Supermicro makes good storage chassis, and if you go with one of those, it's best to get the SAS expander backplane.

SAS expanders are a little bit like Ethernet switches… you connect your CPU to SAS or SATA drives via an HBA, which may support 8 or 16 internal drives (-8i or -16i) etc., or you can use an expander, which might have 8 lanes of bandwidth up to your HBA… but hundreds of downstream drives… or in this case… 24 :wink:
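The oversubscription works out fine for spinning rust - a rough sketch with assumed figures:

```python
# Rough bandwidth check for 24 HDDs behind a SAS expander (all figures assumed).
LANE_MBPS = 1200      # usable throughput of one 12Gb/s SAS lane after encoding
UPLINK_LANES = 8      # e.g. two 4-lane cables from an -8i HBA to the expander
HDD_MBPS = 270        # assumed sequential speed of a large modern HDD
DRIVES = 24

uplink = UPLINK_LANES * LANE_MBPS   # ~9600 MB/s from expander to HBA
demand = DRIVES * HDD_MBPS          # ~6480 MB/s if every drive streams flat out
print(f"uplink {uplink} MB/s vs worst-case demand {demand} MB/s")
```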

The number of PCIe slots (or more technically lanes) is dependent on the CPU.

Consumer grade CPUs only have 20-28 lanes, which is only enough for a couple of slots.

Older server CPUs (Xeon E5 etc.) used to support about 40 lanes.

These days, smaller workstation/server CPUs support 64-96 lanes and the big ones support 128.

A good approach is to work out how many lanes you want (think about future NVMe flash devices) and then pick a CPU and board which supports that.
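A hypothetical lane budget for a build like the one discussed here, just to illustrate the exercise (the card list is made up):

```python
# Hypothetical PCIe lane budget - swap in your own card list.
devices = {
    "SAS HBA (x8)":             8,
    "10/25GbE NIC (x8)":        8,
    "2x NVMe SSD (x4 each)":    8,
    "GPU for transcoding (x4)": 4,
}
total = sum(devices.values())
print(f"lanes needed: {total}")   # 28 - at the very top of what consumer CPUs offer
```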

ECC and IPMI are features you should consider.

Or just push the easy button and get an X10SDV micro-ATX board with a SAS HBA and 10GbE built in.

Example

NOTE: X10SDV boards come in a variety of core counts, from 2 to 16, and with a variety of add-on features, i.e. you can get 2x 10GbE + 2x 1GbE + SAS etc., all built into the board…

And then there's an x16 fully bifurcatable PCIe slot… plus more slots on the non-mini-ITX boards.

One of the major caveats is no iGPU for transcoding.

So most data is at rest, and write performance is not an issue.

TrueNAS is a good fit for this.

The idea with ZFS is not to lose any data, ever. But ZFS pools are pretty much all or nothing - you lose nothing or you lose everything - on the basis that ZFS is designed to never lose data. (But that is not a reason not to have a backup, because as you can read in these forums, bad things do occasionally happen to pools.)

HOWEVER…

ZFS is designed to use same-sized disks in each vDev. It will cope with different-sized disks in a vDev, but only by treating every disk as if it were the smallest one, wasting the extra capacity on the larger disks.

But you can definitely have a single pool with two vDevs each of which is using different sized disks e.g. 6x10TB RAIDZ2 and 6x20TB RAIDZ2.
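To make that concrete (raw data space; within a vDev the smallest disk sets the size):

```python
# Each RAIDZ2 vDev contributes (n - 2) * smallest_disk of data space.
def raidz2_vdev_tb(disk_sizes_tb):
    return (len(disk_sizes_tb) - 2) * min(disk_sizes_tb)

pool = [
    [10, 10, 10, 10, 10, 10],   # 6x10TB RAIDZ2 -> 40 TB of data space
    [20, 20, 20, 20, 20, 20],   # 6x20TB RAIDZ2 -> 80 TB of data space
]
print(sum(raidz2_vdev_tb(v) for v in pool))      # 120 TB in one pool

# Mixing sizes inside a single vDev wastes the difference:
print(raidz2_vdev_tb([10, 10, 10, 20, 20, 20]))  # still only 40 TB
```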

TrueNAS definitely fits your use case.

You do NOT need extra cache SSD/NVMes for this use case. ZFS already has a read cache called ARC which is in main memory.
You do not need 512GB+ of memory either.

My NAS has 10GB of memory i.e. c. 2GB of ARC and performs brilliantly - the metadata is cached, and the data reads of large film files will benefit from sequential pre-fetch.

You will need a little more memory to hold the greater amount of metadata for the larger amount of disk space, but likely 16GB will be fine and 32GB almost certainly great.

If you want to run apps (e.g. Plex Server for watching your films) then you will probably want to put both Plex and the Plex data (i.e. its own index of your films, titles, actors etc.) onto an SSD pool, which is separate from your HDD pool. This would ideally be a mirrored pair of 1TB SSDs/NVMes but it could be a single SSD which you backup with replication to your HDD pool.

Most likely. But 12-20 spinning hard drives are going to make noise.

Possibly not, but quite a pain to properly wire up compared to using a backplane.
And also expensive, and not that good at cooling drives.

12G is interface speed (not drive speed!), and has nothing to do with the number of drives. Using expanders you can pretty much attach as many drives as you want to a -8i or even -4i SAS HBA.

Yes.

You’re obviously not looking at the right kind of motherboard. I can find you regular ATX motherboards with 6 or 7 PCIe x16 slots if you want… (hint: EPYC Siena)

Save for ECC RAM and HBA, there’s nothing I like in this list. Dubious rack (cooling? expanders?), consumer-grade CPU, and massively underpowered PSU for a large drive array.

I’d suggest to narrow down your requirements with respect to cores and CPU power for VMs, and with respect to the number of drives. Then we can try to fit that with the right class of server-grade components. And quite possibly a cheapish Arc A310 GPU for transcoding duties if need be…
