Making my first TrueNAS Build... 20TB x24

I have been a long-term QNAP user. First I started with an 8-bay unit and then added 2 more 8-bay units via Thunderbolt. They have worked great. Then I got a 12-bay QNAP. (In every instance, I used RAID5 for the 8- or 12-bay pools; I have never, knock on wood, lost a drive.) I keep everything backed up with other servers.

Now I am ready to go the TrueNAS route. I just want the ability to add storage at a lower cost per TB. Adding JBODs seems the way to go, connecting everything with 12Gbps SAS cables and HBAs. I will be documenting my progress along the way, for posterity and for anyone who wants to see what successes or mistakes I make. I can’t find much about performance when I see people posting their TrueNAS builds. I have a 10GbE network. I am used to 500-1000 MB/s reads and writes at all times with my QNAPs (all HDDs in the 10-20TB range).

I am planning to base the build on an ASUS WRX80 mobo. I want as many PCIe slots as possible so I can add more HBAs in the future. I will be using an AMD 5955WX CPU. I am going to start with 128GB of single-rank ECC RAM. I’ll have two 2TB NVMe drives installed on the mobo for the TrueNAS OS, which I plan to mirror. I’ll use a Broadcom 9305-16e or something similar in the first PCIe slot. I’m planning to buy a Sliger case (CX4170a) and let them also sell me an AIO water cooler compatible with the CPU/mobo. I will buy two Areca 12-bay SAS towers, cable them together, and have one SAS cable going back to the HBA on the mobo.

I do not know if I need a GPU to install TrueNAS, but I have one.

I am planning to have the 24 drives organized as two vdevs, each a 12-disk RAIDZ2. Essentially that works out to 24 drives with 4 drives of parity.
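
Not gospel, but here is a back-of-the-envelope Python sketch of the usable space that layout should give. The 20TB figure is the marketed decimal drive size, and the ~10% allowance for ZFS metadata/slop is an assumption, not a measurement:

```python
# Rough usable-capacity estimate for two 12-wide RAIDZ2 vdevs of 20TB drives.
TB = 1e12        # marketed (decimal) terabyte
TIB = 2**40      # tebibyte, what most tools actually report

drive_tb = 20
vdevs = 2
width = 12
parity = 2       # RAIDZ2

data_drives = vdevs * (width - parity)       # 20 drives' worth of data
raw_data = data_drives * drive_tb * TB
usable = raw_data * 0.90                     # assumed ~10% ZFS overhead/slop

print(f"data drives: {data_drives}")
print(f"usable: ~{usable / TB:.0f} TB (~{usable / TIB:.0f} TiB)")
# -> roughly 360 TB (~327 TiB), in the same ballpark as the ~350TB figure below
```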

Things I worry/wonder about:

  1. I’ll team the mobo’s two 10GbE ports. I hope file transfers are fast; i.e., I hope a 10-20Gbps network connection is the file-transfer bottleneck here, now and forever.
  2. I should have in the neighborhood of ~350TB usable space. I want to fill that space as much as possible, and it will be with 10+ GB files almost without exception (sometimes 1TB files!). Will TrueNAS get squirrelly if I use up 90-99% of the storage space? My QNAPs work fine at 90+% of their storage space with seemingly no performance degradation. (The sketch after this list puts numbers on the free space left at those fill levels.)
  3. How painful/pleasurable will life be when it comes time to add two more Areca 12-bay towers? I’m inclined to add 24-bay chunks in the future as their own separate pools rather than just adding vdevs to the pre-existing pool.
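
On point 2, since the usual 80%/90% guidance was written with much smaller pools in mind, it seems worth putting absolute numbers on the free space left at each fill level. A tiny sketch, using the ~350TB usable estimate from above:

```python
# Free space remaining at various fill levels of a ~350TB-usable pool.
usable_tb = 350
for fill in (0.80, 0.90, 0.95, 0.99):
    print(f"{fill:.0%} full -> {usable_tb * (1 - fill):.1f} TB still free")
# 80% -> 70.0 TB, 90% -> 35.0 TB, 95% -> 17.5 TB, 99% -> 3.5 TB
```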

More to come.

If all works well, I will build the same exact system for backup.

4 Likes

Looks like you have a pretty sweet build planned. Only half-useful advice I have is that you’re going way overkill on the boot drives; two 2tb drives is a lotta waste for the boot drives imo.

If you’re running any VMs or apps, I’d use the 2tb mirror for that, and just get whatever is half reputable & cheap for boot mirror.

GPU ain’t really necessary, as I think your motherboard has IPMI… unless you’re running plex/jellyfin, need it for LLMs, or some other such usecase for VMs.

Hard YES. Performance may be impacted above 80% usage, and will be impacted above 90%. I’ve also personally witnessed systems become completely unresponsive at 100% full. Luckily it wasn’t my platform, but that also means, sadly, I wasn’t on the call with the vendor, so I have no clue how it was fixed - which, considering it wouldn’t respond to anything, including the CLI on boot, is still a small mystery to me.

Otherwise, I’m genuinely envious of your planned build, excited to see build progress, and curious on what you’re going to use it for!

Edit:
Raidz2 is generally recommended for drives of that size instead of Raidz1 (RAID5 equivalent), given the length of time it’d take to resilver and the risk of a 2nd drive failing during the process, which would mean data loss. You mentioned it hasn’t happened to you, but I like to think that it simply hasn’t happened to you yet.

3 Likes

Thank you for commenting and the kind words.

This part troubles me a little, in that the ~20% unused becomes its own parity, in a way. And again, in my previous QNAP RAID5 life with 8-12 drive pools, they seemed mighty happy at 80+% usage.


Here I have QNAP storage pools at 92-98% use levels. They have been this way for years. Perform exactly as they did on day one.

  1. What is different about ZFS and TrueNAS vs how QNAP is behaving?
  2. What kind of performance hits are expected above 80% usage, or is it unpredictable?
  3. Total system unresponsiveness seems like a catastrophic failure… I wouldn’t want to risk that, but there are performance degradations (e.g., 10-20% slower R/W speeds) I’d easily accept…

Most is covered in the ZFS Primer.

The big issue for you will be going above 90% usage. Static data that is just read will be easier on a system than one with a lot of file changes. Above 95% is considered dangerous, a.k.a. catastrophic-failure territory.

Also consider the way you have to add space with ZFS. The drives in a VDEV need to be the same size or larger, but ZFS uses the smallest one to compute capacity: an 8TB + 16TB mirror would be the same as two 8TB drives in a mirror. There is RAIDZ expansion now with the latest SCALE versions. You can take a 5-wide RAIDZ and add another same-size-or-larger drive to make a 6-wide RAIDZ VDEV; it ends up with a bit less usable space than a freshly made 6-wide RAIDZ VDEV.

The other way to add space is to add another VDEV to your current pool. With your choice of two 12-wide RAIDZ2 VDEVs, that would mean adding another 12-wide RAIDZ2 VDEV, for a total of three VDEVs in your one pool.
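
To put those sizing rules in concrete terms, here is a simplified sketch. It ignores metadata/slop overhead, and the drive sizes are just example figures:

```python
# Simplified ZFS capacity rules (ignores metadata/slop overhead).

def mirror_capacity(sizes_tb):
    # A mirror only ever exposes its smallest member's capacity.
    return min(sizes_tb)

def raidz_capacity(sizes_tb, parity):
    # RAIDZ data space is roughly (drive count - parity) x smallest drive.
    return (len(sizes_tb) - parity) * min(sizes_tb)

print(mirror_capacity([8, 16]))            # 8   -> same as two 8TB drives mirrored
print(raidz_capacity([20] * 12, 2))        # 200 -> one 12-wide RAIDZ2 of 20TB drives
print(3 * raidz_capacity([20] * 12, 2))    # 600 -> three such VDEVs in one pool
```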

iX Systems pool layout whitepaper

1 Like

Mind you, my example was in an enterprise environment with MANY users. I guess it could be argued that if you’re doing a WORM workload, you wouldn’t experience much degradation at all, and that I’m still following old recommendations that aren’t relevant to SOHO use… But given that your setup is pretty beefy, I’m also not certain that SOHO is your use case?

Additional reading; you could be fine… But would I ever recommend someone to actually hit 95% full? No.

1 Like

Yes, I am just a “lowly hobbyist” and in fact this will be for SOHO use. The server will need to hold large video files. I have ~15K of such files at present, accumulated over ~20 years. The TrueNAS will function as a Pl#x library, but not a Pl#x server. I want to overkill on the front end, and then just easily expand until I die, basically. I probably need ~40 more years of ongoing use. As mentioned, I presently have 2 servers (which are not the “Pl#x server” either… that’s a stand-alone PC). So this will be the third, and I want it to be the last. Based on my long-term experience thus far, the QNAPs have never let me down. They’ve been very fuss-free. (When I started ~20y ago, obviously QNAP didn’t exist, and I did my own custom RAID thing in a big 6U server chassis… and then didn’t touch it for years… then went the QNAP route ~2017, but I digress.) The QNAP expansion abilities are a little limited, and I don’t want to keep adding servers just to get more storage space; I’d rather just add chassis and disks. I create ~4-5TB of new data/month that I need to keep… “forever.”

1 Like

You are going to want to do research on your files and ‘record size’ for ZFS. You can set ‘record size’ per dataset. Your storage will be more efficient if the record size is closer to your typical file/IO size.

1 Like

Damn, that might be the coolest Plex collection ever!

2 Likes

Hey, welcome!!! That is some mighty fine build you are proposing there!
My story is a lowly hobbyist story, a little similar to yours (without the massive-sized library).
I started on a QNAP 470 PRO
I then purchased a 2U server grade FreeNAS based NAS way back like 2012 (when things were kinda starting up) - as a secondary backup of my QNAP (without really knowing anything about ZFS and what I was doing - letting the NAS builder maintain it)
Needless to say they went out of business - forcing me to get my head around things the hard way…
I let things go a bit… long story short, after a couple of years of inaction, I fired up the old FreeNAS server and she wouldn’t start… Without the NAS builder I was left stranded, having to DIY it out from scratch - to literally rebuild that sucker, reloading it with TrueNAS Scale CE. Now up and running thanks to this awesome community.

Now I have learnt so much, literally by trial and error myself - I plan to build a new NVMe TrueNAS Scale based server (I don’t need the storage you need) - to get all the exciting apps etc.

My advice

1/ Don’t assume it is easy - there will be stumbling blocks
2/ Don’t hesitate to use this forum… some wildly awesome great people …
3/ Don’t throw away yr QNAP quite yet… keep it going until you are very happy with the TrueNAS set up… this might take awhile (just sayin)
4/ Understand the difference between TrueNAS Scale CE and TrueNAS CORE - understand the company behind it makes its money from large server-grade systems - the size of like yours (what you are planning) - that are ready-built and sold to enterprise (who just want stuff to work) …basically testing new features on you… the Scale CE user
5/ If you want a fantastic recent YouTube video on all this (and for other awesome resources on what’s going on in the TrueNAS community), go follow and subscribe to their dedicated YouTube show, TrueNAS Tech Talk (advertised up top of the forum), where Kris and Chris chew the fat… The most recent episode, “Virtualization, Community Edition Features and Future, and Agentic AI”, was truly awesome! Some dude asked them via email (and they discussed it around 9-10 minutes in) “what was the future of TrueNAS Scale CE… and why they were basically giving it away for free”…

Such enlightening transparency from a great development team!!!

I have also found the Lawrence Systems channel a fantastic (more general) resource

Enjoy the journey as much as I am…

:slightly_smiling_face:

4 Likes

Do you know you can rely on SAS expanders instead?

This being a server, EPYC would be a better fit than Threadripper.

Not with ZFS or any other CoW filesystem, I suppose?
ZFS needs large chunks of free space to write efficiently. If you only add files and never delete anything, you might be happy with 90% occupancy. If files are deleted as well as added, stick to the 80% guidance or do not use ZFS.

1 Like

In relation to the 80% and 90% figures, with the size of the pool in question, I think Kris has said that the fall-off in a WORM environment would be much slower.

I.e., a 500TB pool at 95% capacity still has 25TB of free space. Adding files ranging in size from 100MB to 20GB is less of an issue compared to a 20TB pool at 95% capacity (1TB free).

1 Like

Based on this, it looks like I need a 1M record size (e.g., my last QNAP storage pool of ~80TB had about 5600 files; average file size ~14 GB). Can I set the record size greater than that?
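
For reference, the arithmetic behind that choice, using the figures above. (1M is the usual recordsize ceiling for general use; whether larger values are available depends on the OpenZFS version and its large-block tunables, so treat that part as an assumption to verify.)

```python
# Average file size from the QNAP pool figures quoted above.
pool_tb = 80
files = 5600
avg_gb = pool_tb * 1000 / files
print(f"average file size: ~{avg_gb:.1f} GB")            # ~14.3 GB

# At recordsize=1M, even an "average" file spans thousands of records,
# so large sequential reads/writes of these files are well served by it.
record_bytes = 1024 * 1024
records_per_file = avg_gb * 1e9 / record_bytes
print(f"records per average file: ~{records_per_file:,.0f}")
```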

Yes, I will certainly be using a SAS expander for my JBOD builds if I decide to make my own JBODs (still debating). But I didn’t want to add them ad infinitum on a single external HBA port. I wanted to ideally have no more than 2 JBOD chassis (at 12 drives per chassis) per external port. And I even wanted to try and see the benefit of the “performance connection” on an Areca ARC-4038, whereby I use two SFF-8644 cables out from my HBA to the ARC, and two SFF cables out from that ARC to another ARC (multipath routing, link aggregation, etc.), and then back to the HBA…

I think for my use case it will be fine, and again I am prioritizing having lots of full-length PCIe slots here; the WRX80 (or 90) wins. One day I may add a 25GbE NIC, for example. Also, I was able to get that WRX80 mobo and a 5955WX CPU for ~$1500.

That’s what I do!

Due to the above suggestions I see no reason to install the OS on an NVMe. Am I reading right that apps can’t get installed on the same disk as the OS? Anyway, I think I will do the OS install on SanDisk SSD PLUS 240GB drives in a 4-way mirror.

The OS / boot drive is a quickly replaceable item as long as you have a current configuration file. If you really want to, you can do a 2-way mirror.

You can do a new install of your current OS on the boot drive, upload the configuration file, and be back to 100%. Mirroring the boot drives is usually for when you need to be up quickly after a boot drive failure and you have someone to change the boot order to the second drive. It’s more for SMB or Enterprise.

I don’t know if this will help

2 Likes

So while that would in theory provide crazy uptime, in reality I’d argue it’d just wear out 4 SSDs. 2 boot drives in a mirror and 2 cold spares are likely a better use of hardware. Or just occasionally back up the config file.

The reason is that if your primary boot drive (the one the BIOS tries to load) is partially dead, you may be stuck unable to boot until the BIOS setting is changed or the failing drive is physically removed.

It’s still clutch to have a mirror and be able to hot-swap a drive that contains your OS while the OS is running.

As others mentioned, boot is only for boot, no apps, no partitions, etc.

1 Like

That makes good sense; 2 boots in a mirror and 2 cold spares it will be.

Make that 2 for boot, if you do want a mirror here, and another mirror for your app pool.
No need for SSD cold spares: these need no burn-in. Get new drives as you need them.

2 Likes

+1 for mirrored boot drives. You never know when something goes wrong.

I did a BIOS update and one of the boot drives picked up errors, and a pool went down as the bifurcation setting was reset from x4x4x4x4 to auto. It took 15 minutes to fix both problems, but without the second boot drive who knows how long it would have taken? Well worth it for £25.

TrueNAS does not support multipathing. I had my external SAS expander multipathed and ran into issues with duplicate devices. A very bad situation. I was able to power down, remove the second path, power back up, and all was OK.

Chris and Kris do a good job explaining why MPT was dropped in the T3 podcast here: https://www.youtube.com/watch?v=O0SsoFB9VUE&t=821s (and yes, I am the one who asked that question).

2 Likes

What about link aggregation?

I think it’s looking more likely I will build my own JBOD with a standard SAS expander card!