Questions RE: New Overkill NAS

Good morning! First-time TrueNAS future user… Also, sorry for the long post, but as we learned from Reacher, “details matter” :rofl:

I am planning on repurposing my current WFH workstation into a TrueNAS box since I’ve just ordered its replacement parts. I’m well aware that this is going to be a complete and total waste of resources, but since I already own the parts and will be preventing e-waste, it’s both cheaper and a better option than buying new lower-spec components.

I will also be using it to take over some of the roles that a couple of my SBCs (Rock 5B and RPi 5) are currently running via Docker, though those won’t consume much in the way of resources. It will replace my current NAS, a Netgear ReadyNAS Pro 4 that has reached the “I’m not dead yet! - You will be in a moment” stage, which I rescued from a client’s site when it was replaced a decade ago after going EoL/EoS.

The components will be as follows:

  • Motherboard - Asus Prime B550-Plus - 2x M.2 NVMe, 6x SATA
  • RAM - 32 GB DDR4 Unbuffered 3600 MHz
  • CPU - Ryzen 9 5900X
  • NIC - Dual-port Intel X540-T2 10 Gbps RJ45
  • Storage Pool - 4x Seagate Exos X14 12 TB drives (7200 RPM Spinning)
  • Boot - 2x Samsung Evo 250 GB 2.5" SATA SSD
  • Slog - 2x Samsung M.2 NVMe 250GB

Note about the hardware, I work at a mid-sized MSP and we have hundreds of the NVMe and 2.5" SSDs in our reclamation bone yard that were removed from OEM systems and replaced with larger SSDs for client deployments. For all intents and purposes, they’re both free and nearly unused. I get that the capacity is WAY overkill for their purposes, but, free is free :rofl:

My thought process, which is the part I’m hoping to be challenged and educated on, is as follows. Since I have all of the hardware except for the Exos drives, I thought I’d use 2 of the motherboard’s 6x SATA ports for a mirrored OS pool on the 2.5" SSDs, use the 2x M.2 NVMe drives as a SLOG to reduce latency on the spinning pool, then use the last 4x SATA ports for the Exos drives. This obviously prevents future expansion, but since I’m going from a 4x 4 TB RAID 5 NAS that is only 60% full to a 4x 12 TB NAS, I think I’ll have a long time to consider future expansion. I do plan to re-rip my BD collection from H.265/HEVC to lossless since I’ll have the capacity, but I should still have lots of breathing room to rethink my life choices.

Where I’m still on the fence is whether to use all of the on-board M.2/SATA connections or to offload the SATA to an inexpensive LSI HBA like the SAS3008 for ~$100 USD. Will using all of the on-board I/O ports end up saturating the motherboard’s lanes and become a performance issue that spending $100 more would prevent? If so, is this likely to be an issue outside of the initial data transfer in day-to-day multimedia/backup use? Realistically, the hardest I see this getting hit is when my desktop is running a backup while 2 devices are streaming via Jellyfin and 2 devices are syncing to Immich, which is a relatively minor load.

If I do end up determining that the HBA is the way to go, I’ll have to lose either the 10 Gbps NIC or the GPU. I’d probably remove the GPU first since I don’t currently have plans for HW acceleration, but I am unsure whether the system will boot/POST properly without it since the CPU does NOT have an integrated GPU (more testing required).

The last thing I’m on the fence about: the CPU and mobo do support ECC memory, and I know the recommendation is roughly 1 GB of RAM per 1 TB of storage for ARC. Currently I’ll be sitting right on the cusp of that with 32 GB of RAM and ~32-34 TB usable in the storage pool (assuming RAID 5-type overhead). Is ECC RAM critically important enough to spend the money to find compatible RAM and swap out to ECC DDR4? It has to be ECC Unbuffered according to ASUS’s Tech Specs page (PRIME B550-PLUS - Tech Specs|Motherboards|ASUS USA)

I don’t see anything unsuitable for TrueNAS in your hardware.
But be aware that you will only benefit from a SLOG if you are doing sync writes.
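
If you’re not sure whether your workload even issues sync writes, a first step is to check how the dataset is configured. A minimal sketch using the standard `zfs` CLI (the dataset name `tank/vms` is just a placeholder):

```python
# Minimal sketch: query a dataset's sync setting via the zfs CLI.
# "tank/vms" is a placeholder dataset name -- substitute your own.
import subprocess

def get_sync_mode(dataset: str) -> str:
    """Return the dataset's sync property: standard, always, or disabled."""
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "sync", dataset],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print(f"sync={get_sync_mode('tank/vms')}")
```

With `sync=standard`, only writes the client explicitly flushes (fsync/O_SYNC, or sync requests from NFS/iSCSI) ever touch the ZIL and thus a SLOG; purely async writes never will.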

A mirrored boot pool is not a must. It’s good for minimizing downtime, e.g. 2 minutes vs 15 minutes.

All you need is an up-to-date config file.

I would instead use the SSDs as a pool for apps/VMs.

Important to know when talking about expansion: you are stuck with whatever “RAID” level you choose at the beginning. You can’t go from a 4-wide RAIDZ1 (similar to RAID 5) to a RAIDZ2 (similar to RAID 6) later on.

It looks fine except the use of one of the NVMe SSDs for SLOG.

IMO, regardless of whether you have synchronous writes or not, you would be better off using this as an apps-pool (for your docker stuff to use) than using it for SLOG.

If you feel that this needs to be backed up, then use replication to create a backup on your HDD pool.
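
TrueNAS has built-in periodic snapshot and replication tasks for exactly this, so you wouldn’t normally script it yourself; but as a rough illustration of what a local replication from an SSD apps pool to the HDD pool boils down to (the dataset names `apps/docker` and `tank/backup/docker` are placeholders):

```python
# Rough illustration only -- TrueNAS's built-in replication tasks are the
# right tool; this just shows the underlying zfs snapshot/send/receive.
# "apps/docker" (source) and "tank/backup/docker" (target) are placeholders.
import subprocess
from datetime import datetime

def replicate_local(src: str, dst: str) -> None:
    # Take a named snapshot of the source dataset.
    snap = f"{src}@backup-{datetime.now():%Y%m%d-%H%M%S}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # Pipe `zfs send` into `zfs receive` on the same host.
    send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", dst], stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    replicate_local("apps/docker", "tank/backup/docker")
```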


Thanks for the enlightening replies so far.

@Farout, if I’m understanding correctly, the mirrored array for the boot pool is unnecessary since the data pool on the Exos drives will be recognized, and all I’d have to do is reinstall TrueNAS on a new SSD and import/load the config and I’d be back up and running? If so, is there a convenient method to do auto-config backups off the device itself? I reviewed the documentation hub but I didn’t see anything about where the backup is stored and whether that’s configurable or not. I did see a number of community scripts for performing said backups; one of them said that the auto-backups are stored at “/var/db/system/configs*/”, but I don’t have an instance spun up yet to confirm. If the config backup files are relatively small, I’d likely opt to have them backed up to Google Drive.

As for the SLOG, this was more or less to play with the hardware I have available to me; even if it mostly hums at idle, it would be available on the rare occasion it’s needed :rofl:. I doubt I’d need a SLOG at all, but I was considering setting up iSCSI to my desktop for storing the VHDx files for my local dev VMs. That said, they don’t run 24/7 and likely won’t really benefit from a SLOG for typical home use.

@Protopia, thanks for the insight. I think I may pivot on the SLOG piece given two recommendations in a row. Most of the heavy storage will simply be mapped to the HDD pool directly for the music and movie libraries as well as the file storage for Immich. I will likely put the DB storage on NVMe with replication to HDD for faster indexing and responsiveness, but all of the actual footage/photos will reside strictly on the HDD pool.

System config backups need to be stored on another box because you won’t have access to them during a rebuild.

The easiest way is IMO to implement @joeschmuck 's Multi-Reports script which will email the system configuration file to you once per week. But Google Drive would be an alternative I guess. My system config file zipped is 56KB.
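
If you do go the Google Drive route instead, here is a minimal sketch of a cron-able helper that copies the newest auto-generated config backup into a folder that an off-box sync tool (rclone, a Drive client, etc.) picks up. The source glob is the /var/db/system/configs*/ location mentioned above (verify it on your own install), and the destination path is a placeholder:

```python
# Minimal sketch: copy the newest auto-generated TrueNAS config backup into a
# directory that an off-box sync tool (rclone, Google Drive client, etc.)
# ships elsewhere. Verify the source glob on your own install; the
# destination path below is a placeholder.
from datetime import date
from pathlib import Path
import glob
import shutil

SOURCE_GLOB = "/var/db/system/configs*/**/*.db"    # reported location of auto backups
DEST_DIR = Path("/mnt/tank/offsite-sync/configs")  # placeholder destination dataset

def backup_latest_config() -> Path:
    candidates = glob.glob(SOURCE_GLOB, recursive=True)
    if not candidates:
        raise FileNotFoundError(f"no config backups matched {SOURCE_GLOB}")
    newest = max(candidates, key=lambda p: Path(p).stat().st_mtime)
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    target = DEST_DIR / f"truenas-config-{date.today().isoformat()}.db"
    shutil.copy2(newest, target)
    return target

if __name__ == "__main__":
    print(f"copied config to {backup_latest_config()}")
```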

Yes.
Also in most cases the motherboard will still try to boot from the ‘broken’ boot pool. So you would have to manually boot from the 2nd mirror anyway.

There is a way to boot from a RAID controller for the boot drives only. But ZFS hates hardware RAID so there will be dragons…

IMO no. But that’s my opinion. I see a proper pool layout and a good backup as more important.

Speaking of pool layout: if you wanna play with block storage, you really should do mirrors instead of RAIDZ.


The keyword is “roughly”. So you’re absolutely fine with 32 GB RAM.
ECC is nice, but not strictly necessary; you may keep that for later in your ZFS journey.

As already said, a SLOG is not necessary and will NOT improve latency on regular NAS work. It also need NOT be mirrored, but it absolutely needs PLP to fulfill its duty, so a 250 GB Samsung drive “for free” is not suitable. Use one of these free NVMe drives as the boot device instead of an EVO, another as an application pool, and keep all your SATA ports for storage.

If you are concerned enough about cost to be asking this question and you already have the RAM, then IMO it is not important enough to do so.

But if your MB supports ECC and you have to buy memory anyway, then IMO definitely worth spending the extra.

I don’t know who made this recommendation, but it seems to me to be a money-no-object type of recommendation, and also that depending on your circumstances buying an SSD could be better for performance than buying memory.

iXsystems say that for a small setup, c. 3GB of ARC will give reasonably good performance and if you don’t have apps or VMs, the recommended minimum of 8GB will give you this.

What you want to do is hold all the metadata you will need in RAM, and have a bit left over for serial read-ahead/pre-fetch. The size of the metadata will depend on a lot of factors, especially whether you are storing video files of several hundred MB or several GB, or thousands of smaller files of 1-10 MB.

To put this in context, I have c. 16 TB of usable space, of which c. 5 TB is actually used, of which 75% is large video files. So according to this rule of thumb I would need 5-16 GB of ARC, yet my 10 GB of RAM results in c. 3.5 GB of ARC and I still get a 99.9% cache hit rate.

If you are cost conscious my advice would be:

  1. start with 16 GB of memory and add more to 32 GB if you find your ARC hit rate is below 99% (see the sketch after this list); and
  2. plan to put your active data (which is normally apps, VMs, and iSCSI) on SSDs if it is of a size where you can do that.
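
If you want to see where you stand against that 99% figure, `arc_summary` reports it directly; as a minimal sketch, on TrueNAS SCALE (Linux/OpenZFS) the same numbers can be read from the kernel’s arcstats:

```python
# Minimal sketch: compute the overall ARC hit rate on a Linux/OpenZFS system
# (TrueNAS SCALE) from the kernel's arcstats; `arc_summary` reports the same.
from pathlib import Path

def arc_hit_rate(stats_path: str = "/proc/spl/kstat/zfs/arcstats") -> float:
    stats = {}
    for line in Path(stats_path).read_text().splitlines()[2:]:  # skip kstat header
        parts = line.split()
        if len(parts) == 3:
            name, _type, value = parts
            stats[name] = int(value)
    hits, misses = stats["hits"], stats["misses"]
    return 100.0 * hits / (hits + misses) if (hits + misses) else 0.0

if __name__ == "__main__":
    print(f"ARC hit rate: {arc_hit_rate():.2f}%")
```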

That’s more or less what I was thinking. I’m already going to be spending a bit for the storage drives, so I think I’ll hold off on the ECC RAM for now. I don’t think it will be an issue, but I am new to ZFS and this will be my first NAS build. All of the commercially available NAS devices that we have deployed and manage for clients come with ECC RAM from the factory, so this bit is a new topic for me.

I’ve not been in the TrueNAS world long enough to know where that recommendation originated from, but I’ve seen it in a ton of posts. That said, the desktop that I’m using now, that will become the NAS is a 32 GB system, so it’s all going into the NAS.

One more point that occurs to me here.

I think you might be better off using 1x NVMe for boot, and 1x NVMe for an app-pool (backed up using replication to HDD) leaving all 6x SATA ports for HDDs.

  1. This would enable you to buy e.g. 6x 8 TB drives (48 TB in total) rather than 4x 12 TB drives (48 TB in total) - I would expect the cost to be broadly the same. When run in RAIDZ1, 4x 12 TB gives you 36 TB usable, whereas 6x 8 TB gives you 40 TB usable (see the quick sketch after this list).

  2. Or you can still use 4x12TB now, and keep 2x SATA ports available for future RAIDZ1 expansion.
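
The raw-capacity arithmetic behind those numbers is simply (drives - parity) x drive size; a quick sketch (actual usable space will land a bit lower once ZFS overhead and TB-vs-TiB reporting are accounted for):

```python
# Quick sketch of the raw RAIDZ capacity arithmetic used above.
# Real-world usable space is somewhat lower (ZFS overhead, TB vs TiB).
def raidz_usable_tb(drives: int, drive_tb: float, parity: int = 1) -> float:
    return (drives - parity) * drive_tb

print(raidz_usable_tb(4, 12))  # 4x 12 TB RAIDZ1 -> 36 TB raw usable
print(raidz_usable_tb(6, 8))   # 6x 8 TB  RAIDZ1 -> 40 TB raw usable
```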

Nothing wrong with a mirrored boot pool. I use a pair of the same drives as my boot pool.

  • Slog - 2x Samsung M.2 NVMe 250GB

Consumer grade SSDs typically don’t make good SLOG drives.

The issue is that when doing synchronous writes, since they don’t have on-device Power Loss Protection (PLP), the writes have to be flushed to flash, and they run at a few MB/s with very high latency (seconds).

If you want a SLOG device (and they are good if you have synchronous writes), then the best solution is still to buy a second-hand Optane drive (not the very lowest-end ones).

900p or p4801x etc.

Otherwise enterprise class SSDs with onboard PLP.

Check out this thread on benchmarking slog drives:

Hello! I wanted to first say thanks for all of the insights and follow-up with my final config.

Because I discovered that SATA ports 5/6 share lanes with the second M.2 slot, I was forced to drop my drive count to 4 spinning disks for the RAIDZ1 vdev. But so far, so good! I’m currently migrating the 11 TB of data from the old ReadyNAS Pro 4 to the new TrueNAS build and am tinkering with some of the apps available to run on this rig to offload some of the other containers from my SBCs.

I opted to go with a 256 GB NVMe as boot/system, a 512 GB NVMe as the apps pool, and the 4x 12 TB in RAIDZ1 as my main storage pool. If I find myself eclipsing my current storage capacity, I’ll add an HBA and move the SATA drives over to it.

Usually this applies only when the M.2 slot is used in SATA mode; an M.2 NVMe drive lets you use all the SATA ports.

My observations and the manual disagree with that. The drive in there is an NVMe, yet the SATA ports are disabled simply because the M.2 slot is populated.


Then you have the unfortunate exception…

But you also have three PCIe x1 slots, one of which does not share anything, and which have no use in a server save for hosting an NVMe boot drive in an adapter.


Unfortunately, that one is housing the GPU, which the system won’t POST without because the CPU is a Ryzen 9 5900X (no integrated graphics). The second PCIe slot is currently populated with an SFP+ card using a DAC cable to my switch for 10G. While not necessary, it’s a nice addition that only cost $30.

If necessary for future expansion, I could just replace the mobo with another AM4 board with more I/O for about $150, or try to leverage a PCI (non-express) SATA card for around $75-100 as well.