Realistic Pool/vdev Config for Casual/Hobby User

Hi everyone,

I have been testing out TrueNAS SCALE for the last year or two in Hyper-V on my older W10 gaming PC as I evaluated things and learned a bit about running a home server. I realize this is not necessarily the best idea or the “right” way to do things. With this in mind, and given my desire to get away from Windows on this machine, I am looking to refresh the system and run TrueNAS on bare metal as a dedicated server. Current specs are:

MB: ASUS Z170-E
CPU: Intel i5 6600K
RAM: 64GB Ripjaws DDR4 3200
GPU: MSI GTX1070
OS/Boot SSD: 500GB Samsung 970 EVO Plus
Non-NAS HDD: WD 1TB (will likely remove this drive from the equation)
NAS HDDs: 2x 6TB 5640 RPM CMR WD Red NAS drives running as a mirror
NIC: Solarflare 2x 10G SFP+ NIC

With this refresh, I am looking to buy a few (4-6) 7200 RPM HDDs with the main goal of increasing capacity, followed by resilience, and lastly performance. I realize my motherboard is out of SATA ports (or soon will be), and I am prepared to add an HBA or similar to expand my I/O.

My current plan is to use the existing two 6TB drives in one pool purely as file storage/one of several backup locations, and then use the new drives in a second pool for my main file storage with SMB shares, media storage, etc.

Within TrueNAS I am planning to run Jellyfin and possibly some other apps like Immich, Nextcloud, Syncthing, Tailscale, etc. Nothing too heavy duty. I don’t have much aspiration to spin up any VMs within the server, so no need to factor that into the discussion at this time.

I am very much in the casual/hobby user space and don’t really plan on using the NAS for heavy work, content creation, or dev work so I don’t need blazing fast performance or mission-critical resilience. I am also not terribly concerned with downtime if a drive fails. I will be the only one using the system 90% of the time. As I mentioned, I will be keeping backups of all my “would be devastated if lost” files on other machines/locations in case the NAS nukes itself. As painful as it would be, I can always re-rip all my movies/shows and re-download all my music.

My question for the community is: considering my non-critical, non-business, casual use, what would be a realistic/reasonable drive/pool/vdev configuration given the information above? Which flavor of RAIDZ (if any) would offer the best balance of capacity and resilience for someone like me? I’m currently deciding between RAIDZ1, RAIDZ2, and two or three mirrored vdevs, depending on how many drives I buy. I know this will ultimately come down to personal choice, but I’m just looking for some opinions to better inform my decision. Most of the videos/posts I’ve seen thus far have been catering to people with much higher needs/aspirations than mine.

Please feel free to let me know if I can provide any more information or clarity (it’s early and I need more coffee). Thank you all so much in advance for any advice or guidance you can offer!

Well, you have a 10GbE NIC. Do you want to be able to saturate it? If yes, the answer is mirrored vdevs. If no, just go RAIDZ1. RAIDZ1 is going to be a bit slower than mirrored vdevs because of all of the parity calculations, etc. But if utmost speed is not an issue, you can get by with fewer drives in RAIDZ1.
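For anyone comparing, here is a rough command-line sketch of the two layouts with six disks. This is just illustrative: the pool name (tank) and device names are placeholders, and on TrueNAS you would normally build the pool through the UI rather than with zpool directly.

```
# Three 2-way mirror vdevs: best random I/O and rebuild times, ~50% usable space
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf

# One 6-wide RAIDZ1 vdev: ~5/6 of raw space usable, survives any single drive failure
zpool create tank raidz1 sda sdb sdc sdd sde sdf
```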

Yeah, the 10G NIC is definitely overkill for my situation these days. When I was building out the network I was in the midst of a video editing kick and planned to use the bandwidth for faster footage transfers to my desktop, which has the same card. I’ve left all that behind but kept the 10G because my switch has the SFP+ ports and the fiber is already run, so I may as well use it.

I would personally avoid the 7200 RPM drives. It sounds like you don’t need the performance, and they create more noise and heat. I would get a bunch more of the 5640 RPM WD Reds, also in 6TB, and either expand your existing pool or, if you are up for the work, back that data up somewhere else and recreate one main pool using RAIDZ1. Six 6TB drives in a RAIDZ1 would make a pool with roughly 30TB of space and the ability to lose one drive. If you really wanted to go to eight drives, you could create two four-disk RAIDZ1 vdevs and stripe them into one pool. That would give you approximately 36TB of space, a little more speed because of the striping, and the ability to lose one drive in each vdev. I know it’s technically not striping, but that is the easiest way to describe it.
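For illustration, that eight-drive layout would look something like this at the command line (pool and device names are placeholders; the TrueNAS UI builds the same thing when you define two RAIDZ1 vdevs in one pool):

```
# Two 4-disk RAIDZ1 vdevs in one pool; ZFS spreads writes across both vdevs
zpool create tank \
  raidz1 sda sdb sdc sdd \
  raidz1 sde sdf sdg sdh
# Usable space: (4-1)*6TB + (4-1)*6TB = 36TB raw, before ZFS overhead
```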

Fair point. I was looking at the 7200s mainly because I can find refurbished 8TB Exos drives on eBay for about $50 less per drive than brand-new 6TB WD Reds. I hesitated to mention that idea here as I suspected it would not be recommended (though what do I know). I would certainly like to keep costs as low as possible, but I also totally understand your logic in going for the slower drives.

All of my gear is racked up on my workbench in the basement which stays fairly cool year-round. The added noise wouldn’t be an issue as I don’t spend much time down there anyway.

This is exactly why I hopped on to ask though. I have so many options swimming around in my head, I wanted to see which of those ideas were ridiculous or not.

I also need to get a grip on how much capacity I actually need. I still haven’t filled the 6TB I have now with all the data I’ve collected over the last 15-20 years. So the question I need to ask myself is: at my current rate, will I realistically fill 30-36TB of drive space before the drives all need to be replaced anyway? I would hazard a guess that no, no I won’t.

That question is very vague, IMO. I think the topology of the pool should follow from the use cases. For example, I pulled the trigger on mirrored vdevs only because they are explicitly required by Proxmox Backup Server (which, ironically, I still haven’t set up).

Mirrored vdevs are not faster than RAIDZ in general. They are faster for random reads and writes, and slower for sequential writes, for one thing. Here is a good description.
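If you want to see the difference on your own hardware rather than take anyone’s word for it, fio can exercise both patterns against a scratch dataset. A minimal sketch, assuming fio is installed and /mnt/tank/fio-test is a throwaway dataset (both placeholders):

```
# Sequential writes, large blocks: the pattern where RAIDZ holds its own
fio --name=seq --directory=/mnt/tank/fio-test --rw=write \
    --bs=1M --size=4g --group_reporting

# Random 4k read/write mix: the pattern where mirrors tend to pull ahead
fio --name=rand --directory=/mnt/tank/fio-test --rw=randrw \
    --bs=4k --size=4g --numjobs=4 --runtime=60 --time_based --group_reporting
```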

Regarding RAIDZ1 vs RAIDZ2: IMO, RAIDZ1 should be OK even with bigger drives as long as you have backups. Because RAIDZ is not a backup (c).

So, you were using Windows and didn’t fill 6TB… It sounds like you didn’t make backups (or perhaps you were using a much more space-efficient solution than the built-in Backup and Restore). If you don’t have backups, I suggest you go for RAIDZ2 rather than RAIDZ1.

Even if your underlying pool cannot write at 1.2GB/s, you will still benefit from 10G for non-sync writes, because ZFS can hold up to 4GB of pending writes in RAM. So theoretically you could transfer a 4GB file in 3-5 seconds even if your underlying pool consists of one slow laptop HDD. There is a similar story for reads if your ARC hit ratio is high.
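If you’re curious, that buffer is the zfs_dirty_data_max tunable (by default roughly 10% of RAM, capped at 4GB). A quick way to check it on SCALE, assuming the usual Linux OpenZFS module path:

```
# Pending (dirty) write cap, in bytes; 4294967296 = 4GB
cat /sys/module/zfs/parameters/zfs_dirty_data_max
# At ~1.1GB/s of real-world 10GbE throughput, a 4GB file lands
# in RAM in roughly 4 seconds regardless of how slow the disks are
```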

P.S. I’m new to TrueNAS (long-time lurker, though). Do not blindly trust my opinion.

When I built my first FreeNAS box I went with a 5-drive RAIDZ2 configuration, which worked fine.

But then I wanted to grow it :frowning: Today you can add a drive to an existing RAIDZ vdev, but there are some downsides (largely in terms of space reporting). I tore it down and rebuilt using 2-way mirrors and have not looked back. Whenever I need to add capacity, I just add two more drives (of any size). A zpool will work fine with drives of different sizes, but 1) you will get unpredictable performance and 2) you will almost certainly have wasted space. As long as all the drives in a given vdev are the same size, you do not lose any space (though you may still see variable performance, as one vdev might not perform the same as another).
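For what it’s worth, growing a pool of mirrors is a one-liner (pool and device names are placeholders; the TrueNAS UI exposes the same operation as extending the pool with another vdev):

```
# Add one more 2-way mirror vdev to an existing pool
zpool add tank mirror sdg sdh
zpool status tank   # the new vdev appears alongside the existing ones
```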

Also consider resilver times when you size drives. If you need 12TB, you can go with 6 x 2TB mirrors (12 x 2TB drives) or with 3 x 4TB mirrors (6 x 4TB drives). The 4TB drives will take roughly twice as long to resilver as the 2TB drives. All other things being equal, your performance will scale with the number of top-level vdevs, so the 6x2 will be faster overall than the 3x4 (assuming both are 2-way mirrors).

Consider adding a hot spare, as that permits a drive replacement to start as soon as a drive fails rather than waiting for you to notice it :slight_smile:
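A sketch of what that looks like, with a placeholder pool and device name; once a spare is attached, the ZFS event daemon can start rebuilding onto it as soon as a drive faults:

```
# Designate a hot spare for the pool
zpool add tank spare sdi
zpool status tank   # the disk is listed under "spares" with state AVAIL
```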

Think about backups and reliability. They may not be a priority right now, but after a year or two, once you have a bunch of data there, they will matter.

For disk controllers (called Host Bus Adapters, or HBAs), I like the LSI controllers. If you don’t need SAS3, used SAS2 controllers are very affordable and will let you use SAS or SATA drives. I have been finding that new SAS drives are often cheaper than SATA at the same capacity, since they can only be used in servers (with SAS HBAs). Avoid the MegaRAID controllers: they are RAID controllers rather than plain HBAs and do not work well with ZFS. I also make sure I use IT firmware (most used HBAs from reputable sources will already have IT firmware; if not, flashing it is fairly easy, just an extra couple of steps).
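If you do pick up a used card, it is worth confirming the firmware mode yourself. A rough sketch for a SAS2 part like the 9207-8i, assuming LSI’s sas2flash utility is available:

```
# List detected LSI SAS2 controllers with firmware details;
# IT-mode firmware shows "IT" in the version string (e.g. 20.00.07.00-IT)
sas2flash -list

# Confirm the OS sees the controller at all
lspci | grep -i lsi
```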

Here is one example: https://www.serversupply.com/CONTROLLERS/SAS-SATA/HOST%20BUS%20ADAPTER/LSI%20LOGIC/9207-8I_287587.htm, but I have seen them as low as $30.

Avoid SATA enclosures with port multipliers at all costs; see “Multiply your problems with SATA Port Multipliers and cheap SATA controllers”.

SAS bus expanders are fine.

Hi all,

Thanks for all the replies, and apologies for the slow response. Lots of good info to process.

I think my most pressing concern now is minimizing wasted space. A bunch of mirrored vdevs may be my best option, if I’m reading you all correctly.

I will need to research some options for housing all of the drives, as my current 4RU chassis is getting cramped as it is. Any recommendations for a relatively budget-conscious rackmount solution? I am also open to non-rack solutions with some kind of SFF interface.