TrueNAS SCALE Hobby Setup: Directions and Help

Hi,

I have a few questions about TrueNAS SCALE and was hoping you could help me with them.

A bit of background:
I am new to TrueNAS and ZFS, but I like to learn new tech and use it in my home network if it looks promising. I like TrueNAS so far, especially SCALE with its VM and Docker integration. With that in mind, my plan is to go from “NAS” to home server with those features (standard Jellyfin, nginx, home automation, etc.).
I am testing this all on consumer hardware that I had lying around (ASUS gaming motherboard, i5, 600 W Gold PSU). For storage there are three SSDs in there (one for the system, two as a mirrored pool) and a 1 TB Seagate IronWolf HDD as an extra pool in a stripe layout.
My plan is to upgrade the storage step by step to get the SSD part to at least RAIDZ1, ideally RAIDZ2. The HDD part is not mission critical, so RAIDZ1 might be sufficient there.
So now my questions, and I hope this is enough background:
From what I have read, I am now playing with fire because I am using non-ECC hardware with ZFS. To get a proper setup, do you have suggestions for a cheap build in ATX form factor? I only know server-grade hardware for 19" racks, which is loud as hell. This setup will sit in my home office.
Is it even feasible with this technology to stuff even more software onto it through Docker (apps in SCALE)?
I know ZFS is maybe a little overkill for my home purposes and a “normal filesystem” might be cheaper or better suited. But as I said, I really like TrueNAS and want to use it more and learn more about it.

I do all of this as a hobby and don’t have much storage experience (I am a DBA; storage is handled by other people at work).

Are there any other points, based on your experience, that could steer me toward a good setup overall?

Thank you very much in advance for taking the time to help a newbie on this matter!

And sorry for my English, it is not my native language.

With ZFS it is not possible to change the vdev layout, so you can’t switch from stripe to RAIDZ or from mirror to RAIDZ. You also cannot switch from RAIDZ1 to RAIDZ2.

The next big release of TrueNAS will allow you to expand, so you will be able to go from an x-wide RAIDZ to an (x+n)-wide RAIDZ.

Both ASRock Rack and Supermicro offer mainboards in the ATX form factor with IPMI and ECC support. They are often available at reasonable prices on eBay as bundles with a CPU and RAM.

Edit: for a simple NAS and a few apps, even older Xeons will do, or a Ryzen with a mainboard that supports ECC.

EDIT2: you can go from a single disk → mirror.
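
For illustration, a rough sketch of what that single-disk-to-mirror conversion looks like at the CLI level (pool and device names here are made up, and on SCALE you would normally do this through the Storage UI rather than by hand):

```python
# Hypothetical sketch: turn a single-disk vdev into a mirror by attaching a
# second disk. Pool and device names are assumptions, not from this thread.
import subprocess

POOL = "tank"          # assumed pool name
EXISTING = "/dev/sda"  # disk already in the pool (assumption)
NEW = "/dev/sdb"       # blank disk to attach as its mirror partner

# 'zpool attach' adds NEW as a mirror of EXISTING and starts a resilver.
subprocess.run(["zpool", "attach", POOL, EXISTING, NEW], check=True)

# Check resilver progress afterwards.
subprocess.run(["zpool", "status", POOL], check=True)
```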


That’s quite useful information. I hadn’t read until now that I can’t switch the vdev layout. I am still new to this.

So my best bet would be to wait until the update, or, probably the better option, to think through beforehand how I am going to design my setup. So my plan of expanding while learning goes down the river.

What would you suggest for my HDD part (big storage, mostly for media like movies, audio files, etc., which is not that important): RAIDZ1 or Z2?

If I have to build from scratch, that is not a big problem; my current setup was just to test the whole thing.
At which point would you add a cache disk to a pool?

I will check those for the needed hardware. Thank you very much for that.
I am aware that you have probably answered questions like mine a million times already, so thank you very much for your patience.

It depends on your fault tolerance. RAIDZ1 will allow your pool to survive a single disk failure. RAIDZ2, on the other hand, gives you resilience against two disk failures; this is particularly useful during the replacement of a disk, as a second failure during the resilver process can be withstood with Z2, whereas you’d lose all data with Z1. The number of disks, the size of the disks, the type (HDD/SSD), and the importance of the data all matter here.

Since you mentioned the files you plan on storing are not that important, RAIDZ1 would probably be fine for you while keeping the largest amount of your storage usable (short of plain striping, of course). Just note that, as mentioned above, it’s only resilient to a single disk failure, so replacing a failed disk becomes more of a priority than an “I’ll get around to it when I get around to it” sort of thing.

For a pool made up of SSDs, I would typically prefer a stripe of mirrors (multiple 2- or 3-way mirror vdevs). It’s much less efficient in terms of how much usable storage you get, but for block-level storage it is significantly faster than a single RAIDZ vdev, which will have IOPS roughly equivalent to those of its slowest drive. Multiple mirror vdevs, on the other hand, will be significantly faster, as data is striped across them. The same goes for HDDs, but I personally just don’t like storing VM data etc. on spinning rust!
Give this a read: Some differences between RAIDZ and mirrors, and why we use mirrors for block storage | TrueNAS Community
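
To put rough numbers on the IOPS point (these are illustrative rules of thumb, not benchmarks, and the per-disk figures are assumptions):

```python
# Rule of thumb from above: a RAIDZ vdev delivers roughly the random IOPS of
# a single disk, while a pool of mirror vdevs scales with the number of vdevs
# because data is striped across them. Per-disk numbers are made up.
DISK_IOPS = {"hdd": 150, "ssd": 50_000}  # assumed random IOPS per disk

def pool_random_iops(disk_type: str, vdev_count: int) -> int:
    """Crude estimate: each vdev behaves like ~one disk for small random I/O."""
    return DISK_IOPS[disk_type] * vdev_count

# Six HDDs as one 6-wide RAIDZ2 vdev vs. three striped 2-way mirrors:
print(pool_random_iops("hdd", vdev_count=1))  # ~150  (single RAIDZ2 vdev)
print(pool_random_iops("hdd", vdev_count=3))  # ~450  (3x mirror vdevs)
```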

Again, it depends. L2ARC should ideally always be on SSD and should not exceed roughly 10x the amount of system memory. What sort of memory capacity are you looking at? You should really see how things behave before adding L2ARC, as you may already have a very good ARC hit rate and no need for it.
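
As a concrete example of that 10x guideline (pure arithmetic, and the ratio itself is just the rule of thumb mentioned above):

```python
# L2ARC headers live in RAM, which is why L2ARC size is usually capped at
# roughly 10x system memory. This only illustrates the arithmetic.
def max_l2arc_gb(ram_gb: int, factor: int = 10) -> int:
    return ram_gb * factor

for ram in (16, 32, 64):
    print(f"{ram} GB RAM -> L2ARC up to ~{max_l2arc_gb(ram)} GB")
# Check your ARC hit rate (e.g. with arc_summary) before adding any L2ARC.
```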


Only once you have sufficient RAM (not less than 64 GB), and system statistics show you have a use for it.

Most folks feel that RAIDZ1 provides inadequate protection for your data, given the likelihood of data loss during a pool rebuild. See:


That’s a whole load of information I need to digest first.
I read a bit in each topic you provided (not enough time right now for more).
My new plan (probably not the last alteration), based on your information: the SSD part, which should hold the more critical things (photos, documents, etc.), becomes a mirror of two suitably sized disks with a periodic external backup.
Somehow I feel there is not necessarily a need for RAIDZx there, because those are mostly lots of small files.

For the media part, which I plan on storing on HDDs for now, I think RAIDZ2 would be the better choice. With two disks that can fail and big blocks of data (for example movies), this seems good enough to my understanding.

For the hardware part, well, it is a bit overwhelming for now to dip into this new sea. I can’t really decide which ASRock Rack board would be good enough for my purpose.
Any suggestions?

Any more information you can think of that would alter my plan again to make it a bit better? Is this version even a good one?

Thank you very much for all your input.

I know there are a ton of opinions and different approaches to getting things done. For someone like me, understanding different approaches and mindsets helps a lot in forming my own.

Usually even a mini-ITX board will fit in an ATX case.
So here is, IMO, a good bundle:
6x SATA
1x SuperDOM port for the OS
10G networking

If you need more drives, get a cheap LSI 9300-8i HBA.

Great board and CPU


You could do this with a scheduled replication task to your RAIDZ2 pool (or vice-versa), and that would add another layer of data protection, yes. Keep in mind that your pool occupancy rate should remain low-ish for best performance, so if you aren’t building out a big pool of mirrored VDEVs then it might be worth just keeping these on the RAIDZ2 pool and occasionally replicating over the super important stuff with a scheduled task.
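
If you are curious what a replication task boils down to under the hood, it is essentially snapshot plus zfs send/receive. A minimal sketch, with made-up dataset names (on SCALE you would simply configure a Replication Task in the UI):

```python
# Minimal sketch of snapshot-based replication between two local pools.
# Dataset names are assumptions; TrueNAS replication tasks do the same
# thing (plus scheduling, retention and incremental handling) for you.
import subprocess
from datetime import datetime

SRC = "ssdpool/documents"         # assumed source dataset
DST = "hddpool/backup/documents"  # assumed destination dataset

snap = f"{SRC}@repl-{datetime.now():%Y%m%d-%H%M%S}"
subprocess.run(["zfs", "snapshot", snap], check=True)

# Full send for the first run; later runs would use 'zfs send -i <prev> <snap>'.
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", DST], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```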

Documents and media will be fine on a RAIDZ pool; mirrors are generally preferable for block storage (database storage, VM disks, iSCSI). Here’s even more reading on that. Jgreco’s resources are generally very insightful, though a lot to take in!


I can’t really recommend hardware; @Farout’s suggestions are good. I like a bit of jank myself, so I tend to use ‘not recommended’ hardware (there’s no fun in something that just works!).

@essinghigh don’t give me even more ideas to play with. Database storage wasn’t even in my scope yet, and now I have to suppress the urge to test that too. :smiley:

But overall your insights are very helpful, thank you very much.

@Farout you are probably right. I could fit a lot more “consumer-grade form factors” in my high tower; it even has the holes pre-drilled for different layouts.
Would something like this hardware work too?
Sadly, I need to cut down the initial cost a bit.

If you’re going for media storage and will be using Plex/Jellyfin, you may want to consider hardware transcoding; that way you can fully get lost in figuring out what equipment will suit your needs and meet your budget :slight_smile:

I see nothing wrong with the X9SRI-F as a choice: it has IPMI, ECC support, two PCIe 3.0 x16 slots and one PCIe 2.0 x16 slot, so realistically you can slap in an HBA if you want extra SATA ports, a GPU for transcoding (you’re unlikely to be seriously impacted by the PCIe bus speeds for transcoding), and a 10-gig NIC if you feel like it! Two SATA 6 Gb/s ports for some SSDs and eight SATA 3 Gb/s ports should be good for spinning rust.

The only two things I don’t like about the board are the lack of NVMe slots and the likely efficiency-versus-power-cost ratio due to the platform’s age. But considering the price for something kitted out with 12 cores and that much RAM, it really ain’t bad at all.

As always, the hard drives are likely going to be the biggest cost of the build; avoid SMR drives if you enjoy your sanity.

I’d really recommend at least an X10 board instead: much better IPMI that doesn’t need Java for the remote console.


Mmm, those extra PCIe slots too.

Yeah, that’s a good deal. Plenty of performance, even for a couple of VMs.

But upon arrival I would swap out the CMOS battery and maybe get a CPU cooler from Noctua; the Supermicro coolers can be noisy.

:point_up:

For apps/VMs (small files/blocks), mirrors will serve you better. Keep RAIDZ for capacity storage and large files (preferably RAIDZ2 or RAIDZ3 for HDDs).

“Playing with fire” is possibly a bit too strong.
As for server hardware, it is… hardware. The transition to a “recommended”, server-grade solution involves replacing the gaming motherboard with a server motherboard in whatever case you’re using; the requirement for the case is that it can hold and cool the desired number of drives. Noise is primarily a matter of how many spinning drives you have.
If you’re in the EU, Gigabyte MC12-LE0 is a very affordable server motherboard.

When you already have at least 64 GB of RAM AND arc_summary shows that you still have significant cache misses. (Not very likely in a home setting…)

@Philipp_F you have been given some good advice here, but I think the first thing you have to determine is what your data storage configuration should be. You did not say how much storage you might need or what your priorities will be for using the new system.

If your first priority is storing files for occasional access and serving media for Jellyfin/Plex, then I would definitely recommend that you consider RAIDZ2, which gives you two drives’ worth of redundancy to protect against failure. It is very uncomfortable to lose a drive and be without redundancy while rebuilding the pool, even when storing “non mission-critical” data.

For instance, if you use four 4 TB hard drives in a RAIDZ2 configuration (generally referred to as 4x4TB RAIDZ2), you will have 8 TB of capacity with two drives of redundancy. If you use six 4 TB hard drives in a RAIDZ2 configuration (6x4TB RAIDZ2), you will have 16 TB of capacity. I hope this makes sense, as understanding ZFS pool configuration is a key part of configuring a good TrueNAS system.
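
The arithmetic behind those numbers, as a quick sanity check (raw capacity only; real usable space will be a bit lower once ZFS overhead is accounted for):

```python
# Usable capacity of a RAIDZ vdev is roughly (disks - parity) * disk size,
# ignoring ZFS metadata and padding overhead.
def raidz_usable_tb(disks: int, size_tb: float, parity: int = 2) -> float:
    return (disks - parity) * size_tb

print(raidz_usable_tb(4, 4))  # 4x4TB RAIDZ2 -> 8 TB
print(raidz_usable_tb(6, 4))  # 6x4TB RAIDZ2 -> 16 TB
```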

I would not use SSDs for bulk data storage; you will not see much performance improvement for the vast majority of tasks. Save the SSDs for system software and possibly VMs.

As for motherboard/CPU combinations, I would personally shop for used server hardware, as has been suggested. However, I would avoid really old hardware: not only are old machines very energy-inefficient, but newer CPUs will be faster.

Good luck.

With that, you had me; I am definitely not a Java guy.

Thank you very much for that recommendation too!

You are right. With all the insights here, I am now better able to say what I am planning to do. It will be RAIDZ2 with 4x8TB Seagate IronWolf drives for my “mass data”, like the mentioned media files for Jellyfin. With that, I should be able to add additional drives quite easily if needed?
Two SSDs in a mirror for some homelab projects (now I want to try, for example, a DB).
One small SSD or NVMe drive for the system (I got my hands on some used notebook SSDs, lightly used and in good shape, for next to nothing).

For the hardware part, I know a lot more than before thanks to you guys. I will keep my eyes open for a good deal. Changing to server hardware is something I want to do, because it feels safer, and at least ECC-wise it should be.
Energy inefficiency is a thing, yes, but it does not concern me that much, because I am already upgrading my solar setup to counter my energy bill.

Thank you very much again for all your help and insights.

Edit: I forgot to mention RAM. With 16 TB of storage, should 16 GB be enough? I am planning on 32 anyway. The question is more to confirm my memory that 1 TB of storage needs around 1 GB of RAM.

Not quite. Recommended expansion would be in batches of four drives for additional 4-wide RAIDZ2 vdevs (they need not be the same 8 TB size), or replacing the existing drives with larger ones.
When RAIDZ expansion eventually lands (“expected soon” for at least the last seven years…), you may be able to widen the existing vdev a bit (5- or maybe 6-wide), but this is not as space-efficient as a natively wider vdev.
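
For illustration, adding a second 4-wide RAIDZ2 vdev to an existing pool looks roughly like this at the CLI (pool and device names are made up; on SCALE you would do this through the Storage UI, and note that adding a RAIDZ vdev cannot be undone):

```python
# Hypothetical sketch: stripe a second 4-wide RAIDZ2 vdev into an existing
# pool. Names are assumptions; this is effectively irreversible for RAIDZ.
import subprocess

POOL = "hddpool"                                              # assumed name
NEW_DISKS = ["/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]  # assumed disks

subprocess.run(["zpool", "add", POOL, "raidz2", *NEW_DISKS], check=True)
subprocess.run(["zpool", "status", POOL], check=True)
```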

This is a rough rule of thumb rather than a hard requirement. For storage, 16 GB is fine and 32 GB should be comfortable. Beyond that, it depends on what you add as apps/VMs on top of storage duties…
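
Expressed as numbers (just the rule of thumb from this thread, nothing more precise than that):

```python
# "~1 GB RAM per TB of raw storage" is only a rough starting point; TrueNAS
# itself wants a baseline amount of RAM, and apps/VMs need their own on top.
def suggested_ram_gb(raw_storage_tb: float, baseline_gb: int = 16) -> int:
    return max(baseline_gb, round(raw_storage_tb))

print(suggested_ram_gb(16))  # -> 16 GB for 16 TB raw (32 GB is comfortable)
```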

Emphasizing what @etorix said… I would start with enough storage to meet your expectations for the next three years or so. If your budget won’t allow that, then you can add an additional pool later - but that makes the system a little more complex.

The Seagate drives are 7200 RPM drives; make certain you cool them properly.


@etorix so, to get this right: if I want to expand, I need to set up a second 4-drive vdev to get, in my case, an additional 16 TB of space?
Is there a possibility to merge those somehow?

They have an extra airflow channel in my case, so this should be fine.

Much preferably another 4-wide RAIDZ2, but it need not be 4x8 TB.
You can mix and match different geometries (mirror, RAIDZ1, RAIDZ2, …) and different widths, but the resulting pool is limited by its least secure and least performant vdev, and a mix involving any kind of RAIDZ does not allow removing vdevs to get back to sanity. So it’s better to stripe vdevs of homogeneous geometry.

No. RAIDZ# is immutable: once created, a vdev stays forever at its RAIDZ level and width. The “width” part may eventually change, with caveats; the “level” part is set in stone.