Practical Limits for Data Storage for Home TrueNAS Users

Is there a point where TrueNAS becomes a less practical solution when the dataset reaches X number of terabytes? Asked another way, how big a dataset can be managed before costs really begin to escalate?

It seems to me that once you exhaust the ability to use a pair of 16-channel disk controllers on the same mainboard, you've hit your first practical limit. The next would be the point in HDD capacity where the cost per gigabyte goes from growing arithmetically to something closer to geometric, and the final one would be when the mainboard's RAM capacity is maxed out.

I ask because my dataset will finalize at somewhere near 375TB if I can afford to continue. I probably can, but I have to ask at what point the content continues to hold value in relation to the investment to contain it. Of course, only I can answer that question, but practical limits for home use may dictate the answer…

Thank you,
-AK

What other alternatives to TrueNAS are there?
It seems to me that either you are using TrueNAS to store any files that come along regardless of their importance, or you have snapshot management issues of some kind.

Is your 375TB dataset actually holding that much valid data, or have you been editing or deleting some of it, with snapshots just preventing the old data from being released and taking up valuable space?
Maybe you need to review how you are making use of your storage space within TrueNAS and, most likely, learn how to manage snapshots to keep your system fit.
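
If you want a quick way to check, here is a minimal sketch (it assumes a pool called "tank" and the standard zfs CLI; swap in your own dataset names) that lists which snapshots are holding the most space:

```python
# Minimal sketch: list ZFS snapshots by the space each one holds, largest first.
# Assumes a pool/dataset named "tank" and the standard zfs CLI on PATH.
import subprocess

def snapshots_by_used(dataset="tank"):
    out = subprocess.run(
        ["zfs", "list", "-t", "snapshot", "-r", "-H", "-p", "-o", "name,used", dataset],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = (line.split("\t") for line in out.splitlines() if line)
    return sorted(((name, int(used)) for name, used in rows), key=lambda r: r[1], reverse=True)

if __name__ == "__main__":
    for name, used in snapshots_by_used()[:20]:          # top 20 space hogs
        print(f"{used / 2**30:10.1f} GiB  {name}")
```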

Well… ZFS originally stood for "Zettabyte File System", so a couple of hundred terabytes of data is nowhere near that.

Also, you are not actually limited by 16-port controller cards, since you can use SAS expanders.

Also RAM. Xeon Scalable and EPYC support multiple TB of RAM.

For an enterprise user, the practical TrueNAS limit is 20 PB.
You can achieve it with iX TrueNAS M60 + 12 x ES102 expansion shelves.
This large system requires an extra-tall 52U server rack cabinet in a datacenter.

Therefore, for a home user the main limits are money, electrical power and cooling.

4 Likes
  1. This is an archival server - the 375TB is the raw data and only the raw data. Once stored, it is not subject to edit or modification. We have never used snapshots, nor is there a desire to do so, since each data item is about as static as is possible in a digital environment. If TrueNAS had something akin to Synology’s High Availability configuration - which is what we inherited - I would use it.

  2. It was expanders - I could not remember them for the life of me. Thank you for chiming in - I have no idea how much longer I’d have been pulling my hair out over that one.

  3. Thank you. Money, electrics and cooling are on the list, so we are off to a good start. The problem is not the money, at least on the front-end. It’s the ongoing electrics we have to be realistic about.

So, Apollo brought up a good point: are we just warehousing anything that comes along regardless of importance? We asked this question multiple times. The answer is no. This archive is the size equivalent of about 7,500 dual-layer Blu-ray movies cloned bit-for-bit as ISO images. That's huge, right?

Not really.
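
For scale, the arithmetic behind that comparison (assuming 50 GB per dual-layer disc):

```python
# 7,500 dual-layer Blu-ray ISOs at ~50 GB apiece
discs, gb_per_disc = 7_500, 50
print(f"{discs * gb_per_disc / 1_000:.0f} TB")   # -> 375 TB
```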

Another way to look at data is like this:

  1. Very important data that needs to be online
  2. Common data, needed occasionally
  3. Archival data, needed for legal or historical purposes

Using that separation (or another) means you can set up different datasets, or better yet different pools, for different purposes.
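
Purely as an illustration (the pool name "tank", the dataset names and the property choices are all made up; adjust to taste), that split might look like:

```python
# Illustration only: one dataset per tier from the list above.
# Assumes a pool named "tank" and the standard zfs CLI; run with root privileges.
import subprocess

TIERS = {
    "tank/hot":     {"compression": "lz4",  "recordsize": "128K"},  # 1. very important, online
    "tank/common":  {"compression": "lz4",  "recordsize": "1M"},    # 2. needed occasionally
    "tank/archive": {"compression": "zstd", "recordsize": "1M"},    # 3. legal/historical archive
}

for dataset, props in TIERS.items():
    opts = [arg for key, val in props.items() for arg in ("-o", f"{key}={val}")]
    subprocess.run(["zfs", "create", *opts, dataset], check=True)
# Once the archive tier is loaded you could also set it read-only: zfs set readonly=on tank/archive
```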

For example, archival data can have 3 separate striped backups that are kept offline and thus use no electricity. Further, if you still use ZFS on them, you can bring in one copy at a time and scrub it. If a scrub turns up a simple error, bring in another backup and scrub that one; if it is good, copy over the file(s) that gave errors.

Using offline copies with ZFS allows data-integrity checks and fixes. Even if you have file errors in all 3 of your backups, they will hopefully be in different files, which would allow you to make all 3 backups whole again.

An error-detecting file system, even without the correction that RAID-Zx provides, does allow you to KNOW what went bad, and then potentially fix it.
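
A rough sketch of that offline check, assuming each backup copy is its own ZFS pool (the pool name "backup1" is a placeholder), using the ordinary zpool commands:

```python
# Rough sketch: import one offline backup pool, scrub it, report damaged files, export it again.
# "backup1" is a placeholder pool name; requires root and the standard zpool CLI.
import subprocess

def check_offline_copy(pool="backup1"):
    subprocess.run(["zpool", "import", "-N", pool], check=True)     # import without mounting
    try:
        subprocess.run(["zpool", "scrub", "-w", pool], check=True)  # -w waits for completion (OpenZFS 2.x)
        report = subprocess.run(["zpool", "status", "-v", pool],
                                capture_output=True, text=True, check=True).stdout
        print(report)   # the "errors:" section lists any files with permanent errors
    finally:
        subprocess.run(["zpool", "export", pool], check=True)

if __name__ == "__main__":
    check_offline_copy()
```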

Of course, snapshots provide good protection against crypto lockers.

And if there are no modifications, the snapshots don't take up any appreciable space…

6 Likes

With SAS expanders, all things are possible… sorta.

I work for a software company that sells an archive product targeted at the medical industry. For our size in terms of employees, we are exceptionally storage-heavy, at slightly north of 15PB of capacity. 4.5PB of that capacity is TrueNAS Core backed by NVMe or SATA flash. Another 1.5PB is TrueNAS Core backed by 12Gb/s SAS mechanical drives in JBODs. A further 2.5PB is TrueNAS Enterprise M40 and M60 HA servers. There are alternatives of course, but if you are after cheap and deep, I'm not aware of anything that works better than TrueNAS.

6 Likes

Why would this be any kind of a limit? First, you can easily get motherboards that have five x8 PCIe slots, which means you could support far more than just two HBAs.

And the number of channels on the HBA isn't directly tied to the number of disks you can connect: I have a -4i card that connects to 12 disks, because the chassis has a SAS expander backplane.

It's fairly easy to run TrueNAS on a motherboard inside a chassis that supports 36 disks, and connect two 60-disk JBODs through external cables. Even with 4TB drives, that's far more than enough raw capacity for your 375TB requirement. And even a single 36-bay chassis gets past 375TB of raw capacity with 12TB drives (usable space depends on your vdev layout).
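
Back-of-the-envelope numbers for those two cases, assuming 9-wide RAIDZ2 vdevs (any other layout will shift the figures a bit):

```python
# Back-of-the-envelope usable capacity, assuming 9-wide RAIDZ2 vdevs (7 data disks each).
# Ignores ZFS metadata/slop overhead, so real numbers land a little lower.
def usable_tb(bays, drive_tb, vdev_width=9, parity=2):
    vdevs = bays // vdev_width
    return vdevs * (vdev_width - parity) * drive_tb

print(usable_tb(36, 12))            # single 36-bay chassis, 12 TB drives -> 336 TB usable (432 TB raw)
print(usable_tb(36 + 2 * 60, 4))    # 36-bay head + two 60-disk JBODs, 4 TB drives -> 476 TB usable
```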

2 Likes

I’d say that 375 TB of data is quite a bit more than (almost) any “home” user is going to see, but it’s totally practical. Load up my chassis with 20 TB disks in 8-disk RAIDZ2 vdevs[1], and you’ve got 480 TB (~420 TiB) of capacity. All done in 4U, with four bays free for spares or whatever, and no need for external drive shelves. All with a single 8-port SAS HBA, and expander backplanes in the front and back. Of course, you’re spending some bucks on the spinners.
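
For reference, the arithmetic behind that figure (the drop to the quoted ~420 TiB would come from ZFS overhead and conservative rounding):

```python
# 36-bay chassis with four bays left free: 32 x 20 TB disks in four 8-wide RAIDZ2 vdevs.
vdevs, width, parity, drive_tb = 4, 8, 2, 20
usable_tb = vdevs * (width - parity) * drive_tb   # 4 * 6 * 20 = 480 TB
usable_tib = usable_tb * 1e12 / 2**40             # ~437 TiB before ZFS overhead
print(usable_tb, round(usable_tib))               # 480 437
```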

Something like this would do fairly well, though I’d prefer an X10-series motherboard over the X9 board this includes:

And it probably has enough RAM already given your stated use case, though DDR3 RDIMMs are cheap if you wanted to add more.

Edit: Here’s just the chassis with SAS3 backplanes. It’s a newer revision than I have, including two 2.5" drive bays at the rear for the OS. But you’d need to provide your own motherboard–I’m happy with the X11DPH-T I’m using, but there are plenty of options:

Edit 2: Here’s the same motherboard I’m using, complete with 2 CPUs and heatsinks:

…and this should be a suitable HBA, though you’d still need the SAS cables:

Add however much RAM you think you need, maybe some internal cables, and of course drives. I don’t have any connection with any of these sellers; they’re just what I found on eBay searching for appropriate hardware.


  1. I’m not saying this would be the most sensible way to do it; it may be worth investigating dRAID as well ↩︎

3 Likes

4U servers with 36 disks are very compact, but they are so heavy that servicing becomes a nightmare.
For example, if you need to replace a faulty DIMM, you have to pull out all the hard disks and power supplies first to reduce the server's weight before pulling it out of the rack.
After the intervention, you have to push all the hard disks and power supplies back in.

The iX TrueNAS appliances, in particular the M-series or X-series, are easier to maintain because you can pull the motherboard while keeping the server in the rack.

Therefore, if you cannot afford an iX TrueNAS server, you can limit the pain by using a 2U server + a JBOD.
It takes more space in the rack, but it is easier to service:

  • you can pull the 2U server without removing the hard disks or power supplies
  • the JBOD has only SAS expanders inside, so it is more reliable than a regular server.

So now I prefer a 2U server with a JBOD over a 36-disk server.

3 Likes

A good middle ground can be 2U servers with mid-planes. Dell and Supermicro come to mind for those.

I don’t know where this idea comes from, but this definitely isn’t my experience–that’s why they come with ball-bearing rails. What doesn’t seem to be reasonably available is a cable management arm, so I do need to either have adequate slack or disconnect all the cables at the rear, but my server slides in and out quite easily without any of the disassembly you mention. Getting them into the rack in the first place takes a second person or (better yet) a hydraulic lifting table, but once they’re there servicing is no big deal.

Edit: On review, I think I may see the source of the confusion. Using the rails Supermicro provides, I’m able to easily slide my server forward to service it without completely removing it from the rack–it’s still attached to the rack via the rails. If you’re talking about removing it from the rack completely, you’re right that I wouldn’t want to do that without help (or after emptying the drives). But you don’t need to do that in order to service it; slide forward, lift the lid, do the work, installation is the reverse of removal. Same as with my R630, though that does have a cable management arm.

2 Likes

I've hot-serviced 60-bay JBODs in racks, so weight is not the issue. The 12 bays in the back of a 36-bay chassis do cover up the RAM and CPU, so that does make it tricky, but a standard 24-bay 4U is as easy to service as a 1U or 2U chassis.

Also, just anecdotal, but once I have finished burn-in, I’ve never had anything other than a disk fail. RAM, NICs, etc., all run until I’m ready for an upgrade.

2 Likes

At home, I had no issue racking even a 4U chassis by myself, as long as I removed all the power supplies and occupied drive sleds first. Since it’s a one-time pull and re-install, it’s not really too time consuming.

Admittedly, I don’t have any servers higher than about 24U above the floor, since I use the higher spaces for things like PDUs, networking, and KVM.

1 Like

Not in the SuperMicro 847, at least; the motherboard tray is on top of those bays. Not sure how other manufacturers implement it.

It’s been some years since I did mine, but it was probably mostly full of drives at the time. I’ve never found the PSUs to be a significant factor in terms of weight, though.

1 Like

Comes down to how well those racks are attached to the ground and ideally the ceiling also. 4U servers with a lot of drives can produce an awful lot of bending moment - not something you want to pull out to the limit unless the rack is lashed top and bottom.

For my tiny SOHO Lian-Li A76 setup, the 12 HDD slots inside are perfect - easy to service, no bending, etc. If the data is truly archival and will likely never be touched again, I'd consider looking into LTO tape drives for the off-site storage part of the equation. Tape is very dense for very little weight, which makes it interesting for storage at Iron Mountain or wherever.

…or there’s just enough other weight in the rack that this isn’t an issue, which I’ll admit I was assuming.

1 Like

I've made a similar assumption in the past with my large tool cart, with near-disastrous results. Particularly in a SOHO setting where the 4U may be the only object with serious mass, and said mass is concentrated in the front of the chassis, you can generate a lot of bending moment and seriously shift the CG of the whole rack once you fully pull out the chassis.
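
To put very rough numbers on it (every figure below is an illustrative assumption, not a measurement):

```python
# Illustrative only: rough tipping moment of one loaded 4U chassis pulled out on rails.
# Every value here is an assumption for the sake of the example, not a measured figure.
g = 9.81                   # m/s^2
chassis_kg = 30            # assumed empty 4U chassis plus PSUs
drives_kg = 36 * 0.7       # assumed 36 x ~0.7 kg 3.5" drives
lever_m = 0.5              # assumed distance from the rack posts to the extended chassis' CG

moment_nm = (chassis_kg + drives_kg) * g * lever_m
print(f"~{moment_nm:.0f} N·m trying to tip the rack forward")   # ≈ 271 N·m
```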

If I were to have one of those 847-series units at home, I’d hang it vertically with one of those wall adapters. Virtually no bending moment and all the drives are just as convenient to get at as they are presently in my Lian Li.

1 Like

Same thought - a rack full of 4U servers won’t topple when you pull one at a time. A rack with just one on the other hand … :smile: