LTT DOES NOT fork TrueNAS

More information about HexOS, Eshtek, and TrueNAS is here:

1 Like

I would be in for that.
I like TrueNAS a lot, but it was a big time and money investment. Even a space investment: the home server I have takes up a big chunk of space (and I live in a condo) and makes a lot of noise.

False, statistics is not science, it is math. They are not the same. Science is based on inductive logic (moving from observations to generalizations to construct explanatory frameworks) and as such it does not prove things.

And yet perfect for my needs:
Already paid for hardware, it’s a backup for my always-on NAS, maximum space, 99.99999% static data, so I don’t have to turn it on very often or for long, good enough performance, don’t care how long it takes to resilver, replacement drives are cheap. (And all the flashing lights are really cool in a dark room.)

That’s more like cold storage than a NAS :slightly_smiling_face:

Well, HexOS seems to be an extra layer of (cloud-based?) interface on top of TrueNAS. HexOS’ site even explains that the TrueNAS interface remains locally available even when the HexOS interface is not, because the system is disconnected from the Internet.
And, of course, ZFS hardware requirements still apply, so for a properly designed system the money investment remains as is. And proper design means that the time investment cannot change much either.

2 Likes

I agree with @etorix here. If you read the Planning section of Uncle Fester’s Beginners Guide to TrueNAS you can see just how much effort needs to go into planning for your NAS - mainly sizing, hardware selection and disk layout - and you still need to do this whether you run TrueNAS or HexOS.

This will in the future also have a section on pre-production stress testing, and you should probably still do that with HexOS too.

So what HexOS will likely do - aside from providing a cloud-based interface allowing for remote management (privacy and security questions still to be answered) - will be to hide or simplify the technical complexity of both initial setup and ongoing monitoring, management and pool repairs.

Full disclosure: Uncle Fester’s Guide is currently in the process of a major enhancement, mostly written by me - so if there is anything factually incorrect you can blame me. Any help with fixing errors, completing the new sections (configuring storage, stress testing), updating the existing build and configuration instructions from CORE to SCALE, and editing it all for consistency, correctness, (dad-)humour and graphics is welcome.

3 Likes

This may be true if you are talking (say) 7 days vs 1 day, but if the resilvering were to take (say) an entire year, then you might feel differently - and if you use SMR disks just because they are free, that is not an outrageous expectation.
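For a rough sense of scale, here is a back-of-the-envelope sketch. The disk size and throughput figures are assumptions for illustration only; real resilver times also depend on how full and fragmented the pool is and on vdev width.

```python
# Back-of-the-envelope resilver time (assumed throughput figures, not measurements).
def resilver_days(disk_tb: float, mb_per_sec: float) -> float:
    """Days of continuous writing needed to refill one replacement disk."""
    seconds = disk_tb * 1e12 / (mb_per_sec * 1e6)
    return seconds / 86400

# Hypothetical 18 TB replacement disk:
print(f"CMR at a sustained ~150 MB/s: {resilver_days(18, 150):.1f} days")
print(f"SMR that has dropped to ~20 MB/s: {resilver_days(18, 20):.1f} days")
```

The point is simply that the same disk can resilver in a day or two at healthy sequential speeds, or leave the pool exposed for weeks once throughput collapses the way shingled drives tend to under sustained rewrites.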

OTOH if this is only a backup for data held elsewhere, who needs to resilver? You can destroy the pool, replace the disk, recreate the pool and then replicate the data from scratch again.

That said, in general terms each of us is free to either accept or reject advice from more experienced people, particularly where they have a logical rationale for it, as Philip (@pjrobar) does.

It is, however, just tragic (and to a small extent humorous) when, for example, a different user for whom this is the primary copy of their data ignores such advice, and we get a “told you so” moment a few months later: first they ask a question about their resilver having been running for more than two weeks, and then call for help another two weeks later because another drive failed during the resilver and the pool is now offline (permanently).

2 Likes

I don’t see any problem. It’s not about usability, it’s not about UX design, it’s not about a certain number of technical features, it’s not about configuration … it’s all about knowledge (and the ability to avoid the Dunning-Kruger effect). As long as everything is documented …

“Not a single cultural advance in human history has occurred without people shooting off a foot or losing a hand.” - Book of JEDI-Wisdom

It seems to be more aimed at the people using Synology/QNAP for a NAS, imo. All the news from it has been disappointing tbh: cloud only, no importing from TrueNAS SCALE, and no plans to ever add it. I’m totally new to NAS and built my system a few months ago; I’ve been waiting for a beta of HexOS for some time, but I think it might be a bit too basic for me. I don’t want anything cloud, and I don’t want to rely on having to connect online to it. I don’t know, maybe I just expected more.

But we all agree that a 20-wide VDEV is bad in so many ways.

TL;DR: Choose a RAID-Z stripe width based on your IOPS needs and the amount of space you are willing to devote to parity information. If you need more IOPS, use fewer disks per stripe. If you need more usable space, use more disks per stripe. Trying to optimize your RAID-Z stripe width based on exact numbers is irrelevant in nearly all cases.

–Matthew Ahrens, ZFS Project Co-founder

ZFS RAIDZ stripe width, or: How I Learned to Stop Worrying and Love RAIDZ
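To illustrate the space half of that trade-off, here is a simplified sketch. It only counts whole parity disks and ignores recordsize, padding and compression, so the real numbers will differ:

```python
# Simplified RAID-Z space overhead: parity disks / total disks per vdev.
# Real overhead also depends on recordsize, ashift padding and compression,
# and any RAID-Z vdev delivers roughly the random IOPS of a single disk.
def parity_fraction(width: int, parity: int) -> float:
    return parity / width

for width in (4, 6, 10, 20):
    print(f"{width:>2}-wide RAIDZ2: {parity_fraction(width, 2):.0%} parity overhead")
```

The space saved by going wider shrinks quickly past roughly ten disks, while resilver time and failure exposure keep growing - which is the practical meaning of “not too wide.”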

1 Like

From the same source:

To summarize: Use RAID-Z. Not too wide. Enable compression.

…with particular attention to the “not too wide.”

4 Likes

And 20 is too wide for most sane users.

3 Likes

No, it’s too wide for other people’s uses, but perfect for mine.

Yes, wide RAID-Zx vDevs are acceptable for some uses.

However, there are problems that less experienced or inexperienced ZFS users will run into that they did not expect, or even want. Weeks-long re-silvers or scrubs are not something new users likely expect. Or worse, making a very wide RAID-Z1 and experiencing a second disk failure during the first disk replacement.
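To put a rough number on that second-failure scenario, here is a toy calculation. It assumes independent failures and a made-up 3% annual failure rate per drive, both of which flatter reality (drives bought and aged together tend to fail together):

```python
# Toy estimate of a second disk failing while a wide RAID-Z1 resilvers.
# Assumes independent failures and a flat 3% annual failure rate per drive;
# real drives from the same batch routinely do worse than this.
AFR = 0.03  # assumed annual failure rate per drive

def second_failure_prob(surviving_drives: int, resilver_days: float) -> float:
    p_one = AFR * resilver_days / 365            # one given drive fails in the window
    return 1 - (1 - p_one) ** surviving_drives   # at least one survivor fails

for days in (1, 7, 28):
    print(f"{days:>2}-day resilver, 19 surviving drives: "
          f"{second_failure_prob(19, days):.1%}")
```

The longer the resilver drags on and the more surviving drives there are, the better the odds that RAID-Z1’s single parity is not enough.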

Anyone who wishes to exceed common practices is quite free to do so.

I personally have 2 configurations that are “odd”:

  1. My very old laptop (2014, but using 2-year-old tech) has only 1 storage port, a 2.5" SATA disk bay. I have a 1TB SSD partitioned into 3 partitions: 2 for the ZFS OS mirror and the remaining space for misc. non-redundant files (also using ZFS).
  2. My miniature media server has 2 storage ports, 1 mSATA and a 2.5" SATA disk bay. I mirror the OS via ZFS across both, but I stripe the remaining space with ZFS for media and other storage.

Neither of those uses TrueNAS, and I am well aware of the potential problems:

  1. Loss of the 1TB SSD means total loss of both halves of the OS Mirror.
  2. Bad blocks in media files means having to restore the file from backups. Have had to do that more than a dozen times already.

My opinion is that a “user friendly” interface to ZFS should either have severe warnings for “odd” or “extreme” configurations, or simply not allow them at all.

Anyone wanting “odd” or “extreme” configurations should have knowledge of the potential problems, and can make them happen regardless of a GUI limitation.

3 Likes

Just out of curiosity, what possible benefit is there from having two partitions mirrored on a single disk?

Bad block protection.

Without something like “copies=2” or 2 mirrored partitions, a single disk can be a source of data loss for files. If such a file were critical for booting and running, I would end up without a bootable laptop.

Is it a great solution?
NO.

Would I select that for a new laptop?
NO.

In fact, I went to a lot of effort a few years ago to find a new laptop that had 2 bootable media, NVMe and 2.5" SATA. That made mirroring the OS an easy choice. As for speed, I want reliability over speed for a portable computer where I have few recovery options (e.g. in a hotel far from home).

I bought that old laptop in early 2014, and it had 2-year-old technology in it. It was cheap at a time when I needed to save money, and just before I standardized on OpenZFS for everything.

ZFS on Linux 0.7.x versions were a bit raw at times. The 2013 and 2014 time frame saw a lot of ZFS on Linux changes, one of which made using ZFS on Linux much better: static API entry points. Before then, if you did not update your initial RAM disk properly on a ZFS userland update, you could end up with an un-bootable computer.

Thanks for the explanation. Wouldn’t a mirror simply copy the bad block on both partitions? Anyway, cheers for explaining.

Nope: when a block goes bad on one side of the mirror, ZFS recognizes it and uses the other copy to fix the bad block.
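For anyone curious about the mechanism, here is a tiny sketch in Python - not ZFS code, just the idea: because the checksum is stored outside the block itself, the mirror knows which side is good, so a bad block gets repaired rather than propagated.

```python
# Toy illustration (not ZFS code) of checksum-based self-healing on a mirror.
# ZFS keeps a checksum for every block in its parent block pointer; on read it
# verifies the data, and a side that fails the check is rewritten from a side
# that passes.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def read_with_self_heal(sides: list, expected: str) -> bytes:
    for data in sides:
        if checksum(data) == expected:
            # Repair every side that does not match the known-good checksum.
            for i, other in enumerate(sides):
                if checksum(other) != expected:
                    sides[i] = data
            return data
    raise IOError("all copies corrupt: unrecoverable block")

good = b"critical boot file block"
mirror = [good, b"bit-rotted garbage"]            # one side has gone bad
print(read_with_self_heal(mirror, checksum(good)))
print(mirror[1] == good)                          # True: the bad side was healed
```

If every copy fails its checksum (e.g. both partitions on the same dying SSD), there is nothing left to heal from, which is exactly the risk being accepted in the single-disk layout above.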

2 Likes

The fact that it’s perfect for you (which itself could be questioned) doesn’t change the fact that it’s a questionable layout; a slightly better response could have been “Maybe it’s too wide for other people’s uses, but perfect for mine”: please note the absence of negation.

But that’s semantics, and it appears no one will be able to convince you otherwise (even when quoting the same resource you use to validate your claim).