TrueNAS SCALE 24.04.0 Now Released!

Laziness has nothing to do with professionalism either, especially when it can lead to total loss of people’s data in software they trust.

To help you out, I found a quick way to do it, but it still has to be tested:

@@ -547,7 +547,8 @@
     fi
 
     _disksparts=$(for _disk in ${_disks}; do
-       echo $(get_partition ${_disk} 3)
+       _part_tmp=$(get_partition ${_disk} 3)
+       echo '/dev/disk/by-uuid/'$(lsblk -n /dev/$_part_tmp -oUUID|xargs)
     done)
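
For anyone who wants to see which stable identifiers exist for a given disk before relying on this, something like the following works (sda is only an example; note that PARTUUIDs are unique per partition, which may matter here):

lsblk -o NAME,PARTUUID,UUID /dev/sda              # filesystem UUID and partition UUID per partition
ls -l /dev/disk/by-uuid/ /dev/disk/by-partuuid/   # the symlinks udev actually creates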

GRUB has to be checked additionally; I had no time to check it.

And I really recommend making all pool creation through the web interface
use GPT/GEOM labels and/or UUIDs by default, and stating that in the documentation.

It would save a lot of trouble when things go wrong.

For the nth time, this is the boot pool you’re hyperventilating about. Loss of the boot pool will not lose a single byte of data (unless the data pool is encrypted and the user doesn’t have a backup of the encryption key, which is a separate problem). It’s disposable, and that’s by design. This at least borders on FUD.

I think we all agree you’ve raised a valid point, but you’re blowing it wildly out of proportion.

TrueNAS, and FreeNAS before it, have always used UUIDs for pools created through the web interface, back to the days of 8.0. But I don’t know why you care that the documentation reflect this.
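
That’s easy to verify on an existing system; with a pool named tank (the name is just an example), the full member paths show whether stable identifiers are in use:

zpool status -P tank    # -P prints full vdev paths; a GUI-created pool should show /dev/disk/by-partuuid/... style paths rather than bare /dev/sdX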

Tried upgrading this morning after taking a checkpoint of my Apps pool. Not going too smoothly. ix-zfs.service timed out after 15 minutes, only managing to import my main data pool (4 x 6-disk RAIDZ2 vdevs); my apps pool (2x NVMe SSDs mirrored) and another pool of mirrored spinners were not imported on boot. The system did eventually import both of them without intervention by me, though.

Then into the web GUI. The dashboard never did fully load–the CPU and Memory sections just showed the spinning “wait” indication, and the network section showed an error getting the graph data.

On to the apps, which was my bigger concern. As I feared, it was showing “Error in Apps Service” and the apps weren’t running. Suspecting this was because the system had failed to import the apps dataset by the time it finished booting, I went to Settings → Choose Pool and set it to the pool that had since been imported. That didn’t change anything.

After about half an hour of uptime, hoping things would settle down and fix themselves (no change), I rebooted. The behavior on reboot was the same: ix-zfs.service timed out after 15 minutes, only the boot pool and main data pool were available when the boot process finished, but the other two did come online eventually.

The GUI issues, at least in the dashboard, were resolved.

Apps still weren’t running. Alert is showing this error:
[screenshot of the Apps alert error]

Those datasets do exist; I continue to assume that the problem is that the system somehow failed to import the software pool in time. Going to Choose Pool and selecting the software pool doesn’t seem to change anything; the error persists on the screen.
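
If anyone else hits this, a quick way to confirm what actually got imported (the pool name here is just a placeholder for your apps pool) is:

systemctl status ix-zfs.service   # shows whether the pool-import unit timed out
zpool list software               # is the apps pool actually imported?
zfs list -r software | head       # and do its datasets exist?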

Not sure what the problem is here; looks like time to revert to Cobia.

I tried to report a bug through the GUI, but I don’t know if it “took”: the window disappeared without giving me any confirmation that the ticket was entered, and I haven’t received an email yet.

Can you maybe send me a private message with the contents of /var/log/middlewared.log?

Confirming we got the bug report.

Were there any changes in GPU support compared to Cobia? My Frigate install relies on an NVIDIA GeForce GTX 750 Ti, and it would be nice to know before upgrading.

Welcome!

You’ll want to look at the release notes, this section specifically:

We’ve started listing the driver versions for GPUs so you can check that your hardware will be well supported.
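
If you want to see what your current install is using for comparison, this works on a system with the NVIDIA driver loaded (the query columns chosen here are just an example):

nvidia-smi --query-gpu=name,driver_version --format=csv,noheader   # report GPU model and loaded driver version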

The new ZFS ARC memory fixes seem to be working great! Thanks!

Also, Jailmaker is awesome.

Honestly, I have not gotten that far yet to test anything further regarding the data pool.

Secondly, a positive point in all this is that there is a git repo I can use to build
myself a bootable ISO and test my proposed tweaks…

So maybe I was a bit over the top when I mentioned it, but we have common ground that it should be solved.

I will investigate further… but I am stuck on more issues now.

git clone https://github.com/truenas/scale-build.git
cd scale-build
make checkout

Works…

sudo make update
. ./venv-05408a3/bin/activate && scale_build update
[2024-04-26 17:21:45,937] WARNING: Running with less than 16GB of memory. Build may fail...
[2024-04-26 17:21:45] System state Validated
[2024-04-26 17:21:45] Manifest Validated
[2024-04-26 17:21:45] Dataset schema Validated
[2024-04-26 17:21:45] Bootstrapping TrueNAS rootfs [UPDATE] (./logs/rootfs-bootstrap.log)
Traceback (most recent call last):
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/bin/scale_build", line 33, in <module>
    sys.exit(load_entry_point('scale-build==0.0.0', 'console_scripts', 'scale_build')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/main.py", line 89, in main
    build_update_image()
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/update_image.py", line 21, in build_update_image
    return build_update_image_impl()
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/update_image.py", line 34, in build_update_image_impl
    package_bootstrap_obj.setup()
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/bootstrap/bootstrapdir.py", line 27, in setup
    self.setup_impl()
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/bootstrap/bootstrapdir.py", line 61, in setup_impl
    run(['chroot', self.chroot_basedir, 'apt', 'install', '-y', 'gnupg'])
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/utils/run.py", line 38, in run
    raise CallError(
scale_build.exceptions.CallError: Command ('chroot', './tmp/tmpfs/chroot', 'apt', 'install', '-y', 'gnupg') returned exit code 100
make: *** [Makefile:41: update] Error 1

fails

sudo make packages
[sudo] password for amtriorix: 
. ./venv-05408a3/bin/activate && scale_build packages
[2024-04-26 17:13:14,268] WARNING: Running with less than 16GB of memory. Build may fail...
[2024-04-26 17:13:14] System state Validated
[2024-04-26 17:13:14] Manifest Validated
[2024-04-26 17:13:14] Dataset schema Validated
[2024-04-26 17:13:14] Building packages (./logs/build_packages.log)
[2024-04-26 17:13:14] Setting up bootstrap directory
Traceback (most recent call last):
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/bin/scale_build", line 33, in <module>
    sys.exit(load_entry_point('scale-build==0.0.0', 'console_scripts', 'scale_build')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/main.py", line 86, in main
    build_packages(args.packages)
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/package.py", line 104, in build_packages
    _build_packages_impl(desired_packages)
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/package.py", line 112, in _build_packages_impl
    PackageBootstrapDir().setup()
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/bootstrap/bootstrapdir.py", line 27, in setup
    self.setup_impl()
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/bootstrap/bootstrapdir.py", line 61, in setup_impl
    run(['chroot', self.chroot_basedir, 'apt', 'install', '-y', 'gnupg'])
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/utils/run.py", line 38, in run
    raise CallError(
scale_build.exceptions.CallError: Command ('chroot', './tmp/tmpfs/chroot', 'apt', 'install', '-y', 'gnupg') returned exit code 100
make: *** [Makefile:36: packages] Error 1

fails

sudo make iso
. ./venv-05408a3/bin/activate && scale_build iso
[2024-04-26 17:20:56,933] WARNING: Running with less than 16GB of memory. Build may fail...
[2024-04-26 17:20:56] System state Validated
[2024-04-26 17:20:56] Manifest Validated
[2024-04-26 17:20:56] Dataset schema Validated
Traceback (most recent call last):
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/bin/scale_build", line 33, in <module>
    sys.exit(load_entry_point('scale-build==0.0.0', 'console_scripts', 'scale_build')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/main.py", line 92, in main
    build_iso()
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/iso.py", line 19, in build_iso
    return build_impl()
           ^^^^^^^^^^^^
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/iso.py", line 29, in build_impl
    if not os.path.exists(update_file_path()):
                          ^^^^^^^^^^^^^^^^^^
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/image/manifest.py", line 76, in update_file_path
    return os.path.join(RELEASE_DIR, f'TrueNAS-SCALE-{version or get_image_version()}.update')
                                                                 ^^^^^^^^^^^^^^^^^^^
  File "/srv/data_jupiter/test2/scale-build/venv-05408a3/lib/python3.11/site-packages/scale_build-0.0.0-py3.11.egg/scale_build/image/manifest.py", line 69, in get_image_version
    raise CallError(f'{RELEASE_MANIFEST!r} does not exist')
scale_build.exceptions.CallError: './tmp/release/manifest.json' does not exist
make: *** [Makefile:33: iso] Error 1

fails

This kind of thing should be a new thread. But to answer your question: you need to do a clean build first; some apt repo things changed.
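
Roughly, a clean rebuild looks like this (check the scale-build README for the exact targets on your branch; this is just the usual sequence):

cd scale-build
sudo make clean       # drop the stale chroot/bootstrap state
make checkout         # re-fetch the sources
sudo make packages    # then rebuild packages, the update image, and the ISO
sudo make update
sudo make iso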

This is somewhat unrelated to 24.04.0. If you’re having issues with the SCALE builder, please create a new thread.

Not looking good. I booted back into Cobia, and my apps came up. I then booted into Dragonfish to collect the middlewared.log for @awalkerix. Same as I described up-topic: importing the pools timed out, and the Apps service didn’t start because it didn’t see the appropriate datasets. So back to Cobia I go. And since the apps had come up last time in Cobia without intervention, I destroyed the checkpoint on the Apps pool. And now I’m seeing this:
[screenshot of the new error]

I consider it important because in the past I have seen issues related to this.
It’s not always the case that the drives get rearranged when a disk dies,
but it can happen…

Yes, I did check again, and Proxmox makes the same ‘error’.
I was confused.

XigmaNAS does it the right way, but it runs on FreeBSD, and from experience
I know that in FreeBSD they hammered a lot on the use of GPT/GEOM labels or
UUIDs, which is why I brought it up.
They use gpt/sysdisk-0 and gpt/sysdisk-1 labels.
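
For reference, this is roughly how such labels are applied and used on FreeBSD (the disk name and partition index below are only examples):

gpart modify -i 2 -l sysdisk-0 ada0                     # attach a GPT label to partition 2 of ada0
zpool create zroot mirror gpt/sysdisk-0 gpt/sysdisk-1   # the label under /dev/gpt/ is a stable vdev name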

I am trying to create the ISO from the TrueNAS SCALE repo to test my patch,
because creating a hybrid ISO with a writable partition is not really my thing…

I upgraded from 23.10.2 to 24.04.0 without any issues. All TrueCharts apps are working as expected.

OK, it was outside the main thread, but it was to check my patch.

When I dive deeper into the issue, I see red flags everywhere saying NOT to use
device names but labels instead. Even in articles like ZfsOnRoot.

ref: Ubuntu 22.04 Root on ZFS — OpenZFS documentation

For me it is a huge show-stopper when plain device names are used. Everything goes well until a disk or controller has issues. A rearrangement of the disks can really lead to a huge number of problems, even with ZFS.
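
For what it’s worth, a pool that was imported with bare device names can usually be switched to stable identifiers without rebuilding it; a rough sketch, with an example pool name and the pool idle:

zpool export tank
zpool import -d /dev/disk/by-id tank    # re-import using the persistent by-id names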

I did try to compile the ISO from the git repo on Kubuntu. I tried again with both the master and the v24 branches. You can see the results; only make checkout works.

Then I see Docker in TrueNAS. That is another huge issue for me, since it is a common attack
vector, given that most setups run as root. Docker is a complete no-go for me.

I decided to go back to XigmaNAS. Pure old FreeBSD, and that’s it.

I was excited about the great web UI, but for me there are still a lot of issues to solve
in TrueNAS: the discipline to keep working repo branches, an extremely safe and well-built zpool setup, and the exclusion of dangerous stuff that can ruin your NAS.

I will test it again in a few years.

What makes you think Docker is in TrueNAS?

Seriously, are you just trolling? You don’t seem to know anything about TrueNAS or how it works, you joined this forum to make a mountain out of a molehill, you still don’t seem to understand how insignificant the boot pool is, and now you incorrectly think there’s something else wrong with it (neither the current nor the last major release of SCALE has Docker). And based on a mountain of false assumptions and conclusions, you’re going “back to Xigmanas,” which you (also incorrectly) describe as “Pure old FreeBSD” as though it doesn’t also have a GUI (it does, doesn’t it?) and other features.

Edit: and XigmaNAS is no better BECAUSE IT DOESN’T USE ZFS FOR THE BOOT DEVICE.

Yea, I’m going to have to agree with @dan here. This is either the worst kind of misinformation and trolling or else you’ve just fundamentally missed the point entirely. Boot devices on TrueNAS are designed to be disposable. The entire reason we store the full configuration and settings into a single config database is based on this. Your data matters. Your configuration matters. Your boot device? That is entirely replaceable at any time. That means you can withstand catastrophic hardware failure, move your data drives to a new system, install fresh and restore your configuration to be back in business in moments. Again, this is by design. You are free to disagree with the design, but you’d be going against 15+ years of FreeNAS/TrueNAS history which demonstrates that this is a really solid design choice.
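
To make that concrete: the configuration is a single file, so keeping a copy of it off the boot device is the only “boot pool backup” anyone needs. A rough sketch, assuming the usual SCALE database location and an example destination dataset (the supported route is the web UI’s config export/download, but the idea is the same):

cp /data/freenas-v1.db /mnt/tank/backups/truenas-config-$(date +%F).db   # copy the config database to a data pool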

So let’s evaluate the factual claims you’ve made in this topic:

  • TrueNAS uses device identifiers (e.g., /dev/sda) in the boot pool: True
  • This puts users’ data at risk: False; the boot pool is disposable by design
  • (implicit) TrueNAS uses device identifiers in data pools: False; it has always used stable identifiers (UUIDs) for data pools, back to the days of FreeNAS 8.0 (nearing 15 years ago now)
  • Proxmox uses UUIDs in its ZFS pools: False, though it does now use /dev/disk/by-id/* identifiers; I’m not sure when they made that change
  • XigmaNAS uses UUIDs for its boot pool: False–it doesn’t use a boot “pool”, as it doesn’t use ZFS for its boot device
  • Docker is in TrueNAS: False; it hasn’t been since Bluefin
  • XigmaNAS is “[p]ure old FreeBSD”: False; it has a web interface and middleware of a sort, in broad concept very similar to TrueNAS

The most charitable conclusion I can reach is that you have no idea what you’re talking about other than the generalization that ZFS pools should ordinarily be constructed with member identifiers that won’t change, such as UUIDs. The other possibility is that you know perfectly well what you’re talking about, and you’re just being dishonest. Whichever is the case, it would behoove you to take some greater care for accuracy in your posts.

To celebrate the release of TrueNAS Scale Dragonfish, I spent the last few days producing a video on how to take advantage of the Sandbox feature with Jailmaker :slight_smile:

One small gripe I have about “Jailmaker” is the name itself. :sweat_smile:

It feels awkward invoking jlmkr (“jail maker”) any time you want to interact with existing sandboxes. You’re not “making” anything. You’ll often be “starting,” “stopping”, “entering”, “querying”, “updating”, or “maintaining” an existing sandbox.
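
For example, day-to-day use looks something like this (the sandbox name is made up; check jlmkr --help for the exact subcommands):

jlmkr start mysandbox    # “starting”
jlmkr shell mysandbox    # “entering”
jlmkr list               # “querying”
jlmkr stop mysandbox     # “stopping”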

It’s the equivalent of invoking a command called compiler, when you mostly use it against existing binaries or data.

It just feels weird…

Not sure how much say iXsystems has over the name, since as far as I can tell, it’s not originally an in-house project.

Some alternative names might include:

  • scalejail
  • sandjail
  • spawnbox
  • spawncage
  • spawner
  • linjail
  • nixjail
  • jailspawn
  • jailer (as long as @Jailer doesn’t sue for trademark infringement)
  • jailctl (to honor Lennart Poettering’s legacy)