Pool Upgrade Question - Is this a problem?

*I was getting a warning on my boot pool (2-SSD ZFS mirror) that I needed to upgrade the pool, so I ran a zpool upgrade from the shell. I am now getting the message “you might need to update the boot code”. Is this likely to be a problem, or can I ignore this warning? So far I have not rebooted the system for fear that it might not boot.

I have shown the zpool status both before and after the zpool upgrade below.

Guidance would be much appreciated.*

  pool: freenas-boot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
	The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:47 with 0 errors on Thu May  9 03:46:47 2024
config:

	NAME          STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  mirror-0    ONLINE       0     0     0
	    ada3p2    ONLINE       0     0     0
	    ada2p2    ONLINE       0     0     0

errors: No known data errors
FN#>zpool upgrade freenas-boot
This system supports ZFS pool feature flags.

Enabled the following features on 'freenas-boot':
  draid

Pool 'freenas-boot' has the bootfs property set, you might need to update
the boot code. See gptzfsboot(8) and loader.efi(8) for details.

FN#>zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0B in 00:01:47 with 0 errors on Thu May  9 03:46:47 2024
config:

	NAME          STATE     READ WRITE CKSUM
	freenas-boot  ONLINE       0     0     0
	  mirror-0    ONLINE       0     0     0
	    ada3p2    ONLINE       0     0     0
	    ada2p2    ONLINE       0     0     0

errors: No known data errors

You should upgrade the boot code. Please post the output of

gpart show

if you need assistance with that.


Thanks @pmh - Here is the info you requested. I was really surprised to see [CORRUPT] - no warnings in the GUI.

FN#>gpart show
=>        34  7814037101  ada0  GPT  (3.6T)
          34     4194398        - free -  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.6T)
  7814037128           7        - free -  (3.5K)

=>        34  7814037101  ada1  GPT  (3.6T)
          34     4194398        - free -  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.6T)
  7814037128           7        - free -  (3.5K)

=>       40  234441568  ada2  GPT  (112G) [CORRUPT]
         40       1024     1  freebsd-boot  (512K)
       1064  234440544     2  freebsd-zfs  (112G)

=>       40  234441568  ada3  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064  234440536     2  freebsd-zfs  (112G)
  234441600          8        - free -  (4.0K)

=>         40  15628053088  da3  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)

=>         40  15628053088  da5  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)

=>         40  15628053088  da7  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)

=>         40  15628053088  da1  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)

=>         40  15628053088  da2  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)

=>         40  15628053088  da0  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)

=>         40  15628053088  da4  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)

=>         40  15628053088  da6  GPT  (7.3T)
           40           88       - free -  (44K)
          128      4194304    1  freebsd-swap  (2.0G)
      4194432  15623858696    2  freebsd-zfs  (7.3T)

  1. It is generally not recommended to upgrade the boot pool.

That’s why there is no indication in the UI. The zpool status message is only informational and can be ignored. Nowhere in that message does the word “WARNING” appear. So what gave you the impression there was one?

If you had done that in TN SCALE, you would have rendered your system unbootable. Luckily, on CORE, FreeBSD can boot any pool that the same FreeBSD version can create. But one might have to update the boot loader before the next reboot of the system!

  1. Where do you see “CORRUPT” in your output? I somehow cannot find it.

  2. To upgrade your boot loader.

It looks from the gpart show output that your system is booting via legacy/BIOS and not UEFI. So the commands to use would be:

gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
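For anyone unsure which case applies, the distinction can be read straight off the gpart show output: a freebsd-boot partition means legacy/BIOS booting via gptzfsboot, while an efi partition means UEFI booting via loader.efi. A minimal sketch of that check (the helper name is mine, not a standard tool):

```shell
# Sketch only: report which boot path a disk uses, judging by the
# partition types gpart shows. Disk name is passed as an argument.
boot_mode() {
    if gpart show "$1" | grep -q freebsd-boot; then
        echo "legacy/BIOS: write pmbr + gptzfsboot with gpart bootcode"
    elif gpart show "$1" | grep -q ' efi '; then
        echo "UEFI: copy a current loader.efi onto the ESP"
    else
        echo "unknown"
    fi
}
```

On a system partitioned like the one above, with freebsd-boot partitions on ada2/ada3, this would point at the legacy/BIOS path.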

HTH,
Patrick


The CORRUPT message is on ada2 (one of the boot drives).

Should I still go ahead and execute those commands or do I need to do something else first?

And yes, you are correct… legacy bios.

Ah - time to wipe my glasses I guess :slightly_smiling_face:

Yes - before you update the boot loader, try:

gpart recover ada2
gpart show ada2

Looks like that got it.

FN#>gpart recover ada2
ada2 recovered
FN#>gpart show ada2
=>       40  234441568  ada2  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064  234440544     2  freebsd-zfs  (112G)

FN#>

Should I go ahead and run the 2 commands you gave me now?

Sure.


I assume this is the expected output:

FN#>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2
partcode written to ada2p1
bootcode written to ada2
FN#>gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
partcode written to ada3p1
bootcode written to ada3
FN#>gpart show ada2
=>       40  234441568  ada2  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064  234440544     2  freebsd-zfs  (112G)

FN#>gpart show ada3
=>       40  234441568  ada3  GPT  (112G)
         40       1024     1  freebsd-boot  (512K)
       1064  234440536     2  freebsd-zfs  (112G)
  234441600          8        - free -  (4.0K)

Absolutely.

Thanks so much for saving my a## - I really appreciate it.

Just a bit off topic, but where do you think CORE is going? Is it in “maintenance mode”, so they will fix any serious CVEs but not much else?

I don’t know if I am interpreting the Jira info properly, but IIUC there are still a lot of serious bugs with SCALE.

There are various threads including some with participation and statements by iXsystems employees. You might want to browse/search the old forum.

What I think is irrelevant. It’s “just supported enough” for my personal taste that I’ll update to 13.3; as for what will happen in the coming two years, time will tell.


Thanks for that… I know you are (or were) very pro-FreeBSD, have been very active in this forum, and have a lot of experience with TrueNAS, so I really value your perspective.

Have you been following SCALE closely enough to have an opinion as to its “readiness”? I am getting to the point that I need Nextcloud and possibly a couple of other self-hosted apps that I don’t really want in the cloud, and my TrueNAS spends most of its time idle.

SCALE seems like it would be ideal… my question is: is it stable enough? I see a lot of issues, some of which seem like they might cause real problems, but I’m not experienced enough to know. Having said that, I have seen the same with CORE over the years and have really never had a problem. Then again, I tend to update very infrequently, near the end of each version (my recent history is 12.0-U6, 12.0-U8, 12.0-U8.1, 13.0-U5.3, 13.0-U6.1, based on what I could deduce were very stable update points).
I’m thinking I should make the jump near the end of the 24.04 branch of SCALE. I’d like to do it now, but I can’t afford to live with too much instability.

Any thoughts?

And thanks again for the assist with the boot pool.

I don’t trust the backup and recovery mechanisms just yet. Can you do snapshots and replication of the ix-apps dataset now? IIRC you cannot.

You can replicate a jail to anywhere, even drop ZFS entirely and just tar it up, then boot it on any machine. I stand by it: jails are the most robust container technology available. FreeBSD jails or Solaris zones, that is.
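As a concrete sketch of the “just tar it up” workflow (jail name and paths are made up, and a throwaway directory stands in for a real iocage jail tree):

```shell
# Simulate a jail directory as it would live under /mnt/<pool>/iocage/jails.
JAILS=$(mktemp -d)
mkdir -p "$JAILS/mycloud/root/etc"
echo 'hostname="mycloud"' > "$JAILS/mycloud/root/etc/rc.conf"

# Back up: archive the whole jail directory, config and rootfs together.
tar -czf "$JAILS/mycloud.tar.gz" -C "$JAILS" mycloud

# Restore on any FreeBSD host: extract into its jails directory, fire up.
TARGET=$(mktemp -d)
tar -xzf "$JAILS/mycloud.tar.gz" -C "$TARGET"
cat "$TARGET/mycloud/root/etc/rc.conf"
```

On a real system you would stop the jail first and extract into the target host’s jails directory, but the mechanism is exactly this: plain tar, no ZFS required.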

I frequently restore ix-applications from a snapshot, as I mess about in there a lot and break things. I don’t know about replication, but I would have assumed you can (though I will say I do not like k3s and am waiting for proper sandbox support).

Edit: also if you haven’t seen it, there’s the mobile UI :stuck_out_tongue: am forum posting from a train

I am assuming you mean for the apps (or do you mean TrueNAS config or both)?

Just want to make sure I understand… take a tar of the jail and extract it on a clean drive on bare metal? So I assume the jail has the boot code and a bare-bones FreeBSD OS?

IIRC you have Nextcloud running on CORE. I’m assuming that you have been running it long enough to go through a couple of updates? If so, how much work is it to update, and did you have problems? IIUC the Docker Nextcloud AIO is simple to install, and upgrades are a no-brainer: just change the image name and run. I see there is a plugin, but in my past experience the maintenance of plugins is a bit spotty.

Should I maybe consider a Linux VM? (Or are VMs going away?) As much as possible I want to insulate myself from issues when I upgrade TrueNAS.

Any thoughts/guidance?

Not sure my 2 cents are worth anything, but after all the talk about this topic I decided to stay with CORE for as long as I possibly can. I started with FreeNAS back then; FreeBSD feels familiar and I know my way around. I am very happy with my Plex and Nextcloud jails, and now that I have everything set up just the way I want and running just as expected, I see no reason to go through all that again by moving over to SCALE. I never bothered much with Linux and never got further than trying out Ubuntu on a dead-end laptop, so that would once again be a learning curve for me. If I were forced, I would probably change over, but this is now and then is then.


Jailmaker makes “jails” on SCALE. It works on Bluefin and onwards, I believe.

Each jail is a dataset in pool/jailmaker/jails and can literally be tarred or replicated to any other system. Doesn’t actually have to be truenas. Or even zfs.

A jail consists of a rootfs which is literally an expanded tarball of a linux distro, and two text config files.

As jailmaker discovers jails via directory iteration, there is no setup to restore a jail :wink:

And I made a video on it :wink:

Yes, I’d like to have a simple way to backup and restore either all of my apps’ or a single app’s state, transfer to another system, etc.

There has been very strong advice in the past not to take any snapshots of the ix-applications dataset or anything below it, or things would break.

Hence the question. Maybe the situation is different now.

I rely on a recursive snapshot and replication of all of /mnt/<pool>/iocage/jails. Hourly. Works great.
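Spelled out as commands, that hourly routine might look like the sketch below. Pool, dataset, and snapshot names (tank, backup, auto-*) are assumptions, and it is defined as a function rather than run, since an incremental zfs send needs a previous snapshot already on the target:

```shell
# Hourly recursive snapshot of all iocage jails, then incremental
# replication to a backup pool. All names here are examples only.
replicate_jails() {
    prev="$1"                            # last snapshot already on the target
    now="auto-$(date +%Y%m%d-%H%M)"
    zfs snapshot -r "tank/iocage/jails@${now}"
    zfs send -R -i "@${prev}" "tank/iocage/jails@${now}" \
        | zfs receive -F "backup/jails"
}
```

In practice a tool like zfs-autobackup, sanoid/syncoid, or TrueNAS’s built-in replication tasks handles the snapshot bookkeeping for you.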

Untar it on any sufficiently recent FreeBSD installation, of course. Extract, fire up. ZFS is not even needed, but of course with ZFS and replication instead of tar, jails really shine.

We are talking about virtualisation/container infrastructure, right? So I think it is fair to assume that you have an assortment of ready hosts. Copy a jails directory structure to any host by whatever means, fire up. But I’m repeating myself.

None at all. Ever.

pkg update
pkg upgrade
pkg autoremove
# restart jail
su -m www -c "/usr/local/bin/php /usr/local/www/nextcloud/occ upgrade"
# done

Log in to the Nextcloud admin backend and check whether it complains about “missing indices” or some newfangled stuff to put into your nginx config - then just do what it tells you.
Dead easy.

But then I have been a systems administrator for >30 years and my company offers Nextcloud as a service - in FreeBSD jails, but not on TrueNAS. So I could perform all of this in the middle of the night after a couple of pints just the same.

And this is the part where I always get a little bit disappointed up to downright angry.

If you switch the image and start Nextcloud, you still must perform the php occ upgrade dance. Every time. It’s mandated by Nextcloud’s documentation and confirmed by experience.

So someone put in the work to perform this step when you upgrade your TN SCALE “app”. But the same company could not be bothered to do the same for their jail based plugins. So whenever a “naive” (not in any way meant as an insult!) user without my experience updates their Nextcloud plugin, it breaks.

That’s why they have such a bad reputation. But the switch to Docker does not change the fact that somebody has got to own the application including taking care of updates and the aftermath that follows them.

And they didn’t for jails. And now it’s just “see? everything is so much simpler with Docker!”

No! Someone put in the work to make it a pleasant and reliable experience. The amount of work necessary is no different from what jails need. You just have to do it. We did - we have Ansible plays that update a Nextcloud installation, for example.

And to show you that these are not empty claims:

  • we run more than 1000 customer applications in FreeBSD jails, two dozen or so of them Nextcloud installations
  • we update the entire jail and the entire middleware to the latest FreeBSD patch release and the latest quarterly packages - every single month!
  • for all 1000 jails!
  • done by one person over the course of one working day (“patchday” as we call it following Microsoft’s lead)
  • downtime for each single jail: depending on load and shutdown time, anywhere from under 10 seconds to 60 seconds

Kind regards,
Patrick

So Linux reinvented FreeBSD jails. Because that is exactly how a FreeBSD jail works. To the letter. Why should I prefer these over the original technology?
