Unable to SMB share 291TB 13 yo dataset after Core -> Scale upgrade

First, thanks to iXsystems for creating a community edition; it is a great service. I’m a proud owner of a 7-year-old 45Drives system which has served me faithfully for years.

I’m in the process of converting three ~300TB, 13-year-old Core systems to Scale. Back in 2011 the primary dataset “Volume” was created and shared without child datasets, which worked well for over a decade but is now likely the source of my issues. Yes, I know that sharing the pool’s root dataset is no longer recommended, but it has worked up to this point.

Upgrading Core to Scale worked well. However, I ran into errors renaming the pool “Volume” to “tank”, with the simple goal of renaming the pool and remounting its primary dataset (previously at /mnt/Volume) at /mnt/tank. I followed these steps under 25.04.1:

zpool export Volume – successful
zpool import Volume tank – successful
zfs get mountpoint tank = /tank – successful (I think)
zpool status tank = state: ONLINE – successful
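
For reference, SCALE normally imports pools with an altroot of /mnt, which is what keeps a pool’s datasets mounted under /mnt/<pool>. A CLI rename that preserves that convention would look roughly like the sketch below; this is an illustration rather than what I actually ran, and doing any of this outside the middleware may itself be part of the problem.

zpool export Volume
zpool import -R /mnt Volume tank   # -R sets the altroot so the pool lands under /mnt
zpool get altroot tank             # expect /mnt
zfs get mountpoint tank            # expect /mnt/tank while the altroot is in effect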

However, in the GUI, Storage | Storage Dashboard | ZFS Health shows Pool Status: Offline, which contradicts the CLI output listed above.

Clicking Datasets in the GUI shows a “CallError” dialog: [ENOENT] Path /tank not found

I’ve experimented with different mount points at /, /mnt, and /mnt/tank. At various points in testing I have been able to access the data, with the pool and data intact, but I can’t share it over SMB; the attempt fails with the error: "[EINVAL] sharingsmb_create.path_local: The path must reside within a pool mount point"
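
In case it helps with diagnosis, these are the checks that seem relevant (a sketch only; findmnt assumes the standard SCALE/Debian userland), and I can post the actual output:

zpool get altroot tank            # SCALE normally runs pools with altroot=/mnt
zfs get mountpoint,mounted tank   # the SMB share path has to sit under this mountpoint
findmnt /mnt/tank                 # confirm something is actually mounted at the share path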

I have full duplicates of the data on the two other servers, as well as an up-to-date .tar file of the configuration files as generated by TrueNAS during upgrades. I really would prefer not to install from scratch, as the existing 166TB of data takes 12 days to duplicate over a 10GbE network.

Any thoughts are greatly appreciated. Thank you.

Very likely this happened because you did the pool exports and imports from the CLI, which didn’t give the middleware a chance to relocate the .system dataset and DB. In all likelihood it’s looking for an entry in the config DB for the /mnt/tank mount point, but the recorded entry still points at /mnt/Volume.

Can I see the results of

zpool get altroot
zfs get mountpoint tank
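
The middleware’s own record of the pool can also be checked. Treat the exact invocation as a sketch from memory, but something along these lines should show the name and path it expects:

midclt call pool.query '[["name", "=", "tank"]]'   # the pool entry as stored in the config DB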

That’s a helpful suggestion. As you mentioned, I did operate from the CLI.

Because of deadlines I needed to revert to /mnt/Volume, restoring a saved config .tar file and remounting the pool at /mnt/Volume with a “zfs set mountpoint…” command.
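
For the record, the revert was essentially the rename in reverse. It was roughly the following sequence, where the exact mountpoint arguments are reconstructed from memory rather than a transcript:

zpool export tank
zpool import tank Volume                # restore the original pool name
zfs set mountpoint=/mnt/Volume Volume   # put the root dataset back where the config expects it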

If this impacts anyone else, please try HoneyBadger’s suggestion and report any successes or failures. I do think this is a potential issue for other users.

Many thanks.