Import zpool from existing Solaris -- UNAVAIL -- "Newer version"

Hi,
I am using Solaris 11.4, which I patched in the past.
My zpool version is 47:

zpool upgrade -v
This system is currently running ZFS pool version 47.

When trying to import into TrueNAS SCALE (I also tried the current beta), I got an error message when running “zpool import” saying the pool is UNAVAIL due to a newer version.

I rolled back to my slowly dying Solaris, where the pools are still accessible.
I don’t have enough space to back up the data and recreate the pools in TrueNAS.

Perhaps this will help (running on my Solaris box):

zpool get all t1
NAME  PROPERTY       VALUE                 SOURCE
t1    allocated      37.5T                 -
t1    altroot        -                     default
t1    autoexpand     on                    local
t1    autoreplace    on                    local
t1    bootfs         -                     default
t1    cachefile      -                     default
t1    capacity       85%                   -
t1    clustered      off                   -
t1    dedupditto     0                     default
t1    dedupratio     1.00x                 -
t1    delegation     on                    default
t1    failmode       wait                  default
t1    free           6.16T                 -
t1    guid           12069961680274664224  -
t1    health         ONLINE                -
t1    lastscrub      Aug_19                local
t1    listshares     off                   default
t1    listsnapshots  off                   default
t1    readonly       off                   -
t1    scrubinterval  1m                    default
t1    size           43.6T                 -
t1    version        46                    local

I will appreciate your help.

Welcome to TrueNAS and the forums!

OpenZFS, used on FreeBSD, TrueNAS (CORE & SCALE), Linux, and others, diverged from Solaris ZFS a long time ago. Except for earlier Solaris ZFS pool versions, they are not compatible.

Your only option is to copy the data. I’m not even sure Solaris ZFS send and OpenZFS receive will work together; you may have to use a different tool like rsync.
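As a rough sketch of what that copy could look like, pulled from the TrueNAS side: the destination dataset path below is a placeholder, rsync has to be installed on the Solaris box, and ACL/xattr preservation depends on how rsync was built on each end, so test on a small directory first.

    # Pull data from the Solaris box onto a TrueNAS dataset (paths are placeholders).
    # -a preserves permissions, ownership, times and symlinks; -H keeps hard links.
    rsync -aH --progress meni@bigbox:/t1/ /mnt/tank/t1-copy/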


Thanks for the quick response,
I really want to move away from Solaris while the system still works.

As I have about 40TB there, I’m not sure of the best way to temporarily store that data somewhere.

Sorry, I can’t advise about this without more information. However, if you supply your pool layout from zpool status and the output of zpool list, we might have some tricks up our sleeves we could use.

I actually like Solaris, and support Solaris 11.4 at work. (But, we have both hardware and software support, so I don’t worry too much about the systems working…)

FWIW, OpenZFS forked at v28.
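One hedged consequence of that fork point: if you end up building a temporary staging pool, creating it pinned at the legacy on-disk version 28 should leave it importable by both Solaris ZFS and OpenZFS. The pool name and device names below are placeholders, and I’d test the round trip with a scratch pool before trusting 40TB to it.

    # Create a staging pool at pool version 28, the last format the two share.
    zpool create -o version=28 staging raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0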

Four 22TB drives in a small NAS will get you enough space to rsync 40TB to redundant storage… if rsync works.


Depending on the context of what you must preserve from Solaris: starting in ElectricEel, we expanded our NFS and SMB clients so that they can understand and preserve ZFS-style ACLs. This means you can combine this with Syncthing to preserve the ACLs and metadata from Solaris. You’ll want NFSv4.2 or SMB 3+ for the protocol, and probably some legwork to make sure accounts are present on TrueNAS.
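As a rough sketch of the client side on TrueNAS SCALE (hostname and mount point are placeholders, and the Solaris box has to be exporting the filesystem over NFS first; if the Solaris server can’t serve NFSv4.2, SMB 3+ is the alternative mentioned above):

    # Mount the Solaris export with NFSv4.2 so ACLs can be carried across.
    mount -t nfs -o nfsvers=4.2 bigbox:/t1 /mnt/solaris-t1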


Thanks for the reply,
see the following info:

zpool status:

(rpool is not relevant, yet it’s here)
meni@bigbox:~$ zpool status
pool: rpool
id: 2905207971296670260
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
Run 'zpool status -v' to see device specific details.
see: http://support.oracle.com/msg/ZFS-8000-8A
scan: resilvered 4.5K in 2s with 13 errors on Wed Sep 4 10:42:09 2024
config:

    NAME      STATE      READ WRITE CKSUM
    rpool     ONLINE        0     0    34
      c6t1d0  ONLINE        0     0     0

errors: 13 data errors, use '-v' for a list

pool: t1
id: 12069961680274664224
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: scrub repaired 0 in 1d14h with 0 errors on Wed Aug 21 02:09:35 2024
config:

    NAME        STATE      READ WRITE CKSUM
    t1          ONLINE        0     0     0
      raidz1-0  ONLINE        0     0     0
        c2t1d1  ONLINE        0     0     0
        c2t3d1  ONLINE        0     0     0
        c2t4d1  ONLINE        0     0     0
        c2t5d1  ONLINE        0     0     0
        c2t6d1  ONLINE        0     0     0
        c2t7d1  ONLINE        0     0     0

errors: No known data errors

pool: t2
id: 11476240104031519784
state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on older software versions.
scan: scrub repaired 0 in 13h16m with 0 errors on Tue Aug 20 01:21:23 2024
config:

    NAME        STATE      READ WRITE CKSUM
    t2          ONLINE        0     0     0
      raidz1-0  ONLINE        0     0     0
        c9t1d0  ONLINE        0     0     0
        c9t3d0  ONLINE        0     0     0
        c9t0d0  ONLINE        0     0     0
        c9t4d0  ONLINE        0     0     0
        c9t5d0  ONLINE        0     0     0
        c9t2d0  ONLINE        0     0     0

errors: No known data errors

zpool list:

meni@bigbox:~$ zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool   953G   731G   222G  76%  4.12x  ONLINE  -
t1     43.6T  37.5T  6.16T  85%  1.00x  ONLINE  -
t2     16.2T    13T  3.29T  79%  1.00x  ONLINE  -

I’m not sure what this means. Yes, I’d like to preserve mostly my SMB and NFS shares, but it’s not critical.
My ACLs are quite simple, so it would be nice to preserve them, but it’s not critical.

Can I avoid replicating the pools to new drives?

Not as far as I can tell. Solaris 11.4 has a device removal feature: if you had only used the storage space of one vdev out of two, you might be able to remove one. But the “t1” and “t2” pools use a single vdev each.

Further, the pools are RAID-Z1, so you can’t remove two disks and use them for temporary storage.
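For illustration only, since it doesn’t apply here: with two top-level vdevs, evacuating and removing one would look roughly like this (pool and vdev names are hypothetical).

    # Not applicable to t1/t2, which each have a single raidz1 vdev.
    zpool remove tank mirror-1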

@Stux has the only suggestion that I think would work. Though 24TB drives are now available from both Seagate and Western Digital (in the Pro models only).

You can do more research, both here and elsewhere. No one said our answers are perfect (well, except mine 🙂).
