Close to fixing /mnt/mnt problem but not quite

I’m sure some people are aware of the problem where, when you import an existing pool, TrueNAS prefixes another /mnt onto your mountpoint, resulting in /mnt/mnt/yourpoolname.

I followed the instructions from the other posts to manually set my mountpoint to /yourpoolname,
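For reference, the manual fix described in those posts boils down to a single `zfs set` (a hedged sketch; `yourpoolname` is a placeholder, and TrueNAS re-adds the `/mnt` prefix via the pool's altroot):

```shell
# Strip the extra /mnt from the dataset's mountpoint property.
# TrueNAS imports pools with altroot=/mnt, so the property itself
# should read /yourpoolname, not /mnt/yourpoolname.
sudo zfs set mountpoint=/yourpoolname yourpoolname

# Verify: the effective path shown should now be /mnt/yourpoolname.
zfs get mountpoint yourpoolname
```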

and that does make things appear normally. BUT I noticed that the original datasets don’t appear in the TrueNAS interface where I would expect them, while new datasets do. How come?

  • I only see newly created datasets in the Datasets tab of TrueNAS.
  • The SMB shares section in the Shares tab shows my two existing datasets with paths /mnt/zpool1/OSM images and /mnt/zpool1/other respectively, and they work.

So they appear correctly in the SMB share menu and work, just not in the Datasets tab?

```
truenas_admin@truenas[~]$ zfs list
NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
boot-pool                                                   3.06G   141G    96K  none
boot-pool/.system                                           2.89M   141G   120K  legacy
boot-pool/.system/configs-ae32c386e13840b2bf9c0083275e7941    96K   141G    96K  legacy
boot-pool/.system/cores                                       96K  1024M    96K  legacy
boot-pool/.system/netdata-ae32c386e13840b2bf9c0083275e7941  2.11M   141G  2.11M  legacy
boot-pool/.system/nfs                                        112K   141G   112K  legacy
boot-pool/.system/samba4                                     288K   141G   288K  legacy
boot-pool/.system/vm                                          96K   141G    96K  legacy
boot-pool/ROOT                                              3.04G   141G    96K  none
boot-pool/ROOT/25.10.1                                      3.04G   141G   104M  legacy
boot-pool/ROOT/25.10.1/audit                                1.69M   141G  1.69M  /audit
boot-pool/ROOT/25.10.1/conf                                 7.54M   141G  7.54M  /conf
boot-pool/ROOT/25.10.1/data                                  288K   141G   288K  /data
boot-pool/ROOT/25.10.1/etc                                  7.45M   141G  6.48M  /etc
boot-pool/ROOT/25.10.1/home                                  108K   141G   108K  /home
boot-pool/ROOT/25.10.1/mnt                                   112K   141G   112K  /mnt
boot-pool/ROOT/25.10.1/opt                                  4.65M   141G  4.65M  /opt
boot-pool/ROOT/25.10.1/root                                  140K   141G   140K  /root
boot-pool/ROOT/25.10.1/usr                                  2.88G   141G  2.88G  /usr
boot-pool/ROOT/25.10.1/var                                  43.7M   141G  4.37M  /var
boot-pool/ROOT/25.10.1/var/ca-certificates                    96K   141G    96K  /var/local/ca-certificates
boot-pool/ROOT/25.10.1/var/lib                              28.5M   141G  28.1M  /var/lib
boot-pool/ROOT/25.10.1/var/lib/incus                          96K   141G    96K  /var/lib/incus
boot-pool/ROOT/25.10.1/var/log                              10.6M   141G  1.74M  /var/log
boot-pool/ROOT/25.10.1/var/log/journal                      8.87M   141G  8.87M  /var/log/journal
boot-pool/grub                                              9.02M   141G  9.02M  legacy
zpool1                                                      32.7T  70.6T  32.7T  /mnt/zpool1
zpool1/Panoramax-OSMBE                                       205K  70.6T   205K  /mnt/zpool1/Panoramax-OSMBE
zpool2_less-critical                                        1.18M  64.2T   162K  /mnt/zpool2_less-critical
truenas_admin@truenas[~]$ sudo zfs get -t filesystem -r mountpoint zpool1
[sudo] password for truenas_admin:
NAME                    PROPERTY    VALUE                        SOURCE
zpool1                  mountpoint  /mnt/zpool1                  local
zpool1/Panoramax-OSMBE  mountpoint  /mnt/zpool1/Panoramax-OSMBE  inherited from zpool1
```

I do see that I can ‘upgrade’ the ZFS version on my pool, but I’m not sure whether that’s the solution to this problem or not.

hey,
just fyi, if you use code blocks like the following, your pastes will be easier to read.

```
truenas_admin@truenas[~]$ zfs list
NAME…

```

just to make things clearer, here is your paste again:

```
truenas_admin@truenas[~]$ zfs list
NAME                                                         USED  AVAIL  REFER  MOUNTPOINT
boot-pool                                                   3.06G   141G    96K  none
boot-pool/.system                                           2.89M   141G   120K  legacy
... <SNIP>
boot-pool/grub                                              9.02M   141G  9.02M  legacy
zpool1                                                      32.7T  70.6T  32.7T  /mnt/zpool1
zpool1/Panoramax-OSMBE                                       205K  70.6T   205K  /mnt/zpool1/Panoramax-OSMBE
zpool2_less-critical                                        1.18M  64.2T   162K  /mnt/zpool2_less-critical
truenas_admin@truenas[~]$ sudo zfs get -t filesystem -r mountpoint zpool1
[sudo] password for truenas_admin:
NAME                    PROPERTY    VALUE                        SOURCE
zpool1                  mountpoint  /mnt/zpool1                  local
zpool1/Panoramax-OSMBE  mountpoint  /mnt/zpool1/Panoramax-OSMBE  inherited from zpool1
```

you should be able to see the difference between the existing and newly created datasets from the commands you used, but your post only lists one child dataset (zpool1/Panoramax-OSMBE), so that’s hard to differentiate.

i will make a guess here and say that these paths are plain directories, not datasets at all.
you can verify it easily in the shell with something like this:

```
df -h /mnt/zpool1/OSM
# which will return something like this if i'm correct in that assumption
Filesystem      Size  Used Avail Use% Mounted on
zpool1           11T   99G   11T   1% /mnt/zpool1
```
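Another quick check from the dataset side (a hypothetical sketch using the names from this thread): `zfs list` succeeds on a real dataset name but errors out for something that is only a directory.

```shell
# A real dataset shows up under its full name:
zfs list zpool1/Panoramax-OSMBE

# A plain directory inside the pool does not; this fails with
# an error like: cannot open 'zpool1/OSM images': dataset does not exist
zfs list "zpool1/OSM images"
```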

Otherwise, please provide the full output of

  • zfs list -o name,mountpoint -r zpool1 zpool2_less-critical
    and/or
  • zfs get -d3 -t filesystem mountpoint zpool1 zpool2_less-critical

As an example of how it usually looks:

```
root@nas-01 /mnt/p0-25-8z2/temporaer # zfs get mountpoint -t filesystem -d 2 p0-25-8z2/pve
NAME                       PROPERTY    VALUE                           SOURCE
p0-25-8z2/pve              mountpoint  /mnt/p0-25-8z2/pve              default
p0-25-8z2/pve/iscsi        mountpoint  /mnt/p0-25-8z2/pve/iscsi        default
p0-25-8z2/pve/nfs          mountpoint  /mnt/p0-25-8z2/pve/nfs          default
p0-25-8z2/pve/nfs/disks    mountpoint  /mnt/p0-25-8z2/pve/nfs/disks    default
p0-25-8z2/pve/nfs/iso      mountpoint  /mnt/p0-25-8z2/pve/nfs/iso      default
```

Notice the difference in the SOURCE column.
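For completeness: an explicitly set (`local`) mountpoint can be reset to the name-derived default with `zfs inherit` (a hypothetical sketch; note that on the pool in this thread, doing so would undo the manual /mnt/mnt fix unless the altroot is in place):

```shell
# Reset a mountpoint whose SOURCE is "local" back to the default
# computed from the dataset's name:
sudo zfs inherit mountpoint zpool1

# SOURCE should now read "default" again:
zfs get mountpoint zpool1
```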


my bad for the formatting. I selected all the text and used the code-block formatting button, but I guess it applied wrong? I’ll use markdown formatting then. (EDIT: my trust level was upped, and I’ve now fixed my original post.)

```
df -h "/mnt/zpool1/OSM images"
Filesystem      Size  Used Avail Use% Mounted on
zpool1          104T   33T   71T  32% /mnt/zpool1
zfs list -o name,mountpoint -r zpool1 zpool2_less-critical
NAME                    MOUNTPOINT
zpool1                  /mnt/zpool1
zpool1/Panoramax-OSMBE  /mnt/zpool1/Panoramax-OSMBE
zpool2_less-critical    /mnt/zpool2_less-critical
zfs get -d3 -t filesystem mountpoint zpool1 zpool2_less-critical
NAME                    PROPERTY    VALUE                        SOURCE
zpool1                  mountpoint  /mnt/zpool1                  local
zpool1/Panoramax-OSMBE  mountpoint  /mnt/zpool1/Panoramax-OSMBE  inherited from zpool1
zpool2_less-critical    mountpoint  /mnt/zpool2_less-critical    default
```

It’s weird, they were datasets on my original truenas install…

well, i can’t tell you how you arrived at this, but i’m guessing it was already happening on the other system the pool originally came from.

I would recommend just (r)syncing/copying the data to another pool, if you have enough space left, or to another system, then recreating the pool and starting fresh.

It can be done manually, but i think this will be the easier and cleaner way.
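A minimal sketch of that copy-and-recreate route (pool and directory names are placeholders; assumes the target has enough free space):

```shell
# 1. Copy everything off the affected pool, preserving permissions,
#    hard links, ACLs/xattrs, and sparse files:
sudo rsync -aHAX --sparse --progress /mnt/zpool1/ /mnt/sparepool/zpool1-copy/

# 2. Recreate zpool1 via the TrueNAS UI, create proper datasets
#    (e.g. "OSM images" and "other"), then copy the data back:
sudo rsync -aHAX --sparse --progress /mnt/sparepool/zpool1-copy/ /mnt/zpool1/
```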


Just to mark this thread as solved: I’ll be doing exactly this, starting fresh (I have some spare HDDs to copy the data to). thx