I am sharing dataset 1 over SMB 3.1.1 to a Linux machine.
If I create any file, folder, or subfolder directly under “dataset 1”, I can delete it normally and it goes into “.Trash-1001”.
However - if I descend into “subset A” or “subset B” and try to delete any file or folder, it does not go to “.Trash-1001” - it can only be deleted right away.
All the datasets were created with the app’s preset permissions. On “subset A” and “subset B” I added a user. That user (the one I am connecting with) has Allow/Modify.
What am I doing wrong?
The reasons I created the structure the way I did:
- The user(s) can have one share only, but access both datasets under this share.
- I can lock one of the datasets with a passphrase (currently it is encrypted with a key).
- I can segregate the users a bit more - deny access to one of the datasets.
- I can later expand with more datasets under “dataset 1” for different purposes without creating other shares.
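On the passphrase point above, for reference: a key-encrypted ZFS dataset can be switched to passphrase encryption from the command line as well as from the UI. A minimal sketch - the pool and dataset names here are hypothetical, and the command must run on the server as root:

```
# Change the wrapping key of an encrypted dataset from a raw key
# to an interactive passphrase (names are hypothetical)
zfs change-key -o keyformat=passphrase -o keylocation=prompt "tank/dataset 1/subset B"
```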
Can you please point me in the right direction - why can’t I use “.Trash-1001” once I descend into the sub-datasets?
I assume you’re talking about the network recycle bin? If so, it is set at the share level, which is why it works on your parent dataset and not on the children. You would need to share out the sub-datasets and tick the recycle bin box for each share.
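As far as I know, that checkbox maps to Samba’s vfs_recycle module under the hood. A minimal smb.conf sketch of what it enables - the share name and path are hypothetical, and the option names are from the vfs_recycle(8) man page:

```
[subsetA]
    path = /mnt/tank/dataset1/subsetA
    vfs objects = recycle
    recycle:repository = .recycle/%U
    recycle:keeptree = yes
    recycle:versions = yes
```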
It’s worth noting that the network recycle bin feature has been removed in the latest version in favour of snapshots and Previous Versions in Windows. You may therefore want to rethink this setup.
I am not using Windows, just Linux, so the trash is visible as a hidden folder, “.Trash-1001”. I assume it works similarly to Windows, but I did not have to enable anything at the advanced level.
Snapshots work well with SMB shares, but for some reason they are not visible on the NFS share. To be specific, they are visible on the server when I connect over SSH, but the client sees the folders under .zfs as empty. I am on 25.04.2.6 (so I am a newbie to the system).
I did suspect that was the reason - the datasets are one level deeper than the share itself - but I was hoping I could get around it. We will see how it works with the 25.10 release.
Yes - I tried cloning the snapshot to a new dataset. It works, with the caveat that if the snapshots are frequent, it can be quite a task to go back and mount different versions of the snapshot.
Going back through the changes in, say, 10 snapshots (based on a daily snapshot task) can be managed - not elegant, but doable.
However, if the snapshots are hourly, it would be quite a task.
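The clone-and-browse workflow described above can be sketched as follows; the pool, dataset, and snapshot names are hypothetical, and the commands must run on the server as root:

```
# List the snapshots of the sub-dataset
zfs list -t snapshot -r "tank/dataset 1/subset A"

# Clone one snapshot to a throwaway dataset for browsing
zfs clone "tank/dataset 1/subset A@daily-2025-01-01" tank/restore-scratch

# ...copy out whatever is needed from /mnt/tank/restore-scratch...

# Destroy the clone when finished
zfs destroy tank/restore-scratch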
It is possible to make .zfs visible and then, rather than relying on NFS, connect over SSH and use the command line on the server to find the file(s) that are needed, then clone that one specific snapshot for the restore. But the approach is not elegant at all.
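A sketch of that SSH approach - the pool name and the file pattern being searched for are hypothetical:

```
# Make the .zfs control directory visible on the sub-dataset
zfs set snapdir=visible "tank/dataset 1/subset A"

# Over SSH on the server, search every snapshot for the file
find "/mnt/tank/dataset 1/subset A/.zfs/snapshot" -name 'report*.ods'
```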
I have not tried a dual share (NFS and SMB). Maybe that would work for identifying the files and the snapshot. It is a niche case, I agree. Most of my shares are SMB (even though I am on Linux), but I keep one on NFS - I think it is better on speed.
This was fixed in FreeBSD because what “used to work” was considered a bug.
NFS shares are not supposed to cross filesystem boundaries. You can permit mounting of arbitrary subdirectories with the -alldirs option in exports(5) (on FreeBSD - Linux is probably similar). But you cannot cross a filesystem mount point.
That’s why it does not work anymore, even on FreeBSD.
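To illustrate: in FreeBSD’s exports(5), each filesystem needs its own export line even with -alldirs, whereas Linux offers a crossmnt option. The paths and client network below are hypothetical:

```
# FreeBSD /etc/exports: -alldirs permits mounting any subdirectory,
# but only within the one filesystem named on the line
/mnt/tank/dataset1          -alldirs -network 192.168.1.0 -mask 255.255.255.0
# a child dataset is a separate filesystem and needs its own line
/mnt/tank/dataset1/subsetA  -alldirs -network 192.168.1.0 -mask 255.255.255.0

# Linux /etc/exports: the crossmnt option lets clients descend into
# child filesystems mounted under an exported directory
/mnt/tank/dataset1  192.168.1.0/24(rw,crossmnt)
```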