I have been experimenting with mounting folders between datasets to save space on my NAS and simplify the management of game files, so I don’t have duplicate data in two places. But I’m unsure if this is an appropriate solution, or what other options are available to me.
Background
I have two datasets, /mnt/library/media/games and /mnt/library/devices/MiSTer. The former is storage for game images/ROMs; the latter holds the same data, but is specifically exposed via a share to my MiSTer FPGA, in the directories the different cores look for, to avoid hogging storage on the SD card.
To be more specific: up until now, if there was a system that the MiSTer supported, for example the SNES, I have had a folder in each of these datasets holding identical data. This was never ideal from a storage-efficiency perspective, but as the MiSTer wants the game storage folders to be named something specific (e.g. SNES for the SNES), and I want them named something else on my games storage for organisational purposes (Nintendo - Super Nintendo Entertainment System), it felt like a necessary evil that avoided introducing complexity.
I have time to fix this now, so let’s do it
I’ve got a working solution for this problem now. For a couple of the directories on the MiSTer dataset, I have deleted the contents (but kept the folder) and added a new line to my /etc/fstab that nullfs-mounts the equivalent folder from the games dataset onto the relevant directory in the MiSTer dataset. For example, here is my line for the WonderSwan console (\040 is the fstab escape for a space in a path):

/mnt/library/media/games/Bandai\040-\040WonderSwan /mnt/library/devices/MiSTer/WonderSwan nullfs rw 0 0
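As a side note for anyone trying the same thing: after adding the line, the entry can be applied and checked without a reboot, since FreeBSD’s mount(8) will look a bare mountpoint up in /etc/fstab. The mountpoint below matches my WonderSwan example:

```
mount /mnt/library/devices/MiSTer/WonderSwan   # looks the entry up in /etc/fstab
mount -p | grep nullfs                         # confirm the nullfs mount is active
```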
It works: it will mount on boot* because it’s in /etc/fstab, and it meets my requirements of:
Flexibility of directory naming conventions between datasets
Manage the data from one place (one copy)
MiSTer only has access to data I expose via the mount definitions in /etc/fstab
But I have new issues…
It feels a bit hacky
Now I have to worry about backing up my /etc/fstab (I assume it’s not part of the TrueNAS config and I’ll have to manually re-copy the file in the event of a fresh install?)
Just because it works, doesn’t mean it’s the best solution
What other options do I have to meet my requirements? Is there another way that resolves some of my concerns?
Thanks in advance, looking forward to hearing from you guys
Edits
*I’ve not tested this, and I’m not convinced it’s true any more after discovering this. My grandmother always told me that “to assume makes an ass out of you and me”, and now I do feel like an ass.
Maybe a post-init script would be an appropriate way to mount the directories instead of relying on /etc/fstab? Or a script that appends the lines to /etc/fstab, something like that.
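Something along these lines might work as a post-init script, though this is only a sketch: the paths and folder pairs are taken from my fstab example, and the DRY_RUN switch (defaulting to only printing the commands) is just so it can be sanity-checked off the NAS before flipping it to actually mount.

```shell
#!/bin/sh
# Sketch of a post-init nullfs mount script (paths and pairs are illustrative).
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0 on the
# NAS to actually perform the mounts.
: "${DRY_RUN:=1}"

GAMES="/mnt/library/media/games"
MISTER="/mnt/library/devices/MiSTer"

mount_pair() {
    # $1 = folder name under $GAMES, $2 = folder name the MiSTer cores expect
    if [ "$DRY_RUN" = "1" ]; then
        echo "mount -t nullfs \"$GAMES/$1\" \"$MISTER/$2\""
    else
        mkdir -p "$MISTER/$2"
        mount -t nullfs "$GAMES/$1" "$MISTER/$2"
    fi
}

mount_pair "Bandai - WonderSwan" "WonderSwan"
mount_pair "Nintendo - Super Nintendo Entertainment System" "SNES"
```

As far as I know, a script registered under Init/Shutdown Scripts (as a post-init task) is stored in the TrueNAS config database, so unlike /etc/fstab edits it would survive a config restore.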
Since you want to deduplicate across datasets (filesystems), you can leverage block cloning.
Unfortunately, this does not work across datasets on FreeBSD 13.x, from what I understand. Since TrueNAS CORE will not see a release based on 14.x, you might want to seriously consider upgrading to SCALE.
With the following prerequisites, you should be able to block-clone across datasets:
Latest SCALE
Datasets are not encrypted and use the same recordsize
Pool is “upgraded” (or feature@block_cloning is enabled)[1]
Your tool to copy supports this (Windows File Explorer, KDE Dolphin, standard cp, …)
This will mean that if you copy the WonderSwan folder, it will not consume any extra space. Yet anything that accesses those files will see them as legitimate files (which they are).
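As a rough sketch of what that looks like in practice (the pool name tank and the file name game.ws are placeholders, and this assumes OpenZFS 2.2+ on SCALE):

```
# Is the feature enabled on the pool?
zpool get feature@block_cloning tank

# Copy a file; with block cloning available, the "copy" shares the existing
# data blocks instead of writing new ones. Recent coreutils cp can do this
# by itself; --reflink=auto makes the intent explicit.
cp --reflink=auto "/mnt/library/media/games/Bandai - WonderSwan/game.ws" \
   "/mnt/library/devices/MiSTer/WonderSwan/game.ws"

# Pool-wide properties report how much space cloning is saving.
zpool get bcloneused,bclonesaved,bcloneratio tank
```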
This will break backwards compatibility. You will not be able to import the pool with an older version of ZFS. ↩︎
Thanks for the info Winnie; block cloning is completely new to me, I’ll do some reading into it. Maybe a stupid question, but does this support having a different directory name for the source and destination directories?
Directory names have no bearing on the actual data blocks for the files.
The filesystem metadata behaves as expected (paths, directories, filenames, …)
But block-cloning simply points a newly “copied” file to the same exact data blocks that already exist on the pool.
If you later rename the file or folder, nothing changes with regards to the actual data blocks. Applications, file tools, and userland tools will treat them just like in any traditional filesystem.
Noted. I’m just weighing up the thought of upgrading to SCALE now, or waiting until Fangtooth is out. I’ll have to migrate at some point, but I had intended to wait for the unification.
I’ve put a pin in this for now. I decided that I shouldn’t fit the shape of my storage to the requirements of any specific device. So instead I have deleted my MiSTer dataset and created a script (reusing sections of this cifs_mount.sh) that mounts the folders from the .../media/games dataset in the way the MiSTer needs them organised.
For anyone who has a MiSTer and wants to do something similar, I will be sharing my script at some point soon. If there isn’t a link here yet and you want it, message me
Thanks for the heads up, do you have any examples of what to avoid?
My script runs on the client side and only uses a simple mount command with the usual options (mount -t cifs source destination -o username=...,password=...) etc.
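For anyone copying this pattern, one thing probably worth avoiding is the inline password: it ends up in shell history and is visible in ps output while the command runs. mount.cifs supports a credentials= option instead; the share name and paths below are made up for illustration:

```
# /root/.mister-nas.cred (chmod 600), containing:
#   username=mister
#   password=secret
mount -t cifs //nas.local/MiSTer /media/fat/games -o credentials=/root/.mister-nas.cred
```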