Hello,
I got a TrueNAS Mini X+ in 2023 and it is equipped with:
NVMe (boot)
64 GB RAM (ECC)
5× WD Red 10 TB (RAIDZ2, datapool)
2× 500 GB SSD (log, cache)
It came preconfigured like that, but I suspect it might not be an optimal layout.
iXapps is part of the datapool. At the moment I only have Immich as an app. I am looking into Jellyfin next, but the recommended setup I read says to separate the app data from the datapool.
Now the question: given the use as a home NAS (no video editing on datapool files or other big loads), a Time Machine target and some apps for photo backup and media consumption, should the cache SSD be removed from the datapool and repurposed as an app SSD? Is this possible without nuking the entire system? (I already have 25 TB of media files and backups in my datapool.)
What would be a recommended configuration? Adding another NVMe would probably be helpful, but the Mini X+ already uses the NVMe slot for boot…
Any best practices for the default specs of the Mini X+?
Thank you all for this wonderful forum, and thanks to iXsystems for this challenging new gadget.
(And if someone has suggestions to further decrease the noise emission, I would be grateful as well.)
Use both the cache (L2ARC) and log (SLOG) drives to create a mirrored app pool. You should be able to detach both from your datapool without much trouble, but you might need to reformat the SLOG drive due to the way iX usually ships those (with a lot of overprovisioning, which is correct for their use case).
You could also use only the SLOG drive for your app pool, since an L2ARC gives you far more benefit than a SLOG except in specific cases.
L2ARC might not be necessary for my use case, but I was under the impression that a separate SLOG is more or less always recommended. Would you recommend abandoning a separate SLOG in favour of a mirrored 500 GB pool for app installs and app data? Can I transfer the ix-folder to a new pool, or would this wreck my system?
If this is important: I am running TrueNAS SCALE in the latest stable version (24.01.1, I think).
Unless you know otherwise, a SLOG is never recommended. A separate ZIL (ZFS Intent Log) is only useful for NFS, iSCSI shares and remote databases. Unless you have such, or manually set datasets to "sync=always", a SLOG is not used (or not used much).
Note that all ZFS pools have an intent log. The built-in one is called the ZIL, ZFS Intent Log. Thus, we always refer to the external one as a SLOG.
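As a quick sanity check you can see from the shell whether anything on your pool actually forces sync writes (and would therefore touch a SLOG). A minimal sketch, assuming the pool is named "datapool" as in your post:

# Show the sync property for the pool and every dataset beneath it.
# "standard" = only workloads that request sync writes use the ZIL/SLOG,
# "always"   = every write goes through it, "disabled" = it is bypassed.
zfs get -r sync datapool

# The pool layout lists any separate "logs" and "cache" devices attached.
zpool status datapool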
Yes to both. Remove SLOG and L2ARC from the pool, through the GUI.
Then set your free drives as a new pool, single vdev mirror, and move your apps there.
I see no use for a SLOG. Time Machine does sync writes, but it is a background process and nothing relevant is gained by trying to speed it up.
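If you are curious what those GUI steps correspond to at the zpool level, it is roughly the sketch below. The device names are placeholders, and on TrueNAS the GUI is the preferred way to do this so the middleware stays aware of the change:

# Identify which devices are attached as "logs" and "cache".
zpool status datapool

# Log and cache vdevs can be removed from a live pool.
zpool remove datapool sdf   # SLOG device (placeholder name)
zpool remove datapool sdg   # L2ARC device (placeholder name)

# Then create the new mirrored app pool through the GUI (Storage > Create Pool)
# rather than with "zpool create", so TrueNAS manages it properly.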
OK, removing log and cache via the GUI should be easy, and so should setting up a mirror vdev with the freed SSDs.
How can I move the existing apps or the ix-applications folder there? I have to use the CLI, I assume? Are there decent how-tos around? (I am probably not the first with this kind of requirement.)
Any help, or information on where to find it, will be appreciated.
Any pitfalls I should avoid? Is there a naming convention I am expected to follow for the new mirrored pool? Do I have to alter any configs on my boot drive?
Many thanks,
bisento
(I might not be able to act on any suggestions in the next 20 days due to private scheduling conflicts, but I am eager to try any tips as soon as I am back.)
I followed your advice and detached both SSDs from my datapool. The GUI shows that both drives are not assigned to a pool, but the overprovisioned SLOG is still at 16 GB. I tried the CLI with storage disk resize disks={"name":"sdf", "size":} in order to set it back to full capacity, according to this guide.
But I only get the error "Expected end of text, found 'd'".
Where is my error?
Thanks for any hints
Additional question, if everything happens to work: how do I transfer the ix-applications folder to the new pool? Via the CLI? Or is there a possibility in the GUI?
OK, I found it: storage disk resize disks={"name":"sdf"} was the correct nomenclature. I am pretty convinced I tried this as well and got an error, but now it just worked…
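For anyone landing here later, the only difference between my failed and my working attempt in the TrueNAS CLI was the dangling "size" key:

# Fails ("Expected end of text, found 'd'"), apparently because "size": carries no value.
storage disk resize disks={"name":"sdf", "size":}

# Works: omitting "size" resizes the disk back to its full capacity, as described in the guide.
storage disk resize disks={"name":"sdf"}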