I recently moved from a general RHEL 7 based SoHo NAS/KVM host over to TrueNAS SCALE on some new hardware. I initially considered using TrueNAS only for storage, but the built-in KVM and App (container) support ultimately means I can avoid running multiple boxes.
Now there are a couple of rough edges, but overall it is working well.
Hardware
CPU - AMD Ryzen 5600G
MB - Gigabyte X570S UD ATX Rev 1.0
RAM - Crucial Pro 32GB (2x16)
PSU - NZXT C Series V2 750W
ServeRAID LSI SAS9340-8i in HBA mode
Fractal Design Define R5 Mid Tower
Boot
ADATA SU650 Ultimate SATA 256GB
TEAMGROUP T-FORCE VULCAN Z 256GB
Storage
2 x Crucial MX500 512GB Drives - Mirror - VM Storage
6 x HDDs - Mix of 3 & 4 TB slowly being replaced as they age out - RAIDZ2
Build Issues
I wouldn’t recommend the SU650 as it doesn’t handle sustained writes well.
Budget means I’ve only got 32GB of RAM at present.
Initial software was 23.10.2 and I’ve now upgraded to 24.04.2.
As I mentioned, this is for SoHo use, so I’m not aiming for ultra-high-performance storage; I’m more focused on reliability and workload management, including:
Local media storage
Photo Archive
Email server
Small number of service VMs
Now, I understand the appliance-centric model that TrueNAS is focused on, but there are a couple of issues I’ll highlight in this topic.
The first major issue is the lack of swap support. I’ve actually partitioned the boot devices so there are two 8GB partitions (one on each) configured as an mdadm RAID1 swap device. This worked fine with 23.10.2 but fails to start under 24.04.
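(For reference, the layout was built roughly like this - the partition names are illustrative and will differ on your disks:)

# sda4/sdb4 stand in for the 8GB partitions on the two boot SSDs
mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sda4 /dev/sdb4
mkswap /dev/md127
swapon /dev/md127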
In fact at boot time I can see the swap being added and then removed
Jul 16 18:27:21 tnsbuilds01.tn.ixsystems.net systemd[1]: Reached target swap.target - Swaps.
Jul 16 18:27:37 tnsbuilds01.tn.ixsystems.net systemd[1]: Starting ix-swap.service - Configure swap filesystem on boot pool...
░░ Subject: A start job for unit ix-swap.service has begun execution
░░ A start job for unit ix-swap.service has begun execution.
Jul 16 18:27:40 tnsbuilds01.tn.ixsystems.net kernel: Adding 8379388k swap on /dev/mapper/md127. Priority:-2 extents:1 across:8379388k SS
Jul 16 18:27:40 tnsbuilds01.tn.ixsystems.net systemd[1]: dev-disk-by\x2did-dm\x2duuid\x2dCRYPT\x2dPLAIN\x2dmd127.swap: Deactivated successfully.
░░ The unit dev-disk-by\x2did-dm\x2duuid\x2dCRYPT\x2dPLAIN\x2dmd127.swap has successfully entered the 'dead' state.
Jul 16 18:27:40 tnsbuilds01.tn.ixsystems.net systemd[1]: dev-disk-by\x2did-dm\x2dname\x2dmd127.swap: Deactivated successfully.
░░ The unit dev-disk-by\x2did-dm\x2dname\x2dmd127.swap has successfully entered the 'dead' state.
Jul 16 18:27:40 tnsbuilds01.tn.ixsystems.net systemd[1]: dev-mapper-md127.swap: Deactivated successfully.
░░ The unit dev-mapper-md127.swap has successfully entered the 'dead' state.
Jul 16 18:27:40 tnsbuilds01.tn.ixsystems.net systemd[1]: dev-dm\x2d0.swap: Deactivated successfully.
░░ The unit dev-dm\x2d0.swap has successfully entered the 'dead' state.
Any tips welcomed… And “not using swap” is the wrong answer.
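The avenue I’m going to try next (a sketch only, I haven’t proven it out) is re-enabling the array from a post-init script under System Settings → Advanced → Init/Shutdown Scripts, after whatever tears it down during boot has finished:

#!/bin/sh
# hypothetical post-init script: wait for boot-time unit processing to
# settle, reassemble the array if needed, then turn it back on as swap
sleep 30
mdadm --assemble --scan
swapon /dev/md127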
The current templates for applications are really useful, but sometimes a bit heavyweight. For example, the default Roundcube template requires a database container. I’ve deployed a much simpler single-container approach using SQLite.
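Roughly what that looks like, expressed as a plain docker run for clarity (on SCALE it’s a custom app with the same image, variable and mount; the host path is just an example, and as I understand it the official image falls back to SQLite when no database variables are set):

docker run -d --name roundcube \
  -p 8080:80 \
  -e ROUNDCUBEMAIL_DB_TYPE=sqlite \
  -v /mnt/tank/apps/roundcube/db:/var/roundcube/db \
  roundcube/roundcubemail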
I do have an issue with my PhotoPrism deployment: I didn’t use a hostpath for one of the storage mounts, and I now can’t change it from an ixVolume without deleting and re-adding the workload. It would be great to have an “advanced” mode where some of these settings could be edited/modified.
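If I do end up re-creating it, the plan (untested, and the dataset layout under ix-applications varies between releases, so treat these paths as placeholders) is to rescue the data first:

# locate the ixVolume dataset backing the app...
zfs list -r tank/ix-applications | grep -i photoprism
# ...then copy its contents out to the intended hostpath
rsync -a /mnt/tank/ix-applications/releases/photoprism/volumes/ /mnt/tank/apps/photoprism/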
Afraid I have to disagree. Just a little bit of swap helps the Linux kernel optimise its overall memory usage.
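If the concern is the kernel leaning on slow disk, you can keep swap available while telling it to strongly prefer reclaiming page cache instead - on a stock Linux box that’s just the following (an appliance like SCALE may well reset it):

sysctl vm.swappiness=10
# persist across reboots on a normal distro:
echo 'vm.swappiness=10' >> /etc/sysctl.conf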
Anyway, it turns out that tip doesn’t work. For some reason my md device can’t be started as swap because something on the system appears to be holding a lock on it.
fuser and lsof can’t detect the process that has locked the device - odd
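fuser and lsof only see userspace opens, though, and the CRYPT-PLAIN entry in the journal suggests a device-mapper mapping may still be sitting on the array, so the next things I’ll check are:

ls /sys/block/md127/holders/   # kernel-level holders stacked on the array
dmsetup ls --tree              # what device-mapper currently has mapped
lsblk /dev/md127               # anything layered above the md device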
Not sure what you mean by that; it’s a fact: you are changing things in an appliance OS that the user isn’t meant to alter. Support will be nonexistent, and you are essentially treading into the unknown.
My post boiled down to “Don’t do this, but if you do, here’s a sensible place to start looking at how to do it”.
So one other gap for me is that a number of tools I use to manipulate backups of data and virtual machine images are currently missing. I’m currently enabling developer mode so I can pull in:
kpartx
lsscsi
idle3-tools
lzop
Out of these, kpartx is probably the one I use most, to inspect old backup images I’ve got archived on the local storage.
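For reference, developer mode on recent SCALE builds is a one-liner (unsupported, and as I understand it the changes don’t survive an upgrade), after which apt works normally; the image path below is just an example:

install-dev-tools
apt install kpartx lsscsi idle3-tools lzop
# typical kpartx use: map the partitions inside an archived disk image
kpartx -av /mnt/tank/archive/old-vm.img
# ...then mount the /dev/mapper/loopXpY entries it reports and poke around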
Strange. I do the opposite. I disable/remove swap in all my OS installations, regardless of the OS, if I have 32GB+ of RAM. I have plenty of RAM and basically never want the OS to resort to slow disk-based swap. I’ve run all my servers and workstations this way for the last 5 years.
Fair enough. For a few TB of storage and some apps, it should do.
The consumer motherboard with a Realtek NIC, non-ECC RAM (and a non-ECC-capable CPU) is far from optimal for TrueNAS. Still, you could simplify the setup by booting from a cheap NVMe drive, moving the HDDs to the chipset SATA ports, and getting rid of the RAID controller “in HBA mode” (dubious).
I’ve been using Linux for well over 25 years, and unless it’s a very small-footprint, embedded-style system using cheap flash storage, I’d always add a little swap.
Today most of my systems have NVMe-backed high-performance storage, or at the very least SATA-class flash storage for boot and swap.
Currently I’ve pretty much maxed out all of the SATA ports. There are a couple of old 8TB drives running in a mirror for archive purposes, on top of the 6 HDDs running RAIDZ2.
I see a lot of homelab / SoHo users look towards Proxmox, but personally the key reason for looking at TrueNAS was the excellent ZFS management.
Proxmox is a hypervisor, not a NAS.
The issue with your SAS controller is that ZFS does not work well with RAID controllers and really wants a plain HBA. Carefully check what your “HBA mode” really is!
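A quick way to tell, assuming Broadcom’s utilities are to hand (as far as I know the 9340-8i is a SAS3008-based ServeRAID M1215-class card, so it could be running either firmware family):

storcli /c0 show   # answers if the card still runs MegaRAID firmware
sas3flash -list    # answers if it has been crossflashed to IT firmware

Only genuine IT firmware hands ZFS the raw disks; a MegaRAID-style “JBOD/HBA mode” still keeps the RAID stack in the I/O path.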
What are you planning on running in VMs? Just curious. I’ve posted a few resources (see my signature) for virtualization on SCALE. I converted my VMUG lab to SCALE about 2 years ago and haven’t looked back.