I’m going through the process of hardening my TrueNAS SCALE deployment. So far, I’ve configured the following:
Enabled MFA (Globally)
Disabled SSH password login universally and bound SSH to only the interface I want it accessible on (my management VLAN, which only my desktop has access to)
Disabled Console Auto-Login
Standard RBAC for user deployment (the NAS admin is not an SMB user; SMB users have no SSH or sudo access, only access to the datasets they need via ACLs, etc.)
Configured Datasets with appropriate ACLs
Configured all Docker containers that need access to files on datasets to use the appropriate PUID and/or PGID to match the ACLs
None of the containers have access to the management VLAN and all belong to a sub-interface via macvlan networking. This is both to keep them from being on my management network and to force them to follow my network ACLs and firewall rules.
I’m using a CSR via my Cloudflare API to pull a SWAG cert to the device
GUI is only accessible to management VLAN and HTTP > HTTPS redirect is on
It is not WAN accessible; SMB and SSH are only reachable from my management LAN, and remote UI access goes through my Cloudflared container, which is configured to only allow my Google account past the Cloudflare edge via a Google Developers OAuth2 SSO app I’ve configured.
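For reference, the macvlan setup described above can be sketched from the Docker CLI. Everything here is a placeholder for your environment — the sub-interface name, subnet, IPs, and the image are illustrative, not my actual config:

```shell
# Hypothetical values: eth0.40 is the VLAN sub-interface, 192.168.40.0/24 its subnet.
# Create a macvlan network bound to the sub-interface; containers on it get their
# own MAC/IP on that VLAN, so upstream firewall rules and ACLs apply to them
# like any other host on the network.
docker network create -d macvlan \
  --subnet=192.168.40.0/24 \
  --gateway=192.168.40.1 \
  -o parent=eth0.40 \
  apps_vlan

# Attach a container with a static IP on the apps VLAN, matching dataset ACLs
# via PUID/PGID (the linuxserver.io image convention).
docker run -d --name jellyfin \
  --network apps_vlan --ip 192.168.40.10 \
  -e PUID=1001 -e PGID=1001 \
  lscr.io/linuxserver/jellyfin
```

One macvlan caveat worth knowing: the host itself cannot reach containers on a macvlan network directly, so any host-to-container traffic has to route through the VLAN gateway (or an extra macvlan shim interface on the host).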
At this time, I’ve opted not to encrypt my datasets because, frankly, the data on there isn’t that critical (personal media files and device backups); however, I may migrate to an encrypted dataset in the future. For now, I need to look more into what the encryption options are, how exactly they function, and whether there’s a viable way to do something like passphrase-based encryption but also push that passphrase to the TPM so that it can function similarly to BitLocker.
While not really security related, I do have weekly short and monthly long SMART tests scheduled for all disks, monthly scrubs for my HDD DataPool and weekly scrubs for my NVMe AppPool, 3x-weekly snapshots for AppPool, replication from AppPool to DataPool, and daily rsync from my DataPool to my old ReadyNAS… until it dies.
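Under the hood, the snapshot/replication/rsync legs of that schedule amount to something like the following. TrueNAS drives this through its GUI tasks; the snapshot names and target paths below are placeholders, not my actual task config:

```shell
# Periodic snapshot task: take a named snapshot of the app dataset.
zfs snapshot AppPool/apps@auto-weekly

# Replication task: block-level incremental send to the data pool --
# only blocks changed since the previous common snapshot (@auto-prev
# here is hypothetical) cross the wire.
zfs send -i AppPool/apps@auto-prev AppPool/apps@auto-weekly | \
  zfs receive DataPool/backups/apps

# The rsync leg to the old ReadyNAS is file-level, not block-level.
rsync -aH --delete /mnt/DataPool/ backup-user@readynas:/backup/
```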
Any items I’m missing here? Any pitfalls to avoid?
Not really hardening, but I take zpool checkpoints every week, which gives me a recovery point if I or someone on the team does something dumb, or if someone nasty gains access and does bad things.
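For anyone following along, a weekly checkpoint rotation is only a couple of commands (the pool name "tank" is a placeholder). Only one checkpoint can exist per pool at a time, so the previous one has to be discarded first:

```shell
# Discard the previous checkpoint if one exists (fails harmlessly if none).
zpool checkpoint -d tank 2>/dev/null || true

# Take a fresh pool-wide checkpoint.
zpool checkpoint tank

# Verify: the read-only "checkpoint" pool property shows the space it holds.
zpool get checkpoint tank
```

Worth remembering that a checkpoint pins every block freed after it was taken, so on a busy pool the space it consumes grows until you discard it.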
100%, but remember: if you have boot environments from before this was enabled, it would be trivial for someone who gained access to your system to boot into them, thus removing 2FA and potentially even your console lock settings. Again, all very unlikely, and as others will say, once someone has physical access to your system the game’s over unless you have encryption (passphrase).
I have these configured on the app pool, and I may enable them on my data pool, but I’m reluctant for now because that data is pretty static and not critical at all (yet), and I want to see what kind of overhead the snapshots have on the app pool before determining the value of doing it on what’s currently just my Jellyfin storage.
Remember, snapshots and zpool checkpoints are different. Checkpoints protect you against dataset deletion or, for example, adding a stripe to your pool when you really wanted another mirror, etc.
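The recovery paths are the clearest way to see the distinction (pool and dataset names below are placeholders):

```shell
# Snapshot: per-dataset. Rolls back the file contents of that one dataset.
zfs rollback tank/media@before-change

# Checkpoint: pool-wide. Rewinds the entire pool state -- including dataset
# deletions and vdev-layout mistakes -- by exporting and re-importing with
# a rewind. Everything done after the checkpoint is lost.
zpool export tank
zpool import --rewind-to-checkpoint tank
```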
I’m not sure I understand what you mean by this. You can’t enable MFA until your bootpool is created and enabled.
And yes, physical access usually means that things are toast. Barring SEDs or some other hardware-layer data protection for the data pool, if someone steals the box and has skills, they will get in.
Fortunately, this one is just my home NAS. I’m doing this for both fun/personal reasons, but also, because I do most of the R&D work for my company as well, so I get paid to play sometimes. I’m significantly overbuilding this in my home environment to understand the limitations, implications, complications, order of operations, etc. so I can better understand the place of TrueNAS and these configuration settings in a production environment for one of our clients.
I regularly get flak for an over-engineered home network, but a lot of it is simply for thought exercises and equipment testing. Hell, I’ve had 4 sets of OEM network hardware racked/configured in the past 2 years for this reason… and because of that, I was able to identify that Aruba Instant On is great but overpriced for what it is, and what its imposed paywall limitations are (no SNMP unless locally managed; if locally managed, no SSH/config management), so I can identify which subset of clients it may be feasible for… but 9 times out of 10, the new UniFi gear beats it in performance per dollar and feature set, even if it is extremely limited in L3 capabilities.
You’ll also want to take a careful look at what capabilities are being granted to apps. For instance, if an app has procfs access along with some enhanced capabilities, it’s trivial for a process in the app to move laterally on the NAS. So, the usual caveats about running non-trusted code apply; figure out your own risk tolerance.
I like it; I hadn’t considered this aspect. However, I’m currently only running Docker natively via compose (not in Dockge), and all of my containers have the security option set to disallow escalating to root:
security_opt:
  - no-new-privileges:true
I’m only a novice in Linux, so I’ll have to see what else needs considering from a docker app perspective.
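Alongside no-new-privileges, a few other per-container options are worth evaluating; shown here as docker run flags with an illustrative image (the compose equivalents are cap_drop, read_only, and tmpfs):

```shell
# Drop all Linux capabilities (add back only what the app actually needs),
# run with a read-only root filesystem plus a tmpfs for scratch space,
# and cap the number of processes the container can spawn.
docker run -d \
  --security-opt no-new-privileges:true \
  --cap-drop ALL \
  --read-only \
  --tmpfs /tmp \
  --pids-limit 256 \
  lscr.io/linuxserver/jellyfin
```

Some images won’t start read-only or with all capabilities dropped, so expect to iterate per container rather than apply this blanket-style.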
Good call. On Electric Eel, I don’t see checkpoints listed under Data Protection or on the Storage dashboard; the only things visible in either Data Protection or the Datasets dashboard are snapshots.
Where are the pool checkpoint settings? Are they CLI-only?