Fail2ban? What else?

TrueNAS is shifting, very desirably so from a SOHO/workgroup perspective, ever further from a pure local-storage NAS toward a NAS/virtualization platform: containers, Docker, VMs.

With that, however, comes the need to manage things from the internet, because as long as the NAS is just local storage, I can literally turn it off when I travel. But once it starts providing services, it needs to be up and maintained no matter where I am. Further, many of the services offered need to be exposed to the internet, be that Paperless-ngx, Nextcloud, etc.

As of now, my NAS isn’t in production; I’m still just toying around with it (waiting for some critical 26.04 bugs to be fixed). And already I get messages like these:

Warning
631 SSH login failures in the last 24 hours
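A quick way to see where those failures are coming from is to tally source IPs in the auth log. The log path and format here are assumptions (Debian-style systems log to /var/log/auth.log; TrueNAS may keep its logs elsewhere), so pass your system’s actual auth log:

```shell
# Tally failed SSH logins per source IP, most active first.
# The default path is an assumption (Debian-style /var/log/auth.log);
# pass your system's auth log as the first argument instead.
top_ssh_failures() {
    grep "Failed password" "${1:-/var/log/auth.log}" \
        | awk '{print $(NF-3)}' \
        | sort | uniq -c | sort -rn | head
}
```

Feeding the top offenders into a firewall block list by hand is essentially what Fail2Ban automates.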

New features like web sharing will, when properly used, expose even more attack surface.

Now, I’m not advocating turning TrueNAS into a full-on firewall. But just as Apple presumes that Macs are used behind a firewall and still ships an active software firewall in the OS, I think some simple-to-use, basic firewall functionality is missing (or if it’s there, I can’t find it). Even if we assume for the moment that the LAN is protected by an ironclad firewall, attacks from the LAN side aren’t excluded: someone hacks into the WiFi, there are guests, an employee or teenage kid feels adventurous and wants to see “what the boss/daddy is hiding”, etc. So the assumption that “it’s supposed to be behind a firewall and that’s enough” will increasingly stop being applicable the more things operate on the system beyond basic storage services.

Something like Fail2Ban, while not solving all problems, significantly slows down and locks out the vast majority of scripted/brute-force attacks. Equivalent functionality is optionally built into some services like Nextcloud and the Stalwart mail server, so those projects have recognized that just being behind a firewall doesn’t cut it.
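For concreteness, a minimal Fail2Ban SSH jail might look like this (values are illustrative, not a tuned policy; on most distros this goes in /etc/fail2ban/jail.local, which overrides the packaged defaults):

```ini
# /etc/fail2ban/jail.local -- illustrative values, not a recommendation
[sshd]
enabled  = true
maxretry = 3         ; ban after 3 failed attempts
findtime = 10m       ; ...within a 10-minute window
bantime  = 1d        ; base ban duration
bantime.increment = true  ; repeat offenders get progressively longer bans
```

(bantime.increment needs a reasonably recent Fail2Ban, 0.11 or later.)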

How do people secure their systems that have/need services exposed to the public internet?

Fail2Ban only protects against weak passwords. I think something like a zero-day attack is more worrisome. Exposing out-of-date software is another big concern. Using things like Docker has made upgrading more difficult: before Docker you could at least run apt-get update && apt-get upgrade regularly; now you need additional software to check for updates, and even then you rely on the maintainer of the Docker image to still maintain it. Docker images are also less transparent in how they are built, allowing maintainers to sneak in malware more easily (compared to distributions like Debian, where everything is built centrally).

I would recommend not exposing things to the public internet directly. It doesn’t take long before anything you expose to the internet is indexed by databases and attacked routinely (see https://search.censys.io/). If you get an SSL certificate (for example via Let’s Encrypt), the domain you request the certificate for will be attacked instantly. That isn’t an exaggeration: go put a regular HTTPS server on the web and watch the logs.

It is a sad reality, but there is no internet police. Automated attacks are running rampant, without anyone seeming to care. AI will probably only make the whole situation worse. There is a reason why so many things are behind cloudflare proxies.

Running a WireGuard VPN does help a lot. The nice thing about WireGuard is that it cannot be detected by port scans (unlike other VPN products). If you want remote access to network shares you do need a VPN; things like NFS/SMB are simply not safe enough to be exposed directly to the internet.
There are ways to run a VPN without any port forwards; those reduce the attack surface even further. I think Tailscale is a good option in that regard, but it is a proprietary third-party service.
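For reference, a bare-bones WireGuard server config looks like this (addresses and keys are placeholders; a real setup generates key pairs with wg genkey):

```ini
# /etc/wireguard/wg0.conf -- sketch with placeholder keys/addresses
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] block per client device
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Because WireGuard silently drops any packet that doesn’t carry a valid handshake, the listening port looks closed to scanners; that is the “cannot be detected by port scans” property mentioned above.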

There are of course further things to do. You can add security on many different layers. But reducing that attack surface is #1 priority in my opinion.

2 Likes

As a side note: if you expose your service on IPv6 only, that also increases security against automated attacks. But if you want an SSL certificate you need to make sure not to leak the hostname (use wildcard certificates only, and don’t use a name server that is prone to zone walking).

Lots of good points made, however…

  1. Fail2Ban does more than just protect against weak passwords. While on the surface it technically protects against weak (or leaked!) passwords, the mere fact that attackers will typically try some default passwords first before launching more sophisticated attacks has a different effect: before they are even done testing for low-hanging fruit, their IPs are blocked (and with my policies, typically for a VERY long period, like months/years). So dirty ranges accumulate quickly, and things quiet down noticeably.

(If a state-level actor wanted to attack, they would probably get through easily, but that’s not whom we’re typically defending against as a SOHO/Home user. (Sure, in a corporate setup, that is a concern, but they have multi-level intranet and extranet firewalls, intrusion detection systems, security admins, etc. so that’s an entirely different ball game…))

  2. As much as things like WireGuard/Tailscale/Headscale/NetBird do what you say, they can only protect some things, not others. If I want the coming web-share feature, e.g. to send a link to a file to a tax accountant, then I can’t ask them to first install WG/TS and become part of my net. So these services need to be on the public internet, or else they become useless. Similarly, things like Nextcloud are useful exactly because one can sit down at any public computer and access files, edit documents, etc. (best with 2FA, of course, otherwise that’s rather dangerous). Likewise, having a media library is of interest specifically when we can access it from anywhere (travel, a friend’s house, etc.) without first setting up VPNs.

It’s obvious that the risk goes up the more things are opened up, but something like F2B or UFW would seem like low-hanging fruit.

PS: What you mention about Docker I fully agree with. That’s why I’m very much looking forward to 26.04 having LXC support. Particularly given all the hassle various Docker images cause, I plan to simply install things like Nextcloud in an LXC, which I can manage the good old-fashioned way.

I argue that your need to share documents with other people doesn’t change the fact that exposing the whole server directly to the internet carries considerable risk. TrueNAS has not been hardened for that, as it’s not an intended use case.

Fail2Ban will not save you if a vulnerability sidesteps the ordinary login process, by exploiting a bug in a protocol or similar.

1 Like

As noted, the biggest concern is exposing a system that could one day have a vulnerability that is not identified fast enough.

So this is more about risk acceptance vs a need and a want.

Sure, you could use Cloudflare Tunnels to lessen the direct exposure of your network, but with automated bots scanning 24/7, if anything is open on your side and you cannot lock down NAT rules with source IPs, it will show anyway.

Would Tailscale be an option?

Well, the system IS behind a stateful firewall. So protocols can be inspected/sanitized.

Also, it’s clear that security is always a tradeoff. But having web logins and SSH logins means people have something they can attack with brute force, and that’s where F2B would put a hard brake on things.

1 Like

I use tailscale for some things, but

  1. it won’t work for things that require public access (web page, web sharing as it’s coming to 26.04)
  2. hinges on the Tailscale container coming up reliably (not the case currently, likely due to bugs, so that situation hopefully will improve)
  3. won’t help when having to troubleshoot from afar, if something doesn’t work as it should and that in any way involves apps not starting up. If Tailscale were a fundamental part of the system rather than an app add-on, that would be a stronger case. Maybe TrueNAS Connect will help there in the future, but it’s beta at this point, so who knows how that will look in the end…
1 Like

Valid points. I do tend to think in a similar way: if I can avoid adding additional layers that could break, I will.

But I also do not have to share anything out from my network. I would say that as long as you have a good firewall and segmentation of your resources via VLANs, with proper ACLs to limit the damage if an incident does happen (if not when, right?), that is often the best you can do at the base layer.

From there, lock down SSH (keys/certs only, disallow direct logins) and enforce MFA where possible (phishing-resistant), and you have already become a harder target to get into.
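The SSH lockdown above boils down to a few sshd_config lines (an illustrative excerpt; validate with sshd -t and keep an existing session open before restarting the daemon, so a typo doesn’t lock you out):

```
# /etc/ssh/sshd_config -- illustrative hardening excerpt
PasswordAuthentication no          # keys/certs only
KbdInteractiveAuthentication no    # no keyboard-interactive fallback
PermitRootLogin no                 # no direct root logins
```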