TrueNAS on a VPS: how bad is it?

I know it’s not recommended to set up TrueNAS without some kind of firewall in front of it, but I did it anyway so I can use zfs send from the web UI.

How bad is this, really? Even if the box were compromised, everything is encrypted and I’ve never unlocked the datasets on the VPS. In theory that’s fine, right? Even if a bad actor were to gain access, what’s the worst they could do: delete my tertiary backup?
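For context, what makes this tolerable is that ZFS supports raw sends: with zfs send --raw the encrypted blocks are replicated as-is, so the key never has to exist on the VPS. A minimal sketch of the equivalent CLI commands (pool/dataset names are made up):

```
# Replicate an encrypted dataset without ever decrypting it.
# --raw ships the ciphertext blocks as-is; the VPS never sees the key.
zfs snapshot tank/documents@offsite-2024-06-01
zfs send --raw tank/documents@offsite-2024-06-01 | \
  ssh backup-vps zfs recv -u backup/documents
```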

Also, are there any better solutions to this? I don’t like Backblaze and S3 because they do their own file-versioning thing, and if I encrypt everything, I can’t tell which file is which when I need to restore.

I know I could use something like rsync.net, which has a ZFS backend but doesn’t maintain my snapshot history. And I still run into a similar problem with encrypted file names if I want to restore a single file.

Any better solutions to this other than “ask a friend” or “leave a box at my parents’ place”? (They would have a fit: “What is this computer? Why is it here? Does it have to be running all the time? Why can’t you just keep it at your place? Why does it have to be so noisy? It keeps getting in the way…”)

The VPS is using virtual disks, which isn’t recommended for TrueNAS. Or did you get a dedicated server?

It’s a VPS. No way I’d pay for a dedi.

I’d say replace parents and friends :stuck_out_tongue_winking_eye:

The VPS host is already running ZFS or something similar for their own redundancy.
Just how much data are you thinking of: GBs or TBs?
Google will give you a better deal than a VPS too, I think.

Lol. It’s a tertiary backup, so I’m not really worried about redundancy. So far my backups are around 600 GB.

I want another TrueNAS box/something with ZFS exposed to the user, so I can utilise ZFS send/recv.


Alright then. Mount it on the VPS; a single drive should be enough. You could use ZeroTier to get a shared network and do your thing. Once ZeroTier (or whatever tunnel you use) is set up, you can set the box to not take WAN access at all, I think.
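If you try it, the ZeroTier part is roughly this on a plain Linux box (the network ID and firewall rules are just placeholders; on TrueNAS itself you’d have to do the equivalent through its own UI or an app/jail):

```
# Join the machine to your ZeroTier network (ID is a placeholder).
curl -s https://install.zerotier.com | bash
zerotier-cli join 0123456789abcdef
zerotier-cli listnetworks    # note the zt* interface it creates

# Then drop inbound traffic that doesn't arrive over the tunnel, so the
# web UI is only reachable via ZeroTier (example iptables rules).
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i zt+ -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP
```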

It’s a good experiment. I hope it doesn’t eat into the VPS bandwidth allowance too much.
Let us know how it went once it’s all set up. I just might give it a try too :smile:

I use a Servarica ever-expanding VPS. To do that, I had to load ZFS myself, since it doesn’t come that way. And you can’t just load ZFS afterwards, as your initial drive will still be ext4. So I had to do a custom Debian install; they give you remote-console access to an installer. Once set up, it works great for backups. It’s been running for two years now.
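For anyone wanting to repeat that, the ZFS part on Debian is roughly this (assuming the contrib component is enabled in sources.list; the device name is an example):

```
# ZFS lives in Debian's contrib section and is built via DKMS.
apt update
apt install -y linux-headers-amd64 zfs-dkms zfsutils-linux

# Create the backup pool on the secondary (data) disk.
zpool create backup /dev/vdb
```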


Same. I got the Chameleon Hybrid 2TB. I could run a normal OS like Debian, I guess, if TrueNAS turns out to be insecure. But I like seeing my backup jobs in the web UI and getting notifications.

Mine is up to 8TB, growing about 1TB/year. I am using zfs-autobackup instead of the TrueNAS UI; with the UI I had to start over from scratch too many times. In the past year with zfs-autobackup, not once, as it seems to have excellent error recovery.
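In case it’s useful to anyone, the zfs-autobackup flow is roughly this (host, pool, and backup names are made up): you tag the source datasets with a property, then one command handles snapshot, send, and pruning.

```
# One-time: install the tool and tag the datasets to replicate.
pip install zfs-autobackup
zfs set autobackup:offsite=true tank/general

# Run from the backup box (pull mode): snapshots everything tagged
# autobackup:offsite on the source and receives it under backup/home.
zfs-autobackup --ssh-source root@home-nas offsite backup/home
```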

Do you have autoexpand=on? Does it grow by itself, or do you expand it manually? Thanks.

It does not grow by itself; you have to reboot via the Servarica web portal and then run

```
zpool online -e <poolname> <device that grew>
```

I do it about twice a year. Surprisingly, I don’t have any performance issues at all. And the network is fast!
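To tie that back to the autoexpand question: as far as I know, autoexpand=on only kicks in when the pool sees the device itself report a larger size, so with a virtual disk you usually still need the reboot plus the manual expand. A sketch with example names:

```
# Optional: let the pool grow automatically when a vdev reports more space.
zpool set autoexpand=on backup

# After the provider has grown the virtual disk (and the reboot):
zpool online -e backup /dev/vdb   # expand into the new space
zpool list -v backup              # EXPANDSZ shows any untapped capacity
```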


“DISK - 8TB Raidz2 Disk”: is this what you use, or is this a new service? Or have I got it all wrong?

Pretty sure that’s what he uses. I went with the hybrid to separate boot and storage pools. And I don’t think I need more than 2TB for the foreseeable future.

Mine was from Black Friday, when they run special offers: cheaper, larger, or both. But it’s the same idea as those plans.

Some might say 8TB, or 4TB, or whatever you have is too small; my pool is 30TB. Well… I do not replicate everything to this machine. Using zfs-autobackup, it’s easier to cherry-pick stuff, and my pools also segregate things by type. My main types are:

- General Data: things that might change daily, plus various backups like phones, other machines, etc.
- Archive: stuff that basically never changes, though something might get added to it.
- Never-backup: stuff that’s never ever needed again (workspace, temp files, stuff that’s easy to reproduce, etc.).

I only replicate General Data, which is a small subset. I use other backup techniques for the other stuff.

So, as an example, Nextcloud is stored on the never-backup pool! What, you don’t want your Nextcloud data!? Of course I do, but I simply run a Nextcloud backup and store that on the General Data pool, and it’s much smaller. Same for my MariaDB databases, etc. The MariaDB backup is way smaller than the actual filesystem space used by the container.
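A rough sketch of that pattern (paths and names are made up; adapt to however your containers run): the bulky live data stays on the never-backup pool, and only the compact dump lands on the replicated one.

```
# Dump the databases into the General Data pool; only this small file
# gets replicated offsite, not the container's whole filesystem.
mariadb-dump --all-databases --single-transaction \
  | gzip > /mnt/tank/general/dumps/mariadb-$(date +%F).sql.gz
```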
