My “Beginner status” might shine through here.
I thought that NFS was required as an rsync file destination.
But if SCALE has an rsync server built in, I could just use that.
And maybe even back up without the need for SSH encryption.
I will be installing just as soon as I have disabled Secure Boot and verified that functionality with a Debian 12 install, probably today.
I would still like to have some NFS file share storage, but just as a general file area.
I need to back up some physical Linux (Debian) servers (weekly).
Today I use rsync (daily diff + weekly full) to a separate local disk.
And I'd like to back up some Proxmox Linux servers too.
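For reference, the daily diff is done roughly like the sketch below, using rsync's --link-dest option (paths and dates here are placeholders, not my actual setup):

```shell
#!/bin/sh
# Daily diff via rsync --link-dest: unchanged files are hard-linked against
# yesterday's run, so each day's full-looking tree only costs the changed files.
# SRC and DEST_BASE are placeholders.
SRC="/etc"
DEST_BASE="/mnt/backup/host1"

# Build the rsync invocation for day $1, linking against previous day $2.
daily_cmd() {
    echo "rsync -a --delete --link-dest=${DEST_BASE}/$2 ${SRC}/ ${DEST_BASE}/$1/"
}

# Print the command for today's run (pipe to sh, or put in cron, to execute):
daily_cmd 2024-05-02 2024-05-01
```

The weekly full is the same invocation without --link-dest, into a fresh directory.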
What backup method (client) would you suggest?
For the physical servers, I have been looking at BorgBackup, and it looks nice.
But how do I implement such a server on SCALE?
For Proxmox:
If I decide to use a Proxmox Backup Server (PBS), how would I utilize SCALE as PBS storage? NFS, or something else?
TrueNAS SCALE has an rsync client built in, but if you want an rsync server then you will need to install a (standard) app for it. It depends on whether you want to push from your other clients to an rsync server on TrueNAS, or have TrueNAS pull from your other clients, which are themselves running an rsync server.
If you are going to build any new servers, consider using ZFS as the file system on them, as ZFS replication is probably the best solution, but only if you have ZFS at both ends.
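A rough sketch of what that replication looks like at the CLI (dataset, snapshot, and host names below are made up; on TrueNAS itself you'd normally set this up as a replication task in the GUI rather than hand-rolling it):

```shell
#!/bin/sh
# Incremental ZFS replication: send only the delta between two snapshots to
# the backup box over SSH. All names here are illustrative placeholders.
SRC_DS="tank/data"
TARGET="backup@truenas.local"

# Build the send/recv pipeline for the increment from snapshot $1 to $2.
replicate_cmd() {
    echo "zfs send -i ${SRC_DS}@$1 ${SRC_DS}@$2 | ssh ${TARGET} zfs recv -F backup/data"
}

# Print the pipeline for yesterday -> today (pipe to sh to actually run it):
replicate_cmd daily-2024-05-01 daily-2024-05-02
```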
An NFS server is available as standard with TrueNAS SCALE, so you have no worries there. It's just not the best solution for incremental backups.
I have no experience with PBS or BorgBackup and so will have to leave it to others to help you with this.
Took me three tries to get it right.
The trick was: after the edit, run the modified installer from the CLI with ./true…
The first two times I exited and ran the installer from the menu.
That installer was the unmodified one…
Now I really need some guidance (I think).
I want to set up my six "spinners" (sda…sdf) in a RAIDZ2 pool.
And apps on the NVMe pool (zpool export ssd-storage).
Well, I have printed the Getting Started guide to PDF too.
And this is why splitting the boot disk is not really recommended.
Two or three years from now, when your current boot disk fails, you'll have long forgotten about the process.
Been there … Done that
I made an extensive guide, including shell commands + output.
Learned the hard way that "easy to remember" isn't going to last even three months.
And that spending 15 minutes now on some documentation saves hours when you have to repeat/recreate it.
Besides… I think I'll make a RescueZilla image of the boot disk.
RescueZilla is so easy to use: backup/restore, CloneZilla in a neat GUI.
The system has already allocated some "db stuff" (the system dataset) on the SSD pool.
Does the "db stuff" get allocated on the first pool defined?
Seems like i “got lucky” … I think.
My goal, if possible, would be to be able to spin down the "rust pool" when it's not active. And I think a prerequisite for that spindown is that the "db stuff" is on another pool.
I measured the box's shutdown/standby power to be approx. 10 W.
Since this will be a pure weekly backup box (powered on once a week to take backups), I went cheapskate to save another $40 in electricity.
I have now enabled "Action on AC restore: Always power on" in the BIOS, and connected the box to an APC PDU outlet.
Since I can remotely control the APC outlets via SNMP, it's no issue to turn the box on/off from a bash script.
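The on/off part of that script is just an snmpset against the PDU's outlet-control OID. The sketch below uses the classic APC PowerNet-MIB sPDUOutletCtl OID; the hostname and community string are placeholders, so check your own PDU's MIB before trusting any of it:

```shell
#!/bin/sh
# Toggle an APC PDU outlet via SNMP (net-snmp's snmpset).
# In PowerNet-MIB's sPDUOutletCtl, 1 = immediate on, 2 = immediate off.
PDU_HOST="pdu.example.lan"      # placeholder hostname
COMMUNITY="private"             # SNMP write community (placeholder)
OID_BASE="1.3.6.1.4.1.318.1.1.4.4.2.1.3"   # sPDUOutletCtl, indexed by outlet

# Build the snmpset command for action $1 (on|off) and outlet number $2.
pdu_cmd() {
    case "$1" in
        on)  state=1 ;;
        off) state=2 ;;
        *)   echo "usage: pdu_cmd on|off <outlet>" >&2; return 1 ;;
    esac
    echo "snmpset -v1 -c ${COMMUNITY} ${PDU_HOST} ${OID_BASE}.$2 i ${state}"
}

# Print the command to power on outlet 4 (pipe to sh to actually run it):
pdu_cmd on 4
```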
Approx. time from AC power on (APC PDU) to:
IPMI ping answer: 60 sec.
TrueNAS (console) CLI login: 3 min 30 sec.
I'll have to read up on the Supermicro IPMI tools too, but "Always power on" in the BIOS suits me fine here.
Still waiting for Black Friday for new disks for this node, which has room for a max of six SATA disks.
The plan is to get 6+1 x Toshiba N300 8 TB disks, which would give around 32 TB of data.
7 x 8 TB is around $1410; the +1 is the "spare in the drawer".
But it hit me that I would not be able to expand with an extra vdev, as I'm maxed out. I was choosing 8 TB disks, as that was where I'd get the most TB for the $.
I could get the same 32TB with 4+1 x 16TB
5 x 16TB is around $1750
And 36TB with 4+1 x 18TB
5 x 18TB is around $1870
Right now I have maybe 6…8 TB of "backup/NAS" data.
3 TB on NFS, and 3…5 TB on PCs (including OS).
So maybe the 32TB would be enough for a long time.
And I'll have to remember that I will only have 16 TB on the "backup NAS" (6 x 4 TB); the backup NAS would only have select files (datasets) replicated.
Am I going overboard here?…
This is for a two-person home setup, and I'd expect it will take me longer than the "rust lifetime" to get to 12 TB used.
The extra size is for snapshots/ransomware protection.
Please talk me out of it
PS:
Would there be any obvious disadvantage to using such large vdevs?
Besides resilvering time.
Or any advantage besides the expansion possibility, and a bit of saved power from two fewer HDDs… probably eaten up by the extra cost.
Cache size:
The 16 and 18 TB drives have 512 MB cache, vs. 256 MB on the smaller disks.
Would that even count on a "backup-only NAS", where storing huge amounts of data would be the primary task?
The 3 TB of NFS data contains many small files, such as GCC and Linux kernel sources.
I won’t. Best $/TB should be around 16-18 TB drives these days.
For the sake of good cooling, I would rather populate all 6 bays in the Node 304—or maybe 4 bays in a symmetrical X-XX-X configuration. But you may not need to keep a cold spare, or at least not keep a cold spare from the very start, if you’re confident you can get a new drive quickly should an issue arise. So if you can get bigger drives and/or more capacity from your initial $1400 budget, go for it.
Or look for a really good deal with lower capacity drives.
In short: Go over requirements if you can, but not over budget.
That’s the drive’s internal affairs. As long as they are not SMR.
RAIDZ2 is not efficient with these, contrary to mirrors, but for a backup it should not matter. And source code should compress extremely well.
Your numbers will be much better if you forget this idea.
There is every possibility that you won't need the spare drive, or if you do, it won't be for 3-5 years. By that time the drive in the drawer might be dead, and a fresh new drive might be half the price, and available in a day or two from Amazon.
As your drives get bigger and as you have fewer of them, having a cold spare makes even less sense.
If you are concerned about time to replace, use raidz2.
Probably. My concern would be that with some empty slots air would just flow through the big gaps and bypass the drives. X-XX-X has all drives next to a channel, so I think it might still work. But blanks would be better.
I’m not sure that’s actually a problem with the 304. There are two fans at the front that directly blow on the HDs, and there are fairly large gaps between them.
I.e. I'm not sure you need dummies for good cooling here; you certainly do in "suck"-style coolers.
Your concern may be correct, or it might not, it’s easy to check I guess.
With RAIDZ expansion, the rules have changed completely. Before this, to increase your RAIDZ storage you had to add a whole extra vdev consisting of several drives, so you planned for the maximum amount of growth (over many years) and bought it all at the start.
With expansion, you can plan for some moderate growth (e.g. for a year or two) and if you leave slots free you can add a disk once your RAIDZ starts to get full.
So if you need say 8 TB, I would first add 50% for growth, then add 25% to reflect a max 80% occupancy, then convert from TiB to TB (add 10%), i.e. 8 * 1.5 * 1.25 * 1.1 comes to 16.5 TB. So personally I would achieve that using (say) 4x 10 TB or 12 TB drives in a RAIDZ2 configuration, leaving a further 2 slots for future growth, allowing me to double my total useable capacity. Given that I was dedicating 2 drives to redundancy, I wouldn't personally buy a spare drive for the drawer, because it would waste its warranty and I can get Amazon to deliver a new drive as required on a very short timescale.
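That sizing rule is easy to keep around as a one-liner (same factors as above: 50% growth, stay under 80% occupancy, TiB-to-TB conversion):

```shell
#!/bin/sh
# Capacity rule of thumb: need * 1.5 (growth) * 1.25 (stay under 80% full)
# * 1.1 (TiB -> TB). Input: current data in TB; output: usable TB to plan for.
need_tb() {
    awk -v n="$1" 'BEGIN { printf "%.1f\n", n * 1.5 * 1.25 * 1.1 }'
}

need_tb 8    # the 8 TB worked example -> 16.5
```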
However, I do think you should consider how many copies of your data you plan to hold, i.e. how many generations of backup, and adjust the storage requirements accordingly. I would also consider how much it would be worth to keep the backups simple, avoiding the time spent "being selective" about which data gets backed up, and scale it up further.
P.S. Snapshots for ransomware protection don't cost you space by themselves; it is only the changes to the files after the snapshot that cost extra space. And TBH you want the space to run out if ransomware starts to create new encrypted copies of all your files, because it will trigger warnings and draw attention to the sudden new copies. But snapshots are great for e.g. keeping the last 3 months' worth of backups, so include this in your requirements calculations.
Uncle Fester's Basic TrueNAS wiki has a good section on storage requirement calculations.