Oh that’s simple, actually having the $$$ to upgrade my hardware.
In an ideal world, I would upgrade the main server and put its older hardware into the bulk backup server.
I do have a Dell R710 that I got for free that I've been meaning to commission as an offsite backup, but once again I need to spend $$$ on a Dell-compatible HBA that can be flashed to IT mode. Plus I haven't tested it yet to see if it even works (it was working when it was given to me), or found out how much power it consumes…
I also need to make more time in life; can I have 32-hour days, please?
It’s a hobby/passion I can’t feed at the moment, so it needs to limp along for the time being.
I have this old meme of an image; it turns out there is a rounding bug in the UI! My server has since had more VDEVs added, naturally.
Regarding the real implications of filling a zpool: if your use is WORM, it really shouldn't matter. The issue the 80% copypasta is trying to prevent is fragmentation from read-modify-write workloads. But the TrueNAS UI has no way of knowing what your use is or will be.
Today the server has a lot more storage, but I still fill it beyond 80%. It performs fine; it didn't magically blow up or stop working well. (Current topology is 9x 8-wide raidz2, 12-20TB disks.)
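For anyone wanting to see where their own pool sits, the numbers the 80% copypasta argues about are all visible from a stock zpool command; nothing below is TrueNAS-specific:

```sh
# Per-pool size, allocation, free-space fragmentation and fill percentage
zpool list -o name,size,allocated,free,fragmentation,capacity
```

The FRAG column is free-space fragmentation, which is exactly what the read-modify-write concern is about; CAP is how full the pool is.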
I dunno, I've seen a production server (not one of mine) get to 100% full on ZFS and then refuse to acknowledge any and all commands. It wasn't a fun 8-hour call with vendor support, or RCA with management, for the owner.
While I do appreciate the additional viewpoint, I'll stick with not giving it a chance to happen to me. Though that wasn't write once, read many like your case.
I don’t think any of us are debating that (trying to) fill to 100% is a horrible idea.
And to be clear, what led to the insane screenshot was circumstance: I was waiting on hardware to arrive, and I had to shut down everything writing to the pool at that point…
The amazing thing is I had to turn the server off a few times during the expansion process, as where it's temporarily located it would have been too noisy during the kids' bedtime, and it just picked up where it left off each time.
It automatically started a scrub after the expansion finished; I assume this is to make sure everything is hunky-dory. That's fine, though, one can never be too safe with a scrub.
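(For anyone following along at home, plain old zpool status shows the scrub's progress and time remaining; 'tank' below is just a placeholder pool name.)

```sh
# Check on the post-expansion scrub
zpool status tank
```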
You might already know, but you can increase raidz_expand_max_copy_bytes as described here.
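For reference, on Linux (which TrueNAS SCALE sits on) that's an OpenZFS module parameter, so something along these lines sets it at runtime; the value here is just an illustration, not a recommendation:

```sh
# As root: raise the RAM budget the raidz expansion copy logic may keep
# in flight. Runtime-only change; it resets on reboot. 1 GiB is an example.
echo $((1024*1024*1024)) > /sys/module/zfs/parameters/raidz_expand_max_copy_bytes
```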
And if you are intentionally initiating a ton of scrubs, you can also tune zfs_scan_mem_lim_fact to use more than 5% of RAM in the sequential sorting algorithm, which can absolutely help bigger zpools scrub faster (especially on a pool with multiple vdevs and a ton of records).
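Same mechanism for this one; the gotcha is that it's a divisor, so you lower the value to get more RAM (sketch assuming the stock sysfs interface again):

```sh
# zfs_scan_mem_lim_fact divides total RAM: the default of 20 caps the scan
# sort buffers at 1/20th (5%); 10 would allow up to 1/10th (10%).
echo 10 > /sys/module/zfs/parameters/zfs_scan_mem_lim_fact
```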
This would have been handy; sadly I didn't know about these tricks. But that's OK, it did complete, and it was a one-off. I did it for a bit of extra temp space and to test the feature.
Apparently the performance enhancements were merged 3 weeks ago, so it will be good when they make it into TrueNAS SCALE.
Well, on the cheap, I was going to get 4x 6TB and rearrange the backup pool into 10x 2TB and 10x 6TB vdevs. I was considering making both vdevs raidz3, but I'm sorta undecided between raidz2 and raidz3. Unfortunately that won't be for a little bit, until I can get some spare $$$.
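Back of the envelope, ignoring ZFS overhead and TB-vs-TiB, the raw trade-off between the two options works out to:

```
raidz2: (10-2)x2TB + (10-2)x6TB = 16 + 48 = 64TB usable, survives 2 failures per vdev
raidz3: (10-3)x2TB + (10-3)x6TB = 14 + 42 = 56TB usable, survives 3 failures per vdev
```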
33 snapshots in total, not too many… most of the snapshots take about 50 to 60MB each.
Two are 5GB and 3GB respectively…
Removing snapshots won't free up much, to be honest.
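If anyone wants to check the same thing on their pool, the 'used' column on a snapshot is the space that would actually come back if you destroyed it:

```sh
# Snapshots sorted by the space each one uniquely holds
zfs list -t snapshot -o name,used,referenced -s used
```

One caveat: space shared by several snapshots doesn't show up under any single one's 'used', so deleting a whole run of them can free more than the column suggests.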