I laugh in the face of 80%

My bulk backup server is a tad full.

Thought some here would get a laugh and be horrified at the same time :joy:

For added bonus tech specs are:

Silverstone RM43-320-RS 20-bay hot-swap case
Supermicro X7DBE ← Yep, this is ancient
1x Xeon E5405 @ 2.00GHz
16GB of DDR2 ECC memory
2x HBAs in IT mode

Whilst it's ECC memory, TrueNAS doesn't recognise it as an ECC system.

And it's a mixed vdev at the moment as well:

raidz2 6x 3TB
raidz2 6x 6TB
raidz2 7x 2TB + 1x 2TB being added with vdev expansion as I type this; it has another 16 hours to go.

It will be interesting to see how much extra space I get when the vdev expansion completes, if it completes :stuck_out_tongue_closed_eyes:
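For a rough sense of the expected gain, naive parity arithmetic for going from a 7-wide to an 8-wide raidz2 of 2TB disks looks like this (a back-of-envelope sketch only — real ZFS capacity is lower because of padding and metadata, and data written before the expansion keeps the old 7-wide data:parity ratio until it is rewritten, which is also why reported gains can look smaller than expected):

```shell
# Naive raidz2 usable-space estimate: disks * (width - parity).
disk_tb=2
parity=2
old_width=7
new_width=8
old_usable=$(( disk_tb * (old_width - parity) ))
new_usable=$(( disk_tb * (new_width - parity) ))
echo "naive usable before: ${old_usable} TB"
echo "naive usable after:  ${new_usable} TB"
```

So on paper the extra disk adds about 2TB of usable space, less the overheads above.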

My plan is to change the configuration of the backup server soon to be 2 vdevs of raidz3: 10x 2TB (a mix of 3TB and 2TB drives) and 10x 6TB.

Don't care if it's slow, just as long as the data is stable, even though that's debatable based on how old the hardware is :rofl:

Anyone else running such a debilitated backup server?

3 Likes

Well at least you have given us in advance the likely cause of any problems that crop up.

Let me ask you: how do you think you should have done it?

Oh, that's simple: actually having the $$$ to upgrade my hardware.

In an ideal world, I would upgrade the main server and put its older hardware into the bulk backup server.

I do have a Dell R710 that I got free that I've been meaning to commission as offsite backup, but once again I need to spend $$$ on a Dell-compatible HBA that can be flashed to IT mode. Plus I haven't tested it yet to see if it even works (it was working when it was given to me) or found out how much power it consumes…

Need to also make more time in life; can I have 32-hour days please?

It’s a hobby/passion I can’t feed at the moment, so it needs to limp along for the time being.

1 Like

99.7%! Very impressive that 40TB of storage is filled up to the limit. Hope the expansion works flawlessly.

```shell
dd if=/dev/zero of=/mnt/tank/140gb.bin bs=1M count=149999 status=progress
```

Try it and see what happens next :stuck_out_tongue:
(No, really, don't lol)

I won't let my system fall below 1TiB of free space.

Also, X7 really is ancient :stuck_out_tongue:

[screenshot]
I have this old meme of an image; turns out there is a rounding bug in the UI! My server has since had more vdevs added, naturally.

Regarding the real implications of filling a zpool: if your use is WORM (write once, read many), it really shouldn't matter. The issue the 80% copypasta is trying to prevent is fragmentation from read-modify-write workloads. But the TrueNAS UI has no way of knowing what your use is or will be.
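If anyone wants to check whether fragmentation is actually biting on their own pool, the standard pool properties expose it (the pool name `tank` here is a placeholder):

```shell
# FRAGMENTATION is free-space fragmentation (not file fragmentation);
# CAPACITY is how full the pool is. High values of both together are
# the read-modify-write worry case.
zpool list -o name,size,allocated,free,fragmentation,capacity tank
```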

[screenshot]
Today the server has a lot more storage, but I still fill it beyond 80%. It performs fine; it didn't magically blow up or stop working well. (Current topology is 9x 8-wide raidz2, 12-20TB disks.)

2 Likes

I dunno, I've seen a production server (not one of mine) get to 100% full on ZFS and then refuse to acknowledge any and all commands. It wasn't a fun 8-hour call with vendor support, or RCA with management, for the owner.

While I do appreciate the additional viewpoint shared, I'll stick to not risking it happening to me. Though that wasn't write-once-read-many like you had.

I don’t think any of us are debating that (trying to) fill to 100% is a horrible idea.

And to be clear, what led to the insane screenshot was circumstance: I was waiting on hardware to arrive, and I had to shut down everything writing to the pool at that point…

1 Like

Expansion completed, gained a little bit of breathing room:

The amazing thing is I had to turn the server off a few times during the expansion process, as where it's temporarily located it would have been too noisy during the kids' bedtime.

It automatically started a scrub after the expansion finished; I assume this is to make sure it is all hunky-dory, but that's fine, one can never be too safe with a scrub :wink:

2 Likes

Ahhhh, so much storage, I'm very jealous. I dream of buying 24TB HDDs to expand my storage… :grin:

You might already know, but you can increase raidz_expand_max_copy_bytes as described here.

And if you are intentionally initiating scrubs a ton, you can also tune zfs_scan_mem_lim_fact so the sequential scrub's sorting can use more than the default 5% of RAM, which can absolutely help bigger zpools scrub faster (especially on a pool with multiple vdevs and a ton of records).
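For reference, on Linux both knobs are OpenZFS module parameters under `/sys/module/zfs/parameters`; a sketch of setting them at runtime (the values below are illustrative only — check the OpenZFS module-parameters docs before touching these):

```shell
# Let raidz expansion copy more data per iteration (value in bytes).
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/raidz_expand_max_copy_bytes

# zfs_scan_mem_lim_fact is a divisor: the default of 20 means the scrub
# sorting queue may use RAM/20 (5%). Lowering it to 10 raises the limit
# to RAM/10 (10%).
echo 10 > /sys/module/zfs/parameters/zfs_scan_mem_lim_fact
```

Note these revert on reboot unless persisted via modprobe options or your platform's tunables mechanism.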

Sounds like it's time to move it elsewhere. And I think you need a few larger drives :slight_smile:

Out of curiosity, is a lot of your space being held by snapshots? I’m thinking of a way to get you back some more storage space, if it is there.

This would have been handy; sadly I didn't know about these tricks. But that's OK, it did complete, and it was a one-off. I did it for a bit of extra temp space and to test the feature.

Apparently the performance enhancements were merged 3 weeks ago, so it will be good when they make it into TrueNAS SCALE :wink:

Well, on the cheap, I was going to get 4x 6TB and re-arrange the backup pool to be 10x 2TB and 10x 6TB. I was considering making both vdevs raidz3 but am sorta undecided between raidz2 and raidz3. Unfortunately that won't be for a little bit, until I can get some spare $$$.

33 snapshots in total, not too many… most of the snapshots take about 50 to 60MB;
two have 5GB and 3GB respectively…

Removing snapshots won't free up much, to be honest.
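For anyone wanting to do the same audit on their own pool, standard `zfs` commands show where snapshot space is going (the `tank` and `tank/mydata` names here are placeholders):

```shell
# List snapshots sorted by the space each would free if destroyed.
zfs list -t snapshot -o name,used -s used tank

# Total space held by all snapshots of one dataset.
zfs get usedbysnapshots tank/mydata
```

Note that `used` on a snapshot only counts space unique to that snapshot; blocks shared between snapshots only show up once several of them are destroyed.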

Most people completely forget about snapshots, so it was good you knew about them.

Best way to back up the main server :wink: Been using this system since the FreeNAS 9.2 days; very resilient… it even survived a backplane fire :fire:

2 Likes

Perhaps you would have a bit more space, given that Scale is apparently under-reporting the expansion gains in the GUI.