Beginner - TrueNAS Setup on a used SuperMicro X10SDV-6C+-TLN4F Board

Just remember that if you lose a data vDev or a special vDev, your entire pool is toast.

If you are going to use a special metadata / small-file vDev, then it must be at least a 2-way mirror, if not a 3-way mirror.

2 Likes

My “beginner status” might shine through here.
I was under the impression that NFS was required as an rsync destination.
But if SCALE has an rsync server built in, I could just use that.
And maybe even back up without the need for SSH encryption.
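Something like this is what I have in mind - a rough sketch, with host names and the rsync module name invented:

```
# Over SSH (encrypted) - the remote-shell transport:
rsync -av /srv/data/ backupuser@truenas.local:/mnt/tank/backups/

# Against an rsync daemon (rsync:// - no SSH, no encryption on the wire);
# "backups" is an assumed rsyncd module name:
rsync -av /srv/data/ rsync://truenas.local/backups/
```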

I will be installing just as soon as I have disabled Secure Boot and verified that functionality with a DEB12 install - probably today.

I would still like to have some NFS file share storage, but just as a general file area.

Regarding backup:

I need to back up some physical Linux (Debian) servers (weekly).
Today I use rsync (daily diff + weekly full) to a separate local disk.
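(One common way to do the daily diff is rsync's --link-dest - a sketch with invented paths; unchanged files become hard links to the previous run, so each day only costs the changed data:)

```
rsync -av --delete \
    --link-dest=/backup/daily-yesterday \
    root@server:/etc/ /backup/daily-today/
```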

And I’d like to back up some Proxmox Linux servers too.

What backup method (client) would you suggest?
For the physical servers I have been looking at BorgBackup, and it looks nice.
But how do I implement such a server on SCALE?

For Proxmox:
If I decide to use a Proxmox Backup Server (PBS), how would I utilize SCALE as PBS storage? NFS, or something else?

TrueNAS SCALE has an rsync client built in, but if you want an rsync server then you will need to install a (standard) app for it. It will depend on whether you want to push from your other clients to an rsync server on TrueNAS, or have TrueNAS pull from your other clients, which would themselves be running an rsync server.
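For illustration, the server side of such an app boils down to a small rsyncd.conf module - a minimal sketch, with the module name and path assumed:

```
# /etc/rsyncd.conf - minimal sketch
uid = root
gid = root

[backups]
    path = /mnt/tank/backups    # assumed dataset path on TrueNAS
    read only = no              # allow clients to push into it
    comment = Backup target for the Debian servers
```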

If you are going to build any new servers, consider using ZFS as the file system on them, as ZFS replication is probably the best backup solution - but only if you have ZFS at both ends.
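Under the hood, that replication is snapshot-based zfs send/receive. A minimal sketch (pool, dataset and host names invented; TrueNAS wraps this in its Replication Tasks UI):

```
# Initial full send of a snapshot to the backup box:
zfs snapshot tank/data@weekly-1
zfs send tank/data@weekly-1 | ssh truenas zfs recv -u backup/data

# Later runs only send the delta between two snapshots:
zfs snapshot tank/data@weekly-2
zfs send -i @weekly-1 tank/data@weekly-2 | ssh truenas zfs recv -u backup/data
```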

An NFS server is available as standard with TrueNAS SCALE, so you have no worries there. It is just not the best solution for incremental backups.

I have no experience with PBS or BorgBackup, so I will have to leave it to others to help you with those.

1 Like

Well, I disabled Secure Boot and installed DEB12 - it worked.

Then I installed TrueNAS SCALE, version Dragonfish-24.04.2.3.

NVMe partitioned as: TrueNAS system 64 G, swap 16 G, and 158 G “data”.

  • sdg below is the USB installer stick

[screenshot: scale-nvme-part-1 - NVMe partition layout]

I wanted (naughty me…) to use some of the boot NVMe for an app pool.

I used one guide for the installer script edit,
and this one for the ZFS partitioning:
https://www.reddit.com/r/truenas/comments/lgf75w/scalehowto_split_ssd_during_installation/

Took me three tries to get it right :sunglasses:
The trick was, after the edit, to run the modified installer from the CLI with ./true…
The first two times I exited and ran the installer from the menu -
and that installer was the unmodified one…

Now I really need some guidance (I think).

I want to set up my six “spinners” (sda…sdf) in a RAIDZ2 pool,
and the apps on the NVMe pool (the one I exported with zpool export ssd-storage).
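My assumption is that the exported pool will simply show up for import again - from the shell something like the commands below, although the GUI's Storage > Import Pool does the same thing and is the supported route:

```
# List pools available for import, then import the NVMe app pool:
zpool import
zpool import ssd-storage
```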

I have also printed the Getting Started guide to PDF.

But any hints would be welcomed.

And this is why splitting the boot disk is not really recommended.
Two or three years from now, when your current boot disk fails, you’ll have long forgotten about the process.

1 Like

Been there … Done that :slight_smile:
I made an extensive guide, including shell commands + output.

Learned the hard way that “easy to remember” isn’t going to last even three months,
and that spending 15 minutes now making some documentation saves hours when you have to repeat/recreate it.

Besides… I think I’ll make a RescueZilla image of the boot disk.
RescueZilla is so easy to use - “backup/restore - CloneZilla in a neat GUI”.

1 Like

I made two pools:

1st pool: ssd-storage (the 152 G partition on the boot NVMe)

2nd pool: Sata-0-5
Here the system warned me that using different disk sizes in a vdev is not recommended - I have 5 x 4 TB plus one 8 TB.

Here’s a “df -h” of the pools

The system has already allocated some “db stuff” (the system dataset) on the ssd pool.
Does the system dataset get allocated on the first pool defined?

Seems like I “got lucky”… I think.
My goal, if possible, would be to spin down the “rust pool” when it is not active. And I think a prerequisite for that (spindown) is that the system dataset lives on another pool.

Correct??

You can safely ignore this.

Yes. But you can move it at any time, to any pool you want, from the GUI (System Settings > Advanced > Storage > System Dataset Pool).

1 Like

@etorix - Thank you for your time & patience :+1:

I noticed that my two “auto-created” datasets (same name as the pool)
have different properties.

ssd (flagged as “This dataset is used by the system”)

Sata

I did not set any of these properties; the system did.
Any explanation for why they’re different??

Would bad things happen if I disable atime on the ssd dataset?
I suppose disabling atime would save some SSD (NVMe) accesses.

What about compression ON (ssd) vs. LZ4 (rust)?
What would be best for an NVMe with the system dataset on it?
What does compression ON stand for?
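From the shell, checking and changing these would look roughly like this (a sketch using my two root datasets; note that on current OpenZFS, compression=on simply selects the default algorithm, which is lz4):

```
# Show what is currently set/inherited on both root datasets:
zfs get atime,compression ssd-storage Sata-0-5

# Disable atime on the SSD root dataset (children inherit it):
zfs set atime=off ssd-storage
```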

I measured the box’s shutdown/standby power to be approx. 10 W.
Since this will be a pure weekly backup box (powered on once a week to take backups), I went cheapskate to save another $40 in electricity.

I have now enabled “Action on AC restore: Always power on” in the BIOS, and connected the box to an APC PDU outlet.
Since I can remotely control the APC outlets via SNMP, it’s no issue to turn the box on/off from a bash script.
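A sketch of that script (PDU address and outlet number invented; the APC PowerNet MIB's sPDUOutletCtl takes 1 = on, 2 = off, 3 = reboot):

```
#!/bin/bash
# Power the NAS outlet on via SNMP (APC PowerNet MIB, sPDUOutletCtl):
PDU=192.168.1.50    # hypothetical PDU address
OUTLET=4            # hypothetical outlet number
snmpset -v1 -c private "$PDU" \
    .1.3.6.1.4.1.318.1.1.4.4.2.1.3.$OUTLET i 1    # i 1 = outletOn
```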

Approx. time from AC power-on (APC PDU) to:
IPMI ping answer: 60 s
TrueNAS (console) CLI login: 3 m 30 s

I’ll have to read up on the SuperMicro IPMI tools too, but the “Always power on” BIOS setting suits me fine here.

1 Like

Still waiting for Black Friday for new disks for this node, which has room for a maximum of 6 SATA disks.

The plan is to get 6+1 x Toshiba N300 8 TB disks, which would give around 32 TB of data.
7 x 8 TB is around $1410 - the +1 is the “spare in the drawer”.

But it hit me that I would not be able to expand with an extra vdev, as I’d be maxed out. I chose 8 TB disks because that was where I’d get the most TB for the $.

I could get the same 32TB with 4+1 x 16TB
5 x 16TB is around $1750

And 36TB with 4+1 x 18TB
5 x 18TB is around $1870

Right now I have maybe 6…8 TB of “backup/NAS” data:
3 TB on NFS, and 3…5 TB on PCs (including OS).

So maybe the 32 TB would be enough for a long time.
And I’ll have to remember that I will only have 16 TB on the “backup NAS” (6 x 4 TB) - the backup NAS would only have “select” files (datasets) replicated.

Am I going overboard here?…

This is for a two-person home setup, and I’d expect it will take me longer than the “rust lifetime” to get to 12 TB used.

The extra size is for snapshots/ransomware protection.

Please talk me out of it :blush:

PS:
Would there be any obvious disadvantage to using such large vdevs, besides resilvering time?

Or any advantage besides the expansion possibility, and a bit of saved power from two fewer HDDs… probably eaten up by the extra cost?

Cache size:
The 16 and 18 TB drives have 512 MB of cache, vs. 256 MB on the smaller disks.
Would that even matter on a backup-only NAS, where storing huge amounts of data is the primary task?

The 3 TB of NFS data contains many small files, such as GCC and Linux kernel sources.

I won’t. The best $/TB should be around 16-18 TB drives these days.
For the sake of good cooling, I would rather populate all 6 bays in the Node 304 - or maybe 4 bays in a symmetrical X-XX-X configuration. But you may not need to keep a cold spare, or at least not from the very start, if you’re confident you can get a new drive quickly should an issue arise. So if you can get bigger drives and/or more capacity from your initial $1400 budget, go for it.
Or look for a really good deal on lower-capacity drives.
In short: go over requirements if you can, but not over budget.

That’s the drive’s internal affairs. As long as they are not SMR.

RAIDZ2 is not efficient with these, unlike mirrors, but for a backup it should not matter. And source code should compress exceptionally well. :slight_smile:

1 Like

Not nice :roll_eyes:

Are you saying that putting two unpowered old dummy 3.5" drives in the 304 would be better for cooling? Never even thought of that.

Will consider that … Thnx

Seems like the 18 TB is the best deal, and not much more than the 16 TB.
Me and my “thinking out loud” :face_with_head_bandage:

Thnx :+1:

Your numbers will be much better if you forget this idea.

There is every possibility that you won’t need the spare drive, or if you do, it won’t be for 3-5 years. By that time the drive in the drawer might be dead, while a fresh new drive might be half the price and available in a day or two from Amazon.

As your drives get bigger and you have fewer of them, having a cold spare makes even less sense.

If you are concerned about time to replace, use RAIDZ2.

1 Like

Probably. My concern would be that with some empty slots, air would just flow through the big gaps and bypass the drives. X-XX-X has all drives next to a channel, so I think it might still work. But blanks would be better.

1 Like

I’m not sure that’s actually a problem with the 304. There are two fans at the front that blow directly on the HDDs, and there are fairly large gaps between them.

I.e. I’m not sure you need dummies for good cooling here - you certainly do in “suck”-style coolers :wink:

Your concern may be correct, or it might not; it’s easy to check, I guess.

1 Like

With RAIDZ expansion, the rules have changed completely. Before this, to increase your RAIDZ storage you had to add a whole extra vDev consisting of several drives, so you planned for the maximum amount of growth (over, say, 85 years) and bought it all at the start.

With expansion, you can plan for some moderate growth (e.g. for a year or two) and, if you leave slots free, add a disk once your RAIDZ starts to get full.

So if you need, say, 8 TB, I would first add 50% for growth, then add 25% to reflect a maximum 80% occupancy, then convert from TiB to TB (add 10%), i.e. 8 * 1.5 * 1.25 * 1.1 comes to 16.5 TB. Personally I would achieve that using (say) 4x 10 TB or 12 TB drives in a RAIDZ2 configuration, leaving a further 2 slots free for future growth, allowing me to double my total usable capacity. Given that I was already dedicating 2 drives to redundancy, I wouldn’t buy a spare drive for the drawer, because it would waste its warranty and I can get Amazon to deliver a new drive as required on a very short time-scale.
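That rule of thumb as a one-liner, with the numbers from the example above:

```
# 8 TiB of data * 1.5 growth * 1.25 for max 80% occupancy * 1.1 TiB->TB:
echo '8 * 1.5 * 1.25 * 1.1' | bc
# -> 16.50, i.e. shop for roughly 16.5 TB of usable capacity
```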

However, I do think you should consider how many copies of your data you plan to hold (i.e. how many generations of backup) and adjust the storage requirements accordingly. I would also consider how much it would be worth to keep the backups simple and avoid spending time “being selective” about which data gets backed up - and scale the requirements up further to allow for that.

P.S. Snapshots for ransomware protection don’t cost you space by themselves - it is only the changes to the files made after the snapshot that cost extra space. And TBH you want the pool to run out of space if ransomware starts to create new encrypted copies of all your files, because that will trigger warnings and draw attention to the sudden new copies. But snapshots are great for e.g. keeping the last 3 months’ worth of backups, so include this in your requirements calculations.
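You can see this for yourself on any dataset (name invented here): the USED column is the space unique to each snapshot, i.e. only the blocks changed since it was taken, so snapshots of an unchanging backup dataset stay near zero.

```
zfs list -t snapshot -o name,used,referenced tank/backups
```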

Uncle Fester’s Basic TrueNAS wiki has a good section on storage requirement calculations.

1 Like

Also, a backup is a separate copy. It’s not really a backup if it’s on the same pool as the other copy.

1 Like

A snapshot of something that is already a backup is still a backup.

But if you lose the disk, you lose all the backups, so it is not the same as keeping several backups in different locations or on different servers.

1 Like