[Poll] Home TrueNAS Systems

Currently 2, soon 3.

1 → Dev DF Machine (Soon to be repurposed to Backup EE Machine)
2 → Dev EE Machine
3 → (Soon) Prod EE Machine.

I have a fire safe for long-term protection

I have my main server, which does the heavy lifting.
The second does a simple daily replication as a backup.

The third is my long-term storage. Every six months, I power it up and do a full replication of the main server’s data. The third machine is located in a room with concrete board lining, like a fire-safe. It should withstand a 90-minute fire.

1 CORE 24/7
1 CORE backup 24/7
1 SCALE testing
1 XigmaNAS (yes XigmaNAS) migration still being discussed mentally


2 x TrueNAS SCALE 24.10 running as VMs on separate Proxmox VE 8 systems (the main one a gen 6 Core i7 with 20GB RAM, the other backup/development one a gen 3 Core i7 with 20GB RAM). A third TrueNAS SCALE 24.10 system running on dedicated hardware (gen 1 Core i7 with 16GB RAM) serves as an archive for storing all manner of crap going back years. I've had the hard drives for years (all Western Digital), so I figure that instead of just chucking the older (SATA II) drives in a drawer with random zeroes and ones on them, I can continue to use them, while they are still error-free, to store meaningful data: an archive of stuff that I probably won't need again, but it doesn't hurt to have a third copy.

Just for historical interest: previously my main system ran ESXi 8, which I've now abandoned for obvious reasons, and my second system ran oVirt, which I've also abandoned (again, for obvious reasons). So I'm a fairly recent convert to Proxmox, and I have to say TrueNAS SCALE runs well on it; I've had no issues with PCI passthrough.

I’ve been around since the era of Cyber Jock (for those who remember him back in the day :joy:), who would fly into an absolute rage at the prospect of running (what was then) FreeNAS as a virtual guest, but in all these years I have never encountered any issues running it virtualised on any of the aforementioned hypervisors. It just runs, and runs, and runs…

I have one CORE and one SCALE system at home with the former doing all the heavy lifting including jails and VMs. SCALE is a smaller all NVMe system with a handful of apps that are a pain or next to impossible to deploy on FreeBSD.

E.g. Nextcloud: FreeBSD jail, OnlyOffice: custom app on SCALE.

All ingress is handled by Caddy on OPNsense.

The third NAS runs CORE and is located at my company office, serving as the replication target for the other two.

OnlyOffice is in the ports tree…

Have you tried configuring it to a point that it actually runs?

Indeed!

You’ll see in the script there are commands I use to change certain variables inside the “local.json” file. You can use the same command for any “var/value” pair.

Right now it works out of the box with Nextcloud as long as it is behind an HTTPS proxy (Nextcloud won’t allow HTTP), and the JWT secret is also randomly generated and shown at the end.
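For anyone curious what those “local.json” edits amount to, here’s a minimal sketch of the idea in Python rather than the script’s own commands (the dotted key path and the example config location are illustrative assumptions, not the script’s actual values):

```python
import json
from pathlib import Path

def set_json_var(path, dotted_key, value):
    """Set a var/value pair inside a JSON config file like local.json.

    A dotted key such as "services.CoAuthoring.server.port" walks the
    nested objects, creating intermediate dicts if they are missing.
    """
    p = Path(path)
    data = json.loads(p.read_text())
    node = data
    keys = dotted_key.split(".")
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    node[keys[-1]] = value
    p.write_text(json.dumps(data, indent=2))

# Hypothetical usage (path and key are examples only; adapt to your install):
# set_json_var("/usr/local/etc/onlyoffice/documentserver/local.json",
#              "services.CoAuthoring.server.port", 8000)
```

The same helper works for any var/value pair, which is essentially what the script’s one-liners do in place.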


Actually it can do this. The replication run and a scrub both happen at night; the machine gets turned on at bedtime and turned off in the morning.

I call it 1 1/2. I have my production TrueNAS CORE machine, which you can all have when you pry it from my cold, dead hands, and a “test” NAS that I was using to kick the tires on SCALE. Currently I am also looking at XigmaNAS, so for the time being it is running that.

  1. sharing with family members
  2. sharing from multiple PCs just for me
  3. family archive
  4. my archive
  5. backup of my main PC
    (6 physical machines; not counting VMs, which are for testing only)

Cost prohibits us from increasing the capacity of the TrueNAS disks for archiving, so we have multiple units.

Those who wrote that they power on the second instance just for backup and then power it off after the backup is finished: when do you do scrubs? How do you know the backup was successful? How often do you test the backup data?


SMART testing too.

Manual monthly scrub and long SMART test, nothing particularly imaginative.
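For what it’s worth, a quick way to sanity-check that the monthly scrub came back clean is to parse the `zpool status` output. A rough sketch (the pool name in the commented example is made up):

```python
import re
import subprocess  # only needed for the real-system example below

def scrub_ok(status_text):
    """Return True if the last completed scrub in `zpool status` output
    repaired nothing and finished with 0 errors."""
    m = re.search(r"scrub repaired (\S+) in .* with (\d+) errors", status_text)
    if not m:
        return False  # no completed scrub line found
    repaired, errors = m.group(1), int(m.group(2))
    return repaired in ("0", "0B") and errors == 0

# On a real system with ZFS tools installed (illustrative only):
# out = subprocess.run(["zpool", "status", "tank"],
#                      capture_output=True, text=True).stdout
# print("scrub clean" if scrub_ok(out) else "check the pool!")
```

Wiring something like this into a cron job that emails on failure would cover the “how do you know it worked” question for scrubs, at least.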

I get alerted by email when replication fails; I think that's the default. If someone knows how to do the same when replication ends successfully, I'd be glad :grin:

Honestly, I don't have that much data changing, except for one dataset, so I always check that one and occasionally the others (on rotation).
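One home-grown way to get a “success” signal, rather than a built-in TrueNAS feature: after the replication window, compare the newest snapshot on the source with the newest on the target, and mail yourself if they match. A sketch under those assumptions (dataset and host names are invented; ssh is assumed to use key-based auth):

```python
import subprocess

def latest_snapshot(dataset, host=None):
    """Return the newest snapshot suffix for a dataset via `zfs list`.
    If `host` is given, the command is run over ssh."""
    cmd = ["zfs", "list", "-t", "snapshot", "-o", "name",
           "-s", "creation", "-H", "-r", dataset]
    if host:
        cmd = ["ssh", host] + cmd
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    names = out.split()
    return names[-1].split("@")[-1] if names else None

def replication_ok(src_latest, dst_latest):
    """Treat a replication pass as successful if the newest snapshot on
    the target matches the newest snapshot on the source."""
    return src_latest is not None and src_latest == dst_latest

# Hypothetical usage on a real system:
# src = latest_snapshot("tank/data")
# dst = latest_snapshot("backup/data", host="backup-nas")
# if replication_ok(src, dst):
#     print("replication ok")  # or send yourself an email here
```

It’s crude (it only checks snapshot names, not contents), but it answers the “did last night’s run land?” question without waiting for a failure alert.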

I have 3+, but not all in my home. I have my primary NAS here with me, an off site NAS that’s sitting in a friend’s house, and then depending on what I’m working on for my blog (or just for fun), I have 1-3 more machines that I’m using to test things out or for tasks that I don’t want to occupy the other two NAS servers with.

That looks great - thanks for all the work. I gave up when I read “now setup postgres”. What the * does it need a database for? It’s supposed to be just the “word and excel engine” for Nextcloud …

Maybe I’ll give it a try.

I run 2 instances of SCALE on hardware. One is in production and the other is my playground.

The database is under the “CoAuthoring” heading. Perhaps it’s needed for collaboration.

I manage three: My main NAS is at home, my parents have one that I manage (that also serves as a replication target for some of my data), and as I’m away from home for the next several months, I have one with me to serve as a backup target and a local media server. The first and last run SCALE; my parents’ NAS is currently on CORE.