New server - paralysis of options?

I have two TrueNAS Core boxes in my office, both homemade. They are used exclusively as file servers; I run very little on them besides SMB. They store a lot of photos taken by a pro photographer, plus backup images from all the other computers in the office.

Server
6 - Western Digital Red 8TB drives in RAIDZ2
1 DOM for boot

Backup Server
6 - Seagate Exos X18 16TB drives in RAIDZ2
Mirrored DOMs for boot

When I originally built the first Server, it had 4TB drives (and ran FreeNAS), and the Backup had 8TB drives. I sync the backup every night and keep hourly, daily, and weekly snapshots. I had plenty of space when I started, but after a couple of years I was always over 80% pool usage, so I resilvered my backup onto 16TB drives and then repurposed the 8TB drives into the main Server.

I am running out of space again. I can only keep about a week of snapshots on the main Server before I hit the 80% limit.

I can keep 5 months of snapshots on the backup before I hit the 80% limit.

I have six new 16TB drives here and soon plan on resilvering them into the main Server. That solves my two-week snapshot problem, but I would really like to be able to keep a year’s worth of snapshots on my backup server.

I am thinking it has come time to build a bigger, better backup server. I don’t think resilvering the backup server with bigger drives will really gain enough space to keep me going at this rate with a 6-drive RAIDZ2. The Backup Server case has slots for two more hot-swap drives, but the motherboard only has 8 SATA ports, 6 used for the pool and 2 for the DOMs, so building a new pool there doesn’t make sense to me either.
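
For what it’s worth, here is the back-of-the-envelope math behind that hunch, as a small Python sketch. It ignores ZFS metadata/padding overhead and TB-vs-TiB differences, and the 24TB drive size is just a hypothetical “bigger drive,” so treat the figures as rough planning numbers only:

```python
def usable_tb(drives: int, drive_tb: float, parity: int, fill: float = 0.8) -> float:
    """Approximate usable space of a single RAIDZ vdev, capped at the 80% guideline.

    Ignores ZFS metadata, padding, and TB-vs-TiB differences, so the result
    is only a rough planning figure.
    """
    return (drives - parity) * drive_tb * fill

# Current backup pool: 6 x 16TB in RAIDZ2
print(usable_tb(6, 16, parity=2))        # ~51 TB before hitting 80%

# Resilvering the same 6-wide RAIDZ2 onto hypothetical 24TB drives
print(usable_tb(6, 24, parity=2))        # ~77 TB -- a modest gain

# Adding a second 6-wide RAIDZ2 vdev of 16TB drives instead
print(usable_tb(6, 16, parity=2) * 2)    # ~102 TB -- double the space
```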

Both of these systems were built with Supermicro boards, Xeon CPUs, and ECC memory. I used Silverstone HTPC cases with iStarUSA hot-swap racks mounted in the front. This worked well enough.

I am thinking that I need to jump up to a case that will hold at least 10 drives in RAIDZ2. I have looked at the iStarUSA D410-DE12BK 4U and a Rosewill 4U rackmount case. They are a third or less of the price of a Supermicro SuperChassis.

I am used to building “office computers” or “Photoshop workstations” with those kinds of components, but I am pretty ignorant of “server components.”

To run a system with this many drives, I won’t be able to run them all off the motherboard’s SATA ports like I do now.

I have some parts around here for another backup server, but somehow I need to add more SATA ports. I think the mobo is a Supermicro MBD-X11SCA-W-O, which only has 8 SATA ports.

I saw someone in another thread mention the Supermicro X10SDV-2C-7TP4F board and say it supported 22 SATA ports. Looking at the specs, it mentions 4 SATA and 16 SAS. Will a SATA drive plug into a SAS port?

I could just order an iX Mini R and plug the drives in.

Just posting this here in hopes that someone will help me sort out the myriad of options.

Thanks, -Kirk

You can’t just add an HBA like an LSI to get more SATA ports?

…what he said above. And get one of those 24-drive, front-load cases …

…and call it a day.

Short answer: yes.

Longer answer: SAS (Serial Attached SCSI) was specifically designed to be backward compatible with SATA. If you plug a SATA drive into a SAS disk port, the SAS controller automatically treats that port as a SATA port.

Further, a SAS expander backplane must also support SATA disks, using the SATA protocol tunneled over SAS (STP). But it basically amounts to the same thing as above.

SAS expanders are chips that turn a few host-side ports into many disk-side ports. So 4 SAS host ports might fan out to 12 disk-side ports, all of which would support SAS or SATA disks.

I agree that adding an HBA is a good idea.

The backup server, is it located in a different building? I’m thinking about the safety of your data.

I would suggest it is time to add another 6-disk vdev to each of your servers. Or even switch to a pair of 8-wide vdevs!

This means you would want to move the motherboard into a bigger case.

Perhaps a 24-bay case.

You would want to add an LSI SAS HBA.

Here’s a link to the build report of the 24-bay system I built back in 2017 :wink:

(Still going strong)

Right now, the backup server is in the same building. Back in June, we set up a Ubiquiti wireless bridge to another building 1/4 mile away. I had planned on building a third server and placing it in the remote building.

Help me work through the logic behind having more than one vdev in a zpool vs. one big vdev.

Wouldn’t it be safer in a 12-drive zpool to have one vdev in RAIDZ3?

Wireless is not very reliable. But if you are doing small incremental backups/snapshots, it might just work.
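
If it helps, after the initial full sync the nightly transfer is only the blocks that changed between two snapshots. Here is a minimal sketch of what the replication boils down to (Python, with hypothetical pool/dataset/host/snapshot names; TrueNAS’s built-in replication tasks do this for you, so this is just to show why the overnight transfer stays small):

```python
import subprocess

# Hypothetical names -- substitute the real pool/dataset/host/snapshots.
DATASET = "tank/photos"
REMOTE = "backup-host"                 # the box across the bridge or parking lot
PREV = f"{DATASET}@nightly-1"          # snapshot the backup already has
CURR = f"{DATASET}@nightly-2"          # tonight's snapshot

# "zfs send -i" streams only the blocks that changed between the two
# snapshots; "zfs recv -F" applies them on the remote pool.
send = subprocess.Popen(["zfs", "send", "-i", PREV, CURR],
                        stdout=subprocess.PIPE)
recv = subprocess.Popen(["ssh", REMOTE, "zfs", "recv", "-F", DATASET],
                        stdin=send.stdout)
send.stdout.close()   # so zfs send sees a broken pipe if ssh exits early
recv.wait()
send.wait()
```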

You are probably right. I figured if I did that, I would get it all in sync in the main office first, then move it. After that it would only be snapshots to transfer overnight.

I like the remote location because it is on a completely different electrical system, and I have a security building there to put it in. The other option is another building across the parking lot that has a wired connection to the main office. Wired would mean faster and more reliable transfers.

The wired connection is likely the best solution. And as for power, I’m certain you are using a good UPS.

Would this be a good option?

Wider vdevs are slower; more vdevs means more IOPS.

And you can add another vdev without destroying the pool.

But yes, you’d have more redundancy with a 12-wide Z3; it does mean starting from scratch and restoring the data to it, though.
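
To put rough numbers on the trade-off (a quick sketch assuming 16TB drives; it ignores ZFS overhead, and the IOPS column is just the usual rule of thumb that a RAIDZ vdev performs like roughly one drive for random I/O):

```python
def layout(vdevs: int, width: int, parity: int, drive_tb: float = 16) -> dict:
    """Rough usable capacity and relative random-I/O performance of a RAIDZ layout."""
    return {
        "total_drives": vdevs * width,
        "usable_tb": vdevs * (width - parity) * drive_tb,  # before the 80% guideline
        "relative_iops": vdevs,            # each vdev adds ~one drive's worth of IOPS
        "failures_tolerated": parity,      # per vdev
    }

print(layout(2, 6, 2))   # 2 x 6-wide RAIDZ2: 12 drives, ~128 TB, 2x IOPS, any 2 per vdev
print(layout(1, 12, 3))  # 1 x 12-wide RAIDZ3: 12 drives, ~144 TB, 1x IOPS, any 3
```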

OK, that makes sense. I wasn’t quite grasping the purpose of multiple vdevs when using all the same-size drives.

What is a general rule of thumb for how wide a vdev should be before slowdown becomes an issue?

I guess I didn’t realize until today that vdevs can be added to an existing zpool. Or did I get that wrong?

If that is the case, there is another option for growing an existing zpool besides how I have been doing it, resilvering in larger drives one at a time.

Yes, you can add additional sets of drives as additional vdevs, and with Electric Eel (24.10) you can even add additional members to an existing RAIDZ vdev, but there are some significant caveats.
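
For reference, the two operations look roughly like this (a hedged sketch with hypothetical pool and disk names; the TrueNAS UI wraps both, so you would not normally run these by hand). The big RAIDZ-expansion caveat is that blocks written before the expansion keep their old data-to-parity ratio until they are rewritten, so the immediate space gain is smaller than the raw math suggests:

```python
import subprocess

POOL = "tank"                                              # hypothetical pool name
NEW_DISKS = ["da6", "da7", "da8", "da9", "da10", "da11"]   # hypothetical device names

def add_vdev() -> None:
    """Option 1: add a whole new 6-wide RAIDZ2 vdev (works on any ZFS version).
    Capacity and IOPS grow immediately, at the cost of two more parity drives."""
    subprocess.run(["zpool", "add", POOL, "raidz2", *NEW_DISKS], check=True)

def widen_raidz() -> None:
    """Option 2: widen an existing RAIDZ vdev by one disk (RAIDZ expansion,
    the Electric Eel / 24.10 feature mentioned above). Old blocks keep their
    pre-expansion data-to-parity ratio until rewritten."""
    subprocess.run(["zpool", "attach", POOL, "raidz2-0", NEW_DISKS[0]], check=True)
```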

With a clear line of sight and proper PtP equipment, wireless can be pretty darn reliable. Observe the niceties re: Fresnel zones, etc., and transmission can be close to gigabit speeds at that distance.

As for the servers, I agree that adding vdevs makes more sense than going wider. The main benefit of swapping lower-capacity drives for larger ones is lower power consumption.

Plenty of good rackmount cases from Supermicro are available on eBay for next to nothing.

You could also think about adding a disk shelf/chassis - it’s a case with many disk slots but no motherboard/CPU. On the server itself you would add a SAS HBA with external ports that connect to the disk shelf.
With this configuration you can easily connect the disk pools to another server if there is a problem with components or you need to upgrade the server.

I see these generic 24-bay SAS server chassis on eBay. Are they any good?

I also saw this used Supermicro barebones server.

It has a system board, but no processor, memory, or drives. It comes with 4 power supplies. I am confused about how the power supplies work on a system like this. Are they redundant, or are they meant to be hooked up to separate components?

I’ve got one of those and I liked it.

As for the other one…

…I am not too sure the backplane would be friendly? No clue, never used one of those.
