Greetings community.
Another noob looking for expert insight - my sincerest apologies, I’m sure this type of thread has been done to death a million times over.
A while back I yolo'ed a TN Scale setup and somehow lost two disks in a vdev.
That was a learning experience.
Since then I've had other things to do, but I have tried to research ZFS (on TN Scale) some more and have purchased a lot of goodies with the intent of rebuilding bigger than last time.
I have some questions I'll post at the bottom, but here is the plan/setup (build still TBD, these are just the parts I have):
Server:
Motherboard: ASUS Z270-A
CPU: Intel i7-7700
RAM: 2x CORSAIR Vengeance DDR4 16GB 2400MHz CL16
Fans: 2x 120mm, 2x 80mm, 4x 40mm Noctua fans
M.2 NVMe: Samsung 970 Pro
2.5" SSD: Innodisk 2TB 3TG6-P
2.5" bay: 2x Icy Dock ExpressCage MB038SP-B (16 bays total)
Case: Silverstone RM42-502
PSU: Corsair VS550
HBAi: LSI 9300-16i (IT mode)
HBAe: LSI 9300-8E
Rack: Caymon SPR642 (600mm deep, so a network rack)
Because the rack is shallow, I opted to try my hand at adding a JBOD/DAS to the server.
JBOD:
Controller: CSE-PTJBOD-CB2 Power board for JBOD
Expander: Dell SAS2 6Gbps JBOD expander (24 internal / 12 external lanes)
PSU: CORSAIR RM Series RM650
Case: EDENSE ED424H48
I also have:
22x ADATA ISSS316 2.5" SATA SSDs
6x WD Red Plus WD120EFBX
The intent:
Store all my Linux distros and personal data (with the intent to figure out an off-site backup of the personal data) and act as an off-site backup target for friends and family.
Possibly also store computer snapshots/backups, as well as hosting various services like HA, UniFi, Pi-hole and Tailscale.
The plan (so far):
Buy 8 more WD disks - I'd like to get 2 full vdevs running and have 2 drives to spare. Also buy 2 extra 16GB sticks of memory (the board supports 64GB max).
Get the JBOD built, get Proxmox up and running again (something about 8.2 now having updates? maybe), and get TrueNAS up and running again if it isn't already on the original install.
Then start over completely on the ZFS config, shares, IAM, apps etc.
Since the JBOD has 6 backplanes with 4 SATA slots each, I was thinking of spreading each vdev across the backplanes so that I can lose 2 entire backplanes and still be within Z2. Sounds neat, not sure if it makes sense though.
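(A quick sketch of how I'd check the drive-to-backplane mapping from the OS before building anything - assuming the expander shows up via the usual by-path links; these are just generic commands, nothing specific to my hardware:)

    # show which physical path (HBA -> expander -> slot) each disk sits behind
    ls -l /dev/disk/by-path/
    # stable names plus model/serial, handy for picking one disk per backplane
    lsblk -o NAME,MODEL,SERIAL,SIZE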
ZFS plan
12x 12TB WDs in 2 RAID-Z2 vdevs - 2x 12TB as spares (or future expansion)
4x 1TB ADATA SSDs for L2ARC (because I have an abundance of them)
SLOG: Unsure - thinking about buying 2x Kingston DC600M 1920GB SSDs for a mirror because they have PLP, and I reckon the ADATAs aren't good enough?
Metadata: Unsure again - maybe more DC600Ms? Maybe I can add these down the line if I need performance?
6x 1TB ADATA in RAID-Z2 for some faster storage, maybe for VMs (rough zpool sketch of the whole layout just below)
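To make that concrete, here's roughly what I think the layout translates to at the ZFS level (TrueNAS would build this through the UI of course, and the pool/device names are just placeholders rather than real by-id names):

    # HDD pool: 2x 6-wide RAID-Z2, 2 hot spares, 4 SSDs as L2ARC
    zpool create tank \
      raidz2 wd01 wd02 wd03 wd04 wd05 wd06 \
      raidz2 wd07 wd08 wd09 wd10 wd11 wd12 \
      spare  wd13 wd14 \
      cache  adata01 adata02 adata03 adata04

    # separate all-SSD pool for VMs: 6x 1TB in a single RAID-Z2
    zpool create fast raidz2 adata05 adata06 adata07 adata08 adata09 adata10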
Questions:
- First of all I'd be very excited for any input or recommendations, bearing in mind that besides a few extra drives I have most of the hardware on hand in a big pile - might share a picture.
- Slightly worried that my 6x WDs have sequential serial numbers from the same vendor/batch, and whether I should distribute those 6 drives across the vdevs (Z2) to minimize risk (the check I'd run is sketched after this list).
- I'm worried that if I add 1 vdev at a time the data won't be equally distributed across all vdevs over time, and that this will come back to haunt me somehow. I've looked at redistribution scripts but still feel wary.
- I'm unsure about adding/expanding L2ARC, SLOG/ZIL and metadata vdevs over time - whether that needs to be designed for today, or if I can just add them later (see the zpool add sketch further down).
- Uncertain about using these ADATA drives for anything more than L2ARC and the fast storage array.
- Uncertain about the need for or benefit of a metadata special vdev, and (assuming the 3% rule) whether I need to plan its size today or can grow into it easily. At 24 disks in 4 vdevs of Z2 that's 192TB usable long term, meaning 5.76TB, which is a lot, especially if it needs to run in a mirror too. Maybe I could do 2x 2TB now, add an additional 2x 2TB later, and stripe the mirrors for 4TB?
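(For the serial number worry, this is roughly what I'd run to compare models and serials before deciding how to spread the drives - the /dev/sd? glob is just a rough sketch:)

    # print model and serial for every drive so sequential batches stand out
    for d in /dev/sd?; do
      echo "== $d =="
      smartctl -i "$d" | grep -E 'Device Model|Model Number|Serial Number'
    done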
Overall I think I just need some assurance or guidance so that I’m not painting myself into a corner - now is when I have time to change direction or swap strategy.
What would you do with my intent and available hardware?
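For the "can I add it later" questions, my understanding is that cache, log and special vdevs can all be attached to an existing pool after the fact - rough sketch below with placeholder names. The caveats as I understand them: a special vdev only catches metadata/small blocks written after it's added (existing data isn't migrated until rewritten), and once the pool has raidz vdevs the special vdev can't be removed again.

    # L2ARC and SLOG can be bolted on later:
    zpool add tank cache adata11 adata12
    zpool add tank log mirror dc600m1 dc600m2

    # a special (metadata) vdev can also be added later, as a mirror:
    zpool add tank special mirror dc600m3 dc600m4

    # rough sizing for the 3% rule at full build-out:
    # 4 vdevs x (6-2) x 12TB = 192TB usable, 3% of that = ~5.76TB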
Long-long term additions I've considered:
10GbE (but I have a 4x 1GbE card for now; I could look into teaming if I need more than 1Gbps)
A used Quadro or similar to transcode with
Adding 12 more HDDs
Swapping the motherboard/CPU to something more reliable with ECC support.
Swapping ye ol' PSU for another Corsair RM650.