New build opinions?

Hello Gurus :slight_smile:

I’m in the hardware procurement phase of building a TrueNAS server, and could use a few guidelines when it comes to storage and performance.

This box will be running SMB shares, mainly for Windows clients. The data consists of large seismic drawings, with files ranging from 1 MB to 10 GB.
Network-wise I’m thinking 4x 10GBASE-T with LACP bundling, using 2x dual-port NICs for redundancy.

The server is a 2U SMI 2029U-E1CR4 with 24x internal SSD slots and PCIe 3.0 on the motherboard. I’m also attaching a 3U SMI JBOD (CSE-836BE1C) with 12 Gb/s backplane bandwidth and 16x 3.5" slots, connected to an LSI 9206-16e HBA in IT mode.
The server has 768 GB or 1 TB of RAM (can’t remember right now).

The end-user requirement is a 10-15 TB high-speed production volume, used for both reads and writes. This is the volume
where they handle the seismic data, and it needs to be as fast as possible.
They also require a 100 TB “archive” volume for historic data, which does not need the same speed as the production volume.

For the production volume I have 12x Samsung PM883 960 GB drives; for the archive I’ve purchased 6x 16 TB Seagate Exos 3.5" spinning drives. I’m planning 1-2 hot spares for both volumes.
I do not expect these volumes to grow to more than 30 TB for the SSD pool and 200-300 TB for the spinning pool.
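For reference, one common way to lay out these two pools is striped mirrors for the SSD pool (best IOPS and fastest resilvers, at the cost of half the raw capacity) and a single RAIDZ2 vdev for the archive. This is only a hedged sketch; the pool names and `da*` device paths below are placeholders, not real device names:

```shell
# Production pool: 12x PM883 as 6 mirrored pairs (striped mirrors).
# Yields roughly half the raw capacity, but the best random I/O.
zpool create prod \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7 \
  mirror da8 da9 \
  mirror da10 da11

# Archive pool: 6x 16 TB Exos as one RAIDZ2 vdev (capacity of ~4 data
# drives), with an extra drive set aside as a hot spare.
zpool create archive raidz2 da12 da13 da14 da15 da16 da17 \
  spare da18
```

Other layouts (e.g. two 6-wide RAIDZ vdevs for the SSDs) trade IOPS for usable space, so the right choice depends on how firm the 10-15 TB requirement is.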

Redundancy is not critical, as they can handle a bit of downtime on this box. The end users want full 10 Gbit speed for 1-2 clients when working.

I have the opportunity to add a 4x NVMe ASRock internal adapter, or even two of them if there is a performance gain.
My question to the gurus is about special vdevs. If needed, I plan to use the 4x NVMe ASRock card for special vdevs, and I’m aware of the lack of data-loss protection in this solution.
But the server will have its own UPS, shutting down TrueNAS when 10% battery remains.

How would an expert set up special vdevs in a scenario like this, with maximum performance for the SSD volume in mind?

How would you set up these two pools? I will add NVMe as required to get the best possible performance.

Metadata needed?
L2 Cache?
SLOG/ZIL requirements?

I’ve also read that TrueNAS SCALE only uses 50% of RAM for cache — should I go with CORE instead? As mentioned, this box will only serve SMB; no block storage/iSCSI or NFS.

All pointers are much appreciated!
Thanks for your time :slight_smile:


This is no longer the case since the release of Dragonfish.


What do you mean here?
A bifurcating adapter is fine. An actual “controller” would be very suspicious.

A UPS is not a backup, and is no substitute for redundancy.
An SSD pool does not need a special vdev. The HDD pool could benefit from one, but then rather as a 3-way or 4-way mirror. You do not want to lose a few hundred terabytes of data because one special SSD suddenly failed, do you?
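Concretely, attaching a special vdev as a 3-way mirror to an existing HDD pool would look something like this. The pool name and `nvd*` device names are placeholders:

```shell
# Add a 3-way mirrored special vdev (metadata allocation class)
# to the existing HDD pool.
zpool add archive special mirror nvd0 nvd1 nvd2

# Optionally route small blocks (<=64K here) to the special vdev
# as well, not just metadata. This only affects newly written data.
zfs set special_small_blocks=64K archive
```

Note that a special vdev cannot be removed from a RAIDZ pool once added, which is another reason to size its redundancy at least as generously as the data vdevs.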

I see no requirement for a SLOG (no sync writes), and with 768 GB of RAM it is highly dubious that an L2ARC would be useful.