Building my First TrueNAS box

Hello Experts,

I am new to this forum and new to TrueNAS, and I'm seeking your advice on setting up my first TrueNAS storage server. I have a pretty powerful HPE DL380 Gen10 server with the following specification. By the way, this is for an office infrastructure where 100+ users will be accessing the server.

Dual Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz, for a total of 40 threads
192 GB Advanced ECC RAM
HPE Smart Array P816i-a SR Gen10 with 24x 1.92 TB SATA SSD
HPE NS204i-p Gen10+ Boot Controller with 2x 480 GB NVMe SSD
HPE Eth 10/25Gb 2p 631FLR-SFP28 Adapter with dual 25Gb NICs
HPE Ethernet 1Gb 4-port 331i Adapter with 4x 1Gb NICs

I am planning to use the 1.92 TB SSDs for data without hardware RAID, and the NVMe SSDs in hardware RAID 1 for the OS (boot volume). I'm thinking of using the 25Gb ports for the data LAN and the 1Gb Ethernet ports for management access. Most of the use case is NFS-based storage for file servers, and some portion will be used as iSCSI volumes for ESXi datastores.

I would appreciate your valuable suggestions on the following points:

  1. Which distro is better suited for me: SCALE or CORE?
  2. Which RAID levels would be best for this server? Also, RAIDZ or dRAID?
  3. How many disks per vdev, and how many vdevs per pool?
  4. Since the server has plenty of RAM, should I set aside disks for log, cache, dedup, or metadata?

On a side note, what would be the effect of using the RAID card for redundancy and only using the NAS features of TrueNAS?

Thank you all,

NFS bulk storage and iSCSI call for different pool layouts. Bulk storage works well with RAID-Zx, while iSCSI tends to perform noticeably better with multiple mirrored vdevs.
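To make the trade-off concrete, here is a back-of-the-envelope capacity comparison for the 24 SSDs in this server. The two layouts are illustrative examples, not a recommendation, and the numbers ignore ZFS overhead, slop space, and the usual advice to keep block-storage pools well under full:

```python
# Rough usable-capacity comparison for 24x 1.92 TB SSDs.
# Hypothetical layouts; ignores ZFS metadata overhead and fill limits.
DISKS, SIZE_TB = 24, 1.92

# 12x 2-way mirrors: best IOPS for iSCSI, but 50% space efficiency.
mirrors_usable = (DISKS // 2) * SIZE_TB

# 4x 6-wide RAIDZ2: good sequential throughput for NFS bulk storage;
# each vdev loses 2 of its 6 disks to parity.
raidz2_usable = 4 * (6 - 2) * SIZE_TB

print(f"mirrors: {mirrors_usable:.2f} TB usable")
print(f"raidz2 : {raidz2_usable:.2f} TB usable")
```

The mirror layout gives up roughly a quarter of the RAIDZ2 layout's usable space in exchange for far more random IOPS, which is why mixed NFS + iSCSI deployments often end up with two separate pools.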

The only real benefit of dRAID is integrated hot spares. dRAID actually has problems with small files or small writes, due to its need for full-stripe-width block allocation. (If I understand dRAID well enough…)

Then you might as well not use TrueNAS at all.

One of TrueNAS's key features is data integrity: ZFS's RAID (mirroring, RAID-Zx, or dRAID) is generally more complete and better implemented than what hardware RAID cards offer.

Hardware RAID controller cards are strongly discouraged because they can corrupt data in a ZFS pool. ZFS wants to perform its writes in order, regardless of whether elevator seeking is available. Even using hardware RAID controllers with single-disk LUNs or JBOD mode is known to be unreliable.

Using hardware RAID-1 just for the boot mirror is more debatable. This is because PC hardware tends not to have good support for boot order and fail-over. (Unlike OpenBoot, which allows something like "boot-device=disk1 disk2 net".) But if you do use hardware RAID for the boot devices, it is generally on a dedicated card. See above.

Some have suggested using 3 disks if you truly need boot-device reliability: 2 in a hardware RAID-1, and the 3rd as a ZFS mirror of the hardware RAID-1. In theory, that gets you the best of both worlds. But this is a complex topic and can be controversial.
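The three-disk idea above can be sketched with standard ZFS commands. This is only an outline under assumptions: the device names are hypothetical, the hardware RAID-1 LUN is assumed to appear to the OS as a single disk, and the boot pool name varies by TrueNAS version (recent releases use `boot-pool`):

```shell
# Hypothetical sketch: /dev/sda is the hardware RAID-1 LUN already
# holding the boot pool; /dev/sdb is the directly attached 3rd disk.
zpool attach boot-pool /dev/sda /dev/sdb   # add sdb as a ZFS mirror leg
zpool status boot-pool                     # confirm the resilver finishes
```

Note that the third disk still needs a bootloader installed on it to actually boot from if the RAID-1 pair dies, which is part of why this setup is considered fiddly.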


Thanks Arwen for the reply.

As I mentioned in the first post, this server is already available to me; in fact, it is being repurposed as storage. I too read somewhere that, for TrueNAS, disks should not sit behind a hardware RAID controller because the RAID features are present within the OS itself. But in my case, I would use the controller just to connect the disks to the motherboard. Will that be a problem, as you mentioned above?

A RAID controller by itself is not the problem, if it can be configured to turn off its RAID capabilities and simply present the disks to TrueNAS as JBOD. Given the instability many RAID controllers show under sustained workloads, driver issues, etc., the community here generally favors using known-good controllers from companies like LSI that have been flashed to IT mode.

Many resellers advertise them used on eBay for relatively little money, the reputable ones will even advertise that the controller has been flashed to the latest IT mode firmware to spare you the work.
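For reference, an IT-mode crossflash on an older LSI SAS card typically looks roughly like the outline below. This is a sketch only: the tool and firmware file names are examples for the SAS2008/2308 family (newer SAS3xxx chips use `sas3flash`), and you should follow a guide specific to your exact card before touching anything:

```shell
# Hypothetical outline; run from an EFI shell or DOS boot disk.
# Firmware/BIOS filenames below are illustrative, not real downloads.
sas2flash -listall            # confirm the controller is detected
sas2flash -o -e 6             # erase the existing (IR/RAID) firmware
sas2flash -o -f 2118it.bin    # flash the IT-mode firmware image
sas2flash -o -b mptsas2.rom   # optional: flash the boot BIOS ROM
```

Buying a card already flashed, as mentioned above, skips all of this.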

Many private-labeled RAID controllers sold under Dell and other names are based on LSI chipsets and can be flashed to LSI IT mode. I have never had to flash a RAID card or HBA myself, so I'm afraid I can't help you there.

All that said, you have a lot of users to support, so careful planning should go into setting up your NAS: pool layout, the number of vdevs, a potential sVDEV for small files / metadata, and so on. There is a lot to consider.


Thank you @Constantin for your reply. While reading this forum I found the incident below, which is making me hesitant to use the current server as-is for TrueNAS. There, too, the user was using the same RAID controller that is currently in my server.

Now I am confused about what to do with this server. Is it advisable to use the HW RAID controller for data redundancy and use TrueNAS only for its NAS features? Or should I drop the idea of TrueNAS completely and go for something else?

Thank you.

Using hardware RAID and then presenting one "disk" to TrueNAS for the NAS functions is a disaster waiting to happen. ZFS needs direct access to the disks, and you cannot turn off ZFS in TrueNAS. So your pool will most probably one day just show offline, with little chance of recovery.


Ohh… so I should look for an HBA or give up on TrueNAS, I guess…

Basically, yes.


You should be able to get a card that fits that server; they are fairly generic cards. Depending on whether your server takes a card in a direct PCIe slot or via an internal port, you can usually find both types on eBay.

And of course, the mandatory “make sure you also have backup systems in place for any data stored on TrueNAS that is important!”
