TrueNAS configuration (given 6 NVMe slots)

So, I bought myself a Bee-link ME Mini

As read in this post, I need to make sure that the SSDs (together with the CPU, NIC, …) do not exceed 45 W of power. I trust there is a reason Bee-link teamed up with Crucial, and in some tests I read that the P3 has an average draw of 1.1 W with a peak of 5.2 W. Using 6 of these should still keep me within this power limit.
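As a sanity check on that budget, a rough back-of-the-envelope calculation (the per-drive figures are the quoted P3 numbers, not something I have measured myself):

```python
# Rough power-budget check against the ME Mini's reported 45 W SSD budget.
# Per-drive numbers are the Crucial P3 figures quoted above (assumptions).

POWER_BUDGET_W = 45.0
N_DRIVES = 6
AVG_W_PER_DRIVE = 1.1
PEAK_W_PER_DRIVE = 5.2

avg_total = N_DRIVES * AVG_W_PER_DRIVE    # ~6.6 W
peak_total = N_DRIVES * PEAK_W_PER_DRIVE  # ~31.2 W

print(f"average draw: {avg_total:.1f} W, peak draw: {peak_total:.1f} W")
print(f"headroom at peak: {POWER_BUDGET_W - peak_total:.1f} W")
```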

This brings me to my question: how to configure TrueNAS? Below are my considerations. I would love to hear your thoughts on this and/or your configuration.

| vDev type | Usage | Configuration | Considerations |
| --- | --- | --- | --- |
| CACHE | Additional cache behind the ARC for frequently accessed files. The ARC sits in RAM, so it is much faster. No need for redundancy: upon failure/swapping there is no loss of data. | None | As this vDev would have the same read/write speed as the other drives, there will be little to no performance boost, so I prefer extra storage in the DATA vDev. Also, the Bee-link comes with 12 GB of RAM and I read that 1 GB per TB stored is optimal. So the optimum for the ARC is 8 GB, leaving 4 GB for TrueNAS itself and for read/write operations. |
| DATA | Where the actual data is stored. Redundancy & resilience are key. | 6x Crucial P3 2 TB in RAIDZ2 | Balance between redundancy & resilience while maintaining ~67% storage efficiency, with a maximum usable capacity of 8 TB. |
| DEDUP | Stores the deduplication table that tracks data stored in multiple datasets/folders/…, so identical blocks are stored only once in the DATA vDev. | None | Mainly, I am storing personal data with some log files. Personal data is already deduplicated. |
| Hot spare | Disk TrueNAS will use for resilvering when a disk in the DATA vDev is no longer reachable. (NB: it is not activated by bad sectors, etc.) | None | I choose to keep an additional disk in the box, so I swap in this 'cold' disk when a scrub or S.M.A.R.T. test indicates errors. |
| LOG | Stores the ZIL for sync writes. Transaction groups are flushed at most every 5 seconds, so given an input of (2x 2.5 Gbps) ~600 MB/s, a maximum of ~3 GB is needed on these drives. Should be mirrored for redundancy. | None | The Crucial P3 has a write speed of 3000 MB/s and I use 6 of them; the NICs are the bottleneck here. Also, it seems a waste to buy the smallest NVMe SSD (250 GB) twice when only ~3 GB of it will be used. |
| METADATA | Stores the filenames, permissions and timestamps, so it can be (a lot) smaller than the DATA vDev. | None | As this vDev would have the same read/write speed as the other drives, there will be little to no performance boost. Risky to me, as without this vDev the whole pool becomes unusable. |
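To make the sizing arithmetic behind the table explicit, here is a rough sketch (the 1 GB of ARC per TB stored and the 5-second flush window are rules of thumb from the forums, not hard limits):

```python
# Back-of-the-envelope sizing behind the table above.

# DATA: 6x 2 TB in RAIDZ2 -> 2 drives' worth of parity
n_drives, drive_tb, parity = 6, 2, 2
usable_tb = (n_drives - parity) * drive_tb      # 8 TB
efficiency = (n_drives - parity) / n_drives     # ~0.67

# LOG: ~5 s of incoming writes at NIC speed (2x 2.5 Gbps)
nic_mb_s = 2 * 2.5e9 / 8 / 1e6                  # ~625 MB/s
slog_gb = nic_mb_s * 5 / 1000                   # ~3 GB

# ARC rule of thumb: 1 GB of RAM per TB stored
arc_gb = usable_tb * 1
ram_left_gb = 12 - arc_gb                       # 4 GB for TrueNAS itself

print(f"usable: {usable_tb} TB ({efficiency:.0%} efficiency)")
print(f"SLOG needed: ~{slog_gb:.1f} GB, ARC: {arc_gb} GB, RAM left: {ram_left_gb} GB")
```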

Hope to hear what you think.
Would you choose to have 2x 1 TB mirrored disks as a METADATA vDev and choose RAIDZ1 for the remaining 4 disks in the DATA vDev? (Giving 6 TB of usable storage at 75% storage efficiency.)
Or do you think configuring 2x 250GB mirrored disks as a LOG vDev will give a significant performance boost?
Love to see your considerations.
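For reference, the capacity math behind the two options I am weighing (purely illustrative, using the drive counts and sizes discussed above):

```python
def raidz_usable(n_drives, drive_tb, parity):
    """Usable TB and storage efficiency for a single RAIDZ vdev."""
    usable = (n_drives - parity) * drive_tb
    return usable, (n_drives - parity) / n_drives

# Option A: 6x 2 TB RAIDZ2, no special vdev
a_tb, a_eff = raidz_usable(6, 2, 2)   # 8 TB at ~67%

# Option B: 4x 2 TB RAIDZ1 + 2x 1 TB mirrored METADATA vdev
b_tb, b_eff = raidz_usable(4, 2, 1)   # 6 TB at 75%

print(f"Option A: {a_tb} TB ({a_eff:.0%}), Option B: {b_tb} TB ({b_eff:.0%})")
```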

A metadata (aka special) vdev and a SLOG vdev may only boost performance (it's not guaranteed) if they consist of faster drives than the data vdevs. In the case of 6 NVMe drives, there is no need for them.

Also, the peak power of a consumer SSD can easily be 10 W or higher, and you will probably hit that peak during a scrub run.

Thanks!

In terms of peak power draw: Micron (the manufacturer) says it has a peak of 5.2 W.

Even if it's true, that comes to ~31 W for 6 drives. You do you, but even the topic you mention has posts about its suboptimal power design.

Yes, I agree. That comes with the choice for a budget option.

Does anyone have any thoughts on why I should make use of the CACHE and/or DEDUP vDev?

You should not make use of a cache or dedup vdev. You list 6 NVMe drives, and the specs for your machine list 12 GB of RAM; traditional dedup is very memory hungry.
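As a rough illustration of how hungry: assuming ~320 bytes of RAM per unique block (a commonly cited rule of thumb, not an exact figure) and the default 128K recordsize:

```python
# Rough in-core dedup-table (DDT) estimate for an 8 TB pool.
pool_bytes = 8 * 10**12          # ~8 TB usable
recordsize = 128 * 1024          # default 128K records
blocks = pool_bytes / recordsize
ddt_ram_gb = blocks * 320 / 2**30

print(f"~{ddt_ram_gb:.0f} GB of RAM for the DDT alone")  # ~18 GB
```

That is already more than the machine has in total.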

Just plan your pools and vdevs.
BASICS

iX Systems pool layout whitepaper


Have you asked the manufacturer of the Bee-link for a list of approved NVMe drives? They should have that somewhere.
