Questions from someone starting out in the world of servers

Hi.

I have four USB 3.0 DAS enclosures with 5 bays each, all with a hardware RAID function.
Three of them have five 6TB HDDs in RAID 5.
One of them has five 12TB HDDs in RAID 5.

The DAS enclosures are slow and a little “sensitive”.

I’m thinking about entering the “world” of real servers.
I’m reading and learning, little by little, how it all works.

On the software side: ZFS.
ZFS has features like file integrity checking, filesystem-level compression, deduplication, HDD redundancy in software (like RAID), and the possibility of adding more disks to the pool/array. Correct? [1]
And the best operating system for using ZFS is TrueNAS.
OK.

On the hardware side:

The motherboard “Supermicro X10SDV-4C-7TP4F” seemed interesting to me because it has:

16x SATA3/SAS2 via Broadcom 2116. There would be no need for me to purchase a PCIe LSI HBA card. Correct? [2]
2x 10G SFP+ and 2x GbE LAN. There would be no need for me to purchase a PCIe 10G SFP+ card.
CPU: Xeon 4-core/8-thread
DDR4 ECC RDIMM
2x PCIe 3.0 x8 slots for expansion

This motherboard has “M.2 Interface: 1 SATA/PCI-E 3.0 x4”.
I think that means it’s not NVMe. If this M.2 slot is not NVMe, is it possible to get a PCIe card that acts as an NVMe “controller” with M.2 slots? [3]

The motherboard’s form factor is Flex ATX = 9.0" x 7.25" (22.86cm x 17.78cm).
Is this form factor compatible with other form factors in terms of screw positions, overall size, and I/O shield placement? [4]
It’s hard to find a chassis that specifically accepts this form factor.

Regarding the PSU: Supermicro sells the server 5018D4-AR12L, which uses the motherboard mentioned above.
That server has 12 bays and a 400W PSU.

In my scenario of using a chassis with 10 or 12 bays/HDDs +
the motherboard mentioned above +
all four DDR4 slots populated +
one 1TB M.2 SSD for the TrueNAS installation

Would a 400W PSU be enough for this server? [5]

What is the difference between a redundant PSU and a CRPS? [6]
Or is there no difference, and do both mean the same thing?

The motherboard mentioned above has SMBus. I couldn’t find a PSU that supports SMBus, only PMBus.
Are the two compatible? [7]

Sorry for the several questions.
Thank you.

Welcome to TrueNAS and the forums!

In general, if you have to ask about ZFS de-duplication, you should not be using it. Meaning: you have not yet done enough research, for your intended use, to consider implementing ZFS de-dup.

Yes, ZFS supports HDD redundancy: with Mirrors, RAID-Zx (similar to RAID-5/6), and dRAID.

Yes, ZFS pools can be increased in capacity using various methods (a command sketch follows below):

  • Replace all existing drives in a vDev, (Virtual Device, like a RAID set), one at a time; once all are done, the vDev increases in capacity
  • Adding additional vDevs, (aka Mirrors or RAID-Zx)
  • Using the new RAID-Zx expansion feature to add 1 drive at a time to an existing RAID-Zx
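
A minimal command-line sketch of those three paths, assuming a hypothetical pool named “tank” and made-up device names (on TrueNAS you would normally do all of this through the web UI instead):

    # 1) Replace each drive in a vDev with a larger one, one at a time;
    #    capacity grows once the last replacement finishes resilvering.
    zpool set autoexpand=on tank
    zpool replace tank sda sdq          # repeat for every drive in the vDev

    # 2) Add a whole new vDev (here, a second 5-disk RAID-Z1).
    zpool add tank raidz1 sdr sds sdt sdu sdv

    # 3) RAID-Zx expansion (OpenZFS 2.3+): attach one more disk to an
    #    existing RAID-Z vDev.
    zpool attach tank raidz1-0 sdw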

No, that describes a combo slot: it accepts either a SATA M.2 or an NVMe M.2 drive, depending on which device you install.

There are dozens of choices for NVMe M.2 PCIe cards, from a single M.2 slot to a dozen, (yes, on 1 card). Cost varies dramatically depending on whether you need a PCIe switch. This topic is a whole forum thread in its own right.

TrueNAS uses a dedicated boot drive. Using 1TB is like using a big rig truck to visit your corner store: a waste. In general, 16GB to 64GB is all that is needed. Nor is speed needed.


As for the other questions, I don’t have ready answers. Perhaps someone else will.

2 Likes

To add to Arwen’s comments: in TrueNAS, the boot device is only the boot device; it doesn’t serve (and can’t be used for[1]) any other purpose. If that’s just the size of SSD you have lying around, there still might be a better use for it, but it won’t hurt. But if you were planning on partitioning it and using part of it for something else, you’ll need to change your plan.


  1. strictly speaking, yes, it can be hacked to be used for other purposes, but that’s an entirely unsupported configuration. ↩︎

1 Like

So the ideal is to use a 64GB/128GB NVMe SSD to install TrueNAS.

And use another NVMe SSD for the ZFS “extras” (cache, metadata, LOGs…)?
Leaving only the data itself on the HDDs.

Intel Optane seems to be highly praised as the device for these ZFS “extras”.

What would be an alternative to Intel Optane?

If needed. They usually aren’t.

2 Likes

If they’re not used, do these ZFS “extras” remain on the HDDs along with the data?

ARC (read cache) is in RAM; the LOG (if used, and it’s only used for synchronous writes) is on the spinners along with the metadata.

2 Likes

But can I put this on NVRAM?
I’ve read about using Intel Optane or an NVMe SSD with PLP.

Put which on NVRAM, and why? Before asking what kinds of devices you can use for (something), wouldn’t it make sense to determine whether you need that (something) in the first place? As I noted above, most installations neither need nor can benefit from L2ARC, SLOG, metadata-only L2ARC, or a special vDev for metadata.

If you’ve already determined that you need one (or more) of these, the requirements differ by application (a sketch follows below). SLOG really needs PLP; the other applications don’t. A special vDev will be critical to your pool, so it should have the same level of redundancy as the rest of the pool. L2ARC needs neither, just performance.
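
For concreteness, a rough sketch of how each of those optional vDev types is attached, again assuming a hypothetical pool “tank” and made-up NVMe device names:

    zpool add tank log nvme0n1                      # SLOG: wants PLP
    zpool add tank cache nvme1n1                    # L2ARC: no redundancy needed
    zpool add tank special mirror nvme2n1 nvme3n1   # special vDev: mirror it;
                                                    # it is critical to the pool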

3 Likes

OK. I think I understand (or not).
I just thought about moving to a non-volatile device the things that I would lose in a power outage.

Now I’ll change the subject:

Intel QuickAssist (QAT) is said to speed up hash calculations and compression.
Will it make any difference for hash calculations (SHA256/BLAKE3) and compression in ZFS?

I’ll have to defer to others on that question, other than to note that the checksum algorithm used by default in OpenZFS is Fletcher.

1 Like

How crazy is this workload? For home use with spinning HDDs, you won’t notice any difference with checksum acceleration. Unless you’re using deduplication (which I don’t think you will be), you don’t need a cryptographically secure algorithm like SHA256 or BLAKE3. Fletcher4 runs circles around them and is more than sufficient for detecting a corrupt block or bitrot.
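
If you want to check or change this yourself, a quick sketch with a hypothetical dataset name:

    zfs get checksum tank/data           # “on” means the fletcher4 default
    zfs set checksum=sha256 tank/data    # only worth it if dedup demands it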

As for compression, the default LZ4 is so fast that spinning drives cannot write data quickly enough to make it a bottleneck. Even ZSTD is barely noticeable now, thanks to the “early abort” introduced with OpenZFS 2.2.
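
A minimal sketch of switching compressors, again with hypothetical dataset names:

    zfs set compression=lz4 tank/data        # the fast default
    zfs set compression=zstd tank/archive    # heavier; fine since OpenZFS 2.2
    zfs get compressratio tank/data tank/archive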

2 Likes

There are several things to unpack about power loss affecting ZFS and data loss.

  1. Any previously stored data in a ZFS pool cannot be / is not lost on power loss, (except in hardware failures that affect pool redundancy, like loss of both disks in a 2-way Mirror vDev).
  2. Any data that has not yet been written / is still in flight is lost, just like with every other file system out there.
  3. Using a SLOG, (Separate intent LOG), is a specialized case, good only for synchronous writes. In theory, you can slow down your asynchronous writes to go through a SLOG device too, (see the sketch below), but normal async writes use RAM instead.
  4. ZFS attempts to be always consistent on disk, thus, after a crash / power loss, no file system check is needed. I say attempts because consumer hardware fails more often than Enterprise hardware, and that can lead to data loss.

There are sometimes misconceptions about ZFS losing data due to power loss. This specific scenario was a design criterion of ZFS: no data loss on power loss. Given no hardware failures, (RAM, storage controller, storage device, etc…), and no bugs in ZFS, (rare, but it has happened), there is zero chance of ZFS losing data on a crash or power loss. Except, of course, data in flight. Data is either completely written or not.
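
The knob mentioned in point 3 is the per-dataset sync property; a sketch with a hypothetical dataset name:

    zfs set sync=always tank/vmstore     # force even async writes through the
                                         # ZIL (and the SLOG, if one is attached)
    zfs set sync=standard tank/vmstore   # default: only sync writes use the ZIL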

2 Likes

So it’s just one SSD to install TrueNAS and HDDs for data.

And perhaps an SSD or two for apps, if you’re going to be using them.

1 Like

OK. Now I will try to learn about ZFS dedup.

About deduplication in ZFS:
It uses a table with the deduplication information.
Does this table stay on the HDDs with the data?
If it’s possible to put the table on an SSD: if that SSD gets damaged, will I lose the data on the HDDs?

If you have a ZFS pool of HDDs, yes, the ZFS De-Dup table is in the pool. So if you export the pool’s disks and take them somewhere else, (or to a different OS that supports the needed ZFS features), it will work.

Yes, you can put the De-Dup table on SSDs. There are 2 options, (sketched below):

  1. On L2ARC, which is non-critical and can fail without data loss. Just speed loss.
  2. In a Special Allocation Class vDev, (aka Special Metadata device). This IS CRITICAL, meaning loss of a Special vDev means loss of the entire pool. Thus, it is HIGHLY recommended to use the same redundancy as the data pool vDevs. So at least a 2-way Mirror, though a 3-way Mirror is better when using RAID-Z2, (which has 2 disks of redundancy).

Option 1 is more like a read cache for the De-Dup table. L2ARC vDevs can be added or removed at will. They can not be Mirrored, but when using 2 or more they are Striped. Any update to the De-Dup table does get written to both the L2ARC and the data vDevs.

Option 2 can not be removed from the pool unless all the data vDevs are Mirrors. With even a single RAID-Zx or dRAID data vDev, any Special vDev is permanently part of the pool.

Note that Special vDevs can not use RAID-Zx or dRAID. They must be Mirrors. (Well, technically one could be a Stripe or single disk… but loss of that Stripe means loss of the entire pool :cry:)
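
A rough sketch of the removability difference between the two options, assuming a hypothetical pool “tank” and made-up device names:

    zpool add tank cache nvme0n1                     # Option 1: L2ARC
    zpool remove tank nvme0n1                        # …removable at will

    zpool add tank special mirror nvme1n1 nvme2n1    # Option 2: Special vDev
    # “zpool remove” of the Special vDev fails if any data vDev is RAID-Zx
    # or dRAID; it is then a permanent part of the pool.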

1 Like

If ZFS is using all the RAM that the motherboard supports, is it possible to use an SSD as RAM, creating “virtual memory/pagefile/swap” or something like that?

Would this be L2ARC?
Obviously L2ARC is never in RAM (main/primary memory).
It’s always on secondary storage (HDD, SSD).

There are problems with the existing ZFS De-Dup, and I don’t have all the answers.

It is my understanding of the existing ZFS De-Dup implementation that RAM is REQUIRED for the working De-Dup table. Any L2ARC that is used for the De-Dup table is there to speed up ZFS import of the pool; otherwise it could take minutes to read the De-Dup table from an HDD pool. Failure to have enough RAM for the De-Dup table can prevent pool import. But I could be wrong on some of the details.

The new ZFS Fast-Dedup feature is supposed to fix some of the problems of the existing implementation, including insufficient RAM. Again, I don’t know when it might be officially supported by TrueNAS. However, it was officially released with OpenZFS version 2.3.0 on Jan. 13th, 2025.
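
If I have the feature flag name right, (an assumption on my part), you can check whether a pool already has it:

    # fast_dedup is, to my knowledge, the OpenZFS 2.3+ feature flag name
    zpool get feature@fast_dedup tank    # hypothetical pool name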

In some ways, anyone using De-Dup, (original or new-fangled), is on their own. Many of us SOHO users, (aka non-paying users), don’t use De-Dup. So you will have to do more research than simply asking questions here. We probably don’t have enough answers.

Good luck.

1 Like