Hello,
I plan to set up an “old” machine as a NAS that will be used almost exclusively to serve media files; little frequently-updated data, and no VMs or containers.
I have an ASUS SABERTOOTH Z87 motherboard with six SATA ports behind the Z87 chipset and two more (plus two eSATA) behind an ASMedia chip. I will soon add an LSI SAS 9300-16i controller, probably in HBA mode, with four cables to connect up to 16 HDDs.
I have not yet decided on an OS (TrueNAS or Debian).
I have five 12TB HDDs, two 6TB, two 8TB and two 4TB. The idea is to combine everything as if it were eight 12TB disks in a RAID6 array (LVM RAID) or the equivalent on ZFS. There will also be a few other disks, but they will only contain archives that are backed up elsewhere, so no performance or availability constraints.
My main question is how best to distribute the disks across the SATA ports for performance and reliability: everything on the HBA card, or split between the HBA and the motherboard? And on the HBA card, is it better to spread the disks across the four mini-SAS connectors or, on the contrary, to group them?
I know that TrueNAS runs on ZFS, but you seem very competent on SAS/SATA issues, which is why I am asking my question here.
But if you have advice on an LVM RAID configuration, I will take that too!
I will point you to some basics. I don’t think ZFS / TrueNAS is what you want. You don’t combine different-size disks in a vdev; if you do, every disk is treated as the smallest size. For example, all your listed drives in a single RAID-Z1 vdev, 11 disks wide, would each count as 4TB: about 40TB usable, with 4TB used for parity. That is just a size example, not the setup you would choose if you decided on ZFS.
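As a quick check of that arithmetic, here is a small shell sketch of the smallest-disk rule for a single RAID-Z1 vdev (an illustration of the capacity math only, not a zpool command):

```shell
# Sketch: usable capacity of one RAID-Z1 vdev built from mixed-size disks.
# ZFS sizes every member of the vdev to the smallest disk in it.
# Disk sizes in TB: the 11 drives listed above (5x12, 2x8, 2x6, 2x4).
disks="12 12 12 12 12 8 8 6 6 4 4"

smallest=999999
count=0
for d in $disks; do
  count=$((count + 1))
  if [ "$d" -lt "$smallest" ]; then smallest=$d; fi
done

parity=1  # RAID-Z1 = one disk's worth of parity
usable=$(( (count - parity) * smallest ))
echo "vdev: ${count} disks, each counted as ${smallest}TB -> ~${usable}TB usable"
```

So the five 12TB drives would each contribute only 4TB in such a vdev, which is why mixing sizes this way is usually a bad deal.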
I would probably put everything on the LSI SAS 9300-16i.
BASICS
iX Systems pool layout whitepaper
Thank you very much for your answers, simple and clear!
And thank you for the links, I am rather technical and I like to know what is happening deep inside the machine, it is perfect for me.
I will dig a little into what is possible for aggregating logical volumes on ZFS, but if I end up having to run several RAID arrays, I think I can live with that.
I will focus on the choice between mdadm/ext4 and ZFS. Security, ACLs, encryption, compression, dedup and snapshots are not really important in my case.
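For the mdadm/ext4 route, my idea of pairing the smaller disks into 12TB members could be sketched like this (the /dev/sdX names are hypothetical placeholders, and this is only a sketch, not a tested recipe):

```shell
# Sketch only -- hypothetical device names, destructive commands.
# Pair the smaller disks into three ~12TB linear (concatenated) arrays:
mdadm --create /dev/md10 --level=linear --raid-devices=2 /dev/sdf /dev/sdg  # 6TB + 6TB
mdadm --create /dev/md11 --level=linear --raid-devices=2 /dev/sdh /dev/sdi  # 8TB + 4TB
mdadm --create /dev/md12 --level=linear --raid-devices=2 /dev/sdj /dev/sdk  # 8TB + 4TB

# Then build one 8-member RAID6 over the five real 12TB disks plus the
# three concatenated 12TB units: (8 - 2) x 12TB = ~72TB usable.
mdadm --create /dev/md0 --level=6 --raid-devices=8 \
  /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
  /dev/md10 /dev/md11 /dev/md12

mkfs.ext4 /dev/md0
```

One caveat: each concatenated member fails if either of its two disks fails, so the paired members are statistically more fragile than a single 12TB drive, even though RAID6 still tolerates two member failures.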
The big advantage of ZFS for me would be data integrity checking and protection against data corruption… and it would finally put my 32GB of RAM to good use :o)
But I wonder about the risk of premature wear on disks that are already quite old; I have the impression that ZFS (especially the automatic scrubbing) works the disks hard. ZFS might be overkill in my case, but I’m curious o:)
I’m still a little disappointed, I was told that ZFS was a magic thing that could do everything!!!
but at my age I should know that magic things don’t exist :o)
I think unRAID can combine disks of different sizes, but it is not free, of course. Or try OpenMediaVault.
ZFS is magic, on the right hardware!
While I am sure part of that is a joke, you have probably come across some bad information at one time or another. Lots of Internet information about ZFS and TrueNAS exists, and not all of it is true, or still true today.
For example, in the past, it was suggested to have 1GByte of memory per 1TByte of ZFS disk storage. But, it was never a hard and fast rule. And it does not apply to casual users. Nor to applications where data is read once and not again for a long time, like media files.
Another old rule involved ZFS RAID-Zx widths, which had optimal widths per parity level, (1-3 disks worth of parity). However, with data compression, that “rule” is not applicable today.
Some people think TrueNAS supports Linux-style RAID (MD-RAID or LVM RAID). But TrueNAS was originally written on top of FreeBSD, which is not Linux. Many years ago the non-ZFS RAID and volume options were removed from FreeBSD-based TrueNAS, standardizing on ZFS only. Thus, by the time Linux-based TrueNAS SCALE was released, there was no Linux RAID support.
There are lots of configurations where ZFS is not optimal. Sometimes people make it work because they want the data integrity or other features of ZFS. (I use ZFS on Linux for my home PCs because of those features…) But, in other cases ZFS is just too limited, (ZFS was not designed as a desktop file system, nor for inexpensive use).
As for running plain Linux with LVM, I don't have any suggestions other than be careful of LVM snapshots.
While LVM snapshots work and do the job, don’t keep them around longer than needed. If I understand them correctly, they act more like a write-back transaction buffer, so upon deletion the transactions have to be flushed to main storage. The more data there is to flush, the higher the I/O load and the longer it takes.
That is different from ZFS with its “async destroy” feature. Removing ZFS snapshots simply frees up the space, incrementally, even across reboots. No fuss, no muss.
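To make the LVM snapshot caution concrete, here is a minimal sketch of the create-use-remove lifecycle; the volume group `vg0` and origin LV `media` are hypothetical names:

```shell
# Minimal sketch, hypothetical names: volume group "vg0", origin LV "media".
# Create a classic (thick) snapshot with a 20G copy-on-write area;
# changed blocks on the origin accumulate in it while it exists:
lvcreate --snapshot --size 20G --name media_snap /dev/vg0/media

# ... do the work that justified taking the snapshot ...

# Drop it as soon as it is no longer needed; the longer it lives,
# the more data it holds and the more I/O its removal or merge costs:
lvremove --yes /dev/vg0/media_snap
```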
You cannot, and really should not, do that with ZFS.
sure it is one!
yes, that is the problem: we find a lot of information that is sometimes obsolete, often confusing or inaccurate. I am self-taught and I know that to really master a technical subject, only experience and real use give a clear picture. That is why I came here with my question :o)
and the answers I got have enlightened me a lot!
thanks for the advice, I had this problem with VMware. but I only use snapshots on test VMs
I think I will stick, this time, with things I know; I will perhaps switch to ZFS in the future if I upgrade my motherboard.
but I will still set up a lab to test TrueNAS, the interface looks really interesting to me. And I am going to dig into TrueNAS SCALE, which I have just discovered; I know Debian quite well and it could be a solution for me.
in any case thanks to all for your answers