I’m considering assembling a Linux box to serve as a fileserver with a bunch of disks. Something like TrueNAS seems very attractive, but I’m still trying to gather all my hardware puzzle pieces.
I’d like to have data-integrity protection at the hardware level and use drives and HBA/RAID controllers in the HW stack that support it.
As I understand it, it’s T10 for SAS and T13 for SATA.
Grok says that both SAS and SATA support it only on enterprise-grade drives, not NAS or desktop lines.
But Grok can’t tell me anything more than that, or anything specific about T13 support on HBA/RAID cards.
So I wonder if someone here could point me in the right direction.
Namely:
which HBA/RAID lines support T10/T13?
which HDD lines should I look into? Are there any model exceptions within them?
does the Linux SW stack support this, and to what extent? (see the sysfs sketch after this list)
how much is the host SW/kernel involved in the data-transfer hot path? Does the host have to recalculate and/or check for every sector transferred, or does the HBA/RAID do it automatically?
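For reference, here is how one could check from Linux whether the kernel registered an integrity profile for a disk (a hedged sketch: the sysfs attribute names follow the kernel’s block data-integrity documentation, Documentation/block/data-integrity.rst; the device names are just examples):

```python
#!/usr/bin/env python3
"""Report whether a disk exposes T10 PI to the Linux block layer.

Sketch only: attribute names follow the kernel's block data-integrity
documentation; the device names below are examples, not assumptions
about any particular hardware.
"""
from pathlib import Path

def integrity_info(disk: str) -> dict[str, str]:
    base = Path("/sys/block") / disk / "integrity"
    if not base.is_dir():
        return {}  # the kernel registered no integrity profile for this disk
    info = {}
    # e.g. format="T10-DIF-TYPE1-CRC"; read_verify / write_generate
    # toggle host-side checking and generation of the PI field
    for attr in ("format", "tag_size", "read_verify",
                 "write_generate", "device_is_integrity_capable"):
        path = base / attr
        if path.exists():
            info[attr] = path.read_text().strip()
    return info

if __name__ == "__main__":
    for disk in ("sda", "sdb"):
        print(disk, integrity_info(disk) or "no integrity profile")
```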
I do not understand what you mean by the above. From my brief research, T10 and T13 are the standards groups for SAS and SATA. In terms of SAS, the actual protocols are referred to by version, e.g. SAS-2, SAS-3, etc. Serial Attached SCSI - Wikipedia
What does the “it” in the second sentence refer to?
There are very few, if any, non-Enterprise SAS drives, consumer drives are almost always SATA. To be clear, “Enterprise Grade” is largely a marketing term used to indicate drives that are appropriate for 24x7 operation, higher temperatures, higher densities (more drives in one location), higher reliability, and higher cost. As drive usage has changed over the years terminology has changed to keep up. You can find drives identified as Desktop, NAS, Surveillance, Enterprise Data, Enterprise Capacity, and others. My benchmark is warranty. I do not consider any drive with less than a 5-year warranty to be a true Enterprise drive, although there are non-Enterprise drives with a 5-year warranty (the Western Digital Black drive is described as a high performance / gaming desktop drive and has a 5-year warranty).
I’m not insisting on TrueNAS, it is just an interesting option from my POV.
Also, I was informed that all the main players in the HW RAID/HBA arena chose not to support either of them for RAID (HBA mode at best) due to support headaches, which means that I would probably have to go with an HBA. That should work with TrueNAS, so this is a non-issue.
T10-PI refers to the data-integrity protection path: basically, reconfiguring the drive to use an extra DIF/DIX field at the tail of each sector (so 512 + 8/16 DIF bytes, or 4096 + 16/64 DIF bytes) for hash protection.
The DIF field basically stores a hash of the sector, and the host (extra HW in the HBA/RAID) is supposed to generate the hash on transfer and either write it on a sector write or check it on a sector read.
T13-PI is the same thing for SATA.
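To make that concrete, here is a minimal sketch of how the per-sector PI tuple could be computed, assuming the CRC-16 polynomial 0x8BB7 that T10 specifies for the guard tag and the Type 1 tuple layout (2-byte guard, 2-byte application tag, 4-byte reference tag); this is an illustration, not vendor firmware logic:

```python
import struct

def crc16_t10_dif(data: bytes) -> int:
    """CRC-16 with polynomial 0x8BB7 (the T10-DIF guard-tag CRC)."""
    crc = 0x0000
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 \
                  else (crc << 1) & 0xFFFF
    return crc

def pi_tuple(sector: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Build the 8-byte PI field appended to a 512-byte sector
    (Type 1: guard = CRC of the data, app tag, ref tag = low 32 bits of LBA)."""
    assert len(sector) == 512
    return struct.pack(">HHI", crc16_t10_dif(sector), app_tag,
                       lba & 0xFFFFFFFF)

# A drive or HBA with write-generation enabled computes this per sector;
# with read-verification enabled it recomputes and compares on the way back.
protected = bytes(512) + pi_tuple(bytes(512), lba=1234)
assert len(protected) == 520   # the 520-byte "fat sector" format
```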
What does the “it” in the second sentence refer to?
It refers to data-integrity protection as per T10/T13, often marked as T10-PI and T13-PI.
ZFS protects against silent data corruption by design by saving strong checksums for every part of the data and the corresponding metadata. And chained checksums for the checksums. Etc.
It is considered practically impossible for data corruption to go unnoticed.
Whether this fulfills any of these “standards” I do not know; this is the first time I have read of them. I trust ZFS for exactly the reason stated in my first paragraph.
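For illustration, a toy sketch of the chained-checksums idea (not ZFS’s actual on-disk format; in ZFS each block’s checksum lives in its parent block pointer, which is itself checksummed, up to the uberblock):

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # ZFS can use fletcher4 or SHA-256 per dataset; SHA-256 here for brevity
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Chain checksums upward: parents checksum their children's checksums,
    so corrupting any block changes the root and cannot go unnoticed."""
    level = [checksum(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the odd one out
            level.append(level[-1])
        level = [checksum(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

data = [b"block-%d" % i for i in range(8)]
root = merkle_root(data)
data[3] = b"silently corrupted"
assert merkle_root(data) != root   # the corruption is detected at the root
```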
All those “fundamentals” have a cost that is rarely mentioned.
A cost which I don’t need for my purposes.
Besides, ZFS or no ZFS, Grok says that ZFS can put underlying T13-PI support (when available) to good use.
I’m not here to debate the merits of ZFS but to get useful data on the data-integrity HW/SW stack that happens to be useful in this kind of environment and that apparently can be used by ZFS, so I’m asking here.
No stated use case. No target size for storage.
For all we know, you may not need an HBA at all.
What is “it” that you (mis)understand?
What are “the T10” and “T13”? If you’re so deep in Information Technology that you can name drop obscure standards, please provide reference links for the reader. Or go directly to read the code of OpenZFS to see what it does and whether it meets your standards.
I used the search engine a bit, because this made me curious.
It seems this proposed technology is mainly pushed (and supported by white papers, marketing material and the like) by storage vendors in the enterprise field like
IBM
HPE
Broadcom
Seagate
EMC2
Oracle
…
Based on my experience in the field, I firmly believe that you will consequently find this technology in large Enterprise products that are none of the following:
affordable
power conserving
easily accessible for a home lab enthusiast
Contrary to that, if one does not insist on using TrueNAS, ZFS can be run on something as modest as a Raspberry Pi with a SATA or NVMe HAT and storage attached. Search for “Pi NAS” and you will find various setups.
As far as TrueNAS is concerned: no, it does not support any of this; the only storage technology and data-integrity mechanism it does support is ZFS.
So no, this product and this community are not for you. Or maybe your proposed storage technology isn’t, and you would fare much better investing a little time in learning about ZFS?
As I said, I’ve checked with the main HBA/RAID names about T10-PI, and the response was that their controllers don’t support it in RAID mode, but they do in HBA mode.
But at that time I didn’t know that SATA has its own implementation of the functionality, T13-PI.
So as it turns out, I can use an affordable SAS HBA or RAID and have T10-PI, but since SATA drives are cheaper, I thought I’d check here rather than go on another search.
Still, no great loss. SAS drives aren’t much more expensive…
Contrary to that, if one does not insist on using TrueNAS, ZFS can be run on something as modest as a Raspberry Pi with a SATA or NVMe HAT and storage attached. Search for “Pi NAS” and you will find various setups.
Yes, RasPi is a magnet for HW tinkering morons, much like Python attracts “programmers”.
You have not yet stated your use case, expected storage capacity, desired level of redundancy, desired power consumption, … anything.
I store everything I deem important on ZFS. I don’t see how these vendor-proprietary mechanisms could improve the reliability I currently have in any way.