T10/T13 data integrity protection - which HBA/RAID, which disks?

I’m considering assembling a Linux box to serve as a fileserver with a bunch of disks. Something like TrueNAS seems very attractive, but I’m still trying to gather all my hardware puzzle pieces.

I’d like to have data-integrity protection at the hardware level and use drives and HBA/RAID controllers in the HW stack that support it.

As I understand it, it’s T10 for SAS and T13 for SATA.
Grok says that both SAS and SATA lines support it only on enterprise-grade drives, not NAS or desktop lines.

But Grok can’t tell me anything more than that, or anything specific about T13 support on HBA/RAID cards.

So I wonder if someone here could point me in the right direction.
Namely:

  • which HBA/RAID lines support T10/T13?
  • which HDD lines should I look into? Are there any model exceptions within them?
  • does the Linux SW stack support that, and to what extent?
  • how much is the host SW/kernel involved on the data-transfer hot path? Does the host have to recalculate and/or check for every sector transferred, or does the HBA/RAID do it automatically?
  • any gotchas that one should look for ?
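On the Linux question: mainline kernels have a block-integrity framework (bio integrity / DIX) that exposes per-device attributes under /sys/block/<dev>/integrity/ when the HBA and drive advertise PI. A hedged shell sketch for inspecting it; the attribute names match current mainline kernels, but whether anything shows up depends entirely on the HBA/drive combination:

```shell
# Query the kernel's block-integrity attributes for a device (sketch).
# Layout assumed: /sys/block/<dev>/integrity/{format,read_verify,write_generate}
show_integrity() {
  dev=$1
  root=${2:-/sys}   # overridable sysfs root, handy for testing off-box
  d="$root/block/$dev/integrity"
  if [ -d "$d" ]; then
    printf 'format=%s read_verify=%s write_generate=%s\n' \
      "$(cat "$d/format")" "$(cat "$d/read_verify")" "$(cat "$d/write_generate")"
  else
    printf 'no integrity metadata exposed for %s\n' "$dev"
  fi
}

show_integrity sda   # e.g. "format=T10-DIF-TYPE1-CRC ..." on a PI-capable stack
```

The read_verify/write_generate flags are writable, so the host-side checking can be toggled per device.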

Don’t even think of using hardware RAID with TrueNAS.


I do not understand what you mean by the above. From my brief research, T10 and T13 are the standards groups for SAS and SATA. In terms of SAS the actual protocols are referred to by version, e.g. SAS-2, SAS-3, etc. Serial Attached SCSI - Wikipedia

What does the it in the second sentence refer to?

There are very few, if any, non-Enterprise SAS drives, consumer drives are almost always SATA. To be clear, “Enterprise Grade” is largely a marketing term used to indicate drives that are appropriate for 24x7 operation, higher temperatures, higher densities (more drives in one location), higher reliability, and higher cost. As drive usage has changed over the years terminology has changed to keep up. You can find drives identified as Desktop, NAS, Surveillance, Enterprise Data, Enterprise Capacity, and others. My benchmark is warranty. I do not consider any drive with less than a 5-year warranty to be a true Enterprise drive, although there are non-Enterprise drives with a 5-year warranty (the Western Digital Black drive is described as a high performance / gaming desktop drive and has a 5-year warranty).

I’m not insisting on TrueNAS, it is just an interesting option from my POV.

Also, I was informed that all the main players in the HW RAID/HBA arena chose not to support either for RAID (only HBA at best) due to support headaches, which means I would probably have to go with an HBA. That should work with TrueNAS, so this is a non-issue.

T10-PI refers to the data-integrity protection path. Basically, the drive is reformatted to carry an extra DIF/DIX field at the tail of each sector (so 512 + 8 PI bytes = 520-byte sectors, or 4096 + 8 or 64 PI bytes for 4K sectors) for checksum protection.

The DIF stores, basically, a checksum of the sector, and the host (extra HW in the HBA/RAID) is supposed to generate it on transfer: write it on sector write, check it on sector read.

T13-PI is the same thing for SATA.
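To make the “checksum” concrete: T10 defines the 8-byte PI field per logical block as a 2-byte guard tag (a CRC-16 with polynomial 0x8BB7), a 2-byte application tag, and a 4-byte reference tag. A minimal, unoptimized sketch of the guard-tag CRC (real HBAs and drives compute this in hardware):

```python
def t10dif_crc(data: bytes) -> int:
    """CRC-16/T10-DIF guard tag: poly 0x8BB7, init 0, no reflection, no XOR-out."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Guard tag over a 512-byte sector of zeros (init 0, so all-zero data gives 0):
print(hex(t10dif_crc(bytes(512))))  # → 0x0
```

With init 0 and no final XOR, a zeroed sector legitimately carries a zero guard tag, which is one reason the reference tag (typically the lower 32 bits of the LBA) also exists: it catches misdirected writes that a data checksum alone would miss.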

What does the it in the second sentence refer to?

It refers to data integrity protection as per T10/T13, often marked as T10-PI and T13-PI.

ZFS protects against silent data corruption by design by saving strong checksums for every part of the data and the corresponding metadata. And chained checksums for the checksums. Etc.

It is considered impossible to experience data corruption without noticing.
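This is not ZFS’s actual on-disk layout, but the chained-checksum idea described above can be sketched as a tiny Merkle tree (SHA-256 standing in for ZFS’s fletcher4/sha256 options; block contents are made up):

```python
import hashlib

def merkle_root(blocks):
    # Level 0: checksum every data block.
    level = [hashlib.sha256(b).digest() for b in blocks]
    # Each parent checksums its children's checksums, up to a single root.
    while len(level) > 1:
        pairs = [level[i:i + 2] for i in range(0, len(level), 2)]
        level = [hashlib.sha256(b"".join(p)).digest() for p in pairs]
    return level[0]

data = [b"block0", b"block1", b"block2", b"block3"]
root = merkle_root(data)
data[2] = b"blockX"               # a single silently corrupted block...
assert merkle_root(data) != root  # ...changes the root checksum, so it is noticed
```

Because every read is verified against the parent’s stored checksum, corruption anywhere in the chain surfaces as a mismatch rather than as silently wrong data.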

If this fulfills any of these “standards” I do not know. First time I read of them. I trust ZFS for exactly the reason stated in my first paragraph.

HTH,
Patrick

I know that, but I want to use data integrity protection on lower layers for many reasons.

One being that I don’t want every goddamn sector to have to go through the CPU to have its hash recalculated.

All of those shiny ZFS options look good on paper and are probably golden in some datacenter, but they are overkill for my homelab.

I need a lean&mean small machine that is energy frugal.

Don’t even think of ever using AI to answer technical questions…

As for getting meaningful human answers, it would help if you explained clearly your requirements and actual needs.

My actual needs are in the original post.
I don’t need marriage offers, insurance advice, etc. etc.

They are not options but fundamentals for good reason. They work perfectly well on a low power platform like an Intel Atom.

If your needs are hardware error correction using T10/T13, then TrueNAS (and ZFS) is a poor fit for you.

Is that an answer to your question?

All those “fundamentals” have a cost that is rarely mentioned.
A cost which I don’t need for my purposes.

Besides, with ZFS or without it, Grok says that ZFS can put underlying T13-PI support (when available) to good use.

I’m not here to debate the merits of ZFS but to get useful data on the data-integrity HW/SW stack that happens to be useful in this kind of environment and that can apparently be used by ZFS, so I’m asking here.

No stated use case. No target size for storage.
For what we know, you may not need a HBA at all. :roll_eyes:

What is “it” that you (mis)understand?
What are “the T10” and “T13”? If you’re so deep in Information Technology that you can name drop obscure standards, please provide reference links for the reader. Or go directly to read the code of OpenZFS to see what it does and whether it meets your standards.

What does the word “that” refer to?

I am not aware of any such support.
I will happily stand to be corrected, but not by a large language model.

If you feel confident in your inquiries, why don’t you ask GROK how that support works and how you can leverage it best?


AFAIK none of those are relevant for T10/T13-PI support.

Yeah. For all you know, I might or might not need a car tire change. Or new food for my dog.
None of which is relevant to the original post.

If you are planning on turning off checksums in ZFS then I strongly recommend that you use some other solution than TrueNAS / ZFS.

I suspect this audience (the TrueNAS Forums) is not going to have the answers to your questions.

Using the search engine a bit, because this made me curious.

It seems this proposed technology is mainly pushed (and supported by white papers, marketing material and the like) by storage vendors in the enterprise field like

  • IBM
  • HPE
  • Broadcom
  • Seagate
  • EMC2
  • Oracle

Based on my experience in the field I firmly believe that you will consequently find this technology in large Enterprise products that are neither

  • affordable
  • power conserving
  • easily accessible for a home lab enthusiast

Contrary to that, if one does not insist on using TrueNAS, ZFS can be run on something as low-end as a Raspberry Pi with a SATA or NVMe HAT and storage attached. Search for “Pi NAS” and you will find various setups.

As far as TrueNAS is concerned - no, it does not support any of this and the only storage technology and data integrity mechanism it does support is ZFS.

So no, this product and this community is not for you - or maybe your proposed storage technology isn’t, and you would fare much better investing a little time to learn about ZFS?

Unless you are just trolling.

HTH,
Patrick


No, it cannot, and as expected Grok is writing BS.

As I said, I’ve checked with the main HBA/RAID vendors about T10-PI, and the response was that their controllers don’t support it in RAID mode, but they do in HBA mode.

But at that time I didn’t know that SATA has its own implementation of the functionality, T13-PI.

So as it turns out, I can use an affordable SAS HBA or RAID card and get T10-PI, but since SATA drives are cheaper, I thought I’d check here rather than go on another search.

Still, no great loss. SAS drives aren’t much more expensive…
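For anyone following along: one way to check whether a SAS drive is actually formatted with PI is READ CAPACITY(16), e.g. via `sg_readcap --long` from sg3_utils. The `prot_en`/`p_type` line and the parser below are an illustrative sketch, not captured output from a real drive:

```shell
# sg_readcap --long /dev/sdX prints a "Protection:" line with prot_en and p_type.
# Sketch of parsing it; the sample input below is illustrative.
pi_status() {
  case $1 in
    *prot_en=1*) echo "PI enabled" ;;
    *)           echo "PI disabled" ;;
  esac
}

pi_status 'Protection: prot_en=1, p_type=1, p_i_exponent=0'   # prints "PI enabled"
```

If `prot_en=0`, the drive may still be PI-capable but needs a low-level reformat (sg3_utils ships `sg_format` with an `--fmtpinfo` option for that) before the extra bytes exist to check.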

Contrary to that, if one does not insist on using TrueNAS, ZFS can be run on something as low-end as a Raspberry Pi with a SATA or NVMe HAT and storage attached. Search for “Pi NAS” and you will find various setups.

Yes, RasPi is a magnet for HW tinkering morons, much like Python attracts “programmers”.

You have not yet stated your use case, expected storage capacity, desired level of redundancy, desired power consumption, … anything.

I store everything I deem important on ZFS. I don’t see how these vendor proprietary mechanisms could improve the reliability I currently have in any way.
