Can Someone Please Explain How SAS Works

A plea for help from someone who’s been out of the server hardware game for too long :slight_smile:

I’ve managed to rig up a server with SAS hardware that somehow works, but I’m ashamed to say I don’t fully understand how or why it works, or, more importantly, what its limitations are. I grok SCSI, ATA, IDE and SATA, but SAS remains a bit of a mystery to me, particularly with regard to how many devices are supported on a given HBA and its ports.

ATA/IDE were simple: one port, two devices, master and slave
SATA: one port, one device (port multipliers are a thing, I believe, but they’re not recommended)
SCSI: I understand SCSI fully, devices, LUNs, all makes sense to me

But SAS? :rofl:

Say I have a server with 16 SAS drives in it, presented to the HBA via two SFF-8643 connectors (I assume 8 devices on each). Each port on the HBA (a Dell Perc H310) has a maximum bandwidth of 6Gb/s. Is that bandwidth, then, shared between all the devices on any given port?

The spec sheet for the HBA quotes the following limits on the number of physical devices that can be connected:

Non-RAID: 32
RAID 0: 16 per volume
RAID 1: 2 per volume plus hot spare
RAID 5: 16 per volume
RAID 10: 16 per volume
RAID 50: 16 per volume

But it’s not clear if this is per port, or for the card as a whole. Is there an actual limit on the number of physical devices that can be connected to a SAS port, or is that a recommended limit based on bandwidth/IOPS for the different RAID configurations?

The above is theoretical, to simplify the question; what I actually have in my system is the following:

HP DL380 G6 (with onboard SAS HBA disabled and disconnected from the internal drive cages)

  • Dell Perc H310 with both ports connected to…
  • HP 468405-002 24-port SAS expander connected to…
  • Two internal groups of 8x SAS SSD
  • One external Storageworks D2700 drive shelf w/25x SAS SSD

The above works: all SAS devices are visible to the HBA and the OS, and it’s running fast enough to satisfy my needs.

What confuses me is this concept of the SAS expander being “24-port”. I understand that the expander works much like a network switch, sharing bandwidth between many connected devices. But what does “24-port” mean in the context of a SAS expander?

Is there a hard limit on the number of physical devices I can connect to a setup like this? I have a total of 41 disks in this box, and don’t quite understand how it’s working within the limits quoted in the spec sheets for the HBA and SAS expander.

At some point, I might like to expand the configuration by adding another drive shelf with an additional 25 SSDs in it. Physically, I can daisy-chain that off the existing drive shelf, but I would like to know whether the HBA will be able to see all those drives, and to understand the performance impact. I suspect installing an additional HBA for the extra drive shelf would vastly improve performance.

Given this old thread, I appreciate that the Perc H310, with its SAS2008 chipset, is not recommended for SSDs, so I’ll likely be looking for a 3008-based HBA with an external SFF-8088 connector, if I can find one.

3 Likes

Not SFF-8643 (aka. MiniSAS HD)?

6 Gb/s is a SAS 2 lane. 4 lanes per SFF-8087/8643, so 24 Gb/s per connector, shared among drives.
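Back-of-envelope, as shell arithmetic (assuming all four lanes in the connector are cabled):

    # 4 lanes x 6 Gb/s per SAS2 lane:
    echo $(( 4 * 6 ))   # = 24 Gb/s per connector, shared by every drive behind it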

“Per volume limits” looks like RAID controller limits :scream:
Ignore this and let ZFS control its vdevs.

With SAS 3, the connector is more likely to be SFF-8643 (internal) or SFF-8644 (external). But you can find suitable cables for any combination of connectors.

1 Like

Thanks @dan, that’s super-helpful, and I think I understand how my system is operating now:

  • each physical port on the HBA has four 6Gb/s “lanes” within it
  • connecting both those ports to the SAS expander gives a total of 8 lanes, for a theoretical maximum of 48Gb/s between the HBA and the expander
  • (however, the Perc H310 is a PCIe 2.0 x8 card, so the bus tops out at roughly 32Gb/s per direction once 8b/10b encoding overhead is subtracted; see the arithmetic sketch below)
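
Sketching those numbers out (shell arithmetic; assumes the H310 really is a PCIe 2.0 x8 card with 8b/10b encoding):

    # SAS side: 2 connectors x 4 lanes x 6 Gb/s per SAS2 lane
    echo $(( 2 * 4 * 6 ))       # = 48 Gb/s raw between the HBA and the expander
    # PCIe side: 8 lanes x 5 GT/s, less 8b/10b encoding overhead
    echo $(( 8 * 5 * 8 / 10 ))  # = 32 Gb/s usable per direction, so PCIe is the tighter limit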

The SAS expander is documented as having “24 ports”, but I’m guessing what they actually mean is 24 lanes. It has two SFF-8087 ports for connection to the HBA (8C and 9C), one external SFF-8088 (1C, which is where my disk shelf hangs), and six internal SFF-8087 ports with 4 lanes each. Those six internal ports give 24 drive-facing lanes, which is presumably where the “24-port” figure comes from. Two of the internal ports (maybe four? Can’t remember) are connected to the internal drive cages, housing sixteen drives.
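
My guess at the lane accounting, as shell arithmetic (connector labels as printed on the card; the grouping is my assumption):

    echo $(( 6 * 4 ))   # = 24 drive-facing lanes (six internal SFF-8087 ports), hence "24-port"?
    echo $(( 2 * 4 ))   # = 8 lanes up to the HBA (ports 8C and 9C)
    echo $(( 1 * 4 ))   # = 4 lanes out the external SFF-8088 (port 1C)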

The D2700 disk shelf has its own internal expander, apparently, which makes sense. It’s essentially already a cascaded expander, hanging off port 1C of my SAS expander. I could daisy-chain another shelf off it, but hanging 50 SSDs off one 4-lane port will not be performant (see the rough math below). I’ll definitely look for another HBA if I do this.
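
Roughly (shell arithmetic; assumes all 50 SSDs are busy at once behind that single 4-lane SAS2 port):

    echo $(( 4 * 6 ))   # = 24 Gb/s total for the whole cascaded shelf chain
    # 24 Gb/s / 50 drives = roughly 0.48 Gb/s (~60 MB/s) per drive under full load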

Am I right in thinking, then, that there is no practical limit to the number of physical devices you can connect, only the amount of bandwidth they share?

Thanks @etorix

  • correct: SFF-8643
  • re. port bandwidth: yes, I figured that out, 4 lanes per physical port for 24Gb/s total
  • yeah, I’ll ignore those RAID limits
  • good to know; I won’t go searching for a 3008 with SFF-8088, and will look instead for a suitable cable. I have no SAS3 equipment, but the 3008 chipset seems to be recommended for SSDs even on SAS2, and my D2700 has SFF-8088 connectors

Thanks both of you for the help. I’ve learned a lot today!

Ironically, I would likely have learned all this sooner had my configuration not worked when I put it together. The fact that it did led to me taking a “well, cool!” attitude and leaving it as it was when I built it :smiley:

There are hard limits on device counts in the SAS specification, but they are high enough that bandwidth should be the first wall your system crashes into.

1 Like

And another good link

1 Like

Thanks @Stux

That’s…ahm…comprehensive? :rofl:

1 Like

Yeah. For when you really want to know :wink:

One more question: is there a Linux command-line tool that will display the hierarchy/tree structure of SAS devices, expanders and HBAs in a system?

Just curious, really… I know I can open the box up and look at the cables; I just wondered if it was technically possible (similar to USB with ‘lsusb -t’, showing speeds, or ‘lspci -tvm’) and available as a tool.

I don’t know of anything offhand that gives a tree-style view, but lsscsi seems like where I’d start.
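
Some starting points, assuming lsscsi and smp_utils are installed (the bsg path below is only an example; yours will differ):

    lsscsi -t                      # devices with their SAS addresses / transport info
    lsscsi -H                      # the SCSI hosts (HBAs) themselves
    ls /sys/class/sas_expander/    # expanders the kernel has enumerated
    ls /sys/class/sas_end_device/  # SAS end devices (the drives)
    lsblk -o NAME,HCTL,TRAN,SIZE   # block devices with their transport type
    # smp_discover (from smp_utils) can interrogate an expander's phys directly:
    # smp_discover /dev/bsg/expander-0:0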

1 Like