12G SAS 2.5" Enclosures

Hi,

I have multiple TrueNAS Scale arrays running in HP D2000 6G SAS enclosures, and they are now due for replacement.

I’m considering 12G SAS options now.

Can anyone provide feedback on the options I’ve found through Googling:

  • HP D3700
  • Dell PowerVault MD1420
  • Dell EMC Unity DAE 25
  • 3PAR StoreServ 8000 Enclosure
  • NetApp DS224C

I am only looking into 24/25 x 2.5" drive options (2U).

Are there any significant differences, advantages, problems, etc., with any of these options, or do they all function similarly?

Maybe I missed a great option? Any feedback on any of these options would be really appreciated.

The planned controller is the LSI 9305-16e SAS 12Gbps External Non-RAID Host Bus Adapter.

On a side note about the HP D2700 enclosures I currently use: I recently read about SAS channel bussing. Does anyone know how the physical SAS channels and lanes are mapped across the drive slots in the D2700?

Cheers!

Unless you plan on filling the enclosure with 12Gbps SAS SSDs and have 3-4 pools that aren’t accessed at the same time, there’s very little advantage to moving to a 12Gbps solution.

16 lanes of 6Gbps SAS is more bandwidth than the PCIe bus connection (about 80Gbps) on the 9305, so 16 lanes of 12Gbps SAS would be mostly wasted. OTOH, if you have 12Gbps SAS SSDs set up in pools of no more than 6 disks each and only one pool is active at a time, you’d get the full bandwidth of those disks.

But, to truly take advantage of this, you’d need 100Gbps networking on the TrueNAS server, or else you’d throttle at that point. Even 40Gbps networking would mean that 6Gbps SAS would be enough.
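
For anyone who wants to redo that arithmetic, here is a rough back-of-the-envelope sketch in Python. The per-lane and per-link figures are approximations of usable payload rates (after encoding overhead), not vendor specs:

```python
# Back-of-the-envelope bottleneck check (approximate payload rates, not specs):
#   SAS 6 Gb/s lane  (8b/10b encoding)   ~600 MB/s
#   SAS 12 Gb/s lane (8b/10b encoding)   ~1200 MB/s
#   PCIe 3.0 x8 host link                ~8 GB/s (the "about 80Gbps" above)
#   100 GbE                              ~12.5 GB/s, 40 GbE ~5 GB/s

def bottleneck_gb_per_s(sas_lanes, sas_mb_per_lane, pcie_gb, net_gb):
    """Return the slowest link in a JBOD -> HBA -> network path, in GB/s."""
    sas_gb = sas_lanes * sas_mb_per_lane / 1000
    return min(sas_gb, pcie_gb, net_gb)

# 9305-16e: 16 external lanes on a PCIe 3.0 x8 slot
print(bottleneck_gb_per_s(16, 600, 8.0, 12.5))   # 6G SAS:  8.0 -> PCIe is the limit
print(bottleneck_gb_per_s(16, 1200, 8.0, 12.5))  # 12G SAS: 8.0 -> still PCIe-limited
print(bottleneck_gb_per_s(16, 600, 8.0, 5.0))    # 40 GbE:  5.0 -> network is the limit
```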

That said, there’s the Supermicro 216BE1C-R609JBOD, which has good connectivity and documentation about the internal connections.

2 Likes

Interesting.

I understand that I connect the controller to the enclosure with a single link, which gives only 4 SAS lanes. I thought that connecting more ports would put me into a dual-domain scenario rather than giving me more bandwidth. Maybe I got that wrong?

Besides that, I do have dual 100G networking and expect quite heavy loads due to the number of connected users, so it’s worth exploring this further.

With my existing enclosures, I see all drives on all ports. I’m also not yet familiar with the Supermicro enclosure, but I will look into it.

This depends on the design of the backplane in the external enclosure. The Supermicro JBOD that I linked has a single expander, so extra connections will increase total bandwidth (although a single disk will still be limited to the speed of a single lane).

There is also a version with dual expanders, which allows multipathing or high availability.

Even with 100Gbps networking, you are limited by the HBA’s PCIe 3.0 x8 host connection, which is about 80Gbps, still lower than the combined speed of 7x 12Gbps SAS SSDs.

The math says that 24x 12Gbps is 288Gbps. You’d need 24 SAS3 lanes (8 more than you have) and likely 32 PCIe 3.0 lanes to get the data from the HBA to RAM. Even at 6Gbps, you need 16 PCIe 3.0 lanes.
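
As a sanity check on those lane counts, a quick sizing sketch, assuming roughly 1 GB/s of usable bandwidth per PCIe 3.0 lane and every SSD streaming at full line rate at once:

```python
import math

# Rough lane-count sizing: ~1 GB/s usable per PCIe 3.0 lane (approximation).
def pcie3_lanes_needed(drive_count, drive_mb_per_s):
    total_gb = drive_count * drive_mb_per_s / 1000
    return math.ceil(total_gb)  # one lane per ~1 GB/s of drive bandwidth

print(pcie3_lanes_needed(24, 1200))  # 24x 12G SAS SSDs -> 29 lanes, hence "likely 32"
print(pcie3_lanes_needed(24, 600))   # 24x 6G SAS SSDs  -> 15 lanes, hence "16"
```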

I suggest looking into PCIe 4.0 or 5.0 HBAs and a motherboard that can give you the slots to support those speeds. Then, you would be able to actually use 12Gbps SAS.

SAS3 and SAS4 are not really required for a NAS (where you need to feed the data from the drives over the network), but are great for workloads that use a lot of data locally (like a database that sorts through a lot of data, but only sends a small amount of it over the network).

All that said, I have four 2U Supermicro chassis (not JBOD) that have 12Gbps backplanes, but that was because I was buying them for new servers, and the price difference between 6Gbps and 12Gbps was tiny.

1 Like

Thanks, valuable input! So what you’re saying is that I won’t really see a big win if I replace the existing 6G shelves with 12G ones. Further, what do you think about the channel bussing? Does it make sense to place disks in the chassis in a specific sequence, order, or position? Since I have quite a lot of 6G chassis, would it also make sense to spread arrays across different enclosures?

I believe “DataBolt technology” is a feature of the LSI 12Gbps HBAs. This supposedly means you can get the benefit of 12Gbps with 6Gbps drives (even SATA), as the HBA will effectively multiplex the data streams to double the achievable bandwidth.

I.e., 12Gbps is worth it, and it basically means you can double the bandwidth to the expander with the same number of “ports”.

I wonder how this applies to 6Gbps expanders.

1 Like

This has been around forever. All it does is multiplex data from multiple drives on a single SAS channel and it still can’t magically create more than 12Gbps per SAS channel. It also can’t overcome the PCIe bottleneck.

No matter how you slice it, a PCIe 3.0 x8 HBA will not be able to get full bandwidth out of 24 SSDs. It means that doing things the right way for ZFS (a pool that is something like 4x RAIDz vdevs, each 6 drives wide) will still only give you 80Gbps at the absolute most, even if those drives are 12Gbps SSDs. The only real advantage of 12Gbps is that you only need an -8e HBA and two cables to the JBOD.

If you are buying a new HBA and JBOD, then 12Gbps is a no-brainer, as the extra cost is nothing compared to the cost of the drives to fill the chassis. But if you already have a functioning 6Gbps connection, replacing the HBA and JBOD is not really worth it unless you move to something like the 9600W-16e, which gives you 4x the PCIe bandwidth of a PCIe 3.0 x8 card. That’s 320Gbps, which is more than enough for a 24-disk JBOD of 6Gbps SSDs.
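
To make the “4x the PCIe bandwidth” comparison concrete, here is a small sketch; the per-lane figures are rounded rules of thumb, and the 9600W-16e is treated as a PCIe 4.0 x16 card to match the 320Gbps figure above:

```python
# Approximate host-side bandwidth per HBA: ~1 GB/s per PCIe 3.0 lane,
# doubling with each PCIe generation (rule-of-thumb figures, not measurements).
def pcie_gb_per_s(gen, lanes):
    return lanes * 2 ** (gen - 3)

print(24 * 600 / 1000)       # 24x 6G SAS SSDs:  ~14.4 GB/s of drive bandwidth
print(24 * 1200 / 1000)      # 24x 12G SAS SSDs: ~28.8 GB/s of drive bandwidth
print(pcie_gb_per_s(3, 8))   # 9305-16e  (PCIe 3.0 x8):  8 GB/s  (~80 Gbit/s)
print(pcie_gb_per_s(4, 16))  # 9600W-16e (PCIe 4.0 x16): 32 GB/s (~320 Gbit/s)
```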

Unfortunately, you’d still need a SAS4 (24Gbps) JBOD if you want to use 12Gbps SSDs at full speed.

2 Likes

Hopefully OP is looking at SSDs :wink:

Planning to use Samsung PM1643A

Using 12Gbps SSDs means you should not have more than 10 or so in total across all pools, as far as performance is concerned: even 7 would bottleneck at the PCIe bus, and those Samsung drives can actually write faster than 12Gbps because they are dual-ported. A pair of HBAs with a dual expander in the JBOD and the right multipath config would mean you could actually use up to about 10 drives at full speed.
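
If it helps, the arithmetic behind those drive counts looks roughly like this; the ~1.6 GB/s per dual-ported drive is my own ballpark for a PM1643a-class SSD, not a datasheet number:

```python
# Rough sketch of the drive-count reasoning above (all figures approximate):
PCIE3_X8_GB = 8.0   # usable host bandwidth per HBA (the ~80 Gbit/s figure above)
LANE_12G_GB = 1.2   # one 12G SAS lane after encoding overhead

# Single HBA: seven drives each filling a 12G lane already exceed the PCIe link.
print(7 * LANE_12G_GB > PCIE3_X8_GB)   # True: ~8.4 GB/s of drives vs ~8.0 GB/s of PCIe

# Two HBAs with a dual-expander JBOD and multipathing: dual-ported SSDs pushing
# roughly 1.6 GB/s each fill the combined ~16 GB/s at about 10 drives.
print(round(2 * PCIE3_X8_GB / 1.6))    # ~10 drives before both buses are full
```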

Using more drives than needed to saturate the bus will have the advantage of spreading out the writes to help preserve endurance, but even with 1TB drives (far below the size I suspect you will use), 10 drives give roughly 18PB of total writes.
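
The endurance figure works out along these lines, assuming the usual ~1 DWPD over a 5-year warranty for PM1643a-class drives (check the datasheet for the exact TBW at your capacity):

```python
# Endurance estimate behind the ~18PB figure: capacity x DWPD x warranty days.
def pool_endurance_pb(drive_count, capacity_tb, dwpd=1, years=5):
    tbw_per_drive = capacity_tb * dwpd * 365 * years   # total TB written per drive
    return drive_count * tbw_per_drive / 1000          # pool total, in PB

print(pool_endurance_pb(10, 0.96))  # ten ~1TB drives -> ~17.5 PB of writes
```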

1 Like

Quick Feedback for everyone reading this …

I went with two second-hand EMC SAE 6G 25SFF JBODs and one 9305-16e for each enclosure. I easily hit 10GB/s read/write transfers (100GBit wire speed) in each enclosure, so I’m quite happy, and, as everyone wrote above, there’s no need for expensive new gear. I did not manage to double this by using two 100GBit ports at the same time, but that might really be around the limit for this particular system (single EPYC 7601).

2 Likes