Can a RAID-Z array be made faster with more drives?

I’d like to increase the sustained write speed of my server’s RAID-Z3 array.

Can this be done by doubling the total number of drives from 8 to 16?

If so, would it matter if I used multiple IT-mode SAS controllers for the single array? Or would it perform better using just one card with twice as many ports?

Short answer: YES. Long answer: IT DEPENDS.

I am using 48 SSDs split across 2 expanders, using one card.

A 12G SAS connection is 12 Gb/s per lane, with 4 lanes per cable, so a combined bandwidth of 48 Gb/s per cable. My 9305-16e has four ports, for a combined bandwidth of 192 Gb/s. Now of course, it is only using a PCIe 3.0 x8 slot, which maxes out around 8 GB/s (roughly 64 Gb/s), so…
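
To put rough numbers on that, here is a quick Python sanity check. It uses nominal link figures and ignores encoding/protocol overhead, so treat it as ballpark only:

```python
# Rough link-bandwidth sanity check (nominal figures, no protocol overhead).

SAS3_LANE_GBPS = 12      # SAS-3: 12 Gb/s per lane
LANES_PER_CABLE = 4      # one SFF-8643/8644 connector carries 4 lanes
PORTS_ON_HBA = 4         # e.g. a 9305-16e exposes four x4 connectors

cable_gbps = SAS3_LANE_GBPS * LANES_PER_CABLE     # 48 Gb/s per cable
hba_sas_gbps = cable_gbps * PORTS_ON_HBA          # 192 Gb/s across all ports

# PCIe 3.0 is ~0.985 GB/s per lane after 128b/130b encoding
pcie3_x8_gbytes = 8 * 0.985                       # ~7.9 GB/s
pcie3_x8_gbps = pcie3_x8_gbytes * 8               # ~63 Gb/s

print(f"SAS side : {hba_sas_gbps} Gb/s ({hba_sas_gbps / 8:.0f} GB/s)")
print(f"PCIe side: {pcie3_x8_gbps:.0f} Gb/s ({pcie3_x8_gbytes:.1f} GB/s)")
print("bottleneck:", "PCIe slot" if pcie3_x8_gbps < hba_sas_gbps else "SAS links")
```

So the SAS side of the card can move far more than the slot it sits in, which is the point.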

Now if you have 4 drives hooked straight to the HBA, each connection is limited to the max speed of the drive hooked to it. Even with a newer SSD you’re only going to get 400–600 MB/s per drive, so if you have, say, 4 SSDs hooked directly to an HBA like a 9305-16i, you’re only going to get around 2.4 GB/s, all hypothetical of course. NOW, say you use an expander. I use Adaptec 82885T 12G units.

Think of an expander kind of like a network switch. Imagine your HBA, with all that throughput, as a 10G connection hooked to a PC that only has a 1G connection: you are limited to the bandwidth of the slowest link, 1G. BUT when you use an expander, it works like a switch: you are NOW sharing the BIG 10G pipe with MANY 1G connections, which lets you use as much of it as possible until you saturate that 10G connection. Think of expanders like that.
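
Here is a small sketch of that oversubscription idea, covering both the direct-attach case above and a pile of drives behind an expander. The drive counts and per-drive speed are illustrative assumptions, not measurements:

```python
# Rough sketch of direct-attach vs. expander oversubscription.
# Drive counts and per-drive speeds are illustrative assumptions.

drive_mb_s = 550                       # assumed SATA SSD sequential speed

# Direct attach: 4 SSDs on 4 dedicated lanes -> capped by the drives themselves
direct_gbytes = 4 * drive_mb_s / 1000

# Expander: 24 of the same SSDs sharing one 4-lane 12G uplink (~6 GB/s nominal)
uplink_gbytes = 48 / 8
expander_gbytes = min(24 * drive_mb_s / 1000, uplink_gbytes)

print(f"4 SSDs direct to HBA : ~{direct_gbytes:.1f} GB/s")
print(f"24 SSDs via expander : ~{expander_gbytes:.1f} GB/s (uplink-limited)")
```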

In the case of the 9305-16i/e you can hook up 16 drives directly and get good speed depending on the drives, or why not hook up 4 expanders, attach MANY more drives, and use the pipe for what it can really do.

I use 2 Norco RPC-4224 cases, each filled with 24 4TB SSDs, arranged as 3 vdevs of 8 SSDs each. Running through two expanders to my 9305-16e, I get INSANE throughput. I have to run a 40G NIC to keep up with it, since with just 1 expander I can already saturate a 10G NIC.

Now that was just for throughput. There are also parallel data access improvements. Say you have 1 drive; this applies more to HDDs than SSDs, but still to both. If you use just that one drive, you have the latency of the arm swinging back and forth across the platters, which gives you pauses during read/write cycles. The more drives working in parallel, the better read/write performance you will get. Now with RAID5/Z1, RAID6/Z2, or Z3 you will see lower write performance because of parity calculations, but it will still be better than single-drive access.
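
To make that concrete, here is a back-of-the-envelope model for streaming writes to a single raidz vdev, using the common first-order approximation of (width − parity) × per-disk speed. The per-disk figure is an assumption, and real results also depend on recordsize, CPU for parity, fragmentation, and so on:

```python
# Back-of-the-envelope streaming-write model for a single raidz vdev:
# large sequential writes land on the data (non-parity) disks, so a
# common first-order estimate is (width - parity) * per-disk speed.

def raidz_seq_write_estimate(width, parity, disk_mb_s):
    """Estimated sequential write throughput in MB/s."""
    return (width - parity) * disk_mb_s

disk_mb_s = 200  # assumed sustained per-HDD write speed
for label, width, parity in [("single disk", 1, 0),
                             ("8-wide raidz3", 8, 3),
                             ("16-wide raidz3", 16, 3)]:
    print(f"{label:>15}: ~{raidz_seq_write_estimate(width, parity, disk_mb_s)} MB/s")
```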

So in short, more drives, more vdevs, better performance!

Ah yes, I’d forgotten about expanders. That would be perfect because I’m using the onboard SAS controller on my server’s motherboard.

Which ports on your Adaptec 82885ts are the inputs? I’m hoping the external ones are outputs so I can use a second case for more drives.

Port roles are in the manual.

Medium answer: A wider vdev will have higher sequential throughput, but still roughly the IOPS of a single drive, so the same (poor) random performance. And 16-wide is wider than is generally regarded as acceptable for safety (long resilvers).
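
A rough way to see both halves of that rule of thumb (sequential scales with data disks, random IOPS scale with vdev count). The per-disk figures are assumptions for illustration only:

```python
# First-order model: sequential throughput scales with the number of
# data disks, but small random I/O scales with the number of vdevs
# (each raidz vdev behaves roughly like one disk for random IOPS).

disk_mb_s, disk_iops = 200, 150   # assumed HDD sequential speed / random IOPS

def pool_estimate(vdevs, width, parity):
    seq_mb_s = vdevs * (width - parity) * disk_mb_s
    iops = vdevs * disk_iops
    return seq_mb_s, iops

for label, layout in [("1 x 16-wide raidz3", (1, 16, 3)),
                      ("2 x 8-wide raidz2",  (2, 8, 2))]:
    seq, iops = pool_estimate(*layout)
    print(f"{label}: ~{seq} MB/s sequential, ~{iops} random IOPS")
```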


Found it, thanks.

And I suppose that makes sense, but my server is used for storing and retrieving backups, so sequential performance is what I’m after.

Also, if you’ll forgive my lack of knowledge, why would the resilver be longer in a wider array?

I figure having more drives available with healthy data would make resilvering faster.

With 16 disks, I would have made two 8-disk raidz2 vdevs.

In your situation, I would add a second 8-disk raidz3 vdev. Though it will be a bit lopsided and suboptimal at first, since half the data is already written to the existing vdev. Future writes will be better distributed.
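
If it helps, here is a quick usable-space comparison of the layouts floated in this thread. It is only a sketch: the 4 TB drive size is an assumption, and ZFS metadata/padding overhead is ignored:

```python
# Quick usable-space comparison of the layouts in this thread,
# ignoring ZFS overhead. Drive size is an assumption for illustration.

drive_tb = 4
layouts = {
    "1 x 16-wide raidz3": (1, 16, 3),
    "2 x 8-wide raidz2":  (2, 8, 2),
    "2 x 8-wide raidz3":  (2, 8, 3),
}

for name, (vdevs, width, parity) in layouts.items():
    data_disks = vdevs * (width - parity)
    parity_disks = vdevs * parity
    print(f"{name}: ~{data_disks * drive_tb} TB of data space, "
          f"{parity_disks} drives' worth of parity")
```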

What follows is just my assumption:
Making the vdev wider will resilver slower because more disks need to be touched/calculated per read/write operation. I’ve definitely noticed better resilver/read/write performance from more/narrower vdevs than fewer/wider vdevs.

As stated above, think of expanders like network switches. On MOST of them you can use ANY port for input or output. SOME cards have designated ports, but most of them do not care. You can try it yourself if you like. I like the 82885T because you can pick one up for $25–30 on eBay or about $50 on Amazon. It’s 12G, which applies NOT just to the connection to the drives but also to the expander’s uplink. Meaning if you have older SATA drives, which for SATA 3 are only 6G, the aggregate of those drives can still potentially run at 12G, which is good since my 9305 is a 12G HBA. But I digress… the HBA is only about 8 GB/s anyway because of the PCIe slot, which to me is silly…

This is a good video showing port designation.

I have used both internal and external ports.

Currently I am using the 9305-16e with SFF-8644 cables going from my controller (the TrueNAS server) to my storage arrays. Another reason I like the 82885T is that it can run standalone without a motherboard, since it has a 4-pin Molex port you can power it from.

They have 2 external ports, so you can use one for input like in my case and the second to go to another case, or take your input from the HBA on an internal port and use both external ports to go to 2 other cases. Go wild!

I agree with this; if you are using RAIDZ3 then perhaps 9 drives. I currently run 8 per vdev in RAIDZ2. I found this simple for drive placement, since most cases arrange drives in groups of 4: my RPC-4224 holds 24, so I have 3 sets of 8. When I load them, I do the placement first and note the serials for each set, so I can track a drive’s location easily if there is an issue. Then I do them in blocks of eight.


Are you suggesting “RAID 10”?

That could work, and provide extra redundancy.

Are the real-world writes in that situation also faster than a single vdev?

No. RAID 10 is striped mirrors.

I recommended a second raidz3 vdev.

So more like “RAID 70” if there were such a thing as triple-parity RAID 7. :wink: