Short answer: YES. Long answer: IT DEPENDS.
I am running 48 SSDs split across 2 expanders on one card.
SAS-3 connections run at 12 Gb/s per lane, with 4 lanes per cable, so a combined bandwidth of 48 Gb/s per cable. My 9305-16e has four ports, for a combined bandwidth of 192 Gb/s. Now of course, it only sits in a PCIe 3.0 x8 slot, which maxes out around 8 GB/s, so…
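To make those numbers concrete, here is a rough back-of-envelope sketch of the same math. The encoding-efficiency figure (8b/10b on SAS-3) and the ~7.9 GB/s PCIe 3.0 x8 ceiling are my assumptions, not vendor specs:

```python
# Rough link-bandwidth math for the HBA described above (illustrative only).
SAS3_LANE_GBPS = 12        # SAS-3 line rate, gigabits per second per lane
ENCODING_EFFICIENCY = 0.8  # 8b/10b encoding leaves ~80% for payload
LANES_PER_CABLE = 4        # one wide SAS cable carries 4 lanes

def cable_throughput_gbytes():
    """Usable GB/s of one 4-lane SAS-3 cable."""
    return SAS3_LANE_GBPS * ENCODING_EFFICIENCY * LANES_PER_CABLE / 8

def hba_throughput_gbytes(ports=4, pcie_limit_gb=7.9):
    """Aggregate SAS bandwidth of a 4-port HBA, capped by its PCIe 3.0 x8 slot."""
    return min(ports * cable_throughput_gbytes(), pcie_limit_gb)

print(cable_throughput_gbytes())  # 4.8 GB/s usable per cable
print(hba_throughput_gbytes())    # 7.9 GB/s: the PCIe slot, not SAS, is the ceiling
```

The point the math makes: the SAS side of the card can move far more data than the PCIe slot behind it, so the slot is the real limit.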
Now, if you have 4 drives hooked straight to the HBA, you are limited to the max speed of each drive on its own connection. Even with a newer SSD you're only going to get 400-600 MB/s per drive, so if you have, say, 4 SSDs hooked directly to an HBA like the 9305-16i, you're only going to get around 2.4 GB/s total, all hypothetical of course. NOW, say you use an expander. I use Adaptec 82885T 12G units.
Think of an expander kind of like a network switch. Imagine your HBA, with all that throughput, as a 10G connection hooked to a PC that only has a 1G connection: you are limited to the bandwidth of the slowest link, 1G. BUT when you use an expander, it works like a switch: you are NOW sharing the BIG 10G pipe with MANY 1G connections, which lets each of them pull as much as possible until you saturate that 10G uplink. Think of expanders like that.
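The switch analogy can be sketched as a one-line model: the aggregate through an expander is the sum of the drive speeds, capped by the uplink back to the HBA. The per-drive speed and the ~4.8 GB/s usable uplink figure are assumptions for illustration:

```python
# Sketch of the expander-as-switch model (numbers are illustrative assumptions).
def expander_throughput(n_drives, drive_speed_gb=0.5, uplink_gb=4.8):
    """Aggregate throughput through one expander: sum of the drives,
    capped by the 4-lane SAS-3 uplink (~4.8 GB/s usable) back to the HBA."""
    return min(n_drives * drive_speed_gb, uplink_gb)

for n in (4, 8, 12, 24):
    print(n, "drives ->", expander_throughput(n), "GB/s")
# 4 drives use 2.0 GB/s; by ~10 drives the 4.8 GB/s uplink is saturated
```

So a handful of SSDs shares the pipe comfortably, and adding more drives keeps filling it until the uplink itself is the bottleneck.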
In the case of a 9305-16i/e you can hook up 16 drives and get good speed depending on the drives, or why not hook up 4 expanders, attach MANY more drives, and use the pipe for what it can really do?
I use 2 cases (Norco RPC-4224), each filled with 24 4TB SSDs, in 3 vdevs of 8 SSDs each. Running through two expanders to my 9305-16e, I get INSANE throughput. I have to run a 40G NIC to keep up with it, since with just 1 expander I can saturate a 10G NIC.
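A quick sanity check of those NIC claims, using the same assumed numbers as before (~0.5 GB/s per SSD, ~4.8 GB/s usable per SAS uplink, ~7.9 GB/s PCIe 3.0 x8 cap; all are my assumptions, not measurements from this setup):

```python
# Back-of-envelope check: can the pool outrun a 10G or 40G NIC? (illustrative)
def gbe_to_gbytes(gbit):
    """Usable GB/s of an Ethernet link, ignoring protocol overhead."""
    return gbit / 8

# 24 SSDs behind one expander: limited by the single 4.8 GB/s SAS uplink
one_expander = min(24 * 0.5, 4.8)
# 48 SSDs behind two expanders: two uplinks, capped by the PCIe 3.0 x8 slot
two_expanders = min(48 * 0.5, 2 * 4.8, 7.9)

print(one_expander > gbe_to_gbytes(10))   # True: one expander floods a 10G NIC
print(two_expanders > gbe_to_gbytes(40))  # True: two can push past a 40G NIC
```

Even one expander's uplink (~4.8 GB/s) dwarfs a 10G NIC's ~1.25 GB/s, which is consistent with needing the 40G card.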
Now, that was just throughput. There are also parallel data access improvements. Say you have 1 drive; this applies more to HDDs than SSDs, but still to both. With just that one drive you have the latency of the arm swinging back and forth across the platters, which gives you pauses during read/write cycles. The more parallel drives you have, the better read/write performance you will get. Now, in raid5/z1, raid6/z2, or z3 you will see lower write performance because of parity calculations, but it will still be better than single-drive access.
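For streaming writes, a very rough rule of thumb is that the parity drives in each vdev don't carry user data, and ZFS stripes across vdevs. This sketch assumes raidz2 (the post doesn't say which parity level the 8-wide vdevs use) and ~0.5 GB/s per SSD; real numbers depend heavily on recordsize, caching, and workload:

```python
# Very rough streaming-write model for raidz vdevs (illustrative assumptions only).
def vdev_write_gb(n_drives, parity, drive_speed_gb=0.5):
    """Streaming write of one raidz vdev: parity drives hold no user data."""
    return (n_drives - parity) * drive_speed_gb

def pool_write_gb(n_vdevs, n_drives, parity, drive_speed_gb=0.5):
    """ZFS stripes across vdevs, so throughput scales with vdev count."""
    return n_vdevs * vdev_write_gb(n_drives, parity, drive_speed_gb)

print(vdev_write_gb(8, 2))     # one 8-wide raidz2 vdev: ~3.0 GB/s streaming write
print(pool_write_gb(3, 8, 2))  # three such vdevs per case: ~9.0 GB/s on paper
```

The takeaway matches the post: parity costs you some write speed per vdev, but stacking vdevs multiplies it right back.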
So in short, more drives, more vdevs, better performance!