Since expanding pools is now an option, I guess it works out either way. Expect some minimal space loss, and expect the GUI to struggle to show the correct capacity after expanding, unless you rebalance afterwards.
I did find this interesting post where they show that the space efficiency of 6-, 7-, and 8-wide z2 vdevs is all 66.6%. Not sure if anything has changed since then to make that untrue (or whether the analysis is accurate):
A six-wide z2 used to be considered one of the sweet spots regarding performance, capacity lost to parity, and so on. IIRC, ashift factored into it as well? But for a small SOHO system, who cares?
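For what it's worth, the 66.6% figure falls out of the RAIDZ allocation math: parity sectors are added per data row, and each allocation is padded up to a multiple of (parity + 1) sectors so ZFS never leaves unusably small free gaps. Here's my own back-of-the-envelope sketch, assuming ashift=12 (4K sectors) and 128K records, ignoring compression and metadata:

```python
import math

def raidz_efficiency(width, parity, record_kib=128, sector_kib=4):
    """Rough RAIDZ space efficiency for a single record, assuming:
    - parity sectors = parity * ceil(data_sectors / (width - parity))
    - the total allocation is padded to a multiple of (parity + 1)
    This mirrors the commonly cited RAIDZ allocation behavior; it is a
    simplification, not a full model of OpenZFS internals."""
    data = record_kib // sector_kib           # data sectors per record
    rows = math.ceil(data / (width - parity)) # logical stripe rows needed
    total = data + parity * rows              # data + parity sectors
    pad = (-total) % (parity + 1)             # padding to a multiple of p+1
    return data / (total + pad)

for w in (6, 7, 8):
    print(f"{w}-wide z2, 128K records: {raidz_efficiency(w, 2):.1%}")
```

Under these assumptions the 6- and 7-wide cases both land at about 66.7%, which matches the post; larger recordsizes (like 1M) shrink the padding overhead considerably, which is part of why the "sweet spot" advice has softened.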
I’d focus on whatever gets you the performance and capacity you need, ideally with one drive bay left open. That empty bay becomes very useful if you ever have to replace a drive, pre-qualify a new drive (bad blocks testing, etc.), and so on.
I run a 7-wide z2 array of HDDs (with a metadata-only L2ARC SSD). It is fine.
Having read the linked article, I should add that I’m using it with a 1M recordsize and lz4 compression for mostly WORM data, but that can include tens of thousands of small files and it performs well reading them. My VMs live on a 3-wide mirror. Larger mixed IO, including app storage, lives on a 6-wide z2 SSD array.
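In case it helps anyone replicate that setup, the recordsize and compression properties are set per dataset. A minimal sketch, with hypothetical pool/dataset names (adjust to your own layout):

```shell
# Hypothetical names: "tank" pool, "media" dataset for WORM-style data.
zfs create tank/media
zfs set recordsize=1M tank/media      # large records suit write-once data
zfs set compression=lz4 tank/media    # lz4 is cheap on modern CPUs

# Verify what got applied:
zfs get recordsize,compression tank/media
```

Note that recordsize only affects files written after the property is set, so changing it on an existing dataset won’t rewrite old data.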