I have a pool made of two RAIDZ1 VDEVs, each made of four 8 TB SSDs.
I recently acquired 4 more of the same SSDs (I got a really good price on them, so I couldn’t turn the offer down), and I’m now wondering what the best use for them is.
Add a fresh RAIDZ1 VDEV to the 2 existing ones, or add 1 disk to each existing VDEV (so that they become 5-disk RAIDZ1 VDEVs) and keep 2 hot spares?
Capacity is not really an issue for now, so please don’t take that into consideration. Also, all my pools are backed up every night to cold storage (a Synology NAS), but I would like to be as close as possible to 100% uptime on this particular pool.
What led me to ask this question is that I’m torn between these two issues when expanding a pool:
When adding a new VDEV, existing data stays on the existing VDEVs until you run the rebalance script (which I’m a little scared to run, I must admit, for fear of destroying the data)
When adding a drive to an existing VDEV to expand it, the reported space is completely wrong, and you cannot do anything about it (unless something has changed recently?)
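For concreteness, the two options being weighed look roughly like this — a sketch only, where `tank` and the `sdX` device names are placeholders, and in-place RAIDZ expansion via `zpool attach` requires a recent OpenZFS release:

```shell
# Option A: add the four new SSDs as a third RAIDZ1 vdev.
# Existing data stays on the old vdevs until it is rewritten/rebalanced.
zpool add tank raidz1 sdi sdj sdk sdl

# Option B: widen each existing RAIDZ1 vdev by one disk
# (one attach per vdev; the vdev names come from `zpool status`)...
zpool attach tank raidz1-0 sdi
zpool attach tank raidz1-1 sdj

# ...and keep the remaining two disks as hot spares:
zpool add tank spare sdk sdl
```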
Also, I forgot to mention that I chose 2x 4-disk VDEVs in the first place because of resilvering times, which I want to be as short as possible in case of a disk failure.
So, in your opinion, what’s the best option here?
PS:
Bonus question about record size: as I understand it, after changing the record size on an existing dataset, data already present remains stored with the record size that was in effect when it was written
Is there a way to change it on the fly directly on the disks, or must I delete and re-copy the data after changing the record size?
To change the record size of existing data, you have to rewrite it (delete and restore, rebalance script, or zfs rewrite). That seems a bit over the top, especially with SSDs.
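As a sketch of what that looks like in practice — `tank/videos` is a placeholder dataset, and `zfs rewrite` is only present in recent OpenZFS releases:

```shell
# Record size only affects blocks written after the change...
zfs set recordsize=1M tank/videos
zfs get recordsize tank/videos

# ...so existing files must be rewritten to pick it up, e.g.
# recursively with zfs rewrite (where available):
zfs rewrite -r /mnt/tank/videos
```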
As for the main question: without knowing your use case and requirements (other than “not capacity”, so I assume the pool is far from full), not to mention the hardware (apparently, attaching 12 SSDs is not an issue…), it is difficult to give appropriate advice.
Then add a third vdev, do not rebalance and call it a day.
This seems to make little sense. If you’re going for hot spares, you might as well rebuild as two 6-wide RAIDZ2 VDEVs and improve resiliency.
Nothing has changed and NOTHING WILL CHANGE. The behaviour was purposefully baked into the OpenZFS code to ease the implementation of RAIDZ expansion; improving it is expected to be an endeavour of similar magnitude to implementing RAIDZ expansion in the first place, so expect a similar time to delivery, i.e. decades.
Ok, thanks for the input. Indeed, I wanted to achieve better transfer speeds, but I guess it’s a bit excessive
As for the HBA the SSDs are connected to, it’s a Broadcom 9400-16i
And the use case is ingesting and editing large video files (4K / 8K RAW / ProRes)
I guess you have a point there, that makes sense
Indeed, you’re totally right, I didn’t even consider this possibility
Ok, understood sir, so I won’t hold my breath for a fix anytime soon
PS: Sorry, I realize I didn’t properly thank you for this advice, so thanks a lot
Then adding a vdev is the natural solution. More IOPS, more throughput.
I suppose that the pool is emptied when projects are done, and moved to cold storage. That will eventually take care of rebalancing.
Curious about the compression settings you may be using on these big raw video files. CPU allowing, there may be an opportunity for extra gains here.
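As a minimal sketch of how to inspect and change that (dataset name is a placeholder):

```shell
# See what is currently in effect and the ratio achieved so far:
zfs get compression,compressratio tank/videos

# Switch to zstd; like record size, this affects newly written blocks only:
zfs set compression=zstd tank/videos
```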
Sure, now that I think about it, it totally makes sense
No, it’s not emptied; it depends on the type of videos.
Personal videos stay there to be accessed by the whole family (as do pictures [also edited from the pool], TV shows, movies… mostly large files)
Professional projects, on the other hand, are indeed moved to cold storage, then deleted from the pool.
So it won’t totally address the rebalancing “issue”
As for the compression settings, I assume you’re talking about Dataset compression settings and not video compression for export?
If so, I did not set anything (meaning I left the default, i.e. LZ4). But reading your comment, I assume I should have researched it a bit
The CPU is a 14600K; what type of compression could be beneficial (without hammering the CPU too much), and what kind of gain could I expect from it?
PS: I assume that if I change the compression setting, it will only apply to new files written to the dataset?
PS2: Obviously, 10GbE from the TrueNAS server to the main switch, then 10GbE to my workstation and 2.5GbE everywhere else; the pool is regularly accessed by 3-4 clients at the same time
And 192 GB of RAM, very useful for working on large files right after ingestion