Hi Everyone,
I’m writing this post to ask for some advice regarding a specific situation that is about to become…possible: what is the best way to add disks to a data vdev (once Electric Eel is released), along with a metadata vdev, and then rebalance to make the most effective use of the available resources?
The context is this:
I have a 6-wide raidz2 data vdev (IronWolf Pro 16TB drives, ST16000NT001). I have four additional drives (same make and model), plus two Samsung 990 PRO 1TB drives and one WD Blue SN570 1TB drive (WDS100T3B0C). The pool is used for storing a combination of large media files (nothing smaller than 300MB, ranging up to 20+GB) and medium-sized media files from 5MB to 120MB. The pool in its entirety is write once, read many; essentially a typical media pool.
However, there are also many additional files present, all of which are less than 1MB and are accessed regularly. I understand that these can be stored on a metadata vdev with the correct settings. There is a slight hiccup, though, in identifying how many such files there are (as per the guide from Constantin), as the record size for all datasets is currently set to 128K (I’m a noob, but I was more of a noob when I set this server up last year). That said, there will not be more than 50,000 of these files smaller than 1MB at present, and I would expect the number to increase to around 120,000 as the pool fills.
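For what it’s worth, a rough way to estimate that count is to go by the file sizes themselves rather than the ZFS block sizes, so the 128K recordsize doesn’t get in the way. Sketch only; /mnt/tank is a placeholder for wherever the datasets are mounted:

```sh
# Count files under 1 MiB by file size (the 128K recordsize doesn't matter here)
find /mnt/tank -type f -printf '%s\n' | awk '$1 < 1048576' | wc -l

# Total size of those files, to gauge how much special vdev space they would occupy
find /mnt/tank -type f -printf '%s\n' | awk '$1 < 1048576 { sum += $1 } END { printf "%.1f GiB\n", sum / 2^30 }'
```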
My thoughts from reading the guides are to set up a metadata vdev as a triple mirror (possibly a quad mirror using two WD Black 1TB drives and keeping the Blue as a spare, given the current concerns regarding Samsung 990 PRO drives, even though I haven’t seen these issues myself), with special_small_blocks=1M, and to set the record size to 2M. Whilst this is not space efficient for the NVMe drives, a significant number of the files are over 512KB in size, and the space/drives exist, so I might as well use them (unless this is dangerous, etc.).
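For reference, my understanding is that the CLI equivalent of that plan would look roughly like the sketch below (the GUI would normally handle all of this; the pool name “tank”, the dataset “tank/media”, and the device paths are placeholders, not my actual layout):

```sh
# Sketch only -- pool, dataset, and device names are placeholders
# Add a three-way mirrored special (metadata) vdev:
zpool add tank special mirror \
    /dev/disk/by-id/nvme-drive-A /dev/disk/by-id/nvme-drive-B /dev/disk/by-id/nvme-drive-C

# Route blocks of 1M and smaller to the special vdev; keep large records on the raidz2:
zfs set special_small_blocks=1M tank/media
zfs set recordsize=2M tank/media    # recordsize above 1M needs a reasonably recent OpenZFS

# Note: special_small_blocks has to stay below recordsize, otherwise every new write
# (big media files included) would land on the NVMe mirror instead of the raidz2.
```

Both properties only affect newly written data, which is why the rebalance step below matters.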
However, to save writing the data twice (as I am in no rush), would there be an issue with adding a metadata vdev such as a quad mirror at the same time as adding the four additional disks, and then running the rebalance script (is it the markusressel one, zfs-inplace-rebalancing)?
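As far as I can tell those are independent operations, so the sequence would be something like this (sketch only; “tank”, “raidz2-0”, and the device paths are placeholders, the GUI would normally drive the expansion, and I may not have the rebalance script’s exact usage right):

```sh
# Widen the existing raidz2 one disk at a time (raidz expansion in Electric Eel):
zpool attach tank raidz2-0 /dev/disk/by-id/ata-ST16000NT001-new1
# ...wait for each expansion to finish before attaching the next disk

# Then rewrite the existing data in place so it picks up the new width and the new
# recordsize / special_small_blocks settings:
git clone https://github.com/markusressel/zfs-inplace-rebalancing.git
cd zfs-inplace-rebalancing
./zfs-inplace-rebalancing.sh /mnt/tank/media
```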
This raises the question of whether the other possible methods are better: back up, destroy the storage pool, recreate it from scratch as a 10-wide raidz2, and reload the data (does not need Electric Eel); or create a new 4-wide raidz2 with the metadata vdev and transfer the files over in situ, before decommissioning the 6-wide pool and then adding its six disks (requires Electric Eel).
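For the second option, I assume the transfer itself would just be replication, something along these lines if I have it right (sketch only; “tank” is the existing 6-wide pool and “tank2” the new 4-wide pool with the special vdev):

```sh
# Sketch only -- pool names are placeholders
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -u tank2/migrated

# Caveat: send -R carries the old dataset properties and block sizes across, so the
# copied data would not automatically use the new 2M records / small-block routing;
# a file-level copy (rsync) or a rebalance pass afterwards would handle that.
```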
Ultimately, I think this is a relatively minor issue for me, as it is more of a “because I can” scenario than a need, but it does highlight a couple of scenarios that iX might want to consider if any form of GUI is put in place for vdev disk addition and rebalancing.
Any advice or comments are greatly appreciated.