Adding drives to RAIDZ2

My old NAS died, so I set up a new one (Jonsbo N3) using RAIDZ2 with the 4 x 4 TB drives from the old NAS, after doing full surface tests on them. Then I added 4 more 4 TB drives. I also have a 120 GB SSD cache drive in the system.

My question is: what is the best way to configure the 8-drive setup? Do I add the new drives to the existing pool to expand it, add spares, etc., or do I drop the pool and start again?

Thanks
Steve

Yes - any of the above options could be the right one, depending on your detailed circumstances.

Thanks mate, I am thinking I could add 2 or 3 drives to the pool and keep at least 1 as a hot spare.

I mainly use it for images and a Plex server, though I am thinking of using it as a Time Machine backup as well.

So it seems dedup, a metadata vdev, and a log device are not really of much use to me.

Thanks again
Steve

With RAIDZ you can now expand a vdev by adding drives one at a time. It works slowly in ElectricEel (EE); it will work much faster in Fangtooth (FT).

You can add (or remove) a hot spare at any time.
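For reference, a sketch of the commands involved - the pool name `tank` and the device names here are hypothetical, and the SCALE UI can do the same from the Storage screen:

```shell
# RAIDZ expansion (OpenZFS 2.3+ / Fangtooth): attach one new disk to an
# existing raidz2 vdev. Repeat once per added drive; each expansion must
# finish before the next one starts.
zpool attach tank raidz2-0 /dev/sda

# Add or remove a hot spare at any time:
zpool add tank spare /dev/sdb
zpool remove tank /dev/sdb

# Watch expansion / resilver progress:
zpool status tank
```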

Broad-brush rules of thumb:

SLOG is only needed for reasonably active zvols, virtual disks, iSCSI and database files - and these should be on mirrors rather than RAIDZ. All other I/O can be on RAIDZ and can have sync set to disabled (from the default of standard). A SLOG needs to be on significantly faster technology than the data drives to be worthwhile.
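Concretely, for the media/photo datasets that can mean something like the following (the dataset name is hypothetical):

```shell
# Default is sync=standard; disabled makes all writes asynchronous.
# Only do this for data where losing the last few seconds on a power
# cut is acceptable - fine for media, not for databases or VM disks.
zfs set sync=disabled tank/media
zfs get sync tank/media
```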

A metadata vdev can be beneficial in some circumstances, but not for your use case, which is all inactive at-rest data.

Dedup is not recommended except in the most specific of circumstances. This may change in Fangtooth which has the new much faster Dedup functionality - but we need to wait and see what people’s experiences are with this. (However the most common use case seems to be extensive cloned virtual drives.)


Thanks again, all very clear now 🙂

If I’m understanding correctly, you have 8 disks, and would like to use RAIDZ2.

If you have a 4-wide raidz2 you have 50% storage efficiency.

If you add another 4w raidz2 VDev you still have only 50% storage efficiency.

You could expand the original 4-wide vdev to 8-wide, one drive at a time, which will take a long time depending on how full the pool is, and the result will not be space-optimal (data written before the expansion keeps the old data-to-parity ratio until it is rewritten).

The optimal solution would be to simply rebuild the pool as 8-wide from the get-go and restore the data, assuming you can copy it off. This would provide 75% storage efficiency.

Assuming storage efficiency is more important than iops.
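The arithmetic behind those percentages, as a quick sketch - raidz2 always spends two drives' worth of space on parity per vdev:

```shell
#!/bin/sh
# Usable fraction of an N-wide raidz2 vdev is (N - 2) / N.
for width in 4 8; do
  pct=$(( (width - 2) * 100 / width ))
  echo "raidz2 ${width}-wide: ${pct}% usable"
done

# Whole-pool totals for eight 4 TB drives:
echo "2 x 4-wide raidz2: $(( 2 * (4 - 2) * 4 )) TB usable of 32 TB"
echo "1 x 8-wide raidz2: $(( (8 - 2) * 4 )) TB usable of 32 TB"
```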


Thinking about it, IOPS is probably more important to me; loading and saving Photoshop documents is pretty disk intensive.

I do have backups of my content, so a pool rebuild is not out of the question, and from what I have read about performance, a stripe of mirrors seems to be best and would still leave plenty of storage space.

I am finding SCALE to be really good and much easier to use compared to CORE!

Then you don’t understand IOPS. Loading and saving entire files from Photoshop will work brilliantly on RAIDZ. The IOPS focus matters when you are doing a lot of parallel small random reads and writes, not when you are doing single-user, i.e. single-threaded, sequential operations on entire files (which ZFS should be able to read and write at full disk speed).

Sub-question: What do you actually mean by “cache”?

I assume you mean L2ARC. Do you have >= 64GB of memory? If not, this may well negatively impact your performance, because the L2ARC will use more memory than it saves. If you do have >= 64GB, then with the size of your pool and mostly sequential I/O you are probably already caching everything you need without L2ARC.

Either way, it probably isn’t doing anything positive for your performance.
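One way to check rather than guess - `arc_summary` and `arcstat` ship with OpenZFS on SCALE:

```shell
# ARC size and hit ratio - a high hit ratio means RAM alone is coping.
arc_summary -s arc

# L2ARC hit ratio and the ARC memory consumed by its headers.
arc_summary -s l2arc

# Live hit/miss rates, sampled every 5 seconds.
arcstat 5
```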

As an Oracle DBA of around 40 years' experience, I do have a bit of understanding of IOPS, though obviously not in this situation.

All the reading I have been doing suggests RAID10 is faster than RAIDZ, but there is quite a bit of conflicting/differing information, and as you point out it depends on what operations you are performing.

Seems like RAIDZ2 is optimal for my application.

Yes, I do mean L2ARC, and I have 32 GB of memory. I added it because I had the 120 GB SSD lying around. Again, there are a few different opinions about this online.

I noticed a difference in the allocation of RAM when I added this drive.

I guess I could run some speed tests with and without to see if it is having any effect.

Thanks
Steve

BTW, this was one of my references for RAIDZ speeds, though it does not take the type of I/O into consideration.

I tried a simple copy speed test using the rsync --progress command, repeated 3 times.

Copying a 290 MB Photoshop document:

With L2ARC … write = 52 MB/s, read = 58 MB/s
No L2ARC … write = 44 MB/s, read = 50 MB/s

Not very extensive testing, but the L2ARC definitely seems to improve things a little.
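One caveat on the method: a 290 MB file that was just written will still be sitting in the 32 GB ARC, so the read-back mostly measures RAM and the network path, not the disks. A fio run with a data size larger than RAM gives a fairer picture (the /mnt/tank/test path is hypothetical):

```shell
# Sequential write then read, 1 MiB blocks, 64 GiB of data so the
# 32 GB ARC cannot hold the whole working set.
fio --name=seqwrite --directory=/mnt/tank/test --rw=write \
    --bs=1M --size=64G --group_reporting
fio --name=seqread --directory=/mnt/tank/test --rw=read \
    --bs=1M --size=64G --group_reporting
```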