Is this the correct configuration?

Hi,
I bought a used NAS system with 4 bays with 2x4TB HD installed, 2xSamsung_SSD_860_EVO_250GB and two NVMEs (SSD 970 EVO 250GB, KBG40ZNT256G TOSHIBA). I use the NAS mainly for a home server with Nextcloud, Plex and some smaller dockers.

I configured the two 4TB drives as a mirror because there wasn't really any other option. I guess I can't just expand that pool by adding a third drive, but would have to add two more disks at the same time and then create a new pool. Is that correct?

The two SSDs are fairly slow, so I used those as a redundant (mirrored) boot device for the TrueNAS OS. I am not really sure if redundancy is overkill here. Maybe I should use only one SSD and then use the other as cache, but they are old and slow, so I don't think it's worth it?

I configured the two NVMe drives as a mirrored log (SLOG) for the first pool. They have slightly different sizes and different speeds, so I am not really sure if that is the best use for these two. Maybe I should use one as a read-only cache (L2ARC)?

What would you make out of this configuration?

No - not correct for several reasons.

  1. You can create a second mirror vDev on the two new disks and add it to the existing pool - you don’t need an extra pool. This gives you protection against a single drive failure in each vDev, with 50% of the space used for redundancy. The new disks don’t need to be 4TB.

  2. You can create a new RAIDZ pool with the new disks (several options, depending on whether you use a pre-release EE version or not) - possibly also detaching one drive from the mirror during migration to use in the new pool - and once the data is migrated, you can add the last old disk to the new pool (again several options, depending on whether you want to wait to install EE or not). You would end up with a single RAIDZ pool spanning all four drives.

In DF (Dragonfish, the current TrueNAS SCALE release) the options are:

  1. Create a 3-drive RAIDZ1 using the 2 new drives and a file-based pseudo-drive, then offline the pseudo-drive, and after migration replace the offlined pseudo-drive with a real drive to have a redundant 3-drive RAIDZ1 - then wait for EE to add the 4th drive to the vDev (see the sketch after this list). Do make sure the pseudo-drive is a sparse file slightly smaller than the old drives - if the sparse file is too big, and the new drives are even a few blocks bigger than the old ones, then you won’t be able to put the old drives in.
  2. Detach one drive from the existing mirror (leaving it temporarily non-redundant) and create a 4-drive RAIDZ2 using the 3 real drives and a file-based pseudo-drive, then offline the pseudo-drive, and after migration replace the offlined pseudo-drive with a real drive to have a redundant 4-drive RAIDZ2.
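
Since the pseudo-drive trick trips people up, here is a minimal sketch of the underlying ZFS commands for option 1. The device names, the sparse-file size and the pool name tank2 are all placeholders - on TrueNAS you would normally do as much of this as possible through the UI:

```
# All device names below are hypothetical - check yours with lsblk.
# 1. Create a sparse file slightly SMALLER than the old 4TB drives:
truncate -s 3700G /root/fake-drive

# 2. Build the RAIDZ1 from the two new drives plus the file-based pseudo-drive:
zpool create tank2 raidz1 /dev/sdc /dev/sdd /root/fake-drive

# 3. Offline the pseudo-drive straight away so no real data ever lands on it
#    (the pool runs DEGRADED but is fully usable):
zpool offline tank2 /root/fake-drive

# ... migrate the data from the old mirror pool here ...

# 4. Replace the offlined pseudo-drive with one of the freed-up old drives:
zpool replace tank2 /root/fake-drive /dev/sda
```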

In EE (ElectricEel) the options are as above, plus:

  1. Create a 2-drive RAIDZ1 using the 2 new drives, migrate your data, and then add the two old drives one-by-one to the vDev (see the sketch after this list). Make sure that the new drives are not even a single block bigger than the old drives, otherwise you won’t be able to add the old drives into this new pool.
  2. Detach one drive from the existing mirror and create a 3-drive RAIDZ2 using the 3 real drives, migrate your data, and then add the fourth drive to the vDev.
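
And a sketch of what the EE route looks like underneath, again with placeholder names - EE's RAIDZ expansion works by attaching a new drive to an existing RAIDZ vDev:

```
# Option 1: start with a 2-drive RAIDZ1 on the new disks (placeholder names):
zpool create tank2 raidz1 /dev/sdc /dev/sdd

# ... migrate the data, then wipe and free the old drives ...

# Expand the vDev one drive at a time; raidz1-0 is the vDev name as shown
# by 'zpool status tank2'. Wait for each expansion to finish before the next:
zpool attach tank2 raidz1-0 /dev/sda
zpool attach tank2 raidz1-0 /dev/sdb
```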

Option 2 would be my recommendation.
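
For reference, if you instead went with option 1 from the first list (a second mirror vDev in the existing pool), the underlying operation is a single command - placeholder names again, and the TrueNAS UI can do the same thing via pool expansion:

```
# Add a second mirror vDev to the existing pool (placeholder names):
zpool add tank mirror /dev/sde /dev/sdf

# The pool now stripes new writes across mirror-0 and mirror-1:
zpool status tank
```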

Good choice. Mirroring will improve your availability if one drive has problems.

The best use for these NVMe drives will depend on what you are planning to use the server for. I don’t have enough information to make a definitive recommendation. But in general…

If you need a higher performance solution for part of your use case, then in the vast majority of cases, creating a separate mirrored NVMe pool is likely to be a better solution than using these NVMe drives for any of the specialised vDev types (SLOG, L2ARC, special Metadata).

An NVMe SLOG is only useful for synchronous writes to HDD. Synchronous writes are only needed for specific use cases, and unless the data is bigger than the NVMe drive, you will usually be better off putting the data vDev itself on the NVMe drive instead.
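
If you want to check whether your datasets are even issuing the synchronous writes a SLOG would accelerate, the dataset's sync property is the first thing to look at - tank/nextcloud is a placeholder name:

```
# 'standard' = only writes the application explicitly requests as synchronous
# go through the ZIL (and therefore a SLOG); 'always'/'disabled' override that.
zfs get sync tank/nextcloud
```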

L2ARC is only beneficial when you have already maxed out your memory and have such a vast amount of active data on HDD that normal ARC hit rates fall below roughly 99.5%.
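
You can check your current ARC hit rate before deciding - arc_summary ships with OpenZFS, and the raw counters live in /proc/spl/kstat/zfs/arcstats:

```
# Human-readable ARC report, including hit ratios:
arc_summary | grep -i "hit ratio"

# Or compute the overall hit rate yourself from the kernel counters:
awk '/^hits/ {h=$3} /^misses/ {m=$3} END {printf "%.2f%%\n", 100*h/(h+m)}' \
    /proc/spl/kstat/zfs/arcstats
```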

A special metadata vDev creates an additional point of failure for the pool - and again it is probably only beneficial when metadata read performance is critical straight from boot (before the ARC has warmed up), or you have such a vast amount of HDD storage that the metadata cannot be kept in ARC.
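
That extra point of failure is also why, if you ever do add one, a special vDev must itself be redundant - a sketch with placeholder device names:

```
# A special metadata vDev MUST be mirrored: if this vDev dies, the pool dies.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1
```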

So, in all probability, your proposed use of the NVMe drives for SLOG / L2ARC is the wrong choice here.

Let us know your intended use of this server, and we can advise further.

(My NAS has an ancient 2-core processor and only 10GB of memory (4-5GB ARC), and with 16TB of storage space I still get > 99.5% ARC hit rate, and can still max out my 1Gb network.)


Thanks for the very detailed answer. I decided to wipe the drives and start over with RAIDZ1, adding another 4TB drive. I will use the system as a home NAS with Nextcloud, Plex and a few Docker containers.

I am still not sure what to do with the two NVMe drives; they are too small to make another pool out of them, so I think cache is the best option.

Personally I think these are a perfect size to use as a separate mirrored NVMe pool, storing the iX applications, the Docker apps, and the metadata for those applications, e.g. the Plex metadata. This will, for example, help the Plex app on your TV feel much more responsive.
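
As a sketch, the pool itself is just a two-way NVMe mirror - placeholder device, pool and dataset names below, and on TrueNAS you would create it in the UI and then point the Apps service at it:

```
# Mirrored NVMe pool for apps (placeholder names):
zpool create apps mirror /dev/nvme0n1 /dev/nvme1n1

# A dataset per concern keeps snapshots and replication tidy, e.g.:
zfs create apps/docker
zfs create apps/plex-metadata
```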


Totally agree with protopia - 250GB is not a poor choice for a dedicated app pool, and putting cache where it is not needed can be worse than not having it at all.
If you need more space in the future, you can always upgrade the pool easily by swapping one NVMe at a time and resilvering (a really fast operation on NVMe) - see the sketch below.
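
A sketch of that upgrade path, assuming a pool named apps and placeholder device names - with autoexpand on, the extra capacity appears once both disks have been replaced:

```
# Let the pool grow automatically once every disk in the vDev is larger:
zpool set autoexpand=on apps

# Swap one NVMe at a time, waiting for each resilver to finish
# (watch 'zpool status apps') before touching the second drive:
zpool replace apps /dev/nvme0n1 /dev/nvme2n1
zpool replace apps /dev/nvme1n1 /dev/nvme3n1
```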