Optane P4801x 350GB Overprovisioning help

My TrueNAS has 2 pools and I have 2 Optanes which I want to use as SLOGs.

The idea is to create 2x 32GB SLOG partitions on each Optane and mirror them to each other, so that we effectively have 2 "mirrored" SLOGs (not 2 devices per pool; both Optanes would serve both pools, each via a separate SLOG partition).

I am a little confused about the process of overprovisioning. The drives are 350GB out of the box, so we will only be using 64GB of each. That leaves a fair amount of overprovisioning and gives each pool a 32GB SLOG, which I understand should be more than enough.

What is the process for doing this? Looking at some resources it seems as simple as creating 2 partitions per drive and leaving the rest unallocated (unpartitioned free space), which already counts as overprovisioning, but I want to make sure this is the correct direction.

You can also do it using the Intel Data Center Tool / Intel MAS, which will "physically" reduce the reported size of the drive.

I think this is the right URL

https://www.intel.com/content/www/us/en/download/19520/intel-memory-and-storage-tool-cli-command-line-interface.html
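As a rough sketch (verify the exact syntax against the Intel MAS documentation for your version; the drive index 0 below is an assumption), the overprovisioning step would look something like this:

```
# List attached Intel SSDs and note the index of the Optane to resize
intelmas show -intelssd

# Reduce the usable capacity to 64GB (index 0 assumed here; check
# "intelmas help" for the exact MaximumLBA syntax your version accepts --
# it can usually also be given as a percentage or an LBA count)
intelmas set -intelssd 0 MaximumLBA=64GB
```

Changing MaximumLBA is destructive to anything already on the drive, so do it before partitioning and putting the drive into service.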


nvme or nvmecontrol might even be able to do it from TrueNAS itself.
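If the drive supports NVMe namespace management, something along these lines with nvme-cli (on a Linux-based system) could achieve the same effect. Treat it purely as a sketch: many Optane models only expose a single fixed namespace, in which case the Intel tool above is the way to go.

```
# Check whether the controller supports namespace management at all
nvme id-ctrl /dev/nvme0 -H | grep -i "NS Management"

# Controller ID, needed for attach-ns below
nvme id-ctrl /dev/nvme0 | grep cntlid

# Destroys all data: delete the existing namespace, recreate it at 64GB
# (134217728 x 512-byte LBAs) and attach it to the controller
nvme delete-ns /dev/nvme0 -n 1
nvme create-ns /dev/nvme0 --nsze=134217728 --ncap=134217728 --flbas=0
nvme attach-ns /dev/nvme0 --namespace-id=1 --controllers=<cntlid>
```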

The question remains: Why do you want to set up two drives as two pairs of mirrored partitions? (And do you need a SLOG in the first place…)

AFAIK, partitioning a drive and assigning a given partition to a particular pool still cannot be done from the GUI, only from the CLI - the GUI only allows complete disks to be assigned to a given task. Whether it’s a good idea or not is a different question.

Is overprovisioning via the Intel tool all that helpful vs. just letting the Optane wear leveling do its thing?

The primary storage is used for NFS VHD sharing (with sync writes, of course). The underlying storage is SSD, but the Optanes have a much higher PBW rating and are of course much faster than the SSDs. My understanding is that there is always a ZIL (which lives in the pool unless a SLOG is added), so keeping the log separate from the underlying storage should improve performance.
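For reference, the sync behaviour of the dataset behind the NFS share can be checked and forced from the shell (the dataset name "pool1/vhd" is just a placeholder):

```
# "standard" honours the client's sync requests; "always" forces every
# write through the ZIL/SLOG regardless of what the NFS client asks for
zfs get sync pool1/vhd
zfs set sync=always pool1/vhd
```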

The idea with mirroring the SLOG is simply redundancy. We're limited by the available slots in the Dell R630 (so we cannot have a dedicated mirrored SLOG pair per pool), but as we have 2 separate pools per R630, the plan was to set up 2x 32GB partitions per Optane so that when we set up the vdev the partitions are mirrored across the two drives; in the event of an Optane failure (unlikely but always possible) we should be able to recover.

However, I am now not entirely sure whether this is possible at all, as it appears that a vdev takes an entire device rather than a partition, so I assume some magic would need to happen in the background?
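From what I have read so far, ZFS itself does accept partitions as vdev members from the CLI; assuming a Linux-based system and hypothetical pool/device names, I imagine it would look roughly like this:

```
# Two Optanes (nvme0n1, nvme1n1), each with two 32GB partitions;
# p1 of each drive forms pool1's mirrored log, p2 forms pool2's
zpool add pool1 log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
zpool add pool2 log mirror /dev/nvme0n1p2 /dev/nvme1n1p2

# Each pool should now show a "logs" section with a mirror vdev
zpool status pool1 pool2
```

Please correct me if that is off; as noted above, the GUI apparently only deals in whole disks.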

Worth noting that we have a secondary TrueNAS that we replicate to, but its storage is normal spinning drives, so for consistency we're installing Optanes in all of the TrueNAS machines so that we're "guaranteed" the best overall solution.

I am happy to get input here as my understanding is mostly based on theory, not implementation.

Thank you for the response and provided information.

What string of CLI commands did you use to split the drive into separate partitions? I have been reading the manual, and it appears that when I issue a format command it will clear the drive, which makes me worry that the main reason for the Optane would be undermined, since the cache and everything else would be affected. I come from a DISKPART environment (Windows) where I would just split the partitions and call it a day; however, I do fear that if this were done on a Windows system, would the filesystem remain once the drive is re-installed into the actual server hardware?

What would be the recommended path or manual to consult to resolve this?

PS: I'm new to this level of filesystems, and any and all information would be appreciated.

Mind that SLOG failure does not cause data loss, only performance loss as the system reverts to the in-pool ZIL. It takes an unclean shutdown AND the SLOG not coming back upon reboot to lose data.

If this is acceptable, just use each drive as a single-drive SLOG for a single pool and be happy in GUI land.

If you want to proceed with the mirrored double partitions, overprovision the drives first and then split the resulting 64 GB drives into two partitions.
There’s no point discussing commands if we don’t know what OS you run.
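Purely as an illustration: if it turns out to be TrueNAS SCALE (Linux) and the overprovisioned Optanes show up as /dev/nvme0n1 and /dev/nvme1n1 (names assumed), the split could look roughly like this:

```
# Wipe any existing partition table on the already-overprovisioned drive
sgdisk --zap-all /dev/nvme0n1

# Create two 32GB partitions, one per pool's SLOG
sgdisk -n 1:0:+32G -c 1:slog-pool1 /dev/nvme0n1
sgdisk -n 2:0:+32G -c 2:slog-pool2 /dev/nvme0n1

# Repeat for the second Optane, then add the partitions as mirrored
# log vdevs as sketched earlier in the thread
```

On CORE (FreeBSD) the equivalent would be gpart rather than sgdisk, which is exactly why the OS matters.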
