Optane P4801x 350GB Overprovisioning help

My TrueNAS has 2 pools, and I have 2 Optanes which I want to use as SLOGs.

The idea is to create 2x 32GB SLOG partitions on each Optane and mirror them to each other, so that we effectively have 2 "mirrored" SLOGs (albeit not 2 per pool; both Optanes would serve both pools, via separate SLOG partitions).

I am a little confused about the process of overprovisioning. The drives out of the box are 350GB, so we will only be using 64GB of each 350GB. This provides a fair amount of overprovisioning and gives each pool a 32GB SLOG, which I understand should be more than enough.

What is the process for doing this? Looking at some resources, it seems as simple as having 2 partitions per drive; since the rest is unallocated (not partitioned, i.e. free space), that already counts as overprovisioning. I just want to make sure this is the correct direction?
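For what it's worth, leaving the remainder unallocated is the usual way to overprovision at the partition level. A minimal sketch of what that could look like with `sgdisk` — the device name `/dev/nvme0n1`, the sizes, and the BF01 (ZFS) type code are assumptions; verify the device with `lsblk` first, and note the first command is destructive:

```shell
# Wipe the existing partition table (DESTRUCTIVE -- double-check the device!)
sgdisk --zap-all /dev/nvme0n1

# Two 32 GB partitions; the remaining ~286 GB stays unallocated as overprovisioning.
sgdisk -n 1:0:+32G -t 1:BF01 /dev/nvme0n1   # SLOG partition for pool 1
sgdisk -n 2:0:+32G -t 2:BF01 /dev/nvme0n1   # SLOG partition for pool 2

# Print the resulting table to confirm the layout.
sgdisk -p /dev/nvme0n1
```

Repeat on the second Optane so each pool gets one partition from each drive.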

You can also do it by using the Intel data center tools / Intel MAS, which will "physically" reduce the size of the drive…

I think this is the right URL:

https://www.intel.com/content/www/us/en/download/19520/intel-memory-and-storage-tool-cli-command-line-interface.html


`nvme` or `nvmecontrol` might even be able to do it from TrueNAS itself.

The question remains: why do you want to set up two drives as two pairs of mirrored partitions? (And do you need a SLOG in the first place…?)


AFAIK, partitioning a drive and assigning a given partition to a particular pool still cannot be done from the GUI, only from the CLI - the GUI only allows complete disks to be assigned to a given task. Whether it’s a good idea or not is a different question.

Is overprovisioning via the Intel tool all that helpful vs. just letting the Optane wear leveling do its thing?

The primary storage is used for NFS VHD sharing (with sync writes, of course). The underlying storage is SSD, but the Optanes have a much higher PBW rating and are of course much faster than the SSDs. My understanding is that there will always be a SLOG / ZIL (which is part of the pool), so keeping the SLOGs separate from the underlying storage should have a positive performance impact.

The idea with mirroring the SLOG is simply redundancy. We're limited by the available slots in a Dell R630 (so we cannot have dedicated mirrored SLOGs per pool), but as we have 2 separate pools per R630, the idea was to set up 2x 32GB partitions per Optane so that when we set up the vdev we can mirror the data across the two drives. In the event of an Optane failure (unlikely, but always possible) we should be able to recover.

I am however now not entirely sure whether this is possible at all, as it appears that the vdev itself takes an entire device rather than a partition, so I assume some magic would need to happen in the background?
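If the partitions exist, attaching them as a mirrored log vdev is a per-pool CLI operation, one partition from each Optane. A hedged sketch — the pool names and partuuids below are placeholders, not real values:

```shell
# Mirrored SLOG for pool1: partition 1 of each Optane.
# The by-partuuid paths are hypothetical; find the real ones under /dev/disk/by-partuuid/.
zpool add pool1 log mirror \
  /dev/disk/by-partuuid/1111-aaaa /dev/disk/by-partuuid/2222-bbbb

# Mirrored SLOG for pool2: partition 2 of each Optane.
zpool add pool2 log mirror \
  /dev/disk/by-partuuid/3333-cccc /dev/disk/by-partuuid/4444-dddd

# Verify: a "logs" section with a mirror vdev should appear in the pool layout.
zpool status pool1
```

Using partuuids rather than `/dev/nvme0n1p1`-style names keeps the vdev stable across device renumbering.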

Worth noting that we have a secondary TrueNAS that is replicated, but the storage here is normal spindle drives, so to keep consistency we’re installing Optanes in all of the TrueNAS machines so that we’re “guaranteed” the best overall solution.

I am happy to get input here, as my understanding is mostly based on theory and not implementation.

Thank you for the response and provided information.

What string of CLI commands did you use to split the drive into separate partitions? I have been reading the manual, and it appears that when I issue a format command it will clear the drive. This makes me fear that the main reason for the Optane would be impacted, as the cache and everything else would be affected. I come from a DISKPART environment (Windows), where I would just split the partitions and call it a day. However, I do fear that if this was done on a Windows system, the filesystem might not remain once the drive is re-installed in the actual server hardware…

What would be the recommended path or manual to consult to help resolve this?

PS. I am new to this level of filesystems, and any and all information would be appreciated.

Mind that SLOG failure does not cause data loss, only performance loss, as the system reverts to the in-pool ZIL. It takes an unclean shutdown AND the SLOG not coming back upon reboot to lose data.

If this is acceptable, just use each drive as a single-drive SLOG for a single pool and be happy in GUI land.

If you want to proceed with the mirrored double partitions, overprovision the drives first and then split the resulting 64 GB drives into two partitions.
There's no point discussing commands if we don't know what OS you run.


Sorry for the necro-bump but I wanted to post that this hint led me in the right direction. Not sure if it’s OK to post external links, so I’ll just mention a blog post I found by Drew Thorstensen (Nov. 21, 2020) titled “NVMe Namespaces.” My SSD supports the feature so I followed that and was able to successfully resize a 960GB enterprise NVMe to 16 GB with 4K blocks instead of 512. Hopefully this makes for a decent SLOG on a home NAS.

truenas_admin@truenas[~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 10.9T 0 disk
sdb 8:16 0 10.9T 0 disk
sdc 8:32 0 10.9T 0 disk
sdd 8:48 0 10.9T 0 disk
nvme0n1 259:0 0 16G 0 disk
nvme1n1 259:1 0 119.2G 0 disk
├─nvme1n1p1 259:2 0 1M 0 part
├─nvme1n1p2 259:3 0 512M 0 part
└─nvme1n1p3 259:4 0 118.7G 0 part

truenas_admin@truenas[~]$ sudo nvme list
Node                  Generic               SN                   Model                                    Namespace Usage                      Format           FW Rev  
--------------------- --------------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme1n1          /dev/ng1n1            P300HHBB250xxxxxxxxx Patriot M.2 P300 128GB                   1         128.04  GB / 128.04  GB    512   B +  0 B   APF1M7R0
/dev/nvme0n1          /dev/ng0n1            50026B7xxxxxxxxx     KINGSTON SEDC2000BM8960G                 1          17.18  GB /  17.18  GB      4 KiB +  0 B   EIEK51.3
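For anyone following the same route, the rough nvme-cli sequence for shrinking a drive via namespace management looks like the sketch below. The sizes assume a 16 GB namespace with 4K LBAs; the `--flbas` index and controller ID are drive-specific assumptions, so check `nvme id-ns` and `nvme id-ctrl` first — and this destroys all data on the namespace:

```shell
# Confirm the controller supports namespace management (OACS field in id-ctrl).
nvme id-ctrl /dev/nvme0 | grep -i oacs

# List the LBA formats; the 4K format is the lbaf entry with "lbads:12".
nvme id-ns /dev/nvme0n1

# Delete the existing namespace (DESTRUCTIVE), then create a 16 GB one:
# 16,000,000,000 bytes / 4096 bytes per block = 3906250 blocks.
nvme delete-ns /dev/nvme0 -n 1
nvme create-ns /dev/nvme0 --nsze=3906250 --ncap=3906250 --flbas=1  # flbas index varies per drive

# Attach the new namespace; controller ID 0 is an assumption (see "cntlid" in id-ctrl).
nvme attach-ns /dev/nvme0 -n 1 -c 0
nvme reset /dev/nvme0
```

The untouched remainder of the drive then serves as overprovisioned spare area.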

Indeed, that’s true in 25.10 as well (I first tried partitioning and found out).

I didn’t use Optane but I think wear leveling would work in either case. The reason I reduced the disk size (NVMe namespace, in this case) is because I read somewhere that SLOG performance can tank if it’s too big. :man_shrugging:


Sure it is! We like our references.


So I realized my mistake: I used multiples of 1024 instead of 1000, and I ended up with 16 GiB instead of 16 GB. Not a big deal, but I went back and corrected it to be consistent with how disk drives are typically sized.

/dev/nvme0n1 /dev/ng0n1 50026B7xxxxxxxxx KINGSTON SEDC2000BM8960G 1 16.00 GB / 16.00 GB 4 KiB + 0 B EIEK51.3

So now `lsblk`, without modifiers, reports it as 14.9G (it uses IEC units — GiB — by default):

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 10.9T 0 disk
sdb 8:16 0 10.9T 0 disk
sdc 8:32 0 10.9T 0 disk
sdd 8:48 0 10.9T 0 disk
nvme0n1 259:0 0 14.9G 0 disk
nvme1n1 259:1 0 119.2G 0 disk
├─nvme1n1p1 259:2 0 1M 0 part
├─nvme1n1p2 259:3 0 512M 0 part
└─nvme1n1p3 259:4 0 118.7G 0 part
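The block-count arithmetic behind that correction, as a quick sanity check (assuming 4K LBAs):

```python
# Namespace sizing: decimal GB vs binary GiB, expressed in 4K blocks.
LBA = 4096

gb_16 = 16 * 1000**3            # 16 GB, how drive vendors size things
gib_16 = 16 * 1024**3           # 16 GiB, the accidental first attempt

print(gb_16 // LBA)             # blocks for --nsze at 16 GB -> 3906250
print(gib_16 // LBA)            # blocks at 16 GiB -> 4194304
print(round(gb_16 / 1024**3, 1))  # what lsblk shows in IEC units -> 14.9
```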

:upside_down_face: