Hello,
I have 8 SATA drives (no data on them) that I wish to convert to a 4096-byte logical sector size.
Which command should I use on the TrueNAS Scale CLI to do that?
Thanks.
Who says you can? I’m fairly certain that’s not part of the ATA command set, so any such capability would be drive-specific - and SATA disks typically do not support this openly, if at all.
That’s what I’m trying to understand.
At the OS level (e.g. Windows) you can do it,
so I was just wondering if there’s something similar in TrueNAS Scale that allows changing the logical block size, without removing the drives one by one to format them on a bare-metal Windows machine.
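For reference, what each drive currently reports can at least be checked from the SCALE shell (device names like /dev/sda are just examples, not specific to this system):

lsblk -o NAME,LOG-SEC,PHY-SEC            # logical and physical sector size of every block device
smartctl -i /dev/sda | grep -i sector    # per-drive detail from the SMART identity data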
That would be the “ashift” value, which defaults to 12 in TrueNAS. That means writes use 4096-byte blocks (2^12 = 4096).
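If you want to verify this on an existing pool, something like the following should work from the shell (the pool name “tank” is just a placeholder):

zpool get ashift tank        # 12 means 2^12 = 4096-byte minimum blocks
zdb -C tank | grep ashift    # per-vdev detail; on TrueNAS you may need to point zdb at the cachefile with -U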
That does not in any way affect the disk’s LBA geometry; it’s a filesystem-level setting. As @winnielinnie said, the default in TrueNAS is already 4k, which is fine for 99% of users.
This is effectively the blocksize, and (unless I have this wrong) it is defined on a per-dataset basis - in effect you can have some parts of your pool with one blocksize, and other parts of your pool with different blocksizes.
I am not sure I have seen any recommendations on how to choose a blocksize based on the vdev layout, the size of the files, or other factors it might affect, such as read or write performance.
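As a rough sketch of how the per-dataset part works (dataset names are made up):

zfs get recordsize tank/media       # defaults to 128K unless overridden
zfs set recordsize=1M tank/media    # only applies to data written after the change
zfs get -r recordsize tank          # review the values across all datasets in the pool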
Don’t confuse the ashift with the recordsize.
The ashift determines the minimum size of a block, and for practical reasons must be equal to or larger than the logical reported disk block size. For performance reasons, it should really be greater than or equal to the physical block size on HDDs (SSDs are more nuanced and complicated).
The recordsize defines the maximum size of a block (with mirrors and RAIDZ, smaller blocks are written if there’s not enough data to fill a whole recordsize).
Smaller recordsizes mean more metadata for the same amount of data and fewer opportunities for compression to work. Larger recordsizes mean less metadata, but greater write amplification when only a small part of a record changes.
So, block-oriented workloads should target the bottom of the range (e.g. 16k), while workloads built around large files that rarely, if ever, change should target the max (1M or 16M, depending on how aggressive you want to be). The default 128k is a great compromise, in that nobody is perfectly happy with it but nobody can complain all that much either.
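As a hedged illustration of that targeting (dataset names are invented; exact limits depend on your OpenZFS version):

zfs set recordsize=16K tank/db       # block-oriented workload, e.g. a database
zfs set recordsize=1M tank/media     # large files that rarely change
# recordsize values above 128K need the large_blocks pool feature, and going
# above 1M may also require raising the zfs_max_recordsize module parameter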