Non-native blocksize - ashift doesn't seem to correspond

Hello everyone,

I just upgraded from CORE to SCALE and got the alert saying one or more devices are configured to use a non-native block size. I read up on this alert and it doesn’t seem to be much of an issue in and of itself. That said, I would like to understand why I am getting it, because it seems to me that the ashift of the pool does not match the alert.

If I do a zpool status, I get the following output:

pool: naspool
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0B in 1 days 15:24:24 with 0 errors on Mon Jan 27 15:24:36 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        naspool                                   ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            72fc5486-19f7-11eb-934e-38d547e0bdad  ONLINE       0     0     0
            598fca2d-d85c-11ec-98c5-38d547e0bdad  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            54339342-10fd-11eb-bae5-38d547e0bdad  ONLINE       0     0     0  block size: 512B configured, 4096B native
            543e9f6e-10fd-11eb-bae5-38d547e0bdad  ONLINE       0     0     0  block size: 512B configured, 4096B native
          mirror-2                                ONLINE       0     0     0
            08545dac-bede-11ee-9620-38d547e0bdad  ONLINE       0     0     0
            45edd38d-8e3e-11ee-99c8-38d547e0bdad  ONLINE       0     0     0

errors: No known data errors

This would lead me to believe that my ashift is 9. However, if I run the following command, it reports an ashift of 12.

sudo zpool get ashift naspool
NAME     PROPERTY  VALUE  SOURCE
naspool  ashift    12     local

That said, I am not very knowledgeable regarding TrueNAS, so this could well be me not understanding things correctly. Any help understanding this would be much appreciated.

What are the 6 drives in the pool? Brand? Model?

They are all WD Reds. They have varying sizes as I acquired them throughout the years. The first mirror is 3TB drives, the second, which is the one that has the issue, is 4TB drives and the third is 8TB drives.

Can you post the model numbers?

When was this pool originally created?

What is the output of this, for each drive:

 sudo smartctl -x /dev/<drive> | grep "Sector Size"
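
If you want to grab them all in one go, a quick loop like this should work (a sketch; it assumes your data disks all appear as /dev/sdX, so adjust the glob if yours differ):

 for d in /dev/sd?; do
     echo "== $d =="
     sudo smartctl -x "$d" | grep "Sector Size"
 done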

Hi, thanks for your help. Here is the information you asked for:

Model numbers:
Mirror 0: WDC_WD30EFZX-68AWUN0 and WDC_WD30EFRX-68EUZN0
Mirror 1: WDC_WD40EFAX-68JH4N0 (they are both the same model number)
Mirror 2: WDC_WD80EFZZ-68BTXN0 (they are both the same model number)

This pool was created about 8 years ago (not certain of the date but give or take).

Output for each HDD in the mirrors:
Mirror 0:

sudo smartctl -x /dev/sdb | grep "Sector Size"
Sector Sizes:     512 bytes logical, 4096 bytes physical
sudo smartctl -x /dev/sdf | grep "Sector Size"
Sector Sizes:     512 bytes logical, 4096 bytes physical

Mirror 1:

sudo smartctl -x /dev/sda | grep "Sector Size"
Sector Sizes:     512 bytes logical, 4096 bytes physical
sudo smartctl -x /dev/sdd | grep "Sector Size"
Sector Sizes:     512 bytes logical, 4096 bytes physical

Mirror 2:

sudo smartctl -x /dev/sdc | grep "Sector Size"
Sector Sizes:     512 bytes logical, 4096 bytes physical
sudo smartctl -x /dev/sde | grep "Sector Size"
Sector Sizes:     512 bytes logical, 4096 bytes physical

Curious about this then:

sudo zdb -l /dev/disk/by-partuuid/72fc5486-19f7-11eb-934e-38d547e0bdad | grep ashift

Repeat the above for each of the 5 remaining PARTUUIDs of the member devices.
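
To save some typing, a loop over the six PARTUUIDs from the zpool status output above should do it in one pass (a sketch; substitute your own PARTUUIDs if they differ):

 for uuid in \
     72fc5486-19f7-11eb-934e-38d547e0bdad \
     598fca2d-d85c-11ec-98c5-38d547e0bdad \
     54339342-10fd-11eb-bae5-38d547e0bdad \
     543e9f6e-10fd-11eb-bae5-38d547e0bdad \
     08545dac-bede-11ee-9620-38d547e0bdad \
     45edd38d-8e3e-11ee-99c8-38d547e0bdad
 do
     echo "== $uuid =="
     sudo zdb -l "/dev/disk/by-partuuid/$uuid" | grep ashift
 done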

The WD40EFAX is an SMR model.

Plus they happen to both be the ones in mirror-1.

You really do not want SMR drives in your pool, let alone mixed with CMR drives.

It will cost you, but when practical, you can replace them, one by one, until your mirror-1 vdev consists of two non-SMR drives.
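
On TrueNAS the GUI replace workflow is the usual (and safer) route, since it handles the partitioning for you. Purely as an illustration of the one-at-a-time process, here is a CLI sketch, where <new-disk-partuuid> is a hypothetical placeholder for the new drive’s partition:

 # Replace one mirror-1 member, let the resilver finish, then repeat
 # for the second member.
 sudo zpool replace naspool 54339342-10fd-11eb-bae5-38d547e0bdad /dev/disk/by-partuuid/<new-disk-partuuid>
 sudo zpool status naspool   # check resilver progress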

The good part is that these are old drives, so replacement is to be factored in anyway, and it should be possible to get a pair of 8+ TB drives to take over both mirror-1 and mirror-0, eventually downsizing to 4 drives.

Exactly! Thanks everyone for your help spotting this. I will look into replacing these drives and, as @etorix mentioned, I’ll be able to grab a couple of 8+ TB drives to replace them and reduce the number of drives in the pool.

I’m wary about this when it comes to SCALE / Linux.

There might be a rare issue[1] with kernel panics triggered by the presence of an “indirect” vdev, which is left behind when you remove a mirror vdev from a pool.

Here is an upstream bug report on the OpenZFS GitHub, reported by a Fedora Linux user, and additionally corroborated by two SCALE users.

Here is a TrueNAS SCALE user who was bitten by what seems to be the same bug.

Here is another SCALE user, who might have bumped into the same bug, yet was able to “recover” through unconventional means.

Here is an iXsystems Jira bug report, created by the user in the second link.


  1. Possibly combined with other yet-to-be-determined factors.

For anyone coming to this post in the future, running the following command:

sudo zdb -l /dev/disk/by-partuuid/72fc5486-19f7-11eb-934e-38d547e0bdad | grep ashift

on each of the drives gave me an ashift of 12 on mirrors 0 and 2 and an ashift of 9 on mirror 1.

Wonder how that happened. :flushed:

Ditto. I thought that ashift was a pool-wide property, not per vdev.

I suspect this also means you cannot replace the drives in mirror-1, because the new drives could inherit the wrong ashift. Better to add a new vdev or replace the drives in mirror-0 first, and then remove mirror-1.
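
For the record, ashift is stored per top-level vdev in the pool configuration; the pool-level ashift property only sets the default for vdevs added later, which is how a single pool can end up with mixed values like this. To see all three values at once, something like the following should work (a sketch; the -U path is an assumption, as TrueNAS normally keeps its cachefile at /data/zfs/zpool.cache rather than the OpenZFS default location):

 sudo zdb -U /data/zfs/zpool.cache -C naspool | grep ashift

This should print one ashift line per mirror vdev.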

ZFS: “You’ll never truly know everything about me. Never.” :male_detective:

@jrvd007 Did you ever, at any point, use the command line to add a new mirror vdev to the pool, or did you do everything only in the GUI?

Never. Only GUI. I am not confident enough in my abilities to mess with the pool on the command line.
