I just upgraded from TrueNAS CORE to SCALE and got an alert saying one or more devices are configured to use a non-native block size. From what I've read, this alert doesn't seem to be much of an issue in and of itself. That said, I would like to understand why I am getting it, because it seems to me that the ashift of the pool does not match the alert.
If I run zpool status, I get the following output:
  pool: naspool
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0B in 1 days 15:24:24 with 0 errors on Mon Jan 27 15:24:36 2025
config:

        NAME                                      STATE     READ WRITE CKSUM
        naspool                                   ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            72fc5486-19f7-11eb-934e-38d547e0bdad  ONLINE       0     0     0
            598fca2d-d85c-11ec-98c5-38d547e0bdad  ONLINE       0     0     0
          mirror-1                                ONLINE       0     0     0
            54339342-10fd-11eb-bae5-38d547e0bdad  ONLINE       0     0     0  block size: 512B configured, 4096B native
            543e9f6e-10fd-11eb-bae5-38d547e0bdad  ONLINE       0     0     0  block size: 512B configured, 4096B native
          mirror-2                                ONLINE       0     0     0
            08545dac-bede-11ee-9620-38d547e0bdad  ONLINE       0     0     0
            45edd38d-8e3e-11ee-99c8-38d547e0bdad  ONLINE       0     0     0

errors: No known data errors
This would lead me to believe that my ashift is 9. However, if I run the following command, it gives me an ashift of 12:
sudo zpool get ashift naspool
NAME     PROPERTY  VALUE   SOURCE
naspool  ashift    12      local
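For what it's worth, the two numbers in the status output map directly onto ashift: ashift is just the base-2 logarithm of the sector size a vdev was created with, and the pool-level property reported by zpool get is the value applied to newly added vdevs, while each existing vdev keeps the ashift it was created with. A minimal sketch of the arithmetic (plain Python, no ZFS involved):

```python
import math

def ashift_for(block_size_bytes: int) -> int:
    """ashift is log2 of the sector size ZFS uses for a vdev."""
    return int(math.log2(block_size_bytes))

# mirror-1 was created with 512B sectors -> ashift 9
print(ashift_for(512))    # 9
# the drives' native sector size is 4096B -> ashift 12
print(ashift_for(4096))   # 12
```

So a pool-level ashift of 12 and a mirror-1 vdev at ashift 9 are not contradictory; they just describe different things.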
That said, I am not very knowledgeable about TrueNAS, so this could well be me misunderstanding something. Any help understanding this would be much appreciated.
They are all WD Reds of varying sizes, as I acquired them over the years: the first mirror is 3TB drives; the second, which is the one with the issue, is 4TB drives; and the third is 8TB drives.
Hi, thanks for your help. Here is the information you asked for:
Model numbers:
Mirror 0: WDC_WD30EFZX-68AWUN0 and WDC_WD30EFRX-68EUZN0
Mirror 1: WDC_WD40EFAX-68JH4N0 (they are both the same model number)
Mirror 2: WDC_WD80EFZZ-68BTXN0 (they are both the same model number)
This pool was created about 8 years ago (not certain of the date but give or take).
The good part is that these are old drives, so replacement needs to be factored in anyway, and it should be possible to get a pair of 8+ TB drives to take over both mirror-1 and mirror-0, eventually downsizing to four drives.
Exactly! Thanks, everyone, for your help spotting this. I will look into replacing these drives and, as @etorix mentioned, I'll be able to grab a couple of 8+ TB drives to replace them and reduce my drive count.
Ditto. I thought that ashift was a pool-wide property, not per vdev.
I suspect this also means you cannot simply replace the drives in mirror-1, because the new drives could inherit the wrong ashift. Better to add a new vdev, or replace the drives in mirror-0 first, and then remove mirror-1.
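If the add-then-remove route is taken, the rough command sequence might look like the sketch below. This is only a sketch: the device names are placeholders, zpool remove of a top-level mirror requires OpenZFS device-removal support, and removal can be refused on some pool configurations, so check zpool status and test before relying on it.

```shell
# Sketch only: WD80_NEW_1 and WD80_NEW_2 are placeholder device names.
# 1. Add the new ashift=12 mirror first:
sudo zpool add naspool mirror /dev/disk/by-id/WD80_NEW_1 /dev/disk/by-id/WD80_NEW_2

# 2. Evacuate and remove the mismatched 512B-sector vdev
#    (this copies its data onto the remaining vdevs):
sudo zpool remove naspool mirror-1

# 3. Watch evacuation progress in the "remove:" line of the status output:
sudo zpool status naspool
```

The point of this ordering is that the new drives join the pool as a fresh vdev with the pool's ashift=12 default, instead of inheriting ashift 9 via a one-for-one replace inside mirror-1.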