Wipe Disk: [Errno 5] Input/output error

Hello!
I recently acquired 6x Intel P3600 drives and am hoping to put them in a RAIDZ on my existing TrueNAS box, which has been running well for some time.

I tested the drives pretty extensively in Windows first: speed was great, and no errors were reported. They had only a few hours of power-on time before my testing, which is pretty nuts. Exact model #: INTEL SSDPEDME016T4F

Moved them to the TrueNAS box. When wiping the drives in Quick mode, I almost instantly get [Errno 5] Input/output error. I also tried gpart destroy on nvd0 through nvd5 and then wiping in the UI, with the same result, and I tried writing full zeros to the drives. I still can't wipe the disks or create a pool.
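Roughly what I tried from the shell first (a sketch from memory; device names nvd0 through nvd5 as they appear on my box):

gpart destroy -F /dev/nvd0              # drop the partition table (repeated for nvd1-nvd5)
dd if=/dev/zero of=/dev/nvd0 bs=1m      # full-zero pass; this also dies with the I/O error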

Error from storage/disks/wipe:
[screenshot of the [Errno 5] Input/output error]

Error from storage/pools/create:

What should I try next?

Dell R730
TrueNAS-13.0-U5.2 (A bit scared to update since this version has been rock solid for me, but maybe it is time)
2x Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz
232 GB RAM
8x 8 TB HDDs
1x 800 GB PCIe Intel P3600 NVMe for SLOG for the HDDs
6x 1.6 TB PCIe Intel P3600 NVMe for the new pool that isn't working yet :slight_smile:

How are you connecting them? About the update, you can easily revert if you update from the WebUI.

These are normal PCIe add-in cards, so they're just plugged right into the risers on the R730.

Thanks for the info on the update- I haven’t ever tried reverting before. Good to know it is so easy :slight_smile:

There was an Intel data center tool for P3700 drives that could be used to update their firmware, change their sector size, over-provision them, reset them, etc.

I tried to find the equivalent for P3600 drives but couldn't easily.

Sorry

But maybe that’s a solution if you can find something.

This sort of thing

The software used to be called isdct (Intel SSD Data Center Tool), but was renamed intelmas (Intel Memory and Storage Tool).
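If memory serves, listing the drives and pushing firmware with it looks roughly like this (syntax from memory, so double-check against the tool's documentation):

intelmas show -intelssd        # list detected Intel SSDs with firmware and health info
intelmas load -intelssd 0      # load the latest available firmware onto drive index 0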


Thanks! I’ll check this out.

@Stux As suggested, I did a secure erase and verified the drives were on the latest firmware. I used the Solidigm toolkit, since Intel sold this SSD business to SK hynix (now Solidigm). Solidigm's toolkit recognized the drives with no issue.

@Davvo As suggested, I updated to the latest TrueNAS CORE. No issues there, super smooth.

However, I am still having the same issue. :frowning:

The drives work great in my Windows box: no issues reported, and I can format the entire drive, etc.

I can also create the pool manually via the command line, i.e. zpool create:


[screenshot of the zpool create output]
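For reference, the manual create was something along these lines (a sketch; same pool name and layout I ended up using later, but on the whole devices):

zpool create pool02_nvme raidz nvd0 nvd1 nvd2 nvd3 nvd4 nvd5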

I see this in dmesg:

Any suggestions why TrueNAS still doesn’t like these drives?

Also, I noticed that if I try to run a short SMART test, it is also mad:
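From the shell, the equivalent would be roughly this (device name assumed):

smartctl -t short /dev/nvme1    # kick off a short NVMe self-test
smartctl -a /dev/nvme1          # dump the SMART/health data afterwards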

In Windows, no errors are reported at all. It is quite happy there. :slight_smile:


OK, next update. I did gpart destroy, then created a 2 GB swap in partition 1 on each disk, and then a second partition with the rest of the space as freebsd-zfs. Then I did zpool create with the gptids instead of the whole devices, exported on the command line, imported in the UI, and it seems to work fine. SMART is still mad, however.

Was this dangerous of me to do?

Exact commands I used:

root@storage01[/dev]# gpart create -s gpt /dev/nvd0
nvd0 created
root@storage01[/dev]# gpart create -s gpt /dev/nvd1
nvd1 created
root@storage01[/dev]# gpart create -s gpt /dev/nvd2
nvd2 created
root@storage01[/dev]# gpart create -s gpt /dev/nvd3
nvd3 created
root@storage01[/dev]# gpart create -s gpt /dev/nvd4
nvd4 created
root@storage01[/dev]# gpart create -s gpt /dev/nvd5
nvd5 created
root@storage01[/dev]# gpart add -a 4096 -i 1 -s 2g -t freebsd-swap /dev/nvd0
nvd0p1 added
root@storage01[/dev]# gpart add -a 4096 -i 1 -s 2g -t freebsd-swap /dev/nvd1
nvd1p1 added
root@storage01[/dev]# gpart add -a 4096 -i 1 -s 2g -t freebsd-swap /dev/nvd2
nvd2p1 added
root@storage01[/dev]# gpart add -a 4096 -i 1 -s 2g -t freebsd-swap /dev/nvd3
nvd3p1 added
root@storage01[/dev]# gpart add -a 4096 -i 1 -s 2g -t freebsd-swap /dev/nvd4
nvd4p1 added
root@storage01[/dev]# gpart add -a 4096 -i 1 -s 2g -t freebsd-swap /dev/nvd5
nvd5p1 added
root@storage01[/dev]# gpart add -a 4096 -i 2 -t freebsd-zfs /dev/nvd0
nvd0p2 added
root@storage01[/dev]# gpart add -a 4096 -i 2 -t freebsd-zfs /dev/nvd1
nvd1p2 added
root@storage01[/dev]# gpart add -a 4096 -i 2 -t freebsd-zfs /dev/nvd2
nvd2p2 added
root@storage01[/dev]# gpart add -a 4096 -i 2 -t freebsd-zfs /dev/nvd3
nvd3p2 added
root@storage01[/dev]# gpart add -a 4096 -i 2 -t freebsd-zfs /dev/nvd4
nvd4p2 added
root@storage01[/dev]# gpart add -a 4096 -i 2 -t freebsd-zfs /dev/nvd5
nvd5p2 added
root@storage01[/dev]# glabel status
                                      Name  Status  Components
gptid/965850d8-f7ff-11ec-b455-d4ae527abb23     N/A  nvd6p1
gptid/9483afd9-fecc-11ec-b455-d4ae527abb23     N/A  da6p2
gptid/82ace430-f369-11ec-a257-d4ae527abb23     N/A  da4p2
gptid/824fd7cb-f369-11ec-a257-d4ae527abb23     N/A  da5p2
gptid/825c741b-f369-11ec-a257-d4ae527abb23     N/A  da2p2
gptid/82a36ac9-f369-11ec-a257-d4ae527abb23     N/A  da3p2
gptid/825210d8-f369-11ec-a257-d4ae527abb23     N/A  da0p2
gptid/823b4fea-f369-11ec-a257-d4ae527abb23     N/A  da1p2
gptid/4fc987ab-da96-11ec-bab9-d4ae527abb23     N/A  da8p1
gptid/a699b16c-7770-11ee-923f-d4ae527abb23     N/A  da7p2
gptid/4fcd78d7-da96-11ec-bab9-d4ae527abb23     N/A  da8p3
gptid/82096e02-f369-11ec-a257-d4ae527abb23     N/A  da4p1
gptid/946c3f03-fecc-11ec-b455-d4ae527abb23     N/A  da6p1
gptid/23b6be48-00f4-11ef-add3-d4ae527abb23     N/A  nvd0p1
gptid/256bc478-00f4-11ef-add3-d4ae527abb23     N/A  nvd1p1
gptid/2740aa11-00f4-11ef-add3-d4ae527abb23     N/A  nvd2p1
gptid/2900889c-00f4-11ef-add3-d4ae527abb23     N/A  nvd3p1
gptid/2ac69d83-00f4-11ef-add3-d4ae527abb23     N/A  nvd4p1
gptid/2c6a7d02-00f4-11ef-add3-d4ae527abb23     N/A  nvd5p1
gptid/43fede46-00f4-11ef-add3-d4ae527abb23     N/A  nvd0p2
gptid/45ead0bf-00f4-11ef-add3-d4ae527abb23     N/A  nvd1p2
gptid/475f1fe7-00f4-11ef-add3-d4ae527abb23     N/A  nvd2p2
gptid/48e6f6da-00f4-11ef-add3-d4ae527abb23     N/A  nvd3p2
gptid/4a87335a-00f4-11ef-add3-d4ae527abb23     N/A  nvd4p2
gptid/4c28b91e-00f4-11ef-add3-d4ae527abb23     N/A  nvd5p2
root@storage01[/dev]# zpool create pool02_nvme raidz gptid/43fede46-00f4-11ef-add3-d4ae527abb23 gptid/45ead0bf-00f4-11ef-add3-d4ae527abb23 gptid/475f1fe7-00f4-11ef-add3-d4ae527abb23 gptid/48e6f6da-00f4-11ef-add3-d4ae527abb23 gptid/4a87335a-00f4-11ef-add3-d4ae527abb23 gptid/4c28b91e-00f4-11ef-add3-d4ae527abb23
root@storage01[/dev]# zpool export pool02_nvme

Happy in UI now after importing:



[screenshot of the imported pool in the UI]

SMART has the exact same errors as above.

OK, I started tossing some load at this and got a crap ton of "LBA out of range" errors. It caused TrueNAS to become unstable and was even causing issues with my HDD array, so I pulled the NVMe drives to recover quickly.

I am still confused as to why TrueNAS/FreeBSD doesn't like these disks. They work fine elsewhere. Any suggestions on how to start testing/debugging?

Well, you could try updating your OS. It could be a kernel NVMe bug.

Also, check your BIOS firmware.

I seem to remember having to update my Supermicro X10SDV for proper Samsung NVMe support, and those drives work everywhere.
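You could also gather a bit more detail from the FreeBSD side with something along these lines (assuming the controller enumerates as nvme0):

nvmecontrol devlist              # list the NVMe controllers and namespaces the driver sees
nvmecontrol identify nvme0       # controller details, including supported LBA formats
nvmecontrol logpage -p 1 nvme0   # error information log page
dmesg | grep -i nvme             # kernel messages from the nvme driver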

Thanks! I believe I am running the latest stable version of CORE now:
TrueNAS-13.0-U6.1

I am also running the latest BIOS for this box.

Any other ideas of things I could try?

What about the drives' own firmware? Maybe disable legacy mode in the BIOS interface options.

Thanks Davvo!
The drives are on the latest firmware version. I am booting with UEFI, and legacy BIOS mode is disabled on the R730. Is there another legacy interface option I should check out?

I am looking into repurposing a different machine as a test rig to see if I can recreate the issue on different hardware. I also plan on trying TrueNAS SCALE to see if the issue is FreeBSD-related.


Same issue with CORE on a fresh install on a different box:

Spinning up SCALE now.

OK, it worked on the first try, with no errors or issues on TrueNAS SCALE.

It seems that these drives just don't like CORE/FreeBSD.


OK, I upgraded my R730 to SCALE. That was waaaay easier than I thought it was gonna be. We're back online, and the P3600 drives had no issues at all showing up on SCALE and creating a pool. I think the issue is that FreeBSD/CORE doesn't like these drives, but Debian is groovy with them.
