Cannot Create Pool -- [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sda

Hello everybody,

I am currently attempting to create my first NAS using TrueNAS SCALE. In doing so I am running into an issue where I am unable to create a pool due to the error posted in the title (I also put the full log down below). Here are some of the troubleshooting steps I have tried:

Switched to TrueNAS CORE - this worked and I was able to get the NAS running, but CORE lacks some features I think I want, so I'd like to be on SCALE.
Formatted the disks with zeroes within TrueNAS SCALE - same issue.
Tried importing from CORE to SCALE - same issue.
Attempted to wipe the disks in the Linux shell - the disks had no partitions, so same issue.
Performed clean new installs of SCALE multiple times - same issue.
Tried TrueNAS 24.10 and 24.04 - same issue.
The disks are visible in the BIOS and in SCALE.
Ran a S.M.A.R.T. test - no issues.

If anyone has any ideas on how to resolve this that would be great. Thank you!

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 509, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 554, in _run_body
    rv = await self.method(*args)
         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/service/crud_service.py", line 261, in nf
    rv = await func(*args, **kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 49, in nf
    res = await f(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 179, in nf
    return await func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool/pool.py", line 577, in do_create
    await self.middleware.call('pool.format_disks', job, disks, 0, 30)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1460, in call
    return await methodobj(*prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool/format_disks.py", line 34, in format_disks
    devname = await self.middleware.call('disk.gptid_from_part_type', disk, zfs_part_type)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1629, in call
    return await self._call(
           ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1471, in call
    return await self.run_in_executor(prepared_call.executor, methodobj, *prepared_call.args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1364, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/middlewared/plugins/disk/disk_info.py", line 173, in gptid_from_part_type
    raise CallError(f'Partition type {part_type} not found on {disk}')
middlewared.service_exception.CallError: [EFAULT] Partition type 6a898cc3-1dd2-11b2-99a6-080020736631 not found on sda

Just in case anybody ever comes across this in the future: the issue was resolved by moving both HDD SATA cables to different ports on my motherboard. Not sure why that resolved the issue, or why it even occurred when those same ports worked in TrueNAS CORE, but it did, so I'm happy.

@DasBlake

Thanks man, I was having the same issue and this solved it for me as well. It's weird that it just doesn't like some of the SATA ports on a motherboard / requires certain ones.

Your pools should be imported with the partitions’ PARTUUIDs.

Strange that it wanted to use sda.

For me, I had to use fdisk to wipe the partition table off the disk and then reboot. Using fdisk or wipefs alone, without rebooting, didn't do it.
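For reference, a minimal sketch of that kind of wipe, shown on an image file so nothing real gets erased (the device path is a stand-in; on a real disk you would also reboot or run partprobe afterwards so the kernel drops its cached partition table, which matches the reboot requirement above):

```shell
# Throwaway image standing in for the disk.
# On a real system this would be /dev/sdX -- double-check with lsblk first!
truncate -s 64M /tmp/wipe-demo.img

# Give it an empty GPT so there is a signature to wipe.
printf 'label: gpt\n' | sfdisk /tmp/wipe-demo.img

# Show the signatures wipefs can see, then erase them all.
wipefs /tmp/wipe-demo.img
wipefs --all /tmp/wipe-demo.img

# Afterwards wipefs reports nothing left.
wipefs /tmp/wipe-demo.img
```

Note that `wipefs --all` also clears the backup GPT header at the end of the device, which a plain zero-write of the first sectors misses.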

There is absolutely nothing stopping you from creating pools using the CLI; however (using the commands above), they will NOT be created the same way as if you had used the UI.

  1. TrueNAS uses partitions, not whole drives. One reason for this is to leave some buffer space in case, say, the 4TB drive you buy later to expand the vDev is ever so slightly smaller than the ones you started the pool with.

  2. TrueNAS uses partuuids, not drive names (/dev/sde), partition names (/dev/sde1), or disk UUIDs. The reason for this is that drive names can change on a reboot - what is /dev/sda this time may be /dev/sdd next time.

  3. Also, the UI's state of imported pools is partly independent of the Linux/ZFS list of imported pools. This is deliberate, so that a pool that doesn't get imported on boot shows up in the UI as offline rather than as available to import. But it does mean that when you use the CLI to take actions on pools, you need to take steps to make the UI state match.

Unless you understand the differences and know what you are doing, creating your pools through the UI is the recommended approach.

@winnielinnie @Protopia Is there an actual solution to this issue? I am having the same issue but I have tried it several times with drives plugged in differently and it still will not work. I have (2) 2T drives that I am trying to create a mirror with and it just won’t let me.

I already have 1 pool running (mind you not well) with (3) 4T drives RaidZ1.

I have done a couple wipes (always the quick wipes) just to clean the drives. Maybe I need to do a long wipe?!?

If it helps, I am running Community Edition Version 25.04.1.

Thank you!

@thewizz

Please start a new thread if you have your own issue.


OK, but it’s the exact same issue so I didn’t want to clutter the system with the same post.

You appear to be working through pool problems in another thread. I would keep to that one and ask in there, first, as it may be all related.

This is a completely different issue than my other thread. This is trying to create a new pool with different disks.

Same issue as OP…

My 2 cents.

When I see a thread in the forums with 10 posts or more, obviously someone is already working on it, so I let them; i.e., it gets ignored. You run that risk. It is not difficult to start a new thread; it's trivial, so I don't see why the resistance.

Also, at times people and iX try to see how many times some issue has occurred. Well, if they see 1 thread, that's one time, right? No one is going to go through a thread and manually tally up how many people hit it.

So, I suppose it's up to you, but if I were you, I'd create a new thread. The solution may or may not be the same. If it's the same, then wait for the OP to resolve their issue, right? You have mostly volunteers on this forum; why consume more of their time? Whatever that solution was or will be is yours, if you are convinced it's identical. From years on these forums, quite a large percentage of the time the "same" issue turns out not to be.

It gets very confusing when multiple people are responding, posting new data, etc. Very.

Just my opinion.

Unless you are having this issue on a completely different TrueNAS system, it does belong with your other, ongoing thread.

In pretty much every other forum I have ever posted on, you get yelled at for starting a new post for the same issue.

If that’s what everyone thinks needs to happen, I’m fine with that… I never said I wouldn’t.
