Create Pool: FAILED ('no such pool or dataset',) - can't create pool

I am not able to create a pool. It keeps throwing this error:

FAILED
('no such pool or dataset',)

with the following traceback:

Error: concurrent.futures.process._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 985, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 117, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File "libzfs.pyx", line 402, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 117, in do_create
    zfs.create(data['name'], topology, data['options'], data['fsoptions'])
  File **"libzfs.pyx", line 1376, in libzfs.ZFS.create
libzfs.ZFSException: no such pool or dataset**
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 355, in run
    await self.future
  File "/usr/local/lib/python3.9/site-packages/middlewared/job.py", line 391, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 981, in nf
    return await f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 735, in do_create
    raise e
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/pool.py", line 688, in do_create
    z_pool = await self.middleware.call('zfs.pool.create', {
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1279, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1236, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/service.py", line 496, in create
    rv = await self.middleware._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1244, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1250, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1169, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1152, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
libzfs.ZFSException: ('no such pool or dataset',)

I tried to add two SSDs to the Cache VDEV, but even after removing one and trying again it still fails to create the pool, so I am not sure that is the cause.

Any ideas, anyone?

Why are you adding two disks for a "cache" vdev (L2ARC)? Do you have sufficient RAM and/or a use-case for needing such?

Does the pool name contain spaces or any non-alphanumeric symbols?

64GB RAM. My original plan was to have a mirrored cache and a mirrored SLOG, but it was thwarted by the fact that you can't create a cache mirror. The GUI 'lets' you add two devices to the cache, so I tried it... but I have given up on that. I was gonna use one of the 960GB SSDs for cache, but chaps on the old forum told me that for 64GB of RAM a 960GB cache is too big...

Pool name was "pool0" - I did think of that and looked up naming conventions/restrictions. It should not be the cause.

L2ARC and SLOG can be added to a pool later. They do not need to be added during pool creation.

Are you able to create the pool without an L2ARC and SLOG?

I would advise against adding an L2ARC or SLOG vdev at all for now; try using your system without them and see how satisfied you are.
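If you later decide you do want them, both can be attached to an existing pool from the shell (or the GUI) at any time. A rough sketch only - the pool and device names below are placeholders, so substitute your own:

zpool add pool0 cache da8            # attach a single L2ARC device
zpool add pool0 log mirror da9 da10  # attach a mirrored SLOG
zpool remove pool0 da8               # cache and log vdevs can be removed again later

Adding or removing these does not touch the data vdevs.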


What is strange is that I was able to create it yesterday. Indeed, I created it 2 or 3 times and then deleted it again whilst I was 'umming and ahhing' about sector sizing and cache choice... I wonder if creating the pool the first time did something to the disks? (I hope not)

Just tried again without L2ARC - no joy :frowning:

FAILED

('no such pool or dataset',)

It might have left residual folders from the first time you created the pool.

ls -la /mnt
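It may also be worth checking whether ZFS still sees leftover pool metadata on the disks from the earlier creations. With no arguments this only scans for and lists importable pools; it does not import or change anything:

zpool import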

root@truenas[~]# ls -la /mnt
total 6
drwxr-xr-x   2 root  wheel  64 Apr 15 22:09 .
drwxr-xr-x  20 root  wheel  27 May 11 10:36 ..
-rw-r--r--   1 root  wheel   5 Apr 15 22:09 md_size
root@truenas[~]# rm /mnt/md_size
root@truenas[~]# ls -la /mnt
total 2
drwxr-xr-x   2 root  wheel   0 May 11 19:38 .
drwxr-xr-x  20 root  wheel  27 May 11 10:36 ..
root@truenas[~]#

Still erroring :frowning:

FAILED
('no such pool or dataset',)

Don't start throwing around the rm command when no one said anything about immediately just deleting things. :grimacing:

What is the current state of your pools?

zpool list

I hear you. Normally I wouldn't, but this system has nothing on it yet - I have only just finished building it. If I get no joy I'll just reinstall the factory TrueNAS image and start again...

Only the boot-pool, nothing else yet because of this error:

root@truenas[~]# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool   206G  2.15G   204G        -         -     0%     1%  1.00x    ONLINE  -
root@truenas[~]#

What options are you using to (attempt to) create the pool?
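Purely as a diagnostic, you could also check whether ZFS itself can create a pool on those disks from the shell, bypassing the GUI. This is only a sketch - the device names are placeholders for your actual data disks, and the test pool should be destroyed straight afterwards:

zpool create testpool mirror da4 da5   # substitute your real disks
zpool status testpool                  # confirm it came up
zpool destroy testpool                 # clean up the test pool

If zpool create complains that a device is already part of a pool, that points at leftover labels from your earlier attempts.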

Silly as this sounds, but have you tried rebooting between failed attempts?

Ya know... after the amount of grief I went through recently trying to configure virtual WAPs on DD-WRT, it doesn't sound silly in the slightest.

I have rebooted, but can't be sure I did that between pool creation attempts, so I'll do it for certain to rule it out.

@nas I would suggest both reading about ZFS and reading some of the TrueNAS Core documentation. Perhaps even watch a YouTube video if you can find one that is about initial pool creation.

One thing that may help us diagnose the problem is a screenshot of the filled-in pool creation GUI popup. Post that screenshot here for us to review.


OK, the reboot did not make a difference. Also, yesterday I noticed during boot-up that it was creating these virtual disks for some reason. I don't know whether that had to do with temporarily enabling a plugin or jail. I see this in the logs:

May 11 21:15:06 truenas pass15 at umass-sim0 bus 0 scbus15 target 0 lun 0
May 11 21:15:06 truenas pass15: <AMI Virtual CDROM0 1.00>  s/n AAAABBBBCCCC1 detached
May 11 21:15:06 truenas cd0 at umass-sim0 bus 0 scbus15 target 0 lun 0
May 11 21:15:06 truenas cd0: <AMI Virtual CDROM0 1.00>  s/n AAAABBBBCCCC1 detached
May 11 21:15:07 truenas 1 2024-05-11T21:15:07.064991+10:00 truenas.local dhclient 1012 - - receive_packet failed on ue0: Device not configured
May 11 21:15:07 truenas 1 2024-05-11T21:15:07.065199+10:00 truenas.local dhclient 1012 - - ioctl(SIOCGIFFLAGS) on ue0: Operation not permitted
May 11 21:15:07 truenas 1 2024-05-11T21:15:07.065319+10:00 truenas.local dhclient 1012 - - Interface ue0 no longer appears valid.
May 11 21:15:07 truenas 1 2024-05-11T21:15:07.065453+10:00 truenas.local dhclient 1012 - - No live interfaces to poll on - exiting.
May 11 21:15:07 truenas 1 2024-05-11T21:15:07.065594+10:00 truenas.local dhclient 1012 - - exiting.
May 11 21:15:07 truenas 1 2024-05-11T21:15:07.065966+10:00 truenas.local dhclient 1012 - - connection closed
May 11 21:15:07 truenas 1 2024-05-11T21:15:07.066110+10:00 truenas.local dhclient 1012 - - exiting.
May 11 21:15:07 truenas pass16 at umass-sim0 bus 0 scbus15 target 0 lun 1
May 11 21:15:07 truenas pass16: <AMI Virtual CDROM1 1.00>  s/n AAAABBBBCCCC1 detached
May 11 21:15:07 truenas (pass15:umass-sim0:0:0:0): Periph destroyed
May 11 21:15:07 truenas cd1 at umass-sim0 bus 0 scbus15 target 0 lun 1
May 11 21:15:07 truenas cd1: <AMI Virtual CDROM1 1.00>  s/n AAAABBBBCCCC1 detached
May 11 21:15:07 truenas pass17 at umass-sim0 bus 0 scbus15 target 0 lun 2
May 11 21:15:07 truenas pass17: <AMI Virtual CDROM2 1.00>  s/n AAAABBBBCCCC1 detached
May 11 21:15:07 truenas cd2 at umass-sim0 bus 0 scbus15 target 0 lun 2
May 11 21:15:07 truenas cd2: <AMI Virtual CDROM2 1.00>  s/n AAAABBBBCCCC1 detached
May 11 21:15:07 truenas (pass16:umass-sim0:0:0:1): Periph destroyed
May 11 21:15:07 truenas (cd0:umass-sim0:0:0:0): Periph destroyed
May 11 21:15:07 truenas pass18 at umass-sim0 bus 0 scbus15 target 0 lun 3
May 11 21:15:07 truenas pass18: <AMI Virtual CDROM3 1.00>  s/n AAAABBBBCCCC1 detached
May 11 21:15:07 truenas (pass17:umass-sim0:0:0:2): Periph destroyed
May 11 21:15:07 truenas cd3 at umass-sim0 bus 0 scbus15 target 0 lun 3
May 11 21:15:07 truenas cd3: <AMI Virtual CDROM3 1.00>  s/n AAAABBBBCCCC1 detached
May 11 21:15:07 truenas (cd2:umass-sim0:0:0:2): Periph destroyed
May 11 21:15:07 truenas (cd1:umass-sim0:0:0:1): Periph destroyed
May 11 21:15:07 truenas (pass18:umass-sim0:0:0:3): Periph destroyed
May 11 21:15:07 truenas (cd3:umass-sim0:0:0:3): Periph destroyed

I have no idea what triggered these. There is more than one...

Hey @Arwen, I am @naskit from the old forums (couldn't believe @nas was not taken here!).
Thanks, I posted the full traceback text in the original post, but I'll get some screenshots. I have both FreeBSD Mastery ZFS books plus FreeBSD Storage Essentials and have been poring over them all looking for answers, plus watching many YouTube vids and reading lots of articles and posts...

Debug commands and methodologies would be very welcome.

More from the logs...

May 11 21:15:07 truenas ugen1.5: <American Megatrends Inc. Virtual Floppy Device> at usbus1 (disconnected)
May 11 21:15:07 truenas umass1: at uhub3, port 3, addr 4 (disconnected)
May 11 21:15:07 truenas da0 at umass-sim1 bus 1 scbus16 target 0 lun 0
May 11 21:15:07 truenas da0: <AMI Virtual Floppy0 1.00>  s/n AAAABBBBCCCC2 detached
May 11 21:15:07 truenas pass19 at umass-sim1 bus 1 scbus16 target 0 lun 0
May 11 21:15:07 truenas pass19: <AMI Virtual Floppy0 1.00>  s/n AAAABBBBCCCC2 detached
May 11 21:15:07 truenas da1 at umass-sim1 bus 1 scbus16 target 0 lun 1
May 11 21:15:07 truenas da1: <AMI Virtual Floppy1 1.00>  s/n AAAABBBBCCCC2 detached
May 11 21:15:07 truenas (pass19:umass-sim1:1:0:0): Periph destroyed
May 11 21:15:07 truenas pass20 at umass-sim1 bus 1 scbus16 target 0 lun 1
May 11 21:15:07 truenas pass20: <AMI Virtual Floppy1 1.00>  s/n AAAABBBBCCCC2 detached
May 11 21:15:07 truenas da2 at umass-sim1 bus 1 scbus16 target 0 lun 2
May 11 21:15:07 truenas da2: <AMI Virtual Floppy2 1.00>  s/n AAAABBBBCCCC2 detached
May 11 21:15:07 truenas pass21 at umass-sim1 bus 1 scbus16 target 0 lun 2
May 11 21:15:07 truenas pass21: <AMI Virtual Floppy2 1.00>  s/n AAAABBBBCCCC2 detached
May 11 21:15:07 truenas da3 at umass-sim1 bus 1 scbus16 target 0 lun 3
May 11 21:15:07 truenas da3: <AMI Virtual Floppy3 1.00>  s/n AAAABBBBCCCC2 detached
May 11 21:15:07 truenas pass22 at umass-sim1 bus 1 scbus16 target 0 lun 3
May 11 21:15:07 truenas pass22: <AMI Virtual Floppy3 1.00>  s/n AAAABBBBCCCC2 detached

I suspect these were created in response to some virtualization trigger that I must have unwittingly pulled...

Ah, a name change... okay.

The virtual CD-ROM and other devices are an artifact of the IPMI (aka Service Controller / Processor) that most server-style boards have. You have an ASRock Rack C3758D4I-4L Intel Atom C3758 Mini-ITX server motherboard, so that would be expected.


Screenshots as suggested...



(Screenshots attached: 2024-05-11 11_48_19-TrueNAS, 11_48_27-TrueNAS, 11_48_47-TrueNAS)

Red herring - as per @Arwen's reply, the Virtual Floppy and Virtual CDROM disks appear when I am connecting via the ASRock Rack BMC Remote Console - it does use KVM for this.

This is going from bad to worse...

Anyone know what this means and what can be done about it?

WARNING: L1 data cache covers fewer APIC IDs than a core (0 < 1)