Help understanding the set of API calls to create nested DataSets?

Howdy, folks.

I asked this on the old forum and didn’t get any answers, but maybe that’s because the old forum was in the process of shutting down.

Ultimately I’m working on provisioning DataSets via Terraform/OpenTofu or Ansible, but I’m not sure how to get that working when the DSes are nested more than two levels deep. Neither tool’s provider docs offer much guidance on how to do so. So I’ve fallen back to the API to understand things at that level, hoping I can use that knowledge to figure it out in a higher-level tool.

Has anyone provisioned DSes more than two levels deep with the API, and do you have any guidance on the series of necessary calls and their parameters? The part I find most confusing is how to query an identifier for the parent DS when I’m specifying the child DS.


Which API are you using? For the REST API, it looks like you’d just create the dataset with "name": "poolname/path/to/dataset/however/many/layers/deep" in the request body.
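To make that concrete, here’s a minimal sketch of what that REST request could look like from Python, assuming the v2.0 REST endpoint `/api/v2.0/pool/dataset` with Bearer API-key auth (the host, key, and dataset path below are all placeholders; adjust for your setup):

```python
import json
import urllib.request

def make_dataset_request(host, api_key, dataset_name):
    """Build (but don't send) a POST request to create a dataset."""
    body = {"name": dataset_name}
    return urllib.request.Request(
        url=f"https://{host}/api/v2.0/pool/dataset",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The nested path goes in "name" as a plain string.
req = make_dataset_request("truenas.local", "YOUR-API-KEY",
                           "tank/projects/team-a/scratch")
# urllib.request.urlopen(req)  # uncomment to actually send it
```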


Let me take a look at the API docs a bit later today. What I recall is that it wasn’t clear whether I could just pass a path as a string, or whether I first needed to look up some sort of ID for the parent. I recall it sort of manifested as the latter up at the TF Provider layer.

This is documented in the pool.dataset.create API; cf. create_ancestors. That said, IMO it’s not a great idea to create deeply nested datasets.
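If it helps to visualize what `create_ancestors` buys you: when it’s set in the request body, the server creates any missing intermediate datasets in the same call, so you don’t have to pre-create or look up parents yourself. A small sketch (pool and path names are made up):

```python
# Hypothetical request body for pool.dataset.create with ancestor creation.
payload = {"name": "tank/a/b/c", "create_ancestors": True}

def implied_ancestors(name):
    """List the parent datasets that create_ancestors would auto-create.

    The pool itself ("tank") must already exist; only the intermediate
    datasets between the pool and the leaf are candidates for creation.
    """
    parts = name.split("/")
    return ["/".join(parts[:i]) for i in range(2, len(parts))]

print(implied_ancestors(payload["name"]))  # ['tank/a', 'tank/a/b']
```

Without the flag, requesting `tank/a/b/c` when `tank/a` doesn’t exist would require creating each level with its own call, parent first.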

I’ve been digging around about the nested DataSets thing…

Curious if you can provide any guidance as to why/when nested DSes are heading off into “bad idea” territory?

  1. We have APIs to create directories if you need to create a directory. A dataset isn’t a directory.
  2. The total length of the dataset name (for example, “tank/foo/bar”) cannot exceed 255 characters.
  3. The recommendation is to create one dataset per NFS export or SMB share.
  4. Although the SMB server allows traversal of dataset mountpoints, this can be handled poorly by some clients (especially the handling of synthesized MS-FSCC file IDs).
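Point 2 is easy to validate up front in a provisioning script before you ever hit the API. A small sketch, assuming the 255-character limit applies to the full name including the pool and slashes:

```python
MAX_DATASET_NAME = 255  # total length of "pool/path/to/dataset"

def check_dataset_name(name):
    """Raise if the full dataset name exceeds the length limit."""
    if len(name) > MAX_DATASET_NAME:
        raise ValueError(
            f"dataset name is {len(name)} chars; limit is {MAX_DATASET_NAME}"
        )
    return name

check_dataset_name("tank/foo/bar")  # well within the limit
```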

Ty. It’s entirely possible I was abusing DSes as directories because that was the easiest thing to do in the console.

I will take a look at my approach as I dig into the referenced API endpoint.

From a practical standpoint, the more datasets you have, the more likely an admin is to foul up a snapshot or replication task (or accidentally destroy data). You should take care when designing your storage to simplify administration, since that’s where the bulk of one’s time will be spent (and it’s usually the most dangerous part). Some users have gotten into the habit of creating and destroying datasets; the latter action is irreversible, and snapshots will not bail you out (unlike with accidental directory deletion).
