Anyone using Ansible with TrueNAS?

I use Ansible at work, and I am slowly moving towards managing my homelab with GitOps. To that end, I am using Ansible to manage all my systems.

Looking at TrueNAS, the options for Ansible seem limited; there are only a few roles on Galaxy. I am sure I can get what I need working, but it will be very hacky. Using a real solution built on the API would be best in the long run.

Just wondering if anyone out there is using Ansible for managing TrueNAS?

Other than adding/removing users, what do you imagine you need to do that is not a one-shot task?

Meaning, you set up snapshots, SMART tests, scrubs and other things once, then occasionally change them or add new datasets and shares. Using Ansible for things like disk replacements, adding vdevs and other hardcore work seems like an error-prone process.

Now, getting information on the state of your TrueNAS system for pool, disk and UPS health, yes, that is useful. At present you can set up e-mail alerts, and in the past you could use SNMP.

I am looking to automate repetitive tasks. For example, I run my own CA with Smallstep:

After migrating from BSD to Linux, my TrueNAS automations break with each upgrade because socat is no longer installed afterwards (packages added via apt do not survive the upgrade). So after every TrueNAS upgrade I must run:

sudo -i
install-dev-tools   # re-enable apt / developer tooling (removed by the upgrade)
apt install socat   # put socat back for the certificate automation

I would much rather have Ansible run the update, then, as a post-upgrade step, add socat back in so that my TLS certificate automation does not explode.
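
Something along these lines is what I have in mind. This is only a minimal sketch, assuming the upgrade itself is triggered separately (UI or API); the inventory host name "truenas" and the task wording are placeholders:

- hosts: truenas
  become: true
  tasks:
    - name: Re-enable developer tooling (apt) after the upgrade
      ansible.builtin.command: install-dev-tools
      changed_when: true

    - name: Reinstall socat so the Smallstep TLS automation keeps working
      ansible.builtin.apt:
        name: socat
        state: present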

Even for one-shot tasks, infrastructure as code is implicit documentation. I also love the idea of being able to completely nuke a system, then recreate it on a replacement with a single command.

Config backups are great, of course, but they don’t contain things like your dataset structure, ACLs, etc.

Do they contain the apps in SCALE? I have not tried a restore yet.

In CORE, the jails are self-contained in the iocage data structures on the active dataset, and are likewise not part of the config backup.

Looks like TrueNAS is moving from the REST API to a WebSocket client:

So I would guess one would need to write a Python module specific to TrueNAS for Ansible to fully integrate with it.
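
In the meantime, the current v2.0 REST API can at least be driven from a playbook with the built-in uri module. Just a sketch, assuming an API key stored in a truenas_api_key variable; the hostname is a placeholder:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Call /api/v2.0/system/info
      ansible.builtin.uri:
        url: "https://truenas.example.lan/api/v2.0/system/info"
        method: GET
        headers:
          Authorization: "Bearer {{ truenas_api_key }}"
        validate_certs: true
        return_content: true
      register: truenas_info

    - name: Show the reported TrueNAS version
      ansible.builtin.debug:
        msg: "{{ truenas_info.json.version }}"

Once the WebSocket API is the only one left, this would presumably need replacing with a proper module, but for querying state it works today.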

I wouldn’t say Ansible is infrastructure as code. In my opinion, that’s more of the domain of things that have agents like Salt. Ansible is just a way to automate running stuff through ssh, but it doesn’t guarantee reproducible states.

That might have been true years ago, but Ansible has evolved way beyond being an SSH-based automation tool. For example, ansible-pull lets you manage thousands of machines with nothing more than git and a local Ansible install, no SSH at all. Some of my work machines I don’t manage over SSH at all, but rather through Ansible and a REST API. And I manage some of my home Windows PCs with Ansible using WinRM (not SSH) and Chocolatey instead of apt/dnf, for example.
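
Roughly like this; a minimal sketch of the WinRM + Chocolatey pattern, where the host group, connection variables and package name are placeholders rather than my actual setup:

- hosts: windows_desktops
  gather_facts: false
  vars:
    ansible_connection: winrm
    ansible_port: 5986                 # WinRM over HTTPS
    ansible_winrm_transport: ntlm      # cert validation settings may need adjusting
  tasks:
    - name: Install 7zip via Chocolatey, no SSH involved
      chocolatey.chocolatey.win_chocolatey:
        name: 7zip
        state: present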

There is a reason Red Hat has largely moved away from supporting Puppet and is now standardizing on Ansible: it is easier to use and maintain, and faster to implement with. Plus, it’s fully modular, so a lot of the things you think it “can’t” do are usually encapsulated in modules, just like Python.

For what it is worth, I can fully bring down some of my infrastructure, and bring it back into existence with Ansible. I would call that infrastructure as code.

Ah, I see. Admittedly, I haven’t used it in a few years, so it’s good to know that they’ve expanded it. I guess I should revisit it again sometime soon.

Hello,

Sorry for resurrecting this topic, but I don’t see any other discussion related to Ansible and SCALE. I have a weird issue I have never faced on any other machine that I manage with Ansible: it simply refuses to run against SCALE, even the simplest ping module.

I tried several things, like password auth and certificate auth, and while I can connect over SSH without issue using those methods, and the connection itself seems to work (I can see the temp folder for Ansible’s files in the user’s home directory), it just freezes on command execution and eventually exits once the timeout is reached.

I don’t see any obvious error in debug mode on the Ansible side, and the SSH service on SCALE is not reporting an error either, just information that the user connected and then disconnected.

Is the SSH daemon on SCALE somehow different from other distributions, needing some additional tweaks? Has anyone else seen something similar?

EDIT: Never mind, it was a networking issue; I had to access SSH from the same VLAN as the Ansible control node running the playbook.