How to manually set up self-encrypting drives (SEDs)

Hello Everyone.

New user here. I would like to set up a pair of NVMe SSDs in RAID-1 with drive-based encryption. They do support Opal 2.0.
TrueNAS has, somewhat recently, gated the GUI-based setup to enterprise users. The documentation says:

UI management of Self-Encrypting Drives (SED) is an Enterprise-licensed feature in TrueNAS 25.04 (and later). SED configuration options are not visible in the TrueNAS Community Edition. Community users wishing to implement SEDs can continue to do so using the command line sedutil-cli utility.

Note: Additional changes to SED management options in the TrueNAS UI are planned ahead of the 25.04.0 release version, with documentation updates to follow.

Question: Does anyone know how to manually configure this? Is there a tutorial somewhere? Thanks!

1 Like

Out of interest why wouldn’t you use ZFS encryption? Just curious.

This pool is for running VMs. Hardware-based encryption is more performant.
I am aware of the pros and cons of hardware vs. software-based encryption.

I would like this thread to remain on topic of how to configure SEDs in TrueNAS community edition.

1 Like

Sure. There were indeed some tutorials, but trying to simplify:

Step 0: Before you install your drive, record the PSID (Physical Security ID) printed on the drive’s label. You typically need it to initialize the drive, and if you ever need to reset/erase a drive whose password has been lost, you won’t have to take your machine apart to read it.

Then, assuming you’ve properly identified your SED drive, e.g. via:
sedutil-cli --scan
and then set a few environment variables:
export PSID_NO_DASHES=<your_PSID_from_the_drive_label>
export SEDPassword=<your_password> # probably keep long/random, but avoid symbols
export SEDDevice=/dev/<your_SED_drive>

Step 1: “Factory reset” the drive (assume this will erase the drive – you should be starting with an empty drive) with this command:
sedutil-cli --PSIDrevert $PSID_NO_DASHES $SEDDevice
Step 2: Initialize the drive with a password with this command:
sedutil-cli --initialsetup $SEDPassword $SEDDevice
Step 3: Enable the “locking range” (0 is the whole drive):
sedutil-cli --enablelockingrange 0 $SEDPassword $SEDDevice
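
If you like, the three steps can be tied together in one small script – a minimal sketch, assuming the environment variables above are exported and that $SEDDevice really is the empty drive you intend to wipe:

#!/bin/sh
# Sketch only - initializes an Opal SED from scratch. THIS ERASES THE DRIVE.
# Assumes PSID_NO_DASHES, SEDPassword and SEDDevice are exported as above.
set -eu
sedutil-cli --scan | grep "^$SEDDevice" || { echo "$SEDDevice not in scan"; exit 1; }
sedutil-cli --PSIDrevert "$PSID_NO_DASHES" "$SEDDevice"         # Step 1: factory reset
sedutil-cli --initialsetup "$SEDPassword" "$SEDDevice"          # Step 2: set the password
sedutil-cli --enablelockingrange 0 "$SEDPassword" "$SEDDevice"  # Step 3: enable locking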

Now, your drive is set up with your chosen password, for full drive encryption. As such, you can now LOCK and UNLOCK the drive – by setting locking range 0 to either “lk” (lock) or “rw” (read/write):
sedutil-cli --setlockingrange 0 lk $SEDPassword $SEDDevice
sedutil-cli --setlockingrange 0 rw $SEDPassword $SEDDevice

As you lock/unlock, you can see your drive’s locking status with this command:
sedutil-cli --query $SEDDevice |grep "$SEDDevice\|Lock"
which will show either
Locked = Y
or
Locked = N

Of course, by design, at power off/power interruption, it will automatically lock, so you’ll really only need the unlock command:
sedutil-cli --setlockingrange 0 rw $SEDPassword $SEDDevice

However, “implementing SEDs with the sedutil-cli utility” isn’t a particularly elegant solution.

There currently isn’t a UI option to set the application environment to not start automatically.

So, when you boot up, and the app service/Docker looks for its drive, the drive will still be encrypted, thus missing, and the service start will bomb out.

So, you’ll have to manually unlock your drives, then restart the app service to (hopefully) get everything up and running. I suppose a systemctl disable docker.service is possible here, but I haven’t experimented that far yet. Needless to say, the UX here isn’t the typical TrueNAS ease of use.
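
At minimum, the manual recovery can be scripted – a rough, untested sketch, where the device list and the pool name ("tank") are placeholders for your own setup:

# Rough, untested sketch: unlock the SEDs, re-import the pool, restart apps.
for dev in /dev/nvme0 /dev/sdc; do
    sedutil-cli --setlockingrange 0 rw "$SEDPassword" "$dev"
done
partprobe                        # may be needed so the kernel re-reads the drives
zpool import tank                # the pool that was missing while drives were locked
systemctl start docker.service   # bring the app service back up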

What I have seen so far is that Electric Eel still supports adding the SED passwords in the UI, and when I did a test upgrade to Fangtooth of a system already set up to use SEDs, the SED passwords were still in the UI – so I’m not sure the functionality is really “disabled”.
Likewise, the config database has table/fields for:
storage_disk | disk_password
and
system_advanced | adv_sed_password
and I’m not sure whether, if those got set / config-restored in Fangtooth+, the SED passwords would return to the UI as well. Again, I haven’t experimented that far yet.
Also, per the Jira ticket initiating the SED change, supposedly the “api will still be allowed so this is not removing the feature from the community”, although I’m not sure what that means. Maybe there is still a cli/api way to set
storage_disk | disk_password
and/or
system_advanced | adv_sed_password
… which will also keep the functionality.

So far, I’m just stuck on Electric Eel over this, which is sad, as I’d like to try out Instances. But, in the meantime, the Docker apps in Electric Eel will keep me busy.

Overall, I think nerfing SED was a mistake, and I hope for its reversion. There is a lot of unwarranted bias against SEDs, but even LUKS/cryptsetup, as of 2.7.0, now supports Opal – despite some initial “this will never happen” – so the biases seem to be dying. Even the TrueNAS Jira “issues” related to SEDs seemed unfair, namely dinging SEDs for not being fault-tolerant to sketchy power (e.g., NAS-129366, NAS-132518), which is a feature, not a bug. Sketchy power is a potential security event – it shouldn’t be tolerated for convenience/fault resilience.

I’m not sure about the “sharp edges” alluded to in NAS-133442, but if it means a power loss locks an SED, or abusing your system by repeatedly turning it off and on again has erratic results, I’d close those tickets with “works as designed”.

2 Likes

So, after playing with this a bit, and falling into a crash course in working with the API, I can now attest to the fact that setting SED passwords in the TrueNAS middleware via the API is actually pretty simple – rendering the UI change for SEDs, IMO, not a big deal.

Setting system_advanced | adv_sed_password – i.e. the Global SED password – via the API is as simple as dropping to a shell and using these commands:

midclt call system.advanced.sed_global_password_is_set
midclt call system.advanced.sed_global_password
midclt call system.advanced.update '{"sed_passwd": "<your_SED_password_string>"}' |jq

The first command returns true or false depending on whether a global SED password is set.
The second command returns the set SED global password, if it exists.
The third command will set the SED global password (to the string you put in place of <your_SED_password_string>).
Once set, you can check it with the second command.
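
For example, a first-time setup session might look like this (the password string is a placeholder, and the echoed return values are just illustrative):

midclt call system.advanced.sed_global_password_is_set    # -> false (nothing set yet)
midclt call system.advanced.update '{"sed_passwd": "<your_SED_password_string>"}' |jq
midclt call system.advanced.sed_global_password           # -> "<your_SED_password_string>"
midclt call system.advanced.sed_global_password_is_set    # -> true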

Setting individual disk SED passwords is only slightly more complex – because you have to call the command with a ‘disk identifier’ (NOT simply the disk name, like nvme0, sda, etc.) that you must collect with an additional step. Essentially, the below command at the shell will set an individual drive’s SED password:

midclt call disk.update "<your_disk_identifier>" '{"passwd": "<your_SED_password_string>"}'

… with the disk identifier taking a form like:
{serial_lunid}JQF4XMMRF5TT726_3d84f39adff29c457

You can collect the appropriate identifier for your SED drives thusly:

First, get the names of your SED (Opal “2”, Enterprise “E”) drives with
sedutil-cli --scan, e.g.:

root@TrueNAS[~]# sedutil-cli --scan
Scanning for Opal compliant disks
/dev/nvme0  2      Samsung SSD 970 EVO Plus 2TB
/dev/sda   No
/dev/sdb   No
/dev/sdc    2      Samsung SSD 870 EVO 500GB 
/dev/sdd   No

So, in this example, I’m looking for the disk identifiers for nvme0 and sdc. Once I know that, I can query disk information for those drives from the command line like this:

midclt call disk.query \
        '[["name","~","nvme0|sdc"]]' \
        '{  "extra":{"pools":true,"passwords":true},
            "select":["pool","name","identifier","subsystem","bus","type","model","serial","passwd"],
            "order_by":["bus","name"]
         }' \
|jq

(For any who need the explanation, the second line is a regex query filter with the targeted disk names delimited by a pipe character, as regex match alternations – so you would modify it for the particular needs of your own disk collection – e.g., “nvme0|sdc”, “nvme0|nvme1|sda|sdb”, etc.)

This will output formatted JSON of your SED drive details, giving an easy read of the “identifier” to use for the individual disk SED password setting command:

midclt call disk.update "<your_disk_identifier>" '{"passwd": "<your_SED_password_string>"}'

Of course, once set, you can repeat the earlier disk.query command and see the password in the individual drive’s details.
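
And if several drives are getting the same SED password, the lookup and the update can be combined in a loop – a sketch, using my example disk names and a placeholder password, with jq extracting the identifier:

SEDPassword='<your_SED_password_string>'    # placeholder
for name in nvme0 sdc; do
    id=$(midclt call disk.query "[[\"name\",\"=\",\"$name\"]]" | jq -r '.[0].identifier')
    midclt call disk.update "$id" "{\"passwd\": \"$SEDPassword\"}"
done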

3 Likes

Thank you very much for looking into this.
I wasn’t super detailed in my initial question, but you are correct that this is not just a matter of the single commands to encrypt and decrypt the drive.

Important questions are:

  1. Where and how is the password stored when the drives are unlocked?
  2. How does the system behave while the drives are still locked/encrypted?

Actually, these questions are not specific to SED drives; they apply to ZFS-encrypted ones too. I feel there should be a clean way to provide the password, unlock the drives, and start the related services.

1 Like

When you download a TrueNAS config with the seed option, you get a tar file with a:
freenas-v1.db
and a
pwenc_secret
file.

The *.db file is an SQLite database that has these SED-related values:
storage_disk | disk_password
and
system_advanced | adv_sed_password
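
If you’re curious, you can peek at those fields straight from the extracted config on any machine with sqlite3 – a sketch, where the neighboring column name disk_name is my guess:

sqlite3 freenas-v1.db 'SELECT adv_sed_password FROM system_advanced;'
sqlite3 freenas-v1.db 'SELECT disk_name, disk_password FROM storage_disk;'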

The SED password field in the database is encrypted – I’d assume with the pwenc_secret file content. I’d likewise guess that all of these config items are stored somewhere on the boot_pool, since the system boots up and unlocks the drives without user intervention – and I’d assume the same goes for ZFS encryption as well.

That is, I’d guess that the decryption keys are stored in cipher-text on the boot partition, along with the secret that decrypts the cipher-text. This is probably a security model that assumes a data center with physical security over the host (protecting the boot_pool), where the encryption on the data drives is more for assuring data destruction at drive retirement, or in the rare event of data drives being stolen.

In a different model, where you couldn’t assure physical security (or where authorities/hostiles/spies might physically seize or covertly access your equipment), you’d probably want unlocking keys stored neither on disk nor in memory – but that compromises ease-of-use and fault-tolerance/uptime.

I’m kind of interested in playing with the Opal support in cryptsetup 2.7.0+, which doesn’t yet seem to be in TrueNAS:

root@TrueNAS[~]# midclt call system.info | jq '.version'
"25.04.1"

root@TrueNAS[~]# cryptsetup --version
cryptsetup 2.6.1 flags: UDEV BLKID KEYRING KERNEL_CAPI

and see if I can get TrueNAS running off a boot_pool enclosed in a LUKS partition that just does Opal pass-through, and prompts for a password (and/or other authenticators, e.g., yubikey, fingerprint, etc.) at boot up before unlocking the boot drive.

Moreover, if enclosing a ZFS partition in a LUKS Opal structure doesn’t have any impact on ZFS operations, once the drive/partition is unlocked, it might even be a better approach for TrueNAS to do SED – rather than rolling their own solution with sedutil-cli under the hood, they could just rely upon the upstream LUKS SED solution. Just spitballing here though.

I’m curious to know if you got anywhere with this. While I understand the benefits of unlocking without a fob or password in an enterprise environment where you have greater physical security guarantees, there’s something to be said for not having everything en clair as the default for system security. People who are worried about booting remote systems that require a boot password or similar can always install a KVM-over-IP (there are some great Raspberry Pi based systems for between $100 and $300) if that’s a concern.

As a person who works in security, I am fundamentally uncomfortable with a “decrypt everything automagically by default” system, and while I know there are trade-offs, the unlocked-by-default boot disk should be an option and not the default, IMHO.

There are certainly other valid opinions, but I really think this one is something that deserves more attention.

1 Like

I completely take your point, but in my experience encryption at rest is often just a box-ticking exercise. As you say, if you really are concerned about your data getting into the wrong hands, there are much better ways to do this.

Well, there’s certainly more than one way to do it, but the problem is that currently you can’t really trust the integrity of the boot drive. Setting aside the question of whether auto-unlock of the pool disks is a good idea, anyone with physical access to the system’s boot drive can make changes.

Of course you could install Tripwire or similar (which would be unsupported) or lock your BIOS (which brings us back to KVM-over-IP), and so forth. We’re not talking about nation-state attackers here – my tin-foil hat isn’t that big – but I think having the ability to rely on the integrity of your boot system is an important consideration that is somewhat fundamental to any kind of trusted computing.

Security is always a trade-off. Anyone who follows me anywhere off this site knows I know that, and am a strong advocate for making informed trade-offs. I just think the fundamental lack of built-in trade-offs is a gap, that’s all.

I’m not trying to argue with you about it; I think we fundamentally agree. I just think that whether it’s enterprise customers or CE users, there should be a better set of trade-offs built into the system, especially since iXsystems basically disclaims anything that isn’t built in. That’s my real point: if implementing basic security on boot systems is an unsupported feature, then it’s a design effectiveness problem rather than a difficult-to-solve problem. My $0.02, anyway.

Actually there is a simple solution:

Have SED device(s) as TrueNAS boot disk(s), which can also store the keys of encrypted ZFS pools, and unlock them before TrueNAS boots – via local console, remotely via SSH, or remotely via HTTPS.
With that you don’t necessarily need KVM-over-IP / BMC.
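
In ZFS terms, the idea would be that once the SED boot disk is unlocked pre-boot, the key material it holds unlocks everything else – a minimal sketch, assuming the data pools use ZFS native encryption with key files kept on the boot device:

zfs load-key -a    # load keys for all encrypted datasets whose key files are now readable
zfs mount -a       # then mount everything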

2 Likes

I got somewhere, but not far – still researching, haven’t really started experimenting much yet.

What I’ve found:

OpenZFS actually has documentation demonstrating how to combine LUKS with ZFS:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bookworm%20Root%20on%20ZFS.html#encryption
Basically, their example of a LUKS-encrypted root pool is just a standard LUKS partition with a zpool created on top of it:

Create the root pool:
LUKS:

apt install --yes cryptsetup

cryptsetup luksFormat -c aes-xts-plain64 -s 512 -h sha256 ${DISK}-part4
cryptsetup luksOpen ${DISK}-part4 luks1
zpool create \
    -o ashift=12 \
    -o autotrim=on \
    -O acltype=posixacl -O xattr=sa -O dnodesize=auto \
    -O compression=lz4 \
    -O normalization=formD \
    -O relatime=on \
    -O canmount=off -O mountpoint=/ -R /mnt \
    rpool /dev/mapper/luks1

There are some complexities to getting everything bootable, but nothing looks crazy:

For LUKS installs only, setup /etc/crypttab:

apt install --yes cryptsetup cryptsetup-initramfs

echo luks1 /dev/disk/by-uuid/$(blkid -s UUID -o value ${DISK}-part4) \
    none luks,discard,initramfs > /etc/crypttab
The use of initramfs is a work-around because cryptsetup does not support ZFS.

So, if software-encryption LUKS works, maybe the Opal SED LUKS support – introduced in cryptsetup 2.7.0 – can just be subbed in, e.g.:

cryptsetup luksErase --hw-opal-factory-reset <device>
     Enter OPAL PSID: ***
cryptsetup luksFormat --hw-opal-only ${DISK}-part4

before the zpool create.
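
… with, presumably, the same open step as in the software-LUKS recipe in between (my assumption – untested):

cryptsetup luksOpen ${DISK}-part4 luks1    # unlocks the Opal locking range
# ...followed by the zpool create on /dev/mapper/luks1, exactly as above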

Of course, the TrueNAS install already sets up the boot-pool, so short of tweaking the TrueNAS installer script, probably an easier proof of concept might be to try to LUKS encrypt an existing system.

Interestingly, there is now a “reencrypt” method in cryptsetup that allows you to add LUKS to an existing partition, e.g.:

cryptsetup reencrypt --encrypt --type luks2 --reduce-device-size $keyslot_size /dev/"$1"

… and there are scripts out there showing a reencrypt for a BTRFS volume (because some were disappointed that the Ubuntu installer doesn’t support encrypted BTRFS, so they learned to retrofit it):

However, this process involves a small (32 MiB) reduction of the volume being encrypted, and while volume shrinking is supported in BTRFS, I don’t think you can do that with ZFS. I think you might have to zfs send -R the boot-pool to a temp spot, create the LUKS partition, create a new zpool in it, and then zfs send -R the boot-pool content back into place. And of course, make sure all the other stuff to support a LUKS boot, then a ZFS discovery/mount, all works.
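
Roughly, that round trip might look like this – a completely untested sketch from a recovery environment, where temppool, the snapshot name, and the partition are all placeholders:

zfs snapshot -r boot-pool@migrate
zfs send -R boot-pool@migrate | zfs receive -F temppool/boot-backup
zpool destroy boot-pool
cryptsetup luksFormat --hw-opal-only ${DISK}-part3    # needs cryptsetup >= 2.7.0
cryptsetup luksOpen ${DISK}-part3 luks-boot
zpool create -f boot-pool /dev/mapper/luks-boot
zfs send -R temppool/boot-backup@migrate | zfs receive -F boot-pool
# ...plus whatever bootloader/initramfs repair is needed afterwards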

And then, to do LUKS Opal, you need at least cryptsetup 2.7.0, and to work with the TrueNAS boot-pool in a recovery environment, you need a supporting OpenZFS, etc.

I tried making a simple Debian Bookworm environment, with the ZFS in the repo, and I saw this:

root@debian:~# zpool import
   pool: boot-pool
     id: 17027490679019720616
  state: UNAVAIL
status: The pool can only be accessed in read-only mode on this system. It
        cannot be accessed in read-write mode because it uses the following
        feature(s) not supported on this system:
        org.openzfs:zilsaxattr (Support for xattr=sa extended attribute logging in ZIL.)
action: The pool cannot be imported in read-write mode. Import the pool with
        "-o readonly=on", access the pool on a system that supports the
        required feature(s), or recreate the pool from backup.
 config:

        boot-pool    UNAVAIL  unsupported feature(s)
          nvme0n1p3  ONLINE

So, there’s probably more work to do to get all the ducks in a row.
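
In the meantime, per the status message itself, the pool can at least be inspected read-only from the older environment:

zpool import -o readonly=on boot-pool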

That’s interesting. Generally, I lost faith in the PBA environment because they always ran on super old kernels, and failed on my hardware. I was hoping eventually for something that ran in UEFI (like GitHub - necauqua/opal-uefi-greeter: An UEFI application that unlocks a SED and starts an OS from it. Written in Rust) off a standard EFI partition and then unlocked the system locking range, and booted off it.

This looks promising, as it’s built on Ubuntu LTS 22.04.

The remote feature is a nice touch too. Interestingly, the OpenZFS setup guide has a similar bundling of Dropbear into the initramfs, likewise allowing, over SSH, either zfsunlock or cryptroot-unlock:

Optional: For ZFS native encryption or LUKS, configure Dropbear for remote unlocking:
...
-- https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Bookworm%20Root%20on%20ZFS.html#step-4-system-configuration

You do have to be careful, because different sedutil-cli builds use different password hashing, and passwords set with one don’t end up usable in another version/platform of the tools.

Probably a good idea to use the TrueNAS fork (assuming that’s what’s built into TrueNAS):

So you can still use the built-in command for any needed management.

Thanks for the pointer, though. I’ll check this out.

1 Like

Luckily, ChubbyAnt’s fork supports UEFI – and by now, other forks do too.

Yes, that is a very important point. For that reason, the developer allows choosing a specific fork:

Using other forks of sedutil

Optionally you can use other sedutil forks of the official Drive-Trust-Alliance one by setting the environment variable SEDUTIL_FORK as follows:

Example: sudo SEDUTIL_FORK="ChubbyAnt" ./build.sh