I’m running ElectricEel-24.10.0.1. I have a single pool with some shared volumes and approximately 210 TB of stored data. I’m looking at backing up a snapshot to tape, and I have an LTO-8 library that I’d like to use.
I’m looking for advice on the best way to create and manage the backup. In a perfect world, I’d love to capture EVERYTHING so that, in a complete disaster, I could load up TrueNAS and a client app and do a bulk restore; as a fallback, I’d be okay with just capturing the data via NFS or SMB without the ZFS permissions.
I created a Windows VM with the intent to run Veeam on it, but couldn’t find a way to pass /dev/st0 to the VM.
In short, is there a way to do this directly from a container, or is a VM the best vector? If a VM is the way to go, how can I pass through the tape library so the software can do its thing?
Thanks in advance for any assistance you can provide!
With LTO, the device drivers generally need to be running on the host for the device to be seen. This is especially problematic for Macs, since macOS handles tape access through system calls rather than exposing a device under /dev. You may or may not face similar problems trying to expose host drivers in a container, and whether you can even expose the device at all may vary.
Your best bet is to use a VM, but again it’s not guaranteed that the host driver will work or that you can install the driver in the VM, especially if you’re using a non-native interface adapter (e.g. an ATTO HBA card) rather than Thunderbolt. If you get it to work, let us know how.
My professional recommendation is to mount an LTFS file system on the host and then share the LTFS mount via SMB. It’s imperfect for a lot of reasons, but is likely more reliable than trying to do device or driver pass-through since LTFS (when properly mounted) is seen by the host as a file system rather than a tape device.
If you don’t need special drivers for your drive, library, or adapter card, you could always consider installing IBM’s LTFS utilities directly onto TrueNAS and mounting the tape there. Adding LTFS would make it an unsupported configuration from iX’s perspective, but it would certainly simplify the daisy chain of hosts and clients you currently have. It’s at least worth considering.
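If you want to experiment, the basic flow with the LTFS utilities looks roughly like this; the device name /dev/st0 and the mountpoint are assumptions, so check what your hardware actually exposes:

```
# Format a fresh tape for LTFS (destructive!); /dev/st0 is an assumed device name
mkltfs -d /dev/st0

# Mount the LTFS volume; /mnt/ltfs is an arbitrary mountpoint
mkdir -p /mnt/ltfs
ltfs -o devname=/dev/st0 /mnt/ltfs

# From here the tape looks like an ordinary (slow, sequential) file system,
# so it can be shared via SMB or written to with normal file tools.

# Unmount cleanly when done so the index gets written back to tape
umount /mnt/ltfs
```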
That’s interesting advice, and I thank you for it. My concern is that, because it’s a robotic library, mounting it as an LTFS destination removes the backup client’s ability to communicate with the library’s tape swapping mechanism and barcode tracking system.
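(For what it’s worth, I realize the changer usually shows up as its own SCSI generic device, separate from the drive, so I could presumably still script swaps by hand with mtx, something like the sketch below, where /dev/sg3 and the slot/drive numbers are assumptions. But that’s a far cry from letting the backup software manage media and barcodes for me.)

```
# The changer is typically its own SCSI generic device (assumed /dev/sg3 here);
# status lists slots, drives, and barcodes if the library reports them
mtx -f /dev/sg3 status

# Move the tape in slot 5 into drive 0, then put it back when finished
mtx -f /dev/sg3 load 5 0
# ... mount LTFS, write data, unmount ...
mtx -f /dev/sg3 unload 5 0
```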
Increasingly, it looks like I need to stand up another physical host on my network with an HBA to the library and use it to consume the data remotely via a share. It just seems like I should be able to do this natively through TrueNAS.
There is no nice way to do this, and it gets much worse much quicker if you have a library/autoloader. I would love it if there were a way to set up an LTO library that points directly at ZFS snapshots, but no such thing currently exists (paid or free).
I understand your frustration (I share it around the lack of support for drive-native encryption, which has nothing to do with TrueNAS), but some of this is a limitation of how LTO currently works and of the lack of standardization among the LTO Consortium when it comes to interfaces and tape library support. Only one company currently produces LTO-9 drives with native Thunderbolt, so everything else needs drive, adapter, and possibly library drivers (especially if your library does anything complicated or requires KMIP), and that complicates your life with TrueNAS.
Assuming you can get your hardware to work at all on TrueNAS, at best you’d be exposing a raw tape device. The “smarts” of LTO backups generally live either in the backup software or in the firmware/hardware, which handles hardware compression and (with the right KMIP software) hardware-based AES-256-GCM encryption. That means even if you don’t have driver problems, you’ll still need backup software that can see the right data, and I’m not sure ZFS snapshots capture sufficient data to restore from tape unless your backup software knows how to treat them as ZFS snapshots. That’s why I’m suggesting LTFS rather than block storage.
Some of this may be the way iX seems to draw the line between “preconfigured appliance” and “flexible ZFS-based RAID arrays with additional services,” but in fairness most of the complexity is in ZFS itself and in the hardware and software limitations of the current generation of LTO devices. That’s why some LTO backup systems opt for LTFS by default: it deals with what appear to be files, not raw data blocks, and that often makes differentials and appends faster (although less space-efficient) to handle via LTFS and its index than as data blocks, whether raw or as part of a tar-like backup.
Except for raw block storage, LTFS is likely the better approach for most (although certainly not all) use cases. Even in the case of a block-storage zvol, you can always find a way to treat it as a file (e.g. dd or a disk image cloner) rather than as ZFS-specific data blocks. LTFS doesn’t care; it’s just a file system, so as long as you can copy the data you want from one filesystem to another, “it just works.” Snapshots are more of a filesystem internal, so any non-LTFS backup software would need to understand those internals to handle snapshots as something other than files or a disk image.
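As a concrete sketch of the zvol-as-a-file idea, assuming a hypothetical zvol tank/vmdisk and an LTFS mount at /mnt/ltfs:

```
# Snapshot block devices are hidden by default; make them visible first
zfs set snapdev=visible tank/vmdisk

# Take a snapshot so the image is crash-consistent
zfs snapshot tank/vmdisk@totape

# Copy the snapshot's block device onto the LTFS mount as a plain file
dd if=/dev/zvol/tank/vmdisk@totape of=/mnt/ltfs/vmdisk.img bs=1M status=progress
```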
Your mileage will definitely vary, but I would personally recommend Canister for doing efficient backups onto LTFS unless it lacks support for some particular piece of your hardware. It currently handles all drives that can use LTFS (most are really IBM drives anyway), and it can grow your differentials or incrementals (including offline indexes) across tapes automagically. AFAIK it leaves the tape library functions to your drivers or hardware, though, so your mileage may vary with libraries.
Whatever you decide to do, let us know. I’m deeply interested in how people are stringing their LTO solutions together, and you may discover something I haven’t run across yet.
I’m marking your post as a solution. It isn’t what I want to hear, but your expertise matches what I’ve been reading (and then some).
My plan from here is to stand up a Windows host on the network with an HBA for the library and run something from that to consume the data via SMB. I had hoped for something more elegant, but I recognize the technical hurdles betwixt my wants and an actual solution.
The only (and the best) way to do that would be to pass the whole HBA to the guest using PCIe passthrough. This unfortunately means you’d be unable to use the other ports on the HBA for other things on the host. The PCIe device will potentially have multiple sub-devices, and you need to pass each one through to the VM. A video card, for example, can expose a total of four functions, 81:00.0 through 81:00.3, but it’s all one add-on card (see the sketch below for how to enumerate them).
Load order shouldn’t matter in most situations; just pick a device order of 1002 or higher.
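Here’s a rough way to enumerate the functions before setting up the passthrough; the 81:00 address is just the example from above:

```
# List every function of the card at the assumed address 81:00 --
# each one (81:00.0, 81:00.1, ...) must be passed through together
lspci -s 81:00 -nn

# Confirm they share an IOMMU group, since the whole group moves to the guest
for d in /sys/bus/pci/devices/0000:81:00.*/iommu_group; do
    readlink -f "$d"
done
```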
Good information, Nick. I have a second HBA, but I’m out of slots in my server to install it. Maybe I’ll move things around a bit and see if I can gain a slot. Thanks again!
Of course they do; they’d be pretty damn useless if they didn’t.
Managing snapshots independently of ZFS is always going to be a pain, but there’s nothing keeping you from just streaming snapshots out to tape, as long as you make sure they fit on the tape.
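A minimal sketch of that streaming, assuming a hypothetical dataset tank/data and a non-rewinding tape device at /dev/nst0:

```
# Write a snapshot stream straight to the non-rewinding tape device
zfs send tank/data@weekly | dd of=/dev/nst0 bs=256k

# Restore is the mirror image (block size must match what was written)
dd if=/dev/nst0 bs=256k | zfs receive tank/data-restored
```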
I understand what you mean here, but I’m not sure I agree.
In a world with non-homogeneous file systems, having an agnostic third party like Veeam manage your backups is not a pain. Rather, it’s a different tool for a similar job.
Only as long as the snapshot is smaller than the tape, though. Special tooling would be required to span tapes in a meaningfully reproducible and useful way.
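mbuffer gets you part of the way there: it can detect end-of-tape and run a command (or prompt an operator) before continuing onto the next volume, though it won’t track which tape holds which part of the stream for you. A sketch, with device and dataset names assumed:

```
# Stream a snapshot across multiple tapes; -A runs when a tape fills,
# here prompting the operator (an autoloader setup could run mtx instead)
zfs send tank/data@weekly | \
    mbuffer -m 2G -s 256k -o /dev/nst0 \
            -A "echo 'Insert next tape and press return'; read a < /dev/tty"

# Restore by reading the volumes back in order with matching options
mbuffer -i /dev/nst0 -s 256k \
        -A "echo 'Insert next tape and press return'; read a < /dev/tty" | \
    zfs receive tank/data-restored
```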
In general, though, I don’t think doing this would be practical.