My Ripping Machine usage notes

After struggling with Automatic Ripping Machine (ARM) for the past few days, I thought I would post some notes to help others.

  • A lot of the errors I see posted that are similar to mine come down to device naming issues. In typical Linux fashion, the /dev device names for an optical drive are not predictable. In addition to /dev/sr0 I needed to add the raw SCSI device (e.g. /dev/sg10). A.R.M. stopped working this morning after a TrueNAS reboot: the SCSI device name changed from /dev/sg10 to /dev/sg8, presumably because I had added a drive. Also, depending on which USB port I attach the drive to, /dev/sr0 can become /dev/sr1. So in general, lsscsi -g is your friend (see the sketch after this list). I’m not sure whether a UUID or some other persistent identifier is possible, but ARM does a sanity check on the device, so /dev/sr? may be the only choice.
  • If you plan to use a GPU for the embedded HandBrake, you must edit the config file and change the method to one that matches your GPU type (see the config sketch after this list).
  • While ARM is running, my ZFS cache gobbles up nearly all of my 128 GB of system memory, despite the fact that I have a dedicated SSD for my pool. So running two transcodes at once is sketchy.
  • My Nvidia 5060 GPU seems barely impacted by the transcodes, but nvidia-smi verifies it’s running a job. The container’s CPU load is pretty high, so it seems the CPU and GPU share the load roughly equally. #shrug
  • So far I can’t rip DVDs at all; only Blu-ray is working. It seems like a decrypt/decode issue.
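For the device-naming point above, this is roughly the check I run after a reboot or after moving the drive to another USB port, before fixing up the ARM config. The device names shown are just examples from my box and will differ on yours:

    # show SCSI devices together with their generic (sg) nodes
    lsscsi -g
    # example output (abbreviated) -- note the sr and sg names on the optical drive line:
    # [10:0:0:0]  cd/dvd  ASUS  BW-16D1HT  3.10  /dev/sr0  /dev/sg8

Whatever sr/sg pair shows up there is what I pass through to the ARM container.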
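And for the GPU/HandBrake point, this is the kind of config edit I mean. The path and the key names below are from my own container and are assumptions as far as your setup goes, so check your arm.yaml for the exact spelling:

    # inside the ARM container; the config path may differ on your install
    grep -n 'HB_PRESET' /etc/arm/config/arm.yaml
    # then point the DVD/Blu-ray presets at an NVENC preset your HandBrake build knows, e.g.:
    #   HB_PRESET_DVD: "H.265 NVENC 1080p"
    #   HB_PRESET_BD: "H.265 NVENC 2160p 4K"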

I see you’ve made progress since your last post about problems with A.R.M.
Now I have some questions, since you got further than I did.

I’m having problems getting my GTX 1060 to work.

I believe the problem is the NVENC API version: my GPU uses API 12, which is what the driver that TrueNAS comes with supports.

Whenever I transcode I get an error that HandBrake uses API 13, and it errors out. I assumed I was just out of luck, since TrueNAS uses Nvidia’s 550.142 driver by default and you can’t “change” it without manually installing drivers yourself as root. So I have to ask:

Do you have more than one GPU (one for the host and another for A.R.M.), or just one shared?
Is your driver for the Nvidia GPU 550.142 or something else?
What’s the output of nvidia-smi on your host and in the container?
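For reference, this is how I’ve been comparing the driver on the host versus inside the container; the container name here is just a placeholder for whatever your ARM app is actually called:

    # on the TrueNAS host
    nvidia-smi --query-gpu=name,driver_version --format=csv,noheader

    # inside the ARM container (replace "arm" with your container name from `docker ps`)
    docker exec arm nvidia-smi --query-gpu=name,driver_version --format=csv,noheader

If the two driver versions don’t match, or nvidia-smi isn’t visible in the container at all, that would narrow down where the API 12/13 mismatch is coming from.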

If you can share what you did to enable hardware transcoding, that would be great.
I’ve messed with all the settings and configs, and even tried the manual driver install.

But I can’t seem to get past the problem that my GPU uses NVENC API 12 and HandBrake uses API 13.

I have a 5060 Blackwell reserved for ARM, and the crappy iGPU handles TrueNAS video. I just clicked “Install Nvidia driver”… nothing else, aside from having to tell ARM to use NVENC (HandBrakeCLI -z lists the presets).
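In case it helps, this is how I checked which NVENC presets the HandBrake build inside the ARM container actually knows before pointing the config at one (the redirect is just so grep catches the list no matter which output stream it lands on):

    # list HandBrake's built-in presets and filter for the NVENC ones
    HandBrakeCLI -z 2>&1 | grep -i nvenc

If no NVENC presets show up there, that’s a hint the build or the GPU passthrough isn’t set up for hardware encoding.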

I’ve had zero luck using the GPU in a VM. It seems to work in all containers I try.