MacOS as a VM on TrueNAS

I started having this issue today while trying to modify my raw file path: Can't add a new Raw File device to a virtual machine

Screenshots:
This image is if I try to add a new device:

This image is the details for an existing device that gives the same error if I hit edit and don’t change anything and hit save:

This is what I am now having issues editing lol. And this is after upgrading from 24.10.2.2 to 25.04.2.4

I skipped the early Fangtooth versions to avoid having to deal with the VM mess.

I have all of my settings and configuration on my phone, and all of my data and apps, exactly as I want them. I want that backed up for easy restore in case I lose my phone.

Hope that answers your question.

@Rocketplanner83 was asking for use cases, and that's my big use case.

Maybe it's not yours, but it's mine :slight_smile:

If anyone has a way of building a native iPhone backup using TrueNAS that doesn't involve a Mac VM, I'd love to hear about it!

Thanks.

Other than a Windows VM and either iTunes or commercial backup software like Copytrans, sorry, no.

I wasn’t questioning your use case in general, but simply curious. You asked if “nobody did backups” so I wrote about my approach.

1 Like

My workaround was to delete the file from the location where I wanted it, set the Raw File settings to that location with a Raw Filesize of 1 GiB (the minimum), and then copy the real file back over.
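The workaround above, sketched as shell steps. The image path is a hypothetical placeholder (the demo uses a scratch file so it can run anywhere), not the poster's actual dataset layout:

```shell
# Placeholder for the real raw file, e.g. /mnt/tank/vms/macos.img
IMG="$(mktemp)"

# 1) Move the real disk image out of the way.
mv "$IMG" "$IMG.real"

# 2) In the UI, point the Raw File device at $IMG with a 1 GiB
#    Raw Filesize (the minimum); this line stands in for the stub it creates.
: > "$IMG"

# 3) Move the real image back over the stub.
mv "$IMG.real" "$IMG"
echo "restored: $IMG"
```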

1 Like

I think this is the core distinction in this thread:

  • Media/files → Nextcloud, Immich, Syncthing, etc., handle that really well on TrueNAS.
  • Full-device backups (settings, apps, layout, messages) → Apple locks this behind iTunes/Finder/iMazing, no way around it.

There isn’t a “native” iOS-to-NAS path like there is for Android. The only way to capture a complete, restorable backup is to run either:

  • Windows VM → Install iTunes or iMazing, then point the backup directory at a TrueNAS SMB share. On Windows, you can even symlink the iTunes backup folder directly to your dataset, e.g.:
mklink /J "%APPDATA%\Apple Computer\MobileSync\Backup" "Z:\iPhone-Backups"
  • macOS VM → Finder does the same thing, and you can relocate its backup folder onto your NAS.

Everything else (such as libimobiledevice in Docker) falls apart quickly and won’t provide a restorable backup.

So really it comes down to:

  • If you care about restore fidelity after a lost phone → VM + iTunes/Finder/iMazing + NAS storage.
  • If you care about photos/videos/documents → Immich/Nextcloud to TrueNAS, which avoids iCloud bills without the VM overhead.

Apple makes sure there’s no middle ground. You either embrace their tooling or you focus on media/file sync.

1 Like

My use case is this: my kids and my (awesome) wife have iPhones, but I’m on Android.
As far as I know, it is not possible to change Screen Time settings from a non-Apple device. Hence, I’m not able to allow or restrict my kids’ screen time if my wife isn’t available.

Currently I use a macOS VM on my (Linux) laptop, which was pretty straightforward using Quickemu.

But since the VM is pretty big and I only use it once a month or so, I would rather put it on the NAS instead of filling my local laptop SSD.

1 Like

Thank you, this crazy workaround works for me :rofl:

1 Like

This was OBE

One thing that helped me cut through a lot of the YAML/CLI frustration was not editing the VM config directly at all.

On recent SCALE releases (including 25.10 / Goldeye), manually editing the VM YAML or trying to keep lines uncommented is brittle. The UI and backend will happily re-comment things even when the values did apply.

What worked reliably for me was updating VM settings via the middleware API instead of editing YAML:

midclt call vm.update <VM_ID> '{
  "command_line_args": "-device isa-applesmc,osk=ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc -smbios type=2 -global nec-usb-xhci.msi=off -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off"
}'

Key points that made this sane again:

  • Put all command line args on a single line

  • Let midclt handle persistence instead of fighting YAML indentation

  • Don’t worry if the UI later shows the line commented; if midclt accepted it, the args are applied

  • Same approach works for things like CPU mode, machine type, etc.
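One way to avoid the malformed-quoting errors mentioned below is to build and sanity-check the JSON payload in a shell variable before handing it to midclt. This is a sketch: the VM id `1` is a placeholder, and the args are the ones from the vm.update call above.

```shell
# Single line of QEMU args, exactly as in the vm.update call above.
QEMU_ARGS='-device isa-applesmc,osk=ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc -smbios type=2 -global nec-usb-xhci.msi=off -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off'

# Wrap it in JSON; the double quotes live inside the variable, so the
# shell never has to escape them on the command line.
PAYLOAD="{\"command_line_args\": \"$QEMU_ARGS\"}"

# Validate the JSON before touching the middleware (python3 ships on SCALE).
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Then apply it (VM id 1 is a placeholder):
# midclt call vm.update 1 "$PAYLOAD"
```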

This avoided:

  • YAML whitespace issues

  • Settings “not sticking”

  • CLI errors from malformed quoting

It doesn’t solve every macOS-on-SCALE problem, but it at least makes the configuration side predictable again.

If others are still editing the VM YAML directly and getting nowhere, I’d strongly recommend trying middleware updates instead.

After way too much testing and rabbit-hole research, I stopped trying to force macOS into weird VM gymnastics and just built around a Mac Mini tied directly into my TrueNAS stack.

Mac Mini runs native, mounts ZFS storage over high-speed internal networking, and lives inside the same infrastructure as the rest of my servers. Full performance, snapshot protection, centralized storage, and zero virtualization headaches.

Honestly, it’s been the cleanest and most stable way I’ve found to run macOS in a serious homelab setup.

Sometimes the best solution isn’t forcing it to work… it’s letting the right hardware do what it’s good at.

I wanted to follow up with what actually worked for me on TrueNAS SCALE 25.10.1 (Goldeye), both for completing the macOS install and for the original goal that led me down this path: local iPhone backups backed by ZFS. I also want to clarify the current state of Apple Content Caching in this setup.

Keyboard input during install (Goldeye-specific)
On Goldeye, I was able to boot the macOS installer cleanly via OpenCore and reach Setup Assistant, but I had no keyboard input at the account creation screen. Mouse input worked, the installer progressed, and CPU and disk activity were clearly visible, but I could not type anything to finish setup.

What ultimately solved this was passing the entire USB controller through the PCI bus to the macOS VM.

Once I:

  • Passed through a complete USB controller (not individual USB devices)
  • Completed initial macOS account creation using a physical USB keyboard
  • Enabled Remote Management / Screen Sharing inside macOS

…the issue disappeared entirely.

After that first login, I now have complete VNC control from any machine on my network. Keyboard and mouse input typically works over VNC, and USB controller passthrough is no longer required for day‑to‑day use.

iPhone backups to ZFS (primary use case)
With macOS up and stable, I attached a dedicated ZFS‑backed dataset to the VM and redirected iPhone backups to it.

Specifically:

  • I attached a dataset from my SSD pool to the VM (mounted as /Volumes/mac_cache)
  • Moved Finder/iTunes backups off the VM boot disk
  • Symlinked the MobileSync directory to ZFS:

mkdir -p /Volumes/mac_cache/MobileSync
mv ~/Library/Application\ Support/MobileSync/* /Volumes/mac_cache/MobileSync
rmdir ~/Library/Application\ Support/MobileSync
ln -s /Volumes/mac_cache/MobileSync ~/Library/Application\ Support/MobileSync

Finder now backs up my iPhone directly into ZFS, with:

  • No iCloud dependency
  • No VM boot disk growth
  • Snapshot and replication support on the NAS side

Backup growth is visible immediately in the dataset, and restores work normally.
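For the snapshot side, a minimal sketch. The dataset name `ssd/mac_cache` is an assumption based on the mount point above, and the actual `zfs` call is left commented so the snippet is safe to dry-run off the NAS:

```shell
# Placeholder name for the dataset backing /Volumes/mac_cache in the VM.
DATASET="ssd/mac_cache"

# One dated snapshot per backup run.
SNAP="$DATASET@iphone-$(date +%Y-%m-%d)"
echo "would run: zfs snapshot $SNAP"
# zfs snapshot "$SNAP"   # uncomment on the NAS side
```

Pair this with a periodic snapshot task or replication job in the SCALE UI if you'd rather not script it.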

Apple Content Caching (current status)
Content Caching was part of my original goal, but I want to be explicit about the current behavior.

Even with a valid Mac mini SMBIOS, stable networking, and SIP in a custom/disabled configuration, Apple Content Caching will not activate inside the macOS VM. AssetCacheManagerUtil consistently reports that it is running in a virtual machine and refuses to register or start.

This appears to be an intentional Apple restriction. Disabling SIP alone is not sufficient to make Content Caching work in a VM. The only known ways around this involve patching or hooking Apple system binaries to bypass VM detection, which is unsupported and brittle across OS updates.

For now, I am leaving Content Caching disabled and treating this VM strictly as a reliable, long‑term iPhone backup target backed by ZFS.

Key takeaways for Goldeye (25.10.x)

  • macOS installer can stall at Setup Assistant due to unreliable HID input
  • Legacy QEMU input devices are no longer exposed in the UI
  • Full USB controller passthrough provides a reliable path through initial setup
  • Once Remote Management is enabled, VNC input works normally
  • Redirecting MobileSync to ZFS works cleanly and supports full‑device iPhone restores
  • Apple Content Caching does not work in a macOS VM without unsupported system patching

This combination—USB controller passthrough for the first boot, then VNC-only operation with ZFS-backed iPhone backups—wasn’t apparent from existing guides, so I wanted to document what actually worked and what didn’t.

Hey, I looked through GUIDE - How to Install macOS on TrueNAS [Intel/AMD] - PCI Passthrough Guide | EliteMacx86 Forum and read this whole forum post. I got two files, BaseSystem.img and OpenCore-v21.iso, both loaded as raw AHCI devices, with OpenCore at device order 1000, BaseSystem at 1001, and the zvol at 1002.
i got opencore from here:
Release v21 - OpenCore for Catalina, Big Sur, Monterey, Ventura on Proxmox · thenickdude/KVM-Opencore · GitHub
I am stuck on this screen. What am I doing wrong?
My image is Sonoma.
I got it from Making the installer in Windows | OpenCore Install Guide

My system specs:
CPU: AMD Ryzen 5 3600 6-Core Processor
Memory: 48 GB DDR4
VM vCPUs: tried with 1 and 4
Cores: 6
Threads: 2
I got no GPU

A couple of things to clarify that will save you a lot of time.

First, for the initial boot, you should set the VM to 1 vCPU / 1 thread. macOS is notoriously picky during early boot and installer handoff, and multi-threading can cause hangs before graphics ever come up. You can increase the number of cores later, once the OS is installed.

That said, the real blocker isn’t OpenCore, disk order, or installer media. It’s this line from your specs:

“I got no GPU.”

On AMD CPUs, macOS requires a real, macOS-supported GPU passed through via PCIe. There is no usable software or virtual display fallback on Ryzen. QEMU/VNC/SPICE can load OVMF and OpenCore, but macOS will stall exactly where your screenshot shows—right after the OpenCore → macOS handoff.

Because the Ryzen 5 3600 has no iGPU, you must do one of the following:

  • Pass through a supported AMD GPU (RX 5xx / RX 6xxx), or
  • Move the VM to an Intel system with an iGPU

Without one of those, macOS will never progress past the TianoCore/OpenCore stage, regardless of whether it’s Sonoma or Ventura, the OpenCore version, or the installer source.

Small but critical distinction compared to my setup: CPU architecture.

I’m running macOS on Intel hardware, and it can boot and run without a physical GPU passthrough once the OS is installed. On Intel, the system can run “headless,” and VNC/Screen Sharing works fine after initial setup.

On AMD Ryzen, there is no such fallback. macOS must see and initialize a real, supported GPU. With no iGPU and no discrete GPU passed through, it will always stall exactly where you’re stuck.

So in short:

  • Reducing to 1 thread is correct and recommended
  • Your OpenCore and disk setup are fine
  • No GPU on AMD is a hard stop

That’s why my VM works headless on Intel, and why yours currently cannot on Ryzen without GPU passthrough.

1 Like

Thank you so much for the reply. Just to confirm: is it possible to use a GPU just for the installation and then run without a GPU afterward, on my current setup?

And if I add a GPU and pass it through via PCIe, do I need to change anything, like in the CLI or something?

Yes, that’s totally possible.

You can use a GPU just for the macOS installation and initial setup, and then remove it afterward and run the VM headless. macOS does not require a physical GPU once it’s installed, as long as you have a virtual display device (VNC/QXL/SPICE) for access.

If you later decide to add a GPU via PCIe passthrough, you usually do not need to change anything in OpenCore or macOS, assuming the GPU is natively supported. Just make sure the VM is fully powered off (not rebooted or suspended) before adding the GPU, and pass through both the GPU and its HDMI/DP audio function if it has one.

The main thing to be careful about is stability: adding or removing PCI devices changes the VM’s hardware layout. It’s best to do it once, confirm the VM boots normally, and then leave the configuration alone. Avoid making multiple hardware changes at the same time.

In short:

  • GPU for install only → perfectly fine
  • Headless after install → fine
  • GPU passthrough later → fine, no major config changes needed, just power off first

Sorry, very silly question: what is the way to use the OpenCore image? I just downloaded it and added it as raw. The EliteMacx86 GPU compatibility chart shows my card is compatible but needs spoofing, so I read through this, and it says to add it to EFI/OC/ACPI and also edit the config.plist. Can you please tell me the steps? I am very new to this.

I have an old Sapphire TOXIC R9 280X 3G GDDR5

Or I can get a brand new GPU just to install it, if I won't have to change the EFI or OpenCore stuff and can just use this: Release v21 - OpenCore for Catalina, Big Sur, Monterey, Ventura on Proxmox · thenickdude/KVM-Opencore · GitHub. I would prefer this simpler route. Should I be good with any GPU from this list that doesn't say "needs spoofing" in the table? I ordered a Radeon HD 6570 graphics card, dual HDMI, 1 GB DDR3.
https://elitemacx86.com/threads/nvidia-gpu-compatibility-list-for-macos.614/

Thank you so much in advance.

I’ve been hackintoshing for about 10 years and messing with Linux/KVM for the last couple, so I’ll keep this simple.

You don’t need to mess with ACPI or hand-edit config.plist just to get macOS installed if you’re using the thenickdude/KVM-Opencore image. Attach the OpenCore image, attach the macOS installer, boot OpenCore, and install. That’s basically it.

The R9 280X isn’t supported on modern macOS (Big Sur+). Spoofing those older AMD cards is pretty hit-or-miss and usually more pain than it’s worth.

You can install macOS headless using the virtual display, or use a supported AMD GPU just for the install and remove it later; no OpenCore changes needed either way.

NVIDIA GPUs aren’t really an option for modern macOS, so I’d skip that path.

I’d recommend getting macOS booting and stable first, then worrying about GPU passthrough if you actually need it.

I just installed an RX 5500 XT and did the passthrough with these settings. It booted into the OpenCore screen, I chose the macOS Base System option, and now I'm stuck at the Apple logo. What am I doing wrong now?
Tried OS 15, 14, 13; same thing.
BTW, I had to add the GPU to the isolated GPU list in the advanced settings of TrueNAS.

That behavior is expected with an RX 5500 XT on Sonoma.

A few key points to check:

  1. RX 5500 XT is not supported in macOS Sonoma (14.x). Apple dropped support for Navi GPUs. It will boot OpenCore, start loading macOS, then hang at the Apple logo exactly like you’re seeing.
  2. If you want that card to work, you need to install Ventura (13.x) or earlier. Sonoma won’t work with it, passthrough or bare metal.
  3. Make sure you are not using VNC/QXL at the same time as GPU passthrough. Once the GPU is passed through, remove the virtual display device.
  4. Isolating the GPU in TrueNAS was correct; no issue there.

If your goal is just to install macOS and later run headless, I’d recommend:

  • Install Ventura with the RX 5500 XT
  • Complete setup and updates
  • Then remove the GPU and run headless if you don’t need acceleration

Sonoma + AMD Navi GPUs is the blocker here, not your VM settings.