TL;DR: OPNsense network performance is bad under TrueNAS SCALE with both virtio (~550 Mbps) and e1000 (~100 Mbps). Has anyone got OPNsense working well on TrueNAS SCALE?
So, I run OPNsense virtually on an ESXi box and it works fine at full performance. Virtualising FreeBSD is always a bit of an issue, so I'm familiar with the challenges.
With ESXi becoming closed, I wanted to move it to TrueNAS SCALE, even if just temporarily while I sort out the ESXi box.
I have the VM working and passing traffic, etc., but the performance is shocking. Using virtio drivers I get no more than 550 Mbps. I know virtio drivers can be an issue, so I dropped in some e1000s instead and got 100 Mbps. Also, with the guest idle (1% CPU), libvirt on the host spikes up to 100-200% CPU, where it normally idles at 7-20%.
I've tried various things, such as different CPU settings, disabling/enabling hardware offloading, and different network configurations on the host. I'm hitting a wall, and I'm unsure if this is a KVM thing or a TrueNAS thing?
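For reference, the offload toggling I mean looks roughly like this; a sketch, where vtnet0 is the guest-side virtio interface (yours may differ), and on OPNsense the same knobs live under Interfaces > Settings:

    # In the FreeBSD/OPNsense guest: turn hardware offloads off at runtime
    ifconfig vtnet0 -txcsum -rxcsum -tso4 -tso6 -lro

    # Or persistently via loader tunables (e.g. /boot/loader.conf.local)
    hw.vtnet.csum_disable="1"
    hw.vtnet.tso_disable="1"
    hw.vtnet.lro_disable="1"

LRO in particular is usually kept off on anything that forwards packets, since it interferes with routing.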
Frankly, OPNsense is not recommended to be run in a VM, and I sure wouldn't do it on TrueNAS. If you are looking for a solid solution to replace VMware, go look at Proxmox. It does all the cool networking you want and runs TrueNAS SCALE in a VM well (if you read up on drive/controller passthrough). I can run a ton of VMs without having to mess with my SAN. I use TrueNAS for things like Nextcloud/ChannelsDVR/etc.
I'm taking that as an opinion, which is cool and all, and thanks for responding given I'm not getting much engagement. IMO it depends on the use case. Would I recommend it for a client? Mmm... probably not. Would I say it's not recommended? That depends on the use case, and for my use case I wouldn't do anything else. OPNsense themselves have documentation on virtual instances:
I'd really like a couple more options for virtualisation on TrueNAS, but that said, it's not my go-to virtualisation platform and right now it really shouldn't be. That doesn't mean things can't be improved, though. For me this is temporary, or eventually a backup, which is my final goal.
The problem is really FreeBSD; it's always been a problem child. Although, having investigated this, I'm seeing high host CPU for other VMs that are idle too. It feels like an underlying timing issue, and with this being KVM, it's potentially an issue on most other KVM-based platforms.
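If anyone wants to check the timing angle in their own guest, this is roughly how I've been poking at it; a sketch, and which timecounter performs best under KVM seems to vary by setup:

    # In the FreeBSD guest: list available timecounters/event timers
    sysctl kern.timecounter.choice
    sysctl kern.eventtimer.choice

    # See what's currently selected
    sysctl kern.timecounter.hardware
    sysctl kern.eventtimer.timer

    # Switch the timecounter at runtime and compare host CPU load
    sysctl kern.timecounter.hardware=TSC-low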
Proxmox, I’m not a fan… too much to post on this topic.
Where I’m at:
For anyone interested, and the one person who finds this in the future:
Testing with a plain FreeBSD instance shows the same issues, so this isn't OPNsense-specific.
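For anyone wanting to reproduce the numbers: an iperf3 run through the firewall is the sort of test I mean. A sketch, with a placeholder server address:

    # Server on the far side of the firewall
    iperf3 -s

    # Client on the near side, pushing traffic through the VM
    iperf3 -c 192.168.20.10 -t 30 -P 4       # 4 parallel streams
    iperf3 -c 192.168.20.10 -t 30 -P 4 -R    # reverse direction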
E1000 is normally the simple approach, but it's shocking in performance here, and I have no idea why ATM under FreeBSD.
VirtIO seems to be the best option ATM, unless hardware passthrough is an option.
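If you do go the passthrough route, it's worth first checking on the SCALE host that the NIC sits in its own IOMMU group; a quick sketch:

    # On the TrueNAS SCALE host: find the NIC's PCI address
    lspci -nn | grep -i ethernet

    # Check the IOMMU group layout; the NIC should ideally be alone in its group
    find /sys/kernel/iommu_groups/ -type l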
I now get good performance, though not as good as ESXi (about 50 Mbps slower), with the tunables below.
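Roughly, the family of settings in play; treat this as a sketch of the commonly circulated FreeBSD-on-KVM set rather than my exact list, since the values depend on your hardware:

    # Boot-time tunables (e.g. /boot/loader.conf.local)
    hw.pci.honor_msi_blacklist="0"   # allow MSI-X for virtio devices under KVM
    net.isr.maxthreads="-1"          # one netisr thread per core
    net.isr.bindthreads="1"          # pin netisr threads to cores

    # Runtime sysctl (System > Settings > Tunables in the OPNsense GUI)
    net.isr.dispatch="deferred"      # push packet processing into the netisr threads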