If you read my last post, you know I love avoiding manual labor so much that I will gladly spend 40 hours automating a 5-minute task. Recently, my eyes fixed on my home media server, affectionately dubbed “Karlflix.”
Karlflix lived on a bloated, heavy Debian 12 KVM Virtual Machine. It hogged 16 GB of RAM, took three whole minutes to boot, and used a clunky NFS share to talk to the Proxmox host. Oh, and hardware transcoding? Forget about it. My CPU was crying every time someone tried to stream a 4K HEVC file to a toaster.
It was time for a glow-up. I decided to tear down the entire VM and rebuild the stack inside a sleek, privileged Debian 13 LXC container.
⚠️ Disclaimer: I have scrubbed the IPs, domains, and API keys in this post to protect my own network from you lovely internet strangers. Replace my dummy 10.99.0.x IPs with your own if you follow along.
🎬 Disclaimer: Just so we are absolutely, unequivocally clear: I use this infrastructure exclusively for managing my own personal media, legally acquired content, and open-source Linux ISOs. I do not pirate media, I do not condone piracy, and neither should you. This guide is purely an educational showcase of homelab networking and automation architecture.
Here is the tale of how I successfully migrated my entire *arr stack, configured a rock-solid WireGuard VPN kill-switch, set up Single Sign-On (SSO), and finally got Jellyfin hardware transcoding to work.
The “Before and After” Flex
Before we get into the weeds, let’s look at why this migration was worth the headache:
| Area | Before (The Bloated VM) | After (The Sleek LXC) |
| --- | --- | --- |
| Container Type | Full KVM VM | Privileged LXC |
| RAM Allocated | 16 GB | 8 GB (Still only uses ~1.5 GB) |
| CPU | 8 Cores | 4 Cores (probably will go down to 2) |
| GPU Transcoding | Nope (CPU crying) | VA-API enabled (13.5× realtime) |
| VPN | OpenVPN (slow, dropped randomly) | WireGuard (fast, enforced kill-switch) |
| Single Sign-On | 6 different login forms | Authentik proxy auth everywhere |
| Disk Waste | Duplicate files on import | Zero-waste Hardlinks |
| Backup Size | 16 GB full disk image | ~9 GB rootfs only (Media excluded!) |
| Startup Time | ~3 minutes | ~20 seconds |
Phase 1: Creating the LXC and Escaping “Permission Denied” Hell
LXC containers are incredibly fast, but they have a massive catch: they cannot use host device files by default. If you want your container to access a GPU or a VPN TUN device, you can’t just plug it in.
You need two things: a cgroup2 device allowlist (telling the kernel it’s okay) and an lxc.mount.entry (actually pushing the device into the container). Get it wrong, and you get a silent Permission denied error.
I spun up a new Privileged Debian 13 container. Here is the magic configuration I dropped into /etc/pve/lxc/1XX.conf on the Proxmox host to pass through my AMD iGPU and the TUN device:
```ini
# TUN device for gluetun VPN
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
# AMD iGPU (Raphael RDNA2) VA-API passthrough
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
# Network (Proxmox bridge routing requires /32, not /24!)
net0: name=eth0,bridge=vmbr0,ip=10.99.0.25/32,gw=10.99.0.1
# Direct Media Bind Mounts (Goodbye, NFS overhead)
mp0: /mnt/pve/wd_hdd_internal/media,mp=/data/media,backup=0
mp1: /mnt/pve/wd_hdd_internal/starr-config,mp=/data/config
```
Lazy Pro-Tip: Notice that backup=0 on the media mount? That excludes my 59 GB media library from Proxmox backups (vzdump). I can redownload Linux ISOs; I only care about backing up my configurations.
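Where do numbers like c 226:128 come from? They are the device node's major and minor numbers, which you can read straight off the host with stat. Here is a minimal sketch (the allow_line helper is my own invention, not part of any tool) that generates an allowlist line for any device; it demos on /dev/null, which is c 1:3 on every Linux box, but on a real host you would point it at /dev/net/tun or /dev/dri/renderD128:

```shell
#!/bin/sh
# Sketch: derive an lxc.cgroup2.devices.allow line for any host device.
# GNU stat prints the device's major number as %t and minor as %T (both hex).
allow_line() {
    dev="$1"
    maj=$(stat -c '%t' "$dev")   # major number, hex
    min=$(stat -c '%T' "$dev")   # minor number, hex
    printf 'lxc.cgroup2.devices.allow: c %s:%s rwm\n' "$((0x$maj))" "$((0x$min))"
}

# /dev/null is c 1:3 everywhere, so it makes a safe demo target.
allow_line /dev/null
# → lxc.cgroup2.devices.allow: c 1:3 rwm
```

If the line it prints does not match what is in your 1XX.conf, that mismatch is exactly the silent Permission denied from earlier.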
Phase 2: Gluetun, WireGuard, and the Ultimate Kill-Switch
My old OpenVPN setup had no kill-switch. If the VPN dropped, my torrent traffic just happily continued naked on my ISP’s network. Not great.
Enter Gluetun. Gluetun is a Docker container that connects to your VPN and creates a secure network namespace. You then tell your other containers (qBittorrent, Radarr, Sonarr) to share Gluetun’s network. If Gluetun dies, they lose the internet entirely. Forced security. I love it.
Setting up Private Internet Access (PIA) with WireGuard manually is highly annoying because they don’t publish standard config files. You have to generate keys, hit an API for a token, and register it. Once I had my tunnel IP and endpoint, I built my docker-compose.yml:
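For the curious, the PIA dance looks roughly like the dry-run sketch below, modeled on PIA's own published manual-connections scripts. The endpoints, field names, and placeholder values are my assumptions from those scripts, so verify against PIA's repo before trusting them; the run helper just prints each step instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of PIA's manual WireGuard registration flow.
# "run" echoes each command instead of executing it.
run() { echo "+ $*"; }

PIA_USER='p0000000'       # placeholder credentials (scrubbed)
PIA_PASS='scrubbed'
SERVER_IP='10.99.0.250'   # placeholder; use a real PIA server IP

# 1. Generate a local WireGuard keypair.
run wg genkey
run wg pubkey

# 2. Trade your credentials for a short-lived auth token.
run curl -s -u "$PIA_USER:$PIA_PASS" \
    https://www.privateinternetaccess.com/api/client/v2/token

# 3. Register the public key with the chosen server; the JSON reply
#    contains your tunnel IP (the WIREGUARD_ADDRESSES value) and
#    the server's public key and port.
run curl -sGk "https://$SERVER_IP:1337/addKey" \
    --data-urlencode "pt=TOKEN_FROM_STEP_2" \
    --data-urlencode "pubkey=PUBKEY_FROM_STEP_1"
```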
```yaml
gluetun:
  image: qmcgaw/gluetun:latest
  cap_add:
    - NET_ADMIN            # gluetun needs this to manage the tunnel
  devices:
    - /dev/net/tun:/dev/net/tun
  environment:
    VPN_SERVICE_PROVIDER: custom
    VPN_TYPE: wireguard
    WIREGUARD_ADDRESSES: 10.12.240.174/32
    # WIREGUARD_PRIVATE_KEY, WIREGUARD_PUBLIC_KEY, VPN_ENDPOINT_IP: scrubbed
    # Crucial: Allow LAN traffic so Jellyfin and Authentik can talk!
    FIREWALL_OUTBOUND_SUBNETS: 10.99.0.0/24
    FIREWALL_INPUT_PORTS: 7878,8989,9696,8080

radarr:
  network_mode: service:gluetun
  volumes:
    - /data/media:/data/media
```
Now, WireGuard handshakes in milliseconds, uses 10% of the CPU of OpenVPN, and is basically bulletproof.
VPN Provider: Know your threat model. If I were to do highly illegal activities, I probably would not go with PIA. I neither endorse, nor am endorsed by, any VPN provider.

Phase 3: Hardware Transcoding (Make the GPU do the Work)
Software transcoding on Jellyfin was destroying my CPU. Getting VA-API to work inside a Docker container, which is inside an LXC container, is like playing a twisted game of Russian nesting dolls.
For it to work, the Jellyfin process needs to be in the render group. By default on Debian, this is GID 104.
```yaml
jellyfin:
  network_mode: host
  devices:
    - /dev/dri/renderD128:/dev/dri/renderD128
  # video (44) + render (104). 104 is not guaranteed;
  # run `getent group render | cut -d: -f3` to check.
  group_add: ["44", "104"]
```
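Since 104 is a Debian convention rather than a guarantee, you can resolve the render GID at deploy time instead of hardcoding it. A minimal sketch (RENDER_GID is my own variable name): write it to a .env file next to your docker-compose.yml and reference it as group_add: ["44", "${RENDER_GID}"].

```shell
#!/bin/sh
# Resolve the render group's GID on this host, falling back to
# Debian's usual default (104) if the group does not exist here.
RENDER_GID=$(getent group render | cut -d: -f3)
RENDER_GID=${RENDER_GID:-104}
echo "RENDER_GID=$RENDER_GID" > .env
cat .env
```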
Inside Jellyfin, I enabled VA-API, pointed it to /dev/dri/renderD128, and enabled HDR tonemapping (bt2390).
The result? I am now getting 13.5× realtime 1080p→720p transcodes using my AMD RDNA2 iGPU, while the GPU load sits at a breezy 15%.

I did not want to use my AMD RX 7900 XTX because of the electricity cost.

Please don’t show this to my girlfriend 🙇♂️
Phase 4: Fixing Hardlinks (Stop Wasting Disk Space)
In my old VM, qBittorrent downloaded to /downloads and Radarr imported to /movies. Because Docker saw these as two different filesystems, it copied the file. A 20 GB file temporarily used 40 GB of disk space.
The Golden Rule of Hardlinks: Every container must have the exact same volume mount path.
I changed everything to /data/media.
- qBittorrent saves to /data/media/downloads/complete/radarr/movie.mkv
- Radarr creates a hardlink at /data/media/movies/movie.mkv
- Radarr tells qBittorrent to delete the original.
Because they share the exact same volume structure, the hardlink is instantaneous and uses zero extra space.
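You can watch this zero-copy behaviour yourself in a throwaway directory. A hardlink is just a second directory entry pointing at the same inode, so both names report the same inode number and a link count of 2, and deleting either name leaves the data intact:

```shell
#!/bin/sh
# Tiny demo of why the import is free when paths line up.
tmp=$(mktemp -d)
mkdir -p "$tmp/downloads" "$tmp/movies"

echo "pretend this is 20 GB of movie" > "$tmp/downloads/movie.mkv"
ln "$tmp/downloads/movie.mkv" "$tmp/movies/movie.mkv"   # instant, no copy

# Same inode number, link count 2, for both names.
stat -c '%i %h' "$tmp/downloads/movie.mkv"
stat -c '%i %h' "$tmp/movies/movie.mkv"

rm "$tmp/downloads/movie.mkv"   # "qBittorrent deletes the original"
cat "$tmp/movies/movie.mkv"     # the library copy still reads fine
rm -rf "$tmp"
```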
Phase 5: One Login to Rule Them All (Authentik SSO)
I was tired of having 6 different passwords for Radarr, Sonarr, Prowlarr, Bazarr, Seerr, and Jellyfin.
I spun up Authentik as my Single Sign-On (SSO) proxy. By routing everything through Nginx Proxy Manager (NPM), NPM intercepts the request, asks Authentik “Who goes there?”, and passes a trusted X-Authentik-Username header to the backend.
The trick with the *arr apps is getting the auth settings right:
- authenticationMethod: external (trust the Authentik header)
- authenticationRequired: enabled (do not leave your API open to the LAN!)
Now, I log in once with MFA, and I have access to my entire stack. If I want to revoke access, I disable one LDAP user, and they are locked out of everything instantly.
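Under the hood, the NPM side of that "Who goes there?" handshake is nginx's auth_request mechanism. The sketch below is adapted from Authentik's stock nginx proxy snippet; the authentik:9000 upstream is a placeholder hostname for the outpost container, since my real names are scrubbed:

```nginx
location / {
    # Every request is checked against the Authentik outpost first.
    auth_request     /outpost.goauthentik.io/auth/nginx;
    error_page       401 = @goauthentik_proxy_signin;

    # Forward the trusted identity header to Radarr/Sonarr/etc.
    auth_request_set $authentik_username $upstream_http_x_authentik_username;
    proxy_set_header X-Authentik-Username $authentik_username;
}

location /outpost.goauthentik.io {
    # "authentik" is a placeholder hostname for the outpost container.
    proxy_pass       http://authentik:9000/outpost.goauthentik.io;
    proxy_set_header Host $host;
    proxy_set_header X-Original-URL $scheme://$http_host$request_uri;
}

location @goauthentik_proxy_signin {
    internal;
    return 302 /outpost.goauthentik.io/start?rd=$request_uri;
}
```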
One of these days I will hook my Macbook into this as well and also set up Samba AD.
Phase 6: Let the Server Manage Itself
Because I am exceptionally lazy, I added a few more tools to make sure I never have to touch this LXC again:
- Watchtower: Runs at 04:00 daily, checks for Docker image updates (for approved containers only), installs them, and cleans up old image layers to save disk space.
- Cleanuparr: Automatically clears out stalled torrents, blocks known malware hashes, and deletes leftover garbage in my torrent client.
- Discord Notifications: I set up webhooks for everything. If Radarr grabs a file, Watchtower updates an image, or Proxmox finishes a backup, a bot drops a message in my private Discord server. I never have to open a web UI to check health statuses again.
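For reference, the Watchtower piece of that list boils down to a few environment variables in compose. The Discord URL uses shoutrrr's discord:// scheme, and the token/webhook ID below are obviously placeholders; containers opt in to updates with the label com.centurylinklabs.watchtower.enable=true:

```yaml
watchtower:
  image: containrrr/watchtower:latest
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    WATCHTOWER_SCHEDULE: "0 0 4 * * *"   # 04:00 daily (6-field cron)
    WATCHTOWER_CLEANUP: "true"           # prune superseded image layers
    WATCHTOWER_LABEL_ENABLE: "true"      # only touch opted-in containers
    # shoutrrr Discord URL -- token and webhook ID are placeholders
    WATCHTOWER_NOTIFICATION_URL: "discord://token@webhookid"
```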

Look, I get it. The “what if an update breaks everything” crowd is loud. But let’s be real: an update can catch an attitude whether I’m staring at the progress bar or grabbing a coffee.
I am on call anyway, which means my laptop is basically a permanent appendage, so I'm not losing sleep over it. In the last five years, this approach has been rock solid. Unless we're talking about Windows, but we don't do that here.
Final Words
Was migrating this entire stack an absolute nightmare of undocumented API calls, silent Docker networking failures, and iptables wizardry?
No, I let Claude Sonnet 4.6 do it all through Copilot for 10€ a month. I did not do a single thing 😂.
If you are running a heavy VM for your Docker apps, take the weekend, spin up an LXC, and set your CPU free. Sleep tight now, and may your hardlinks always resolve! ❤️
🐛 The “Why is this broken?” Cheat Sheet
(For my fellow homelabbers Googling error codes at 3 AM)
| The Problem | The Root Cause | The Fix |
| --- | --- | --- |
| Docker containers bypass firewall | Docker inserts its own iptables rules; UFW ignores them. | Insert rules into the DOCKER-USER chain, ensuring ESTABLISHED,RELATED -j RETURN is at the very top. |
| Bazarr writes subtitles to the void | Bazarr volume was still set to /movies instead of /data/media. | Match the volume mounts across all containers. |
| LXC has no internet despite /24 route | Proxmox bridge networking for LXCs needs a /32 per-host route. | Change ip=10.99.0.25/24 to ip=10.99.0.25/32 in the .conf file. |
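The DOCKER-USER fix deserves one extra warning: Docker pre-populates that chain with a RETURN rule, so rules appended with -A land after it and never fire. Insert at explicit positions instead. A dry-run sketch (the run helper just prints each command, and 10.99.0.0/24 is this post's placeholder LAN):

```shell
#!/bin/sh
# Dry-run of the DOCKER-USER policy. Docker traffic bypasses UFW's
# INPUT chain but always traverses DOCKER-USER, so policy goes there.
# Insert (-I) at fixed positions so rules sit BEFORE Docker's default
# RETURN rule; appended (-A) rules would be dead code behind it.
run() { echo "+ $*"; }

# 1. Keep established flows working: this must stay at the very top.
run iptables -I DOCKER-USER 1 -m conntrack \
    --ctstate ESTABLISHED,RELATED -j RETURN

# 2. Allow the LAN, then drop everything else.
run iptables -I DOCKER-USER 2 -s 10.99.0.0/24 -j RETURN
run iptables -I DOCKER-USER 3 -j DROP
```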
