Tag: gaming

  • How I Automated My WoW Nerd Obsession with n8n, Browserless & Python (A Self-Hosting Guide)

    In which a grown adult builds an entire self-hosted automation flow just to find out which World of Warcraft specs are popular this week.

    Priorities? Never heard of her.

    The Problem Nobody Asked Me to Solve

    Look, every Wednesday after the weekly Mythic+ reset, I used to open raider.io, squint at the spec popularity tables, and whisper “Frost Mage mains in shambles” or “When will Ret Pala finally get nerfed??” to myself like some kind of WoW-obsessed gremlin.

    Then one day I thought: “What if a robot did this for me and posted the results to Discord?” – which I then also check once a week, but it is different! Don’t question meeee!

I have this n8n instance running, and I basically never have a real use case for it. Most of the flows people build are, in my opinion, pretty wild (connecting ChatGPT to Tinder, for example). I am writing about using code and n8n to automate World of Warcraft. You think Tinder is a use case for me??

    This guide will walk you through the entire self-hosted setup. Even if you don’t care about WoW (first of all, how dare you), the stack itself is incredibly useful for any web scraping or automation project.

    The Stack: What Are We Even Working With?

    Here’s the dream team:

| Service | What It Does | Why You Need It |
| --- | --- | --- |
| n8n | Visual workflow automation (think Zapier, but self-hosted and free) | The brains of the operation |
| Browserless | Headless Chrome as a service, accessible via API | Renders JavaScript-heavy pages so you can scrape them |
| Python Task Runner | A sidecar container that executes Python code for n8n | Because sometimes JavaScript just isn’t enough (don’t @ me) |
| PostgreSQL | Database for n8n | Stores your workflows, credentials, and execution history |
| Watchtower | Auto-updates your Docker containers | Set it and forget it, like a slow cooker for your infrastructure |

    Step 0: What we will build

    You will need these files in your directory:

    deploy/
    ├── docker-compose.yml
    ├── Dockerfile
    ├── n8n-task-runners.json
    └── .env

This is my example flow: it grabs the current top classes in World of Warcraft, saves them to a database, and calculates deltas in case they change:

This is the final output for me. The main goal is to show n8n and the “sidecar” Python container; I just use it for World of Warcraft stuff, and also for recurring billing for customers of my consulting business.

    One major bug I noticed is that the classes I play are usually never the top ones. I have not found a fix yet.
    Guardian Druid and Disc Priest, if you care 😘

    Step 1: The Docker Compose File

    Create a deploy/ folder and drop this docker-compose.yml in it. I’ll walk through exactly what’s happening in each service below.

    services:
      browserless:
        image: browserless/chrome:latest
        ports:
          - "3000:3000"
        environment:
          - CONCURRENT=5
          - TOKEN=your_secret_token # <- change this 
          - MAX_CONCURRENT_SESSIONS=5
          - CONNECTION_TIMEOUT=60000
        restart: unless-stopped
    
      n8n:
        image: docker.n8n.io/n8nio/n8n:latest
        restart: always
        ports:
          - "5678:5678"
        environment:
          - N8N_PROXY_HOPS=1
          - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
          - DB_TYPE=postgresdb
          - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
          - DB_POSTGRESDB_HOST=${POSTGRES_HOST}
          - DB_POSTGRESDB_PORT=${POSTGRES_PORT}
          - DB_POSTGRESDB_USER=${POSTGRES_USER}
          - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
          - N8N_BASIC_AUTH_ACTIVE=true
          - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
          - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
          - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
          - WEBHOOK_URL=https://${DOMAIN_NAME}
          # --- External Python Runner Config ---
          - N8N_RUNNERS_ENABLED=true
          - N8N_RUNNERS_MODE=external
          - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
          - N8N_RUNNERS_AUTH_TOKEN=${N8N_RUNNERS_AUTH_TOKEN}
          - N8N_RUNNERS_TASK_TIMEOUT=60
          - N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15
        volumes:
          - n8n_data:/home/node/.n8n
          - ./n8n-storage:/home/node/.n8n-files
        depends_on:
          - postgres
    
      task-runners:
        build: .
        restart: always
        environment:
          - N8N_RUNNERS_TASK_BROKER_URI=http://n8n:5679
          - N8N_RUNNERS_AUTH_TOKEN=${N8N_RUNNERS_AUTH_TOKEN}
          - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
          - N8N_RUNNERS_STDLIB_ALLOW=*
          - N8N_RUNNERS_EXTERNAL_ALLOW=*
          - N8N_RUNNERS_TASK_TIMEOUT=60
          - N8N_RUNNERS_MAX_CONCURRENCY=3
        depends_on:
          - n8n
        volumes:
          - ./n8n-task-runners.json:/etc/n8n-task-runners.json
    
      postgres:
        image: postgres:15
        restart: always
        environment:
          - POSTGRES_DB=${POSTGRES_DB}
          - POSTGRES_USER=${POSTGRES_USER}
          - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
        volumes:
          - postgres_data:/var/lib/postgresql/data
    
      watchtower:
        image: containrrr/watchtower
        restart: always
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        command: --interval 3600 --cleanup
        environment:
          - WATCHTOWER_CLEANUP=true
    
    volumes:
      n8n_data:
        external: false
      postgres_data:
        external: false
    

You will notice that I use a .env file; it looks like this:

    # General settings
    DOMAIN_NAME=n8n.home.karl.fail
    GENERIC_TIMEZONE=Europe/Berlin
    
    # Database configuration
    POSTGRES_DB=n8n
    POSTGRES_USER=randomusername
    POSTGRES_PASSWORD=change_this
    POSTGRES_HOST=postgres
    POSTGRES_PORT=5432
    
# Authentication
    N8N_BASIC_AUTH_USER=[email protected]
    N8N_BASIC_AUTH_PASSWORD=change_this
    
    # Encryption
    N8N_ENCRYPTION_KEY=supersecretencryptionkey
    N8N_RUNNERS_AUTH_TOKEN=change_this
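Side note: values like N8N_ENCRYPTION_KEY and N8N_RUNNERS_AUTH_TOKEN should be long random strings, not words. Any generator works; here is a quick stdlib sketch (the variable names are just the ones from the .env above):

```python
# Generate random values for the secret entries in .env
import secrets

def make_secret(n_bytes: int = 32) -> str:
    # URL-safe base64, roughly 1.3 characters per byte of entropy
    return secrets.token_urlsafe(n_bytes)

if __name__ == "__main__":
    for name in ("N8N_ENCRYPTION_KEY", "N8N_RUNNERS_AUTH_TOKEN", "POSTGRES_PASSWORD"):
        print(f"{name}={make_secret()}")
```

Run it once and paste the output into your .env.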

    Breaking Down the Logic

    Let’s actually look at what we just pasted.

    1. Browserless (The Headless Chrome Butler)

A lot of modern websites (including raider.io) render their content with JavaScript. If you just curl the page, you get a sad empty shell. I chose Browserless because it is the simplest way to get a headless browser with a REST API.

    • image: browserless/chrome: This spins up a real Chrome browser.
    • TOKEN: This is basically the password to your Browserless instance. Change this! You’ll use this token in your Python script later.

    2. n8n (The Workflow Engine)

    • N8N_RUNNERS_MODE=external: This tells n8n, “Hey, don’t run code yourself. Send it to the specialized runner container.” This is critical for security and stability.
    • N8N_RUNNERS_AUTH_TOKEN: This is a shared secret between n8n and the task runner. If these don’t match, the runner won’t connect, and your workflows will hang forever.

    3. The Task Runner (The Python Powerhouse)

I really wanted to try this. I run n8n and the task runners in the same LXC, so it does not give me any performance benefits, but it is nice to know I could scale this out if I wanted:

    Source: orlybooks
    • N8N_RUNNERS_EXTERNAL_ALLOW=*: This allows you to import any Python package (like pandas or requests). By default, n8n blocks imports for security. We are turning that off because we want to live dangerously (and use libraries).
    • volumes: We mount n8n-task-runners.json into /etc/. This file acts as a map, telling the runner where to find the Python binary.

    Step 2: The Python Task Runner Configuration

    This is the section that took me the longest to figure out. n8n needs two specific files in your deploy/ folder to run Python correctly.

The official n8n documentation is really bad for this (at the time of writing); they have been made aware of it by multiple people but do not seem to care. (I think Node-RED is much, much better in that regard.)

    We need to build an image that has our favorite Python libraries pre-installed. The base n8n runner image is bare-bones. We use uv (included in the base image) because it installs packages significantly faster than pip.

    FROM n8nio/runners:latest
    
    USER root
    
    ENV VIRTUAL_ENV=/opt/runners/task-runner-python/.venv
    ENV PATH="$VIRTUAL_ENV/bin:$PATH"
    
    RUN uv pip install \
        # HTTP & web scraping
        requests \
        beautifulsoup4 \
        lxml \
        html5lib \
        httpx \
        # Data & analysis
        pandas \
        numpy \
        # Finance
        yfinance \
        # AI / LLM
        openai \
        # RSS / feeds
        feedparser \
        # Date & time
        python-dateutil \
        pytz \
        # Templating & text
        jinja2 \
        pyyaml \
        # Crypto & encoding
        pyjwt \
        # Image processing
        pillow
    
    USER runner
    

    ⚠️ Important: If you need a new Python library later, you must add it to this file and run docker compose up -d --build task-runners. You cannot just pip install while the container is running.

    You can choose different libraries, those are just ones I use often.

    The n8n-task-runners.json

    This file maps the internal n8n commands to the actual binaries in the container. It tells n8n: “When the user selects ‘Python’, run this command.”

    {
      "task-runners": [
        {
          "runner-type": "javascript",
          "workdir": "/home/runner",
          "command": "/usr/local/bin/node",
          "args": [
            "--disallow-code-generation-from-strings",
            "--disable-proto=delete",
            "/opt/runners/task-runner-javascript/dist/start.js"
          ],
          "health-check-server-port": "5681",
          "allowed-env": [
            "PATH",
            "GENERIC_TIMEZONE",
            "NODE_OPTIONS",
            "N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT",
            "N8N_RUNNERS_TASK_TIMEOUT",
            "N8N_RUNNERS_MAX_CONCURRENCY",
            "N8N_SENTRY_DSN",
            "N8N_VERSION",
            "ENVIRONMENT",
            "DEPLOYMENT_NAME",
            "HOME"
          ],
          "env-overrides": {
            "NODE_FUNCTION_ALLOW_BUILTIN": "crypto",
            "NODE_FUNCTION_ALLOW_EXTERNAL": "*",
            "N8N_RUNNERS_HEALTH_CHECK_SERVER_HOST": "0.0.0.0"
          }
        },
        {
          "runner-type": "python",
          "workdir": "/home/runner",
          "command": "/opt/runners/task-runner-python/.venv/bin/python",
          "args": [
            "-m",
            "src.main"
          ],
          "health-check-server-port": "5682",
          "allowed-env": [
            "PATH",
            "GENERIC_TIMEZONE",
            "N8N_RUNNERS_LAUNCHER_LOG_LEVEL",
            "N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT",
            "N8N_RUNNERS_TASK_TIMEOUT",
            "N8N_RUNNERS_MAX_CONCURRENCY",
            "N8N_SENTRY_DSN",
            "N8N_VERSION",
            "ENVIRONMENT",
            "DEPLOYMENT_NAME"
          ],
          "env-overrides": {
            "PYTHONPATH": "/opt/runners/task-runner-python",
            "N8N_RUNNERS_STDLIB_ALLOW": "*",
            "N8N_RUNNERS_EXTERNAL_ALLOW": "*",
            "N8N_RUNNERS_MAX_CONCURRENCY": "3",
            "N8N_RUNNERS_HEALTH_CHECK_SERVER_HOST": "0.0.0.0"
          }
        }
      ]
    }

You can, and should, only allow the libraries you actually use. However, at some point I got so annoyed with n8n telling me that I could not use a library, even though I had built the darn Dockerfile with the lib in it and it was installed, simply because the config did not list it.

The most annoying part was that Python standard libs kept getting blocked because I did not include them all…

    Judge me if you must 💅
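For the record, if you do want to be responsible, both allow-lists accept a comma-separated list of module names instead of `*`. Something like this in the task-runners environment (the module names here are illustrative; list whatever you actually import):

```yaml
# task-runners service in docker-compose.yml: scoped allow-lists instead of "*"
- N8N_RUNNERS_STDLIB_ALLOW=json,datetime,re
- N8N_RUNNERS_EXTERNAL_ALLOW=requests,bs4,pandas
```

When a node then fails with a blocked-import error, the missing name goes on the list and the stack gets restarted.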

    Step 3: Fire It Up

    Alright, moment of truth. Make sure your file structure looks like this:

    deploy/
    ├── docker-compose.yml
    ├── Dockerfile
    ├── n8n-task-runners.json
    └── .env
    

    (Don’t forget to create a .env file with your secrets like POSTGRES_PASSWORD and N8N_RUNNERS_AUTH_TOKEN!)

If you scroll up a little, you will find the example .env I included.

    cd deploy/
    docker compose up -d --build
    

    The --build flag ensures Docker builds your custom Python runner image. Grab a coffee ☕, the first build takes a minute because it’s installing all those Python packages.

    Once it’s up, visit http://localhost:5678 and you should see the n8n login screen.

    Bonus: What I Actually Use This For

    Okay, now that you’ve got this beautiful automation platform running, let me tell you what I did with it.

    The WoW Meta Tracker

    Every week, I wanted to know: which specs are dominating Mythic+ keys?

I do this because I want to play these classes so people take me with them on high keys. It be like that sometimes. In Season 2 of TWW I had a maxed-out, best-in-slot, +3k rating Guardian Druid and people would not take me as their tank because it was not meta.

    Here’s the n8n workflow logic:

    1. Schedule Trigger: Runs every Wednesday at 13:00 UTC.
    2. Grab data
    3. Prepare and store the data
    4. Send to Discord

I will show you some of the code I use below.

# n8n Code Node: Build Sources
# Builds the list of raider.io pages for Browserless to render.

BASE = "https://raider.io/stats/mythic-plus-spec-popularity"
    
    sources = [
        {
            "label": "Last 4 Resets (7-13)",
            "scope": "last-4-resets",
            "url": BASE + "?scope=last-4-resets&minMythicLevel=7&maxMythicLevel=13&groupBy=popularity",
        },
    ]
    
    results = []
    for s in sources:
        results.append({"json": {
            "label": s["label"],
            "scope": s["scope"],
            "browserless_body": {
                "url": s["url"],
                "waitFor": 8000,
            },
        }})
    
    return results

    This code here actually fetches the HTML data from raider.io:

    # n8n Code Node: Fetch HTML
    # Calls browserless to render the raider.io page.
    import requests
    
BROWSERLESS_URL = "http://browserless:3000/content?token=your_secret_token"  # <- the TOKEN from docker-compose.yml
    
    item = _items[0]["json"]
    
    try:
        resp = requests.post(
            BROWSERLESS_URL,
            json=item["browserless_body"],
            timeout=(5, 20),
        )
        resp.raise_for_status()
        html = resp.text
    except Exception as e:
        return [{"json": {
            "label": item["label"],
            "scope": item["scope"],
            "error": str(e),
        }}]
    
    return [{"json": {
        "label": item["label"],
        "scope": item["scope"],
        "html": html,
    }}]
    

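The node after that turns the HTML into numbers. I am not going to pretend I know raider.io’s exact markup, so this sketch parses an invented table of the same shape using only the stdlib; in the real flow you would swap in the real selectors (or use BeautifulSoup from the Dockerfile above, which is more comfortable):

```python
# Pull spec -> popularity pairs out of rendered HTML.
# The markup below is invented for illustration; adapt it to the real page.
from html.parser import HTMLParser

class PopularityParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.cell = None   # class of the <td> we are currently inside
        self.row = {}
        self.specs = {}

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.cell = dict(attrs).get("class")

    def handle_data(self, data):
        if self.cell in ("spec", "pct"):
            self.row[self.cell] = data.strip()

    def handle_endtag(self, tag):
        if tag == "td":
            self.cell = None
        elif tag == "tr" and {"spec", "pct"} <= self.row.keys():
            self.specs[self.row["spec"]] = float(self.row["pct"].rstrip("%"))
            self.row = {}

def parse_popularity(html: str) -> dict[str, float]:
    parser = PopularityParser()
    parser.feed(html)
    return parser.specs

if __name__ == "__main__":
    sample = (
        '<table>'
        '<tr><td class="spec">Ret Paladin</td><td class="pct">12.0%</td></tr>'
        '<tr><td class="spec">Frost Mage</td><td class="pct">5.9%</td></tr>'
        '</table>'
    )
    print(parse_popularity(sample))
```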
    The Result:

    (I just built this today, so no deltas yet)
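For completeness, the “store and calculate deltas” and “send to Discord” steps are small once the numbers are a dict. A hedged sketch of both (in the real flow the previous week’s dict comes out of Postgres, the webhook URL is a placeholder, and Discord webhooks simply take a JSON body with a content field):

```python
# Diff this week's numbers against last week's and format a Discord message.
def spec_deltas(previous: dict[str, float], current: dict[str, float]) -> list[dict]:
    rows = []
    for spec, pct in current.items():
        prev = previous.get(spec)
        rows.append({
            "spec": spec,
            "popularity": pct,
            "delta": None if prev is None else round(pct - prev, 2),  # None = new this week
        })
    rows.sort(key=lambda r: -r["popularity"])  # most popular first
    return rows

def build_discord_payload(rows: list[dict]) -> dict:
    lines = ["**M+ spec popularity, weekly reset**"]
    for r in rows:
        delta = "new" if r["delta"] is None else f"{r['delta']:+.1f}"
        lines.append(f"{r['spec']}: {r['popularity']:.1f}% ({delta})")
    return {"content": "\n".join(lines)}

if __name__ == "__main__":
    payload = build_discord_payload(spec_deltas(
        {"Ret Paladin": 11.2, "Frost Mage": 6.5},
        {"Ret Paladin": 12.0, "Frost Mage": 5.9, "Guardian Druid": 2.1},
    ))
    print(payload["content"])
    # To actually send it (requests is in the runner image):
    # import requests
    # requests.post("https://discord.com/api/webhooks/<id>/<token>", json=payload, timeout=10)
```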

    Production Tips (For the Responsible Adults)

    If you’re putting this on a real server:

    1. Reverse Proxy: Put n8n behind Nginx or Traefik with HTTPS. Set N8N_PROXY_HOPS=1 so n8n trusts the proxy headers.
    2. Firewall: Don’t expose Browserless (port 3000) or the DB (port 5432) to the internet. Only port 5678 (n8n) should be accessible via your proxy.
    3. Secrets: Use a .env file. Do not hardcode passwords in docker-compose.yml.
4. Backups: The postgres_data volume holds everything. Back it up regularly.

    Troubleshooting

    “Python Code node doesn’t appear in n8n”

    • Check if N8N_RUNNERS_ENABLED=true is set on the n8n container.
    • Check logs: docker compose logs task-runners. It should say “Connected to broker”.

    “ModuleNotFoundError: No module named ‘requests’”

    • You probably didn’t set N8N_RUNNERS_EXTERNAL_ALLOW=* in the environment variables.
    • Or, you modified the Dockerfile but didn’t rebuild. Run docker compose up -d --build task-runners.

    “Task timed out after 60 seconds”

    • Web scraping is slow. Browserless takes time. Increase N8N_RUNNERS_TASK_TIMEOUT to 120 in the docker-compose file.

    Summary

You should now have a working local n8n instance with a browser API and a remote Python task runner. You can build all sorts of cool things, automate tasks, and go touch some grass with all that free time.

    Thanks for reading xoxo, hugs and kisses. Sleep tight, love you 💕

  • Unlocking Full PS5 DualSense Features in Moonlight & Sunshine

    There is nothing worse than buying premium hardware and having your software treat it like a generic accessory.

    I recently picked up a PS5 DualSense controller. It wasn’t cheap, but I bought it for a specific reason: that trackpad. I wanted to use the full capabilities of the controller, specifically for mouse input, while streaming.

    However, I ran into a wall immediately. No matter what I did, my setup kept auto-detecting the DualSense as a standard Xbox Controller. This meant no trackpad support and missing button functionality.

    I went down the rabbit hole of forums and documentation so you don’t have to. If you are running a similar stack, here is the fix that saves you the headache.

    The Setup

    Just for context, here is the hardware and software I’m running to play World of Warcraft:

    • Host: Virtual CachyOS running Sunshine
    • Client: MacBook (M4 Air) running Moonlight
    • Controller: PS5 DualSense
    • The Goal: Play WoW on the CachyOS host using the DualSense trackpad for mouse control and scrolling.

    The Problem

Sunshine usually defaults to X360 (Xbox) emulation to ensure maximum compatibility, and if it doesn’t, Steam will. While great for most games, it kills the specific features that make the DualSense special. If you want the trackpad to work as a trackpad, the host needs to see the controller as a DualSense, not an Xbox gamepad.

    The Solution

    The fix came down to two specific steps: fixing a permission error on the Linux host and forcing Sunshine to recognize the correct controller type.

    Step 1: Fix the Permission Error

    First, we need to ensure the user has the right permissions to access the input devices.

# Create the rules file (path depends on your distro):
sudo nano /etc/udev/rules.d/60-sunshine.rules
# or: sudo nano /usr/lib/udev/rules.d/60-sunshine.rules

# --- contents of 60-sunshine.rules ---

# Allows Sunshine to access /dev/uinput
KERNEL=="uinput", SUBSYSTEM=="misc", OPTIONS+="static_node=uinput", TAG+="uaccess"

# Allows Sunshine to access /dev/uhid (added subsystem for persistence)
KERNEL=="uhid", SUBSYSTEM=="misc", TAG+="uaccess"

# Joypads (broadened to ensure the tag hits before the name is fully registered)
SUBSYSTEM=="hidraw", KERNEL=="hidraw*", MODE="0660", TAG+="uaccess"
SUBSYSTEM=="input", ATTRS{name}=="Sunshine*", MODE="0660", TAG+="uaccess"

# --- then reload the rules ---
sudo udevadm control --reload-rules && sudo udevadm trigger
# optional reboot

    Source: https://github.com/LizardByte/Sunshine/issues/3758

    Step 2: Force PS5 Mode in Sunshine

    Next, we need to tell Sunshine to stop pretending everything is an Xbox controller.

    1. Open your Sunshine Web UI.
    2. Navigate to Configuration -> Input.
    3. [Insert your specific steps here, likely setting “Gamepad Emulation” to “DS4” or using a specific flag]

    Now restart Sunshine or do a full reboot. Test your controller, it should pop up in Steam now as well:

    Screenshot
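If you want a sanity check from a shell on the host, the advertised input device names give it away. This is my own quick-and-dirty check, not part of Sunshine: the name matching is guesswork, and the device listing needs the third-party python-evdev package plus input permissions, so it is guarded:

```python
# Classify an input device by its advertised name (heuristic, not authoritative).
def classify_pad(name: str) -> str:
    lowered = name.lower()
    if "dualsense" in lowered or "wireless controller" in lowered:
        return "playstation"
    if "xbox" in lowered or "x-box" in lowered:
        return "xbox-emulated"
    return "other"

if __name__ == "__main__":
    try:
        import evdev  # third-party: python-evdev
        for path in evdev.list_devices():
            dev = evdev.InputDevice(path)
            print(path, dev.name, "->", classify_pad(dev.name))
    except ImportError:
        print("python-evdev not installed; install it to list devices")
```

If everything above worked, the virtual pad should classify as playstation, not xbox-emulated.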

    Happy Gaming! Hopefully, this saves you the hours of troubleshooting it took me. Now, back to Azeroth.


    Bonus:

By the way, World of Warcraft with a controller still has a long way to go; however, I find that Questing, Farming, and Delving are activities one can easily do with a controller. I would not recommend Tanking. I main Guardian Druid, and while it is really enticing due to “not that many buttons”, tanking is too dynamic for controllers. PvP is extremely hectic; people will run through you to get behind you, and you won’t be able to turn that fast.

    All in all, I guess you can get used to anything, theoretically you also have potential to win the lottery or become a rockstar, but usually a regular job is a more stable income – as is mouse and keyboard for WoW. This was a weird analogy. It’s late here 🙁

  • The 2026 Guide to Linux Cloud Gaming: Proxmox Passthrough with CachyOS & Sunshine

    How I turned my server into a headless gaming powerhouse, battled occasional freezes, and won using Arch-based performance and open-source streaming.

    Sorry for the clickbait, AI made me do it. For real though, I am gonna show you how to build your own stream machine, local “cloud” gaming monster.

    There are some big caveats here before we get started (to manage expectations):

• Your mileage may vary, greatly! Depending on your hardware and software versions you may not have any of the problems I had, but you may also have many, many more.
• As someone new to gaming on Linux, the whole “run an executable through another layer of virtualization/emulation” thing feels wrong, but it does not seem to make that much of a performance difference in the end.

If you guessed that this will be a super duper long post, you guessed right… buckle up, buddy!

    My Setup

    Hardware

    • ASUS TUF Gaming AMD Radeon RX 7900 XTX OC Edition 24GB
    • AMD Ryzen 7 7800X3D (AM5, 4.20 GHz, 8-Core)
    • 128GB of DDR5 RAM
    • Some HDMI Dummy Adapter: I got this one

    Software

    • Proxmox 9.1.4
    • Linux Kernel 6.17
    • CachyOS (It’s Arch btw)
    • Sunshine and Moonlight
    • Lutris (for running World of Warcraft.. yea I am that kind of nerd, I know.)

Preparation

    Proxmox Host

    This guide is specifically for my Hardware so again: Mileage may vary.

    SSH into your Proxmox host as root or enter a shell in any way you like. We will change some stuff here.

nano /etc/default/grub
# look for "GRUB_CMDLINE_LINUX_DEFAULT" and change it to this
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt amdgpu.mes=0 video=efifb:off video=vesafb:off"
update-grub

# Blacklist
echo "blacklist amdgpu" > /etc/modprobe.d/blacklist.conf
echo "blacklist radeon" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
echo "blacklist nvidia" >> /etc/modprobe.d/blacklist.conf

# VFIO Modules (>> appends, so existing entries in /etc/modules survive)
echo "vfio" >> /etc/modules
echo "vfio_iommu_type1" >> /etc/modules
echo "vfio_pci" >> /etc/modules

# rebuild the initramfs so the blacklist and vfio modules take effect
update-initramfs -u -k all

Basically this enables passthrough and forces the Proxmox host to ignore the graphics card (we want this).

    # reboot proxmox host
    reboot
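After the reboot it is worth checking that IOMMU actually came up and which group the GPU landed in, since everything in the GPU’s group has to be passed through together. A small sketch that just walks the standard sysfs layout (run it on the Proxmox host; the path is the kernel’s, the script is mine):

```python
# List IOMMU groups from sysfs (run on the Proxmox host).
import os
from collections import defaultdict

SYS = "/sys/kernel/iommu_groups"

def read_groups(root: str = SYS) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = defaultdict(list)
    if not os.path.isdir(root):
        return {}  # IOMMU not enabled (or not a Linux host)
    for group in sorted(os.listdir(root)):
        dev_dir = os.path.join(root, group, "devices")
        groups[group] = sorted(os.listdir(dev_dir))
    return dict(groups)

if __name__ == "__main__":
    groups = read_groups()
    if not groups:
        print("no IOMMU groups found - check that amd_iommu=on took effect")
    for gid, devices in groups.items():
        print(f"group {gid}: {', '.join(devices)}")
```

Your GPU and its HDMI audio function should show up together in one group, ideally with nothing else.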

    Okay for some quality of life we will add a resource mapping for our GPU in Proxmox.

    Datacenter -> Resource Mappings -> Add

    Screenshot

    Choose a name, select your devices (Audio + Graphic Card)

    Screenshot

    Now you can use mapped devices, this will come in handy in our next step.

    CachyOS VM

    Name it whatever you like:

    You will need to download CachyOS from here

Copy all the settings I have here, and make sure you disable the Pre-Enrolled keys; otherwise the VM will try to verify that the OS is signed and fail, since most Linux distros aren’t:

    Leave all the defaults but use “SSD emulation” IF you are on an SSD (since we are building a gaming VM you should be):

CPU needs to be set to host. I used 6 cores; pick whatever fits the number of CPUs you actually have:

Pick whatever memory you have and want to use; here I am going with 16GB. Disable “Ballooning” in the settings. This turns off dynamic memory management: simply put, when you run this VM it will always have the full RAM available. Otherwise, if the VM does not need all of it, the memory would get re-assigned, which is not a great idea for gaming, where demands change:

    The rest is just standard:

    🚨NOTE: We have not added the GPU, yet. We will do this after installation.

    Installing CachyOS

    Literally just follow the instructions of the live image. It is super simple. If you get lost visit the CachyOS Wiki but literally just click through the installer.

    Then shut down the VM.

    Post Install

You will want to set up SSH and Sunshine before adding the GPU. We will be flying blind until Sunshine works, and SSH helps a lot.

# enable ssh
sudo systemctl enable --now sshd

# install and enable sunshine
sudo pacman -S sunshine lutris steam
systemctl --user enable --now sunshine
sudo setcap cap_sys_admin+p $(readlink -f $(which sunshine))
echo 'KERNEL=="uinput", SUBSYSTEM=="misc", OPTIONS+="static_node=uinput", TAG+="uaccess"' | sudo tee /etc/udev/rules.d/85-sunshine-input.rules
echo 'KERNEL=="uinput", SUBSYSTEM=="misc", OPTIONS+="static_node=uinput", TAG+="uaccess"' | sudo tee /etc/udev/rules.d/60-sunshine.rules
systemctl --user restart sunshine
# had to run all of these to get it to work, Wayland is a bitch

    Sunshine settings that worked for me:

# nano ~/.config/sunshine/sunshine.conf
adapter_name = /dev/dri/renderD128 # <- leave auto detect or change to yours
capture = kms
encoder = vaapi # <- AMD specific
locale = de
output_name = 0 # <- depends on your actual display

# restart after changing: systemctl --user restart sunshine

    Edit the Firewall, CachyOS comes with ufw enabled by default:

    # needed for sunshine and ssh of course
    sudo ufw allow 47990/tcp
    sudo ufw allow 47984/tcp
    sudo ufw allow 47989/tcp
    sudo ufw allow 48010/tcp
    sudo ufw allow 47998/udp
    sudo ufw allow 47999/udp
    sudo ufw allow 48000/udp
    sudo ufw allow 48002/udp
    sudo ufw allow 48010/udp
    sudo ufw allow ssh
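If Moonlight later refuses to see the host, it helps to know whether the firewall or Sunshine itself is the problem. A little probe script for the TCP ports from the ufw list above (my own helper, not part of Sunshine; the host IP is a placeholder, run it from the client side):

```python
# Probe Sunshine's TCP ports to tell "firewall closed" apart from "service down".
import socket

SUNSHINE_TCP_PORTS = (47984, 47989, 47990, 48010)  # from the ufw rules above

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "127.0.0.1"  # <- replace with your VM's IP
    for port in SUNSHINE_TCP_PORTS:
        state = "open" if port_open(host, port) else "closed/filtered"
        print(f"{host}:{port} {state}")
```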

Before we turn off the VM, we need to enable automatic sign-in and set energy saving to never. We have to do this because Sunshine runs as the user: if the user is not logged in, there is no display to capture, and if the energy saver shuts down the “display”, Sunshine won’t work either.

As a security person I really don’t like an OS without a proper sign-in. A password is still needed for sudo, but none is needed to sign in. I recommend tightening your firewall, or using Tailscale or WireGuard so that only authenticated clients can connect.

    Now you will turn off the VM and remove the virtual display:

    Screenshot

You need to download the Moonlight client from here; they have a client for pretty much every single device on earth. The client will probably find your Sunshine server on its own, but if not you can just add the host manually (like I had to do).

    This step is so easy that I didn’t think I needed to add any more info here.

    Bringing it all together

    Okay, now add the GPU to the VM, double check that it is turned off.

    Select the VM -> Hardware -> Add -> PCI Device

Select your mapped GPU, ensure Primary GPU is selected, and enable the ROM-Bar (important! This helps with the GPU getting stuck on reboot and shutdown; yes, that is a thing). Tick PCI-Express:

    It should look something like this:

    Now insert the HDMI Dummy Plug into the GPU and start the VM

    You should now be able to SSH into your VM:

    Screenshot

    Testing

    If you are lucky then everything works out of the box now. I am not lucky.

I couldn’t get games to start through Steam; they kept crashing. The issue seemed to be old or missing Vulkan drivers for the GPU.

    sudo pacman -Syu mesa lib32-mesa vulkan-radeon lib32-vulkan-radeon lib32-vulkan-mesa-layers lib32-libdisplay-info
    sudo pacman -Syu

    That fixed my Vulkan errors:

karl@cachyos-x8664 ~ $ vulkaninfo --summary
    .....
    Devices:
    ========
    GPU0:
            apiVersion         = 1.4.328
            driverVersion      = 25.3.4
            vendorID           = 0x1002
            deviceID           = 0x744c
            deviceType         = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
            deviceName         = AMD Radeon RX 7900 XTX (RADV NAVI31)
            driverID           = DRIVER_ID_MESA_RADV
            driverName         = radv
            driverInfo         = Mesa 25.3.4-arch1.2
            conformanceVersion = 1.4.0.0
    ....

    Here you can see Witcher 3 running:

    Installing Battle.net

    You can follow this guide here for the installation of Lutris. I just did:

    sudo pacman -S lutris

Maybe that is why I had issues? Who knows; it works now.

    The rest is really simple:

    • Start Lutris
    • Add new game
    • Search for “battlenet”
    • Install (follow the instructions, this is important)

Once installed, you need to add the Battle.net app to Steam as a Non-Steam Game:

    Screenshot

Once you press Play, you can log in to your Battle.net account and start:

    Screenshot
    • Resolution: 4K (3840×2160)
    • Framerate: Solid 60 FPS
    • Latency: ~5.6ms Host Processing (Insanely fast!)
    • Codec: HEVC (Hardware Encoding working perfectly)

    Wrapping Up: The 48-Hour Debugging Marathon

    I’m not going to lie to you, this wasn’t a quick “plug-and-play” tutorial. It took me a solid two days of tinkering, debugging, and staring at terminal logs to get this setup from “broken mess” to a high-performance cloud gaming beast.

    We battled through Proxmox hooks, fought against dependency hell, and wrestled with Vulkan drivers until everything finally clicked.

    I honestly hope this post acts as the shortcut I wish I had. If this guide saves you even just an hour of the headaches I went through, then every second of my troubleshooting was worth it.

    And if you’re still stuck? Just know that we have suffered together, and you are not alone in the Linux trenches! 😂

    For my next experiment, I think I’m going to give Bazzite a spin. I’ve heard great things about its “out-of-the-box” simplicity and stability. But let’s be real for a second: Bazzite isn’t Arch-based. If I switch, I lose the sacred ability to drop “I use Arch, btw” into casual conversation, and I’m not sure I’m emotionally ready to give up those bragging rights just yet.

    Anyway, thank you so much for sticking with me to the end of this guide. You made it!

    Love you, cutiepie! ❤️ Byyyeeeeeeeee!