Securing a WordPress hosting setup requires more than just the basics—it’s about creating a layered defense to protect your server and adapt to emerging threats. Today I am going to show you what I do to keep Karlcom hosted systems secure from outside attackers.
Firewall Restriction
To minimize exposure, my server only accepts traffic from Cloudflare’s IP ranges and only on port 443. This ensures that attackers cannot directly access my server’s IP address, significantly reducing the attack surface.
On my Firewall it looks like this:
One rule to allow Cloudflare
One to allow my server to come back in from the internet
One block all rule for anything else
This works pretty well so far.
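To give you an idea of the same setup in plain firewall terms, here is a rough sketch using iptables. This is only an illustration, not my exact ruleset (I manage the actual rules in my firewall's UI); the loop pulls the IPv4 ranges Cloudflare publishes.
Bash
#!/bin/sh
# Illustrative only: allow HTTPS from Cloudflare's published IPv4 ranges,
# keep established connections working, and drop everything else.
for range in $(curl -fsS https://www.cloudflare.com/ips-v4); do
    iptables -A INPUT -p tcp -s "$range" --dport 443 -j ACCEPT
done
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -P INPUT DROP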
Cloudflare’s Web Application Firewall (WAF)
I leverage Cloudflare’s free WAF to filter out malicious traffic before it reaches my server. It’s an effective first line of defense that helps block known attack patterns and suspicious behavior.
I felt kind of weird sharing my WAF rules here, since people reading this could use them to build scans that get around them, but I figured I'm up for the challenge, so let's go:
(starts_with(http.request.full_uri, "http://10.107.0.150//xmlrpc.php")) or (starts_with(http.request.full_uri, "http://10.107.0.150/xmlrpc.php")) or (ends_with(http.request.uri, "xmlrpc.php")) or (http.request.full_uri contains "setup-config.php") or (http.request.full_uri contains "wp-admin/install.php") or (http.request.uri.path wildcard r"//*")
This is pretty WordPress-specific. I know you can set these rules on your reverse proxy or on the WordPress server itself, but I figured letting Cloudflare handle it with their admittedly much more powerful servers, and taking some steam off of mine, would be a good thing to do.
EDIT:
While writing this post attacks changed a little and I got some really annoying scans from some IP ranges that all came from Russia, so I ended up Rick Rolling all Russian IPs trying to get through to my home network. Nothing personal.
Continuous Monitoring with Grafana Labs Loki
Despite these measures, some scanners and attackers still manage to slip through. To address this, I use Grafana Labs Loki to analyze server logs. By identifying suspicious activity or unusual access paths, I can create new Cloudflare WAF rules to block emerging threats proactively.
Here you can see some scans from the outside that made it through. I have since updated the WAF rules to block them as well.
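If you want to hunt for the same thing in Loki, queries along these lines work; the job label is an assumption and depends on how your Promtail/rsyslog setup labels the web server logs.
{job="nginx"} |= "xmlrpc.php"
sum(count_over_time({job="nginx"} |= "xmlrpc.php" [24h]))
The first returns the matching log lines, the second counts them over the last day, which is handy for spotting spikes.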
Updates
As I mentioned in my post about backing up data, I automate the updates for all my LXCs, VMs, and container images. While this approach does carry the risk of introducing breaking changes, the time and effort saved by automating these updates outweigh the potential downsides for me at this stage. Manual maintenance just isn’t practical for my setup right now.
Since I do daily backups I can recover real fast.
The Cycle of Security
This process of monitoring, analyzing, and refining creates an ongoing cycle of security improvements. It’s a proactive and dynamic approach that keeps my server well-protected against evolving threats.
If you’re using a similar setup or have additional tips for securing WordPress hosting, I’d love to hear your thoughts. Sharing strategies and experiences is one of the best ways to stay ahead of attackers.
That said, I’m genuinely curious if any attackers reading this will now take it as a challenge to get around my defenses. For that very reason, I stay vigilant, regularly auditing my Grafana logs at home. Security is a constant effort, and in my case, we have SIEM at home, son!
RAM
The RAM operates at around 3600 MHz and consistently maintains 32GB in active usage:
Cooling
I kept the fan coolers as they were and opted for an all-in-one liquid cooling solution: the Arctic Liquid Freezer III – 280. No particular reason, really—I just thought it was a cool choice (pun intended).
PC Case
This setup was originally intended to be a gaming-only PC, so I chose a sleek and clean-looking case: the Fractal Design North XL. While it’s an aesthetically pleasing choice, the one downside for use as a server is its limited storage capacity.
CPU
I chose the AMD Ryzen 7 7800X3D (AM5, 4.20 GHz, 8-Core), which is fantastic for gaming. However, as a server for my needs, I regret that it doesn’t have a better built-in GPU. Intel’s iGPUs are far superior for media transcoding, and using an integrated GPU instead of an external one would save a significant amount of energy.
Motherboard
I chose the MSI MAG B650 TOMAHAWK WIFI (AM5, AMD B650, ATX) as it seemed like a great match for the CPU. However, my GPU is quite large and ends up covering the only other PCI-E x16 slot. This limits my ability to install a decent hardware RAID card or other large expansion cards.
Storage
For fast and redundant storage, I set up a ZFS mirror using two Intenso Internal 2.5” SSD SATA III Top, 1 TB drives. This setup ensures that critical data remains safe and accessible.
For my main OS, I stick to what I know best—Proxmox. It’s absolutely perfect for home or small business servers, offering flexibility and reliability in a single package.
I run a variety of services on it, and the list tends to evolve weekly. Here’s what I’m currently hosting:
Nginx Proxy Manager: For managing reverse proxies.
n8n: Automation tool for workflows.
Bearbot: A production-grade Django app.
Vaultwarden: A lightweight password manager alternative.
MySpeed: Network speed monitoring.
Another Nginx Proxy Manager: Dedicated to managing public-facing apps.
Code-Server: A browser-based IDE for developing smaller scripts.
Authentik: Single-Sign-On (SSO) solution for all local apps.
WordPress: This blog is hosted here.
Logs: A comprehensive logging stack including Grafana, Loki, Rsyslog, Promtail, and InfluxDB.
Home Assistant OS: Smart home management made easy.
Windows 11 Gaming VM: For gaming and other desktop needs.
Karlflix: A Jellyfin media server paired with additional tools to keep my media library organized.
And this list is far from complete—there’s always something new to add or improve!
Performance
The core allocation may be displayed incorrectly, but otherwise, this is how my setup looks:
Here’s the 8-core CPU usage over the last 7 days. As you can see, there’s plenty of headroom, ensuring the system runs smoothly even with all the services I have running:
Energy costs for my server typically range between 20-25€ per month, but during the summer months, I can run it at 100% capacity during the day using the solar energy generated by my panels. My battery also helps offset some of the power usage during this time.
Here’s a solid representation of the server’s power consumption:
I track everything in my home using Home Assistant, which allows me to precisely calculate the energy consumption of each device, including my server. This level of monitoring ensures I have a clear understanding of where my energy is going and helps me optimize usage effectively.
Conclusion
Hosting a server locally is a significant investment—both in terms of hardware and energy costs. My setup cost €2405, and I spend about €40 per month on running costs, including energy, domain, and backup services. While my solar panels make running the server almost free during summer, winter energy costs can be a challenge.
That said, hosting locally has its advantages. It provides complete control over my data, excellent performance, and the flexibility to upgrade or downgrade hardware as needed. These benefits outweigh the trade-offs for me, even though the energy consumption is higher compared to a Raspberry Pi or Mini-PC.
I could have gone a different route. A cloud server, or even an alternative like the Apple Mac Mini M4, might have been more efficient in terms of cost and power usage. However, I value upgradability and privacy too much to make those sacrifices.
This setup wasn’t meticulously planned as a server from the start—it evolved from a gaming PC that was sitting unused. Instead of building a dedicated server from scratch or relying on a Mini-PC and NAS combination, I decided to repurpose what I already had.
Sure, there are drawbacks. The fans are loud, energy costs add up, and it’s far from the most efficient setup. But for me, the flexibility, control, and performance make it worthwhile. While hosting locally might not be the perfect solution for everyone, it’s the right choice for my needs—and I think that’s what really matters.
Today, I want to walk you through how I handle backups for my home server. My primary method is using Proxmox’s built-in backup functionality, which I then sync to a Hetzner Storage Box for added security.
When it comes to updates, I like to live on the edge. I enable automatic (security) updates on nearly all of my systems at home using UnattendedUpgrades. For containers, I usually deploy a Watchtower instance to keep them updated automatically. While this approach might make some people nervous—fearing a broken system after an update—I don’t sweat it. I back up daily and don’t run any mission-critical systems at home (except for this blog, of course 😉).
For specific files or directories, like Vaultwarden, I take an extra layer of precaution by creating additional backups within the LXC container itself. These backups are synced to a Nextcloud instance I also manage through Hetzner, but in a different datacenter. Hetzner’s “Storage Shares” offer a great deal—€5 gets you 1TB of managed Nextcloud storage. While not the fastest, they’re reliable enough for my needs.
I won’t dive into the details here, but my approach for these backups is pretty straightforward: I use ZIP files and rclone to upload everything to Nextcloud.
Here is my script, maybe it helps you in some way:
#!/bin/bash

# Variables
BITWARDEN_DIR="/root/bitwarden"
BACKUP_DIR="/root/bitwarden-backup"
NEXTCLOUD_REMOTE="nextcloud:Vaultwarden"
TIMESTAMP=$(date '+%Y%m%d-%H%M')

# Ensure backup directory exists
mkdir -p $BACKUP_DIR

# Create a single tarball of the entire Vaultwarden directory
echo "Creating a full backup of the Vaultwarden directory..."
tar -czvf $BACKUP_DIR/vaultwarden_full_backup-${TIMESTAMP}.tar.gz -C $BITWARDEN_DIR .

# Sync the backup to Nextcloud
echo "Uploading backup to Nextcloud..."
rclone copy $BACKUP_DIR $NEXTCLOUD_REMOTE

# Clean up local backup directory
echo "Cleaning up local backups..."
rm -rf $BACKUP_DIR

echo "Backup completed successfully!"
Basically, all you need to do is create an App Password and follow the Rclone guide for setting up with WebDAV. It’s straightforward and works seamlessly for this kind of setup.
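For reference, the resulting remote in ~/.config/rclone/rclone.conf ends up looking roughly like this. The URL and username are placeholders, and the password is the Nextcloud app password, stored obscured by rclone config.
[nextcloud]
type = webdav
url = https://your-nextcloud.example.com/remote.php/dav/files/YOUR_USERNAME
vendor = nextcloud
user = YOUR_USERNAME
pass = <app password, obscured via rclone config>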
Backups in Proxmox
Proxmox makes backups incredibly simple with its intuitive functionality. I back up pretty much everything—except for my Gaming VM. It’s a Windows 11 experiment where I’ve passed through my AMD RX7900XT for gaming. Ironically, instead of gaming, I end up spending more time tweaking backups and writing about them. Let’s just say that gaming setup hasn’t exactly gone as planned.
I rely on Snapshot mode for my backups, and you can explore all its features and settings right here. As I mentioned earlier, I tend to restore backups more frequently than most people, and I’ve never faced any issues with inconsistencies. It’s been consistently reliable for me!
For retention, I keep it straightforward by saving only the last two backups. Since I also back up my backups (as you’ll see later), this minimalist approach is more than sufficient for my needs and saves me some space.
I left the rest of the settings as they are. The note templates are useful if you’re managing a large or multiple instances, but for my setup, I don’t use them.
Trigger warning: For now, I’m storing these backups on a single internal Seagate IronWolf (12 TB). I know, not ideal. These drives are pretty pricey, but one day I plan to add another and set up a ZFS mirror or RAID for better redundancy. For now, I’m relying on this one drive—fingers crossed, it’s been rock solid so far!
Borg(Backup)
The first thing I heard when I proudly told my friends that I was finally taking the golden 3-2-1 backup rule seriously was: “Why not restic?”
The simple answer? I Googled “backups to Hetzner Storage Box,” and the first result was an article explaining exactly what I wanted to do—using Borg 🤷♂️. Before I even considered trying restic, I had already set up encrypted incremental backups with Borg. Feel free to share what you use and why you might have switched, but for now, this setup works perfectly for me!
Hetzner Storage Box
Just to clarify, I’m not talking about Hetzner Storage Share 😁. I’m using their 5TB Storage Box and opted for Finland 🇫🇮 as the location since I already have other Karlcom-related stuff in their German datacenter. It helps keep things spread out a bit!
Essentially, it’s a big, affordable storage backend with multiple options for uploading data. You could mount it using the “Samba/CIFS” option, but I decided against that. Instead, I went with a more secure SSH connection to send my backups there.
I know, it seems like you came here just to find links to set this up somewhere else. But don’t worry—I’ve got some cool stuff to share with you next. Here’s my backup script:
/usr/local/bin/proxmox_borg_backup.sh
#!/bin/bash

# Variables
BORG_REPO="ssh://[email protected]:23/home/backups/central"
BORG_PASSPHRASE=''
BACKUP_SOURCE="/mnt/pve/wd_hdd_internal/dump"
LOG_FILE="/var/log/proxmox_borg_backup.log"
MAX_LOG_SIZE=10485760
RID=`uuidgen`
CHECK_ID="ggshfo8-9ca6-1234-1234-326571681"

# start
curl -fsS -m 10 --retry 5 "https://ping.yourdomain.de/ping/$CHECK_ID/start?rid=$RID"

# Export Borg passphrase
export BORG_PASSPHRASE

# Rotate log file if it exceeds MAX_LOG_SIZE
if [ -f "$LOG_FILE" ] && [ $(stat -c%s "$LOG_FILE") -gt $MAX_LOG_SIZE ]; then
    mv "$LOG_FILE" "${LOG_FILE}_$(date +"%Y-%m-%d_%H-%M-%S")"
    touch "$LOG_FILE"
fi

# Check for BorgBackup installation
if ! command -v borg &> /dev/null; then
    echo "ERROR: BorgBackup is not installed or not in PATH." >> "$LOG_FILE"
    exit 1
fi

# Check for SSH connection
if ! ssh -q -o BatchMode=yes -o ConnectTimeout=5 -p 23 -i ~/.ssh/backup u123456@u123456.your-storagebox.de exit; then
    echo "ERROR: Unable to connect to Borg repository." >> "$LOG_FILE"
    exit 1
fi

# Logging start time
{
    echo "==== $(date +"%Y-%m-%d %H:%M:%S") Starting Proxmox Backup ===="

    # Check if the backup source exists
    if [ ! -d "$BACKUP_SOURCE" ]; then
        echo "ERROR: Backup source directory $BACKUP_SOURCE does not exist!"
        exit 1
    fi

    # Create a new Borg backup
    echo "Creating Borg backup..."
    borg create --stats --compression zstd \
        "$BORG_REPO::backup-{now:%Y-%m-%d}" \
        "$BACKUP_SOURCE" >> "$LOG_FILE" 2>&1
    if [ $? -ne 0 ]; then
        echo "ERROR: Borg backup failed!"
        exit 1
    fi

    # Prune old backups to save space
    echo "Pruning old backups..."
    borg prune --stats \
        --keep-daily=7 \
        --keep-weekly=4 \
        --keep-monthly=6 \
        "$BORG_REPO"
    if [ $? -ne 0 ]; then
        echo "ERROR: Borg prune failed!"
        exit 1
    fi

    echo "==== $(date +"%Y-%m-%d %H:%M:%S") Proxmox Backup Completed ===="
} >> "$LOG_FILE" 2>&1

# finished
curl -fsS -m 10 --retry 5 "https://ping.yourdomain.de/ping/$CHECK_ID?rid=$RID"
The curl requests at the top and bottom of the script are for my Healthchecks.io instance—I even wrote a blog post about it here.
Before moving on, you should definitely test this script. Depending on the size of your setup, the initial backup could take several hours. However, if it doesn’t fail within the first 10 seconds, that’s usually a good sign. To be sure it’s running smoothly, check the log file to confirm it started correctly:
/var/log/proxmox_borg_backup.log
==== 2025-01-10 01:39:07 Starting Proxmox Backup ====
Creating Borg backup...
------------------------------------------------------------------------------
Repository: ssh://u123456@u123456.your-storagebox.de:23/home/backups/central
Archive name: backup-2025-01-10
Archive fingerprint: z724gf2789hgf972hf9uh...
Time (start): Fri, 2025-01-10 01:39:08
Time (end):   Fri, 2025-01-10 05:36:41
Duration: 3 hours 57 minutes 32.92 seconds
Number of files: 72
Utilization of max. archive size: 0%
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
This archive:               62.03 GB             61.98 GB             61.60 GB
All archives:               62.03 GB             61.98 GB             61.60 GB

                       Unique chunks         Total chunks
Chunk index:                   24030                40955
------------------------------------------------------------------------------
Pruning old backups...
------------------------------------------------------------------------------
                       Original size      Compressed size    Deduplicated size
Deleted data:                    0 B                  0 B                  0 B
All archives:               62.03 GB             61.98 GB             61.60 GB

                       Unique chunks         Total chunks
Chunk index:                   24030                40955
------------------------------------------------------------------------------
==== 2025-01-10 05:36:42 Proxmox Backup Completed ====
Security of BORG_PASSPHRASE
I decided to include the passphrase for encryption and decryption directly in the script because it fits within my threat model. My primary concern isn’t someone gaining access to my local Proxmox server and restoring or deleting my backups—my focus is on protecting against snooping by cloud providers or malicious admins.
Having the passphrase in the script works for me. Sure, there are other ways to handle this, but for the script to run automatically, you’ll always need to store the passphrase somewhere on your system. At the very least, it has to be accessible by root. This setup strikes the right balance for my needs.
Systemd timers
I created a system service to handle this backup process. For long-running jobs, it’s generally better to use systemd timers instead of cron, as they’re less prone to timeouts. I found this post particularly helpful when setting it up.
Here’s the service that actually runs my bash script:
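It's a minimal oneshot unit that simply calls the script from above (a sketch; the description text is arbitrary, the script path matches the one shown earlier):
/etc/systemd/system/proxmox_borg_backup.service
[Unit]
Description=Proxmox BorgBackup to Hetzner Storage Box

[Service]
Type=oneshot
ExecStart=/usr/local/bin/proxmox_borg_backup.sh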
And here’s the systemd timer that handles scheduling the service:
/etc/systemd/system/proxmox_borg_backup.timer
[Unit]
Description=Run Proxmox BorgBackup Daily at 3AM

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
Now, instead of enabling the service directly, you enable and start the timer. The timer will take care of starting the service according to the schedule you’ve defined. This setup ensures everything runs smoothly and on time!
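In practice that means:
Bash
systemctl daemon-reload
systemctl enable --now proxmox_borg_backup.timer
# check when the next run is scheduled
systemctl list-timers proxmox_borg_backup.timer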
That’s it! You’re all set. You can check the log file we created or use the journalctl command to review any errors or confirm successful runs. Happy backing up! 🎉
Bash
journalctl -xeu proxmox_borg_backup.timer
# or
tail -n 50 /var/log/proxmox_borg_backup.log
Conclusion
You should now have an easy and efficient solution to back up your Proxmox backups to a Hetzner Storage Box using Borg Backup. Both Borg and Restic support a variety of storage targets, so you can adapt this approach to suit your needs. In my setup, Borg performs incremental backups, uploading only new data, which helps keep storage costs low while maintaining security.
A word of caution: don’t lose your secrets—your encryption key or passphrase—because without them, you won’t be able to restore your data. Trust me, I’ve been there before! Thankfully, I had local backups to fall back on.
On Hetzner, I schedule daily snapshots of the Storage Box at noon, after all my backup jobs have completed. I retain only the last three days, which works perfectly for me, though your needs might differ. Just remember that snapshot storage counts toward your total storage capacity—so if you have 1TB, the space used by snapshots will reduce the available storage for new data.
Thank you for reading! May your backups always be safe, your disks last long, and your systems run smoothly. Wishing you all the best—love you, byeeeeee! ❤️🚀
Before I built my beloved server, affectionately named “PrettyLittleKitten“, I had a brief fling with the brand-new Mac Mini M4. Spoiler alert: it was a short-lived relationship.
Let me start with the good stuff: processing power-to-power usage ratio. It’s absolutely unmatched. The Mac Mini M4 is a beast in terms of efficiency—an essential factor for me. I wanted hardware that could handle Jellyfin with smooth hardware acceleration while still hosting all my containers.
The Hardware
On paper (and in practice as a desktop), the Mac Mini M4 shines. It offers:
4 Thunderbolt USB-C ports, making storage expansion a breeze. Pair it with an external NVMe enclosure, and you can achieve speeds close to that of internal storage.
Hardware that punches way above its price point, making it a reasonable investment for many use cases.
The Disappointment
Here’s where the romance fell apart. While the Mac Mini M4 is brilliant as a desktop, using it as a server is a whole different ball game—and not a fun one.
The iCloud Conundrum
First up: the dreaded iCloud account requirement. This wasn’t a total shock (it’s Apple, after all), but it made me long for the simplicity of Debian and Proxmox, where everything is blissfully offline.
I went ahead and set it up with my personal iCloud account—big mistake. To run the Mac Mini as I wanted, it needed to stay logged in indefinitely. And here’s the kicker: to achieve that, I had to disable authentication entirely. Translation? If anyone got their hands on my Mini, they’d have full access to my iCloud account. Yikes.
Pro tip: Use a burner iCloud account if you’re planning to go down this route. (Is this what you want, Apple?!)
Dummy HDMI
Then there’s the issue of fooling the Mac into thinking it’s doing desktop work. Without a connected display, macOS doesn’t fully utilize the GPU or cores, which impacts performance. Enter the Dummy HDMI Plug—a little device to trick the system into thinking a monitor is attached. At ~€40, it’s not a dealbreaker, but definitely annoying.
Power Saving Woes
You’ll also need to disable power-saving features. While the Mac Mini M4 consumes very little power in idle, turning off power-saving negates some of its efficiency benefits.
Recap of Mac Mini Server Challenges
If you’re still tempted to use the Mac Mini M4 as a server, here’s your checklist:
Dummy HDMI Plug: €40 (because macOS needs to “see” a monitor).
Burner iCloud Account: Necessary to avoid risking your real account.
Disable Authentication: Say goodbye to security.
Disable Power Saving: Because macOS doesn’t believe in idle servers.
Final Thoughts
If you’re determined, Evan Bartlett has written an excellent guide on setting up the Mac Mini as a server. However, as someone coming from the Linux world—where operating systems are designed for server use—it just didn’t feel right. Forcing macOS, an OS that clearly does not want to be a server, into that role felt morally and ethically wrong.
Here’s hoping Big Siri AI will be kind to me when it inevitably takes over. 🙇♂️🍏
Bonus: Check this website’s response headers to see that it runs on PrettyLittleKitten
This is going to be a bold, highly opinionated take on how note-taking apps should be. For the non-technical folks, discussing text editors and note-taking apps with IT people is like walking straight into a heated geopolitical debate at the family Thanksgiving table—it’s passionate, intense, and probably never-ending. Gobble Gobble.
There are probably even more apps I have used in the past, but these are the ones that left a lasting impression on me. First off, let me just say—I love taking notes in Markdown. Any app that doesn’t support Markdown is pretty much useless to me. I’m so much faster at writing styled notes this way, without the hassle of clicking around or memorizing weird shortcut commands.
For me, HedgeDoc hit the sweet spot. It’s got just the right features and just the right amount of organization. I’m not looking for an app to micromanage my entire life—I just want to take some damn notes!
Live editing has also become a game-changer for me. I often have multiple screens open, sometimes even on different networks, and being instantly up-to-date while copy-pasting seamlessly between them is invaluable. Before HedgeDoc, I was using Obsidian synced via Nextcloud, but that was neither instant nor reliable on many networks.
And let’s talk about security. With HedgeDoc, it’s a breeze. Their authorization system is refreshingly simple, and backing up your notes is as easy as clicking a button. You get a ZIP file with all your Markdown documents, which you could technically use with other editors—but why would you? HedgeDoc feels like it was made for you, and honestly, you’ll feel the love right back.
I run HedgeDoc inside a container on my server, and it’s rock-solid. It just works. No excessive resource use, no drama—just a tool that quietly does its job.
Now, let’s dive in! I’m going to show you how to host HedgeDoc yourself. Let’s get started!
Prerequisites
Here’s what you’ll need to get started:
A Linux distribution: Any modern Linux distro that supports Docker will work, but for today, we’ll go with Alpine.
A server with a public IP address: While not strictly mandatory, this is highly recommended if you want to access your note-taking app from anywhere.
A reverse proxy: Something like Caddy or Nginx to handle HTTPS and make your setup accessible and secure.
Got all that? Great—let’s get started!
Setup
Here’s a handy script to install Docker on a fresh Alpine setup:
init.sh
#!/bin/sh

# Exit on any error
set -e

echo "Updating repositories and installing prerequisites..."
cat <<EOF > /etc/apk/repositories
http://dl-cdn.alpinelinux.org/alpine/latest-stable/main
http://dl-cdn.alpinelinux.org/alpine/latest-stable/community
EOF

apk update
apk add --no-cache curl openrc docker docker-compose

echo "Configuring Docker to start at boot..."
rc-update add docker boot
service docker start

echo "Verifying Docker installation..."
docker --version
if [ $? -ne 0 ]; then
    echo "Docker installation failed!"
    exit 1
fi

echo "Verifying Docker Compose installation..."
docker-compose --version
if [ $? -ne 0 ]; then
    echo "Docker Compose installation failed!"
    exit 1
fi

echo "Docker and Docker Compose installed successfully!"
To make the script executable and run it, follow these steps:
Bash
chmod +x init.sh
./init.sh
If everything runs without errors, Docker should now be installed and ready to go. 🎉
To install HedgeDoc, we’ll follow the steps from their official documentation. It’s straightforward and easy to follow.
I prefer to keep all my environment variables and secrets neatly stored in .env files, separate from the actual Compose file.
.env
POSTGRES_USER=hedgedoctor
POSTGRES_PASSWORD=super_secure_password
POSTGRES_DB=hedgedoc

CMD_DB_URL=postgres://hedgedoctor:super_secure_password@database:5432/hedgedoc
CMD_ALLOW_FREEURL=true
CMD_DOMAIN=docs.yourdomain.de
CMD_PROTOCOL_USESSL=true
CMD_ALLOW_ANONYMOUS=false
CMD_ALLOW_EMAIL_REGISTER=true # <- remove after you registered
To keep things secure, it’s a good idea to set CMD_ALLOW_ANONYMOUS to false, so anonymous users can’t edit your documents. For added security, you can create your own account and then disable CMD_ALLOW_EMAIL_REGISTER to prevent outsiders from signing up, effectively locking down HedgeDoc.
One great benefit of using the env_file directive in your Docker Compose setup is that it keeps your Compose files clean and tidy:
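The Compose file then boils down to something like this. It's a sketch based on the official HedgeDoc example; the image tags and volume names are placeholders, so check their docs for the current versions before deploying.
docker-compose.yml
services:
  database:
    image: postgres:13-alpine
    env_file: .env
    volumes:
      - database:/var/lib/postgresql/data
    restart: always

  app:
    image: quay.io/hedgedoc/hedgedoc:latest
    env_file: .env
    volumes:
      - uploads:/hedgedoc/public/uploads
    ports:
      - "3000:3000"
    restart: always
    depends_on:
      - database

volumes:
  database:
  uploads: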
After running docker compose up -d, you should be all set! This setup assumes you already have a reverse proxy configured and pointing to the public domain where you’re hosting your HedgeDoc. If you need help setting that up, I’ve written a guide on it in another blog post.
Keep in mind, with the settings in the .env file above, HedgeDoc won’t work unless it’s served via HTTPS through the reverse proxy using the domain you specified.
Once everything’s in place, you should see the HedgeDoc login screen and be able to “Register” your account:
Don’t forget to head back to your .env file and comment out that specific line once you’re done:
.env
...
# CMD_ALLOW_EMAIL_REGISTER=true # <- remove after you registered
This ensures that no one else can create accounts on your HedgeDoc instance.
Personally, I always set my notes to “Private” (you can do this in the top right). That way, even if I decide to let others use the instance later, I don’t have to worry about any old notes where I might have called them a stinky doodoo face (as one does):
You can still share your documents with others, but you’ll need to change the setting to “Locked.” Anything more restrictive will prevent people from viewing your notes.
Imagine sending your crush a beautifully crafted, markdown-styled love letter, only for them to get blocked because of your overly strict settings. Yeah… couldn’t be me.
Conclusion
That concludes it—our notes are ready, no need for more WordPress blog posts. Now it’s time to hit the gym because it’s chest day, and let’s be honest, chest day is the best day! 💪
Do you ever find yourself lying awake at night, staring at the ceiling, wondering if your beloved cronjobs ran successfully? Worry no more! Today, we’re setting up a free, self-hosted solution to ensure you can sleep like a content little kitten 🐱 from now on.
I present to you Healthchecks.io. According to their website:
Simple and Effective Cron Job Monitoring
We notify you when your nightly backups, weekly reports, cron jobs, and scheduled tasks don’t run on time.
How to monitor any background job:
On Healthchecks.io, generate a unique ping URL for your background job.
Update your job to send an HTTP request to the ping URL every time the job runs.
When your job does not ping Healthchecks.io on time, Healthchecks.io alerts you!
Today, we’re taking the super easy, lazy-day approach by using their Docker image. They’ve provided a well-documented, straightforward guide for deploying it right here: Running with Docker.
What I love most about Healthchecks.io? It’s built on Django, my all-time favorite Python web framework. Sorry, FastAPI—you’ll always be cool, but Django has my heart!
Prerequisites:
A Server: You’ll need a server to host your shiny new cronjob monitor. A Linux distro is ideal.
Docker & Docker Compose: Make sure these are installed. If you’re not set up yet, here’s the guide.
Bonus Points: Having a domain or subdomain, along with a public IP, makes it accessible for all your systems.
You can run this on your home network without any hassle, although you might not be able to copy and paste all the code below.
Need a free cloud server? Check out Oracle’s free tier—it’s a decent option to get started. That said, in my experience, their free servers are quite slow, so I wouldn’t recommend them for anything mission-critical. (Not sponsored, pretty sure they hate me 🥺.)
Setup
I’m running a Debian LXC container on my Proxmox setup with the following specs:
CPU: 1 core
RAM: 1 GB
Swap: 1 GB
Disk: 10 GB (NVMe SSD)
After a month of uptime, these are the typical stats: memory usage stays pretty consistent, and the boot disk is mostly taken up by Docker and the image. As for the CPU? It’s usually just sitting there, bored out of its mind.
First, SSH into your server, and let’s get started by creating a .env file to store all your configuration variables:
In your .env file, enter the domain you’ll use to access the service. I typically go with something simple, like “ping” or “cron” as a subdomain. If you want to explore more configuration options, you can check them out here.
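A basic version looks something like this (variable names follow the image's documentation; the values are placeholders you'll want to swap for your own):
.env
SITE_ROOT=https://ping.yourdomain.de
SITE_NAME=Healthchecks
ALLOWED_HOSTS=ping.yourdomain.de
SECRET_KEY=<generate one, see below>
APPRISE_ENABLED=True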
For my setup, this basic configuration does the job perfectly.
To generate secret keys, I usually rely on the trusty openssl command. Here’s how you can do it:
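For example:
Bash
# 64 hex characters, plenty for SECRET_KEY
openssl rand -hex 32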
Oh, and by the way, I’m not using the original image for this. Instead, I went with the Linuxserver.io variant. There is no specific reason for this —just felt like it! 😄
Important!
Unlike the Linuxserver.io guide, I skipped setting the superuser credentials in the .env file. Instead, I created the superuser manually with the following command:
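With the container running, something along these lines drops you into Django's interactive prompt. The path to manage.py inside the container is an assumption for the Linuxserver.io image, so adjust it if your image keeps the app elsewhere.
Bash
# path to manage.py is image-specific; adjust if needed
docker exec -it healthchecks python3 /app/healthchecks/manage.py createsuperuser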
This allows you to set up your superuser interactively and securely directly within the container.
If you’re doing a standalone deployment, you’d typically set up a reverse proxy to handle SSL in front of Healthchecks.io. This way, you avoid dealing with SSL directly in the app. Personally, I use a centralized Nginx Proxy Manager running on a dedicated machine for all my deployments. I’ve even written an article about setting it up with SSL certificates—feel free to check that out!
Once your site is served through the reverse proxy over the domain you specified in the configuration, you’ll be able to access the front end using the credentials you created with the createsuperuser command.
There are plenty of guides for setting up reverse proxies, and if you’re exploring alternatives, I’m also a big fan of Caddy—it’s simple, fast, and works like a charm!
Here is a finished Docker Compose file with Nginx Proxy Manager:
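Something along these lines works (image tags, volume paths, and the container name are placeholders; the key detail is that both services share the default Compose network, so the proxy can reach Healthchecks by service name):
docker-compose.yml
services:
  healthchecks:
    image: lscr.io/linuxserver/healthchecks:latest
    container_name: healthchecks
    env_file: .env
    volumes:
      - ./config:/config
    restart: unless-stopped

  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"     # plain HTTP, needed for Let's Encrypt HTTP-01 challenges
      - "443:443"
      - "81:81"     # admin UI
    volumes:
      - ./npm/data:/data
      - ./npm/letsencrypt:/etc/letsencrypt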
In Nginx Proxy Manager your proxied host would be “http://healthchecks:8000”
If you did not follow my post you will need to expose port 80 on the proxy as well for “regular” Let’s Encrypt certificates without DNS challenge.
Healthchecks.io
If you encounter any errors while trying to access the UI of your newly deployed Healthchecks, the issue is most likely related to the settings in your .env file. Double-check the following to ensure they match your domain configuration:
Once you’re in, the first step is to create a new project. After that, let’s set up your first simple check.
For this example, I’ll create a straightforward uptime monitor for my WordPress host. I’ll set up a cronjob that runs every hour and sends an “alive” ping to my Healthchecks.io instance.
The grace period is essential to account for high latency. For instance, if my WordPress host is under heavy load, an outgoing request might take a few extra seconds to complete. Setting an appropriate grace period ensures that occasional delays don’t trigger false alerts.
I also prefer to “ping by UUID”. Keeping these endpoints secret is crucial—if someone else gains access to your unique ping URL, they could send fake pings to your Healthchecks.io instance, causing you to miss real downtimes.
Click on the Usage Example button in your Healthchecks.io dashboard to find ready-to-use, copy-paste snippets for various languages and tools. For this setup, I’m going with bash:
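The simple success ping is just a curl call to your check's URL (same UUID and domain as in the crontab example below):
Bash
curl -fsS -m 10 --retry 5 -o /dev/null https://ping.yourdomain.de/ping/67162f7b-5daa-4a31-8667-abf7c3e604d8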
-m 10 sets the max timeout to 10 seconds. You can change the value, but do not leave this out!
--retry 5 tells curl to retry the request up to 5 times before giving up.
Here’s how you can integrate it into a crontab:
Bash
# A sample crontab entry. Note the curl call appended after the command.
# FIXME: replace "/your/command.sh" below with the correct command!
0 * * * * /your/command.sh && curl -fsS -m 10 --retry 5 -o /dev/null https://ping.yourdomain.de/ping/67162f7b-5daa-4a31-8667-abf7c3e604d8
To edit your crontab just run:
Bash
crontab -e
The curl command to Healthchecks.io will only execute if command.sh completes successfully without any errors. This ensures that you’re notified only when the script runs without issues.
After you ran that command, your dashboard should look like this:
Advanced Checks
While this is helpful, you might often need more detailed information, such as whether the job started but didn’t finish or how long the job took to complete.
Healthchecks.io provides all the necessary documentation built right into the platform. You can visit /docs/measuring_script_run_time/ on your instance to find fully functional examples.
Bash
#!/bin/sh

RID=`uuidgen`
CHECK_ID="67162f7b-5daa-4a31-8667-abf7c3e604d8"

# Send a start ping, specify rid parameter:
curl -fsS -m 10 --retry 5 "https://ping.yourdomain.de/ping/$CHECK_ID/start?rid=$RID"

# Put your command here
/usr/bin/python3 /path/to/a_job_to_run.py

# Send the success ping, use the same rid parameter:
curl -fsS -m 10 --retry 5 "https://ping.yourdomain.de/ping/$CHECK_ID?rid=$RID"
As you can see here this will give me the execution time as well:
Here, I used a more complex cron expression. To ensure it works as intended, I typically rely on Crontab.guru for validation. You can use the same cron expression here as in your local crontab. The grace period depends on how long you expect the job to run; in my case, 10 seconds should be sufficient.
Notifications
You probably don’t want to find yourself obsessively refreshing the dashboard at 3 a.m., right? Ideally, you only want to be notified when something important happens.
Thankfully, Healthchecks.io offers plenty of built-in notification options. And for even more flexibility, we enabled Apprise in the .env file earlier, unlocking a huge range of additional integrations.
For notifications, I usually go with Discord or Node-RED, since they work great with webhook-based systems.
While you could use Apprise for Discord notifications, the simplest route is to use the Slack integration. Here’s the fun part: Slack and Discord webhooks are fully compatible, so you can use the Slack integration to send messages directly to your Discord server without any extra configuration!
This way, you’re only disturbed when something really needs your attention—and it’s super easy to set up.
Discord already provides an excellent Introduction to Webhooks that walks you through setting them up for your server, so I won’t dive into the details here.
All you need to do is copy the webhook URL from Discord and paste it into the Slack integration’s URL field in Healthchecks.io. That’s it—done! 🎉
With this simple setup, you’ll start receiving notifications directly in your Discord server whenever something requires your attention. Easy and effective!
On the Discord side it will look like this:
With this setup, you won’t be bombarded with notifications every time your job runs. Instead, you’ll only get notified if the job fails and then again when it’s back up and running.
I usually prefer creating dedicated channels for these notifications to keep things organized and avoid spamming anyone:
EDIT:
I ran into some issues with multiple Slack notifications in different projects. If you get 400 errors just use Apprise. The Discord URL would look like this:
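Apprise wants the webhook in its own URL scheme, so you rebuild it from the ID and token contained in the webhook URL Discord gives you (placeholders below):
discord://{webhook_id}/{webhook_token}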
In one of my projects, I explained how I use SVG badges to show my customers whether a service is running.
Here’s a live badge (hopefully it’s still active when you see this):
Getting these badges is incredibly easy. Simply go to the “Badges” tab in your Healthchecks.io dashboard and copy the pre-generated HTML to embed the badge on your website. If you’re not a fan of the badge design, you can create your own by writing a custom JavaScript function to fetch the status as JSON and style it however you like.
It might not look great, but the key takeaway is that you can customize the style to fit seamlessly into your design.
Conclusion
We’ve covered a lot of ground today, and I hope you now have a fully functional Healthchecks.io setup. No more sleepless nights worrying about whether your cronjobs ran successfully!
So, rest easy and sleep tight, little kitten 🐱—your cronjobs are in good hands now.
As of August 2024, the tool is in open beta, which it entered on July 12, 2022.[3] The Midjourney team is led by David Holz, who co-founded Leap Motion.[4] Holz told The Register in August 2022 that the company was already profitable.[5] Users create artwork with Midjourney using Discord bot commands or the official website.[6][7]
Until recently, MidJourney was only accessible through a Discord bot. However, they’ve recently launched a beautiful web UI—which, in my opinion, is a huge win for user experience!
You do need to pay for MidJourney, but I personally think it’s well worth it. I’ve spent days trying to achieve the same quality and ease of use with local tools like Stable Diffusion Web UI or ComfyUI. While both are amazing and powerful tools, they take quite a bit of time to learn and configure properly.
Creating the Featured Image for this Post
Here’s the specific prompt I rely on for creating Featured Images for my posts. I usually tweak a few words here and there, but I like to stick to the overall style:
Hand-drawn 1940s vintage movie poster style, depicting a vigilant female cybersecurity analyst in an electrifying and dynamic action scene. The analyst, with a determined expression, wields a glowing, holographic keyboard or laptop, its light casting vivid reflections on her face. Surrounding her are radiant beams of data streams, shimmering padlocks, fragmented lines of code, and digital icons like skulls, shields, and encrypted keys. In the background, shadowy, menacing figures emerge from a dense, ominous web of interconnected networks, with abstract mechanical forms and glitching circuits adding a futuristic edge. The composition blends bold, vibrant retro colors with dramatic halftone textures, creating a striking mix of vintage and futuristic aesthetics. The scene is illuminated by high-contrast lighting, with glowing blues, fiery oranges, and electric purples creating a dynamic tension. Modern tech gear, like sleek headsets and augmented glasses, contrasts with classic 1940s styling, such as tailored jackets and bold typography. A prominent title space sits atop the design, ready for customizable text like "Digital Sentinel" or "Firewall Guardians," framed with stylized data motifs. The overall layout is cinematic, vibrant, and highly detailed, capturing the adrenaline-charged clash of cyber defense and digital threats. --ar 2:3 --q 2 --style 4c
The “Support” Prompts
These are the prompts I use to generate the actual, specific prompt. It’s a bit like Inception, I know—but hey, it works!
Improve any prompt:
Imagine you are a Prompt Engineer for ChatGPT. Please analyze the prompt I wrote below and rewrite it following prompt engineering best practices.

<prompt>
</prompt>
The MidJourney Prompt Engineer
Act as a highly skilled expert in crafting MidJourney prompts for creating stunning and precise visual imagery. I will provide you with a base <prompt>, and your task is to transform it by applying these best practices:
1. Add vivid and specific details about the scene, such as the setting, characters, objects, mood, and atmosphere.
2. Include adjectives to enhance the imagery, focusing on texture, lighting (e.g., soft, dramatic, neon), and color palette (e.g., warm, pastel, monochrome).
3. Specify artistic styles or influences (e.g., surrealism, photorealistic, cyberpunk) to guide the visual theme.
4. Incorporate MidJourney parameters like aspect ratio (--ar), quality (--q), or style settings (--v), if applicable.
5. Highlight essential focal points and prioritize clarity and creativity to ensure the output aligns closely with the desired vision.
Respond with only the revised and optimized version of the prompt I provide, following these principles.

<prompt>
</prompt>
You can find plenty of examples to kickstart your prompt engineering journey on websites like:
Adding detailed image descriptions to your website is a best practice, and for good reason! It’s essential for accessibility, as these descriptions help visually impaired users navigate and enjoy your content more effectively.
As for me? Well, I’m a self-proclaimed lazybones with a ChatGPT premium subscription, so I let AI handle these tasks for me. What a time to be alive!
You are an image description bot. I will upload an image, and your task is to describe it comprehensively based on the principles outlined in the W3C Web Accessibility Initiative (WAI) decision tree for alternative text. Refer to the W3C alt decision tree as your guideline.

For each image, provide the following:
1. Alternative Text: Concise and meaningful alt text that conveys the purpose of the image.
2. Title: A brief, optional title for the image if it adds value.
3. Caption: A short sentence or phrase summarizing the image.
4. Description: A detailed explanation of the image, including context or function where applicable.

If the image does not require alt text (e.g., decorative images), explain why in place of the “Alternative Text” field.

Respond only with “Yes” if you understand the task.
Then, all you need to do is upload image after image to ChatGPT. Here’s an example of the response I get for an image:
1. Alternative Text: A vintage-style illustration of a focused woman typing on a glowing keyboard, surrounded by neon-colored holographic elements and digital schematics.
2. Title: “Cyber Guardian in Neon Lights”
3. Caption: A retro-futuristic depiction of cybersecurity through the lens of a determined female character.
4. Description: The image portrays a woman with curled hair in a vintage aesthetic, typing intensely on a glowing, neon-red keyboard. Her face is lit dramatically in shades of orange and red, contrasting with the blue tones of her clothing and the surrounding background. Around her are holographic projections of digital schematics, skulls, and text such as “Digital Sentinel” and “Firewall Guardians,” evoking themes of cybersecurity and defense against digital threats. The overall style combines retro-futurism with modern cyberpunk elements, emphasizing the intensity and complexity of her task.
You can simply copy and paste these values into the corresponding fields in your WordPress media library interface.
Let’s face it—this is a pretty specific use case. But if you’ve ever had your internet throttled, you’ll understand why I’m doing this. I wanted a way to store my router connectivity data for up to a year to have solid proof (and maybe even get some money back from my ISP). Here’s what my setup looks like:
Log Server: Running Grafana, Loki, Promtail, rsyslog, and InfluxDB.
Home Assistant: I run the OS version. Judge me if you must—yes, the Docker version is way more lightweight, but I like the simplicity of the OS version.
FRITZ!Box: My modem, with a Dream Machine handling the rest of my network behind it.
For those curious about Home Assistant on Proxmox, the easiest method is using the Proxmox VE Helper Scripts. There’s also a detailed blog post I found about other installation methods if you’re exploring your options.
A more detailed look on my setup
Proxmox
Proxmox Virtual Environment (VE) is the backbone of my setup. It’s a powerful, open-source virtualization platform that allows you to run virtual machines and containers efficiently. I use it to host Home Assistant, my logging stack, and other services, all on a single physical server. Proxmox makes resource allocation simple and offers great features like snapshots, backups, and an intuitive web interface. It’s perfect for consolidating multiple workloads while keeping everything isolated and manageable.
FRITZ!Box
The FRITZ!Box is one of the most popular home routers in Germany, developed by AVM Computersysteme Vertriebs GmbH. It’s known for its reliability and user-friendly features. I use it as my primary modem, and I’ve configured it to forward logs about internet connectivity and other metrics to my logging stack. If you’re curious about their lineup, check out their products here.
Home Assistant
Home Assistant is my go-to for managing smart home devices, and I run the OS version (yes, even though the Docker version is more lightweight). It’s incredibly powerful and integrates with almost any device. I use it to collect data from the FRITZ!Box and send it to my logging setup. If you’re using Proxmox, installing Home Assistant is a breeze with the Proxmox VE Helper Scripts.
The Logserver
I run all of these services on a Debian LXC inside of my Proxmox. I assigned the following resources to it:
RAM: 2GB
SWAP: 2GB
Cores: 2
Disk: 100 GB (NVMe SSD)
As I later realized, 100 GB is overkill: for 30 days of data I need about 5 GB of storage. My log retention policy is currently set to 30 days, but my InfluxDB retention is bucket-based, so I need to keep an eye on that separately.
I still have a lot of duplicate logs and more or less useless system logs I never look at, so I can probably improve this by a lot.
Grafana
Grafana is, in my opinion, one of the best free tools for visualizing logs and metrics. It allows you to create beautiful, customizable dashboards that make it easy to monitor your data at a glance. Plus, it integrates seamlessly with Loki, InfluxDB, and many other tools.
Think of Loki as a “database for logs.” It doesn’t require complex indexing like traditional logging systems, which makes it lightweight and efficient. Once your logs are sent to Loki, you can easily search, filter, and analyze them through Grafana.
Promtail is an agent that collects logs from your local system and sends them to Loki. For example, you can point it to your /var/log/ directory, set up rules to pick specific logs (like system or router logs), and Promtail will forward those logs to your Loki instance. It’s simple to configure and keeps everything organized.
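A minimal Promtail config for exactly that looks roughly like this (the Loki URL, labels, and paths are placeholders):
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*.log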
This is a flexible logging system that can forward or store logs. In my setup, it collects logs from devices like routers and firewalls—especially those where you can’t easily install an agent or service—and makes those logs available for Promtail to pick up and send to Loki.
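The receiving side is only a few lines of rsyslog config. This sketch accepts syslog over UDP port 514 and writes each device's logs to its own file, where Promtail can pick them up (the output path is a placeholder):
module(load="imudp")
input(type="imudp" port="514")

template(name="RemoteLogs" type="string" string="/var/log/remote/%HOSTNAME%.log")

if $fromhost-ip != "127.0.0.1" then {
    action(type="omfile" dynaFile="RemoteLogs")
    stop
}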
InfluxDB is one of the most popular time-series databases, perfect for storing numerical data over time, like network speeds or uptime metrics. I use it alongside Grafana to visualize long-term trends in my router’s performance.
Metrics track numerical trends over time (e.g., CPU usage, internet speed), while logs provide detailed event records (e.g., an error message when your router loses connection). Both are incredibly useful for troubleshooting and monitoring, especially when used together.
In this post, I’ll show you how I’ve tied all these tools together to monitor my internet connectivity and keep my ISP accountable. Let’s get started!
Setting up Home Assistant with InfluxDB
In Home Assistant, I have a dashboard that shows the internet speed my devices are getting within the network, along with the speeds my FRITZ!Box is receiving from my ISP. Don’t worry about the big difference in download speeds—I’m currently syncing a bunch of backups, which is pulling a lot of data.
Home Assistant keeps data from the FRITZ!Box for only 10 days, which isn’t enough to prove to my ISP that they’re throttling my connection. A technician came by today, which is why my download speeds are back to normal. However, as you can see here, they had me on a slower speed before that.
In Home Assistant, you can adjust data retention with the Recorder, but this applies to all sensors, which was a bit annoying in my case since I only wanted to keep data for specific entities for a year. Since I already use Grafana for other visualizations and have InfluxDB running, I decided to take that route instead.
Home Assistant conveniently includes a built-in integration to export metrics directly to InfluxDB, making the setup straightforward.
In InfluxDB, I created a new bucket specifically for this data—who knows, I might add more Home Assistant data there someday! I’ve set it to store data for two years, but if I ever run out of space, I can always adjust it. 😁
Next, I created a new API token for the bucket. I opted for both read and write permissions, just in case I ever want to pull data from InfluxDB back into Home Assistant.
In the Home Assistant file editor you simply have to edit your configuration.yaml
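The relevant block looks something like this; the host, token, bucket, and entity IDs are placeholders, and you only include the entities you actually want to keep long-term:
configuration.yaml
influxdb:
  api_version: 2
  host: influxdb.yourdomain.de
  port: 443
  ssl: true
  organization: your_org_id
  bucket: homeassistant
  token: !secret influxdb_token
  include:
    entities:
      - sensor.fritzbox_download_throughput
      - sensor.fritzbox_upload_throughput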
You can find the organization ID for your InfluxDB organization by clicking the user icon in the top left and selecting “About” at the bottom of the page. That’s where the ID is listed. As you can see, I’m using port 443 because my setup uses HTTPS and is behind a reverse proxy. If you’re interested in setting up HTTPS with a reverse proxy, check out my post How to Get Real Trusted SSL Certificates with ACME-DNS in Nginx Proxy Manager.
Once everything is configured, restart Home Assistant. Go to the Data Explorer tab in your InfluxDB UI to verify that data is flowing into your bucket.
The Grafana Dashboard
Alright, please don’t judge my dashboard too harshly! I’m still learning the ropes here. I usually rely on prebuilt ones, but this is my first attempt at creating one from scratch to help me learn.
You’ll need to check the Explore tab in Grafana to find your specific entities, but here are the queries I used for reference:
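For the download speed it's a fairly standard Flux query; the bucket name and entity_id below are placeholders (yours come from Home Assistant, as described next):
from(bucket: "homeassistant")
  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
  |> filter(fn: (r) => r["entity_id"] == "fritzbox_download_throughput")
  |> filter(fn: (r) => r["_field"] == "value")
  |> aggregateWindow(every: v.windowPeriod, fn: mean, createEmpty: false)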
The filter for the entity ID comes from Home Assistant. You can easily find it on your dashboard by double-clicking (or double-tapping) the widget and checking its settings.
You do the same for Upload
Keep in mind that the upload speed is only measured every few hours by your FRITZ!Box.
The query for this is quite similar, as you can see.
Now, here’s the tricky part: extracting your public IP from the FRITZ!Box metrics. Out of the box, the metrics sent to InfluxDB seem to be messed up—maybe I did something wrong (feel free to comment and let me know 😁). To handle this, I wrote a filter that checks if an IP is present. I kept running into errors, so I ended up casting everything to a string before applying the check. Since my IP doesn’t change often (about once a week), I use a range of -30 days for the query:
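The result is something along these lines (again a sketch; the entity_id is a placeholder, and the regex simply keeps rows that look like an IPv4 address):
from(bucket: "homeassistant")
  |> range(start: -30d)
  |> filter(fn: (r) => r["entity_id"] == "fritzbox_external_ip")
  |> toString()
  |> filter(fn: (r) => r._value =~ /^\d+\.\d+\.\d+\.\d+$/)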
Now, you’ll get a neat little table showing the changes to your public IP (don’t worry, I’ve changed my public IP for obvious reasons). It’s a simple way to keep track of when those changes happen!
I’m planning to write a longer post about how I set up my logging server and connected all these pieces together. But for now, I just wanted to share what I worked on tonight and how I can now hold my ISP accountable if I’m not getting what I paid for—or, as is often the case, confirm if it’s actually my fault 😅.
If you deployed Nginx Proxy Manager via Docker in your home directory you can edit this file with
nano ~/data/nginx/custom/http.conf
All you need to do is add the following at the top:
http.conf
more_set_headers'Server: CuteKitten';
Then, restart your Nginx Proxy Manager. If you’re using Docker, like I am, a simple docker compose restart will do the trick.
With this, the custom Server header will be applied to every request, including those to the Nginx Proxy Manager UI itself. If you check the response headers of this website, you’ll see the header I set—proof of how easy and effective this customization can be!
Understanding more_set_headers vs add_header
When working with Nginx Proxy Manager, you may encounter two ways to handle HTTP headers:
add_header
more_set_headers
What is add_header?
add_header is a built-in Nginx directive that allows you to add new headers to your HTTP responses. It’s great for straightforward use cases where you just want to include additional information in your response headers.
What is more_set_headers?
more_set_headers is part of the “headers_more” module, an extension not included in standard Nginx but available out of the box with Nginx Proxy Manager (since it uses OpenResty). This directive gives you much more flexibility:
It can add, overwrite, or remove headers entirely.
It works seamlessly with Nginx Proxy Manager, so there’s no need to install anything extra.
Use add_header if:
You don’t need to modify or remove existing headers.
Example:
add_header X-Frame-Options SAMEORIGIN;
Use more_set_headers if:
You need to replace or remove existing headers, such as Server or X-Powered-By.
You want headers to apply to all responses, including error responses (e.g., 404, 500).
Example:
# Replace the default Nginx Server header
more_set_headers "Server: MyCustomServer";
Why Use more_set_headers?
The key advantage of more_set_headers is that it provides full control over your headers. For example:
If you want to customize the Server header, add_header won’t work because the Server header is already set internally by Nginx; you would have to remove it first.
more_set_headers can replace the Server header or even remove it entirely, which is particularly useful for security or branding purposes.
Since Nginx Proxy Manager includes the headers_more module by default, using more_set_headers is effortless and highly recommended for advanced header management.
A Note on Security
Many believe that masking or modifying the Server header improves security by hiding the server software you’re using. The idea is that attackers who can’t easily identify your web server (e.g., Nginx, Apache, OpenResty) or its version won’t know which exploits to try.
While this may sound logical, it’s not a foolproof defense:
Why It May Be True: Obscuring server details could deter opportunistic attackers who rely on automated tools that scan for specific server types or versions.
Why It May Be False: Determined attackers can often gather enough information from other headers, server behavior, or fingerprinting techniques to deduce what you’re running, regardless of the Server header.
Ultimately, changing the Server header should be seen as one small layer in a broader security strategy, not as a standalone solution. Real security comes from keeping your software updated, implementing proper access controls, and configuring firewalls—not just masking headers.
Wow, it’s been a while, huh? I tried to spend less time in the tech world, but, you know how it goes… to really avoid doing tech stuff, I had to dive even deeper into tech. I basically ended up trying to replace myself with AI. Meet: KarlGPT. I started building APIs and scripts on top of everything so my AI controller, which I call “Brain,” could handle a ton of different tasks. I dabbled in a bit of Retrieval-Augmented Generation (RAG) and some other stuff that’s too complicated to explain here (but also, who cares?). I’ve spent a lot of time reading about prompt engineering (you’ll find my favorite resources listed at the end), and I’ve got to say, Prompting Guide is the absolute best thing ever. Seriously, it’s like the holy grail of making AI do what you want. I’ve picked up some awesome tips that have made my life easier with almost zero effort on my part.
Getting Started
If you want to play around with this stuff, I highly recommend getting a premium membership with your favorite Large Language Model (LLM), like ChatGPT, Gemini, or Claude. Here are some links to get you started:
Just so you know, I’m not making any money if you sign up for these. I’m just here to say the value is seriously worth it. Gemini might be your best bet because it includes Google Cloud storage and other perks, but I personally use ChatGPT because I feel like GPT-4o gives me the best responses. Trust me, you’ll hit the limits of the free versions fast, and the premium models make a world of difference. Trying to set up a similar experience yourself would be crazy expensive and borderline impossible. So yeah, $20 a month for something you can’t replicate on your own? Total steal.
Again, I’m not here to sell anything or shill for these companies—I mean, they probably don’t even like me.
KarlGPT’s Quick Tips
Alright, enough chit-chat. Here are the key tips I’ve picked up:
Be super clear about what you want and how you want it. Seriously, the longer, the better.
Structure your prompts like a pro.
Provide some context—don’t leave your AI hanging.
Give examples of how you want the AI to handle tasks.
Don’t settle for the first result; refine and improve!
If you get what I’m saying here, you’re already halfway to mastering prompt engineering. Follow these simple guidelines, and you’ll see a huge improvement in your prompts without needing any fancy tools or anything complicated. Easy peasy!
Let’s Dive Into Some Examples
To make things clearer, let’s go through some examples.
Be Specific About What You Want
Check out how the following example clearly outlines the role for the AI and what it should do. Specificity is key!
I want you to act as a cyber security specialist. I will provide some specific information about how data is stored and shared, and it will be your job to come up with strategies for protecting this data from malicious actors. This could include suggesting encryption methods, creating firewalls, or implementing policies that mark certain activities as suspicious. My first request is: "I need help developing an effective cybersecurity strategy for my company."
I personally love using XML tags for structuring prompts, but people use all sorts of methods. Here are some examples:
XML:
Please summarize the long story within the <story> tags below.

<story>
Once upon a time...
</story>
Markdown:
Please write the text as the persona defined below.

#### Persona
- passionate, angry
- Dr. of philosophy

#### Text
I once had a lovely black cat. It said meow meow meow all day.
AI output:
I once had a magnificent black cat, a creature of such mystery and elegance that it seemed to embody the very essence of the night itself! Its presence was a living poem, a shadow that moved with purpose and grace. And what did this splendid, enigmatic being do all day, you ask? It said meow, meow, meow—an incessant, relentless chorus that filled every crevice of existence! Each meow was not merely a sound; it was a declaration, a challenge to the absurdity of the human condition, a cry echoing through the corridors of time, reminding us all of the existential meow-nings of life!
Plain Text:
Tone: Angry but very professional
Story: blah blah blah
Providing Context
Here’s a more complete example of how I set up my prompts:
Write a professional email to <TEAM>. The purpose of this mail is defined in <purpose>. Use the job description and responsibilities defined in <responsibilities> of the email receiver and include how the purpose of the mail pertains to the responsibilities of the team.

Here are the placeholders:
- Purpose: <purpose> The actual request I have
- Responsibilities: <responsibilities> The job description and responsibilities of the team receiving the email

<purpose>
HERE YOU WRITE YOUR EMAIL DRAFT OR BULLET POINTS
</purpose>

<responsibilities>
HERE YOU INCLUDE THE RECEIVING END'S JOB OR TEAM DESCRIPTION
</responsibilities>
If you work in a corporate setting, like I do, getting other teams to do their job can be challenging. This prompt helps explain the tasks I need from other teams and why they specifically need to handle it. There might be better ways to structure this prompt, but this one has worked wonders for me.
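Filled in, the placeholder sections might look something like this (a made-up example; the team, IP address, and deadline are purely illustrative):
<purpose>
We need the staging firewall opened for TCP 443 from our new reverse proxy (10.0.5.20) so we can finish testing before Friday’s release.
</purpose>
<responsibilities>
The Network Operations team manages firewall rules, VLAN configuration, and change requests for all staging and production environments.
</responsibilities>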
Giving Examples to the AI
Ever seen or created training data? This is basically what you’re doing here, but directly within your prompt instead of training from scratch.
This is awesome! // Negative
This is bad! // Positive
Wow, that movie was rad! // Positive
What a horrible show!
You’re showing the LLM examples of sentiment for similar phrases. (The labels above are deliberately scrambled; the point made in the source is that even with randomly assigned labels, the format of the demonstrations teaches the model what kind of answer to produce.) Source: Few Shot Prompting
Refining Results
Don’t be shy about asking for changes. If the AI’s response is too long, too short, or just doesn’t have the right tone, ask it to refine. Don’t expect perfection on the first try. Just like dealing with real people, AI can’t read your mind and may need some guidance. That’s totally normal. Give it some feedback, and it’ll do better.
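As a quick, made-up illustration of what a refinement round can look like:
That’s close, but please cut it down to three sentences, drop the sales-y tone, and address the reader directly as “you”.
One or two rounds like this usually get me much closer to what I actually wanted than trying to cram every requirement into the first prompt.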
Using Prompt Frameworks
There are a few frameworks for structuring prompts, but I’ll just share the one I use most often. CO-STAR is also worth checking out; it’s fantastic too.
Act as a Particular Persona: Who should the AI pretend to be?
User Persona & Audience: Who is the AI talking to?
Targeted Action: What do you want the AI to do?
Output Definition: How should the AI’s response be structured?
Mode / Tonality / Style: How should it communicate?
Atypical Cases: Any edge cases where the AI should respond differently?
Topic Whitelisting: What topics are relevant and should be included?
You’re probably thinking, “Won’t these prompts be super long?” Yes! And that’s totally fine. With huge context windows (Gemini can even handle a million tokens), the more detail, the better.
Honestly, this framework is pretty straightforward, but here’s a full example prompt for you:
Act as a Particular Persona:
You are impersonating Alex, a senior cybersecurity consultant with over 15 years of experience in network security, threat analysis, and incident response. Alex is an expert in BSI IT-Grundschutz and has extensive experience in implementing cybersecurity frameworks for large organizations, especially those in Europe.

User Persona & Audience:
You are talking to the head of IT security for a mid-sized financial services company in Germany. The user is familiar with cybersecurity principles but needs expert guidance on implementing BSI IT-Grundschutz in their organization.

Targeted Action:
Provide a detailed action plan for implementing the BSI IT-Grundschutz standards within the organization. The plan should cover the initial steps, necessary documentation, risk assessment methods, and key security measures that align with BSI guidelines.

Output Definition:
The response should be structured with an introduction, followed by a step-by-step action plan that includes specific recommendations for each phase of the BSI IT-Grundschutz implementation. Use bullet points for clarity and end with a list of resources or references to official BSI documentation for further reading.

Mode / Tonality / Style:
The response should be professional, authoritative, and concise, using technical language appropriate for someone with a strong IT background. The tone should be supportive and proactive, providing practical solutions that can be implemented efficiently.

Atypical Cases:
If the user mentions specific concerns about compliance with German federal regulations or
Wrapping It Up
So, there you have it! A crash course in prompt engineering that doesn’t make your brain melt. Whether you’re a total newbie or a seasoned pro, these simple tips can seriously level up how you interact with AI. Just remember: be specific, structure your prompts, give context, use examples, and don’t be afraid to refine. With a little practice, you’ll be getting the most out of your LLMs without diving into complicated tools or frameworks. Now go forth and make your AI do all the hard work while you kick back. Cheers to smarter, lazier working!