The information provided on this blog is for educational purposes only. Any use of the hacking tools discussed here is at your own risk. Read it, have a laugh, and never do this.
After that, some companies reached out to me asking where to even get started. There are thousands of possible variations of certain domains, so it can feel overwhelming. Most people begin with dnstwist, a really handy script that generates hundreds or thousands of lookalike domains using a range of permutation techniques (character swaps, missing letters, homoglyphs, and so on). Dnstwist also checks via DNS whether they already point to a server, which helps you identify if someone is already trying to abuse a typosquatted domain.
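If you just want a quick look at what is already out there, a dnstwist run looks roughly like this (flag names can vary between versions, so check --help before copying):

dnstwist example.com
dnstwist --registered --format csv example.com > registered-typos.csv

The second variant keeps only permutations that already resolve, which is exactly the list you want to review first.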
While this is great for finding typosquatter domains that already exist, it doesn’t necessarily help you find and register them before someone else does (at least, not in a targeted way).
On a few pentests where I demonstrated the risks of typosquatting, I registered a domain, set up a catch-all rule to redirect emails to my address—intercepting very sensitive information—and hosted a simple web server to collect API tokens from automated requests. To streamline this process, I built a small script to help me (and now you) get started with defensive domain registration.
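To give you an idea of the web server part: a throwaway request logger along these lines is enough to see which automated calls and API tokens end up on a typo domain. This is a generic sketch, not the script from those engagements, and the log file name is made up:

from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer


class CatchAllHandler(BaseHTTPRequestHandler):
    """Answer every request with 200 and log method, path, and headers."""

    def _handle(self):
        with open("typosquat-hits.log", "a") as log:
            log.write(f"{datetime.now(timezone.utc).isoformat()} {self.command} {self.path}\n")
            for name, value in self.headers.items():
                log.write(f"  {name}: {value}\n")  # Authorization headers are where stray API tokens show up
        self.send_response(200)
        self.end_headers()

    do_GET = _handle
    do_POST = _handle
    do_PUT = _handle


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CatchAllHandler).serve_forever()  # port 80 needs root or a port redirect

Point the typo domain's A record at the box running this, then watch the log fill up.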
Wow, there are still a lot of typo domains available for my business website 😅.
While longer domains naturally have a higher risk of typos, I don’t have enough traffic to justify the cost of defensively registering them. Plus, my customers don’t send me sensitive information via email—I use a dedicated server for secure uploads and file transfers. (Yes, it’s Nextcloud 😉).
typosquatterpy is a Python script that generates common typo domain variations of a given base domain (on a QWERTZ keyboard) using OpenAI’s API and checks their availability on Strato. This tool helps in identifying potential typo-squatted domains that could be registered to protect a brand or business.
⚠️ Disclaimer: This project is not affiliated with Strato, nor is it their official API. Use this tool at your own risk!
🛠️ Installation
To use typosquatterpy, you need Python and the requests library installed. You can install it via pip:
pip install requests
📖 Usage
Run the script with the following steps:
Set your base domain (e.g., example) and TLD (e.g., .de).
Replace api_key="sk-proj-XXXXXX" with your actual OpenAI API key.
Run the script, and it will:
Generate the top 10 most common typo domains.
Check their availability using Strato’s unofficial API.
Example Code Snippet
base_domain="karlcom"tld=".de"typo_response=fetch_typo_domains_openai(base_domain,api_key="sk-proj-XXXXXX")typo_domains_base=extract_domains_from_text(typo_response)typo_domains= [domain.split(".")[0].rstrip(".") + tld for domain in typo_domains_base]is_domain_available(typo_domains)
Output Example
✅ karicom.de
❌ karlcomm.de
✅ krlcom.de
⚠️ Legal Notice
typosquatterpy is not affiliated with Strato and does not use an official Strato API.
The tool scrapes publicly available information, and its use is at your own discretion.
Ensure you comply with any legal and ethical considerations when using this tool.
Conclusion
If you’re wondering what to do next and how to start defensively registering typo domains, here’s a straightforward approach:
Generate Typo Domains – Use my tool to create common misspellings of your domain, or do it manually (with or without ChatGPT).
Register the Domains – Most companies already have an account with a registrar where their main domain is managed. Just add the typo variations there.
Monitor Traffic – Keep an eye on incoming and outgoing typo requests and emails to detect misuse.
Route & Block Traffic – Redirect typo requests to the correct destination while blocking outgoing ones. Most commercial email solutions offer rulesets for this. Using dnstwist can help identify a broad range of typo domains.
Block Outgoing Requests – Ideally, use a central web proxy. If that’s not possible, add a blocklist to browser plugins like uBlock, assuming your company manages it centrally. If neither option works, set up AdGuard for central DNS filtering and block typo domains there. (I wrote a guide on setting up AdGuard!)
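As a hedged example of what feeding dnstwist output into such a blocklist can look like: the snippet below turns dnstwist's CSV output into ||domain^ rules, a format both uBlock Origin and AdGuard Home understand. The file names are placeholders, and the CSV column name has changed between dnstwist versions, so verify the header first:

import csv

# Assumes something like: dnstwist --registered --format csv yourdomain.de > registered-typos.csv
with open("registered-typos.csv", newline="") as src, open("typo-blocklist.txt", "w") as dst:
    for row in csv.DictReader(src):
        # the column is "domain" in recent dnstwist releases and "domain-name" in older ones
        domain = row.get("domain") or row.get("domain-name")
        if domain and domain != "yourdomain.de":   # never block your real domain
            dst.write(f"||{domain}^\n")            # adblock-style rule for uBlock Origin / AdGuard Home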
The journey to bringing you this guide was paved with rage and hardship. Before we go any further, let me be clear: local AI is nowhere near as good as ChatGPT or similar online tools. Without solid prompt engineering, you’ll mostly get weird, useless responses.
That said, DeepSeek-R1 (32B) is hands down the best local model I’ve ever used—but even then, it’s nowhere near the level of ChatGPT-4o in the cloud. To match that, you’d need the DeepSeek-R1 671B model, which is a mind-blowing 404GB. Running that locally? Yeah, that would be absolute madness.
Disclaimer: This post has some strong opinions about Linux distributions and hardware that some people may find disturbing or hurtful. Please don’t take it too seriously.
Rant about AMD
Skip it, or read my raw unfiltered anger.
The image of this post perfectly reflects my mood.
A while ago, I decided to build an AI server at home to run models locally. My plan was to get an NVIDIA 4090, which at the time cost around 2000€. But then, my friend—who runs Arch as his daily driver (I should’ve seen the red flag)—was using an AMD RX 7900 XTX, which was only 900€ at the time. He hyped it up, saying, “Oh yeah, get this one! Same VRAM, super easy to set up, everything works flawlessly!”
I was intrigued.
As fate would have it, another friend echoed the same thing, insisting that for 24GB of VRAM, I wouldn’t find anything cheaper. And, well, that was actually true.
However, everything I read online told me that AMD GPUs lag far behind NVIDIA in every way, and worst of all, you’d always have to hack things together just to make them work. Still, on Black Friday, I caved and bought the AMD GPU.
I have regretted it every single day since putting it in. I hate it. It absolutely sucks.
So far, it has worked on Windows 11—but even there, it was a pain. And seriously, how do you even mess up Windows 11 support??
Then I switched to Ubuntu as my main OS (☹️). After two days of struggle (and reinstalling the entire OS three times), I somehow got it to work. I still don’t know what I did. Every guide on the internet gives different commands, different settings, and different advice. Most are for older AMD GPUs, almost none work for the newer models, and—just for fun—most of the essential tools don’t support the “new” AMD cards either.
I hate it. I hate it so much.
My mood
I will never buy an AMD GPU ever again. Even if they came with 100GB of VRAM and cost just 5€, I do not care.
Looking back, I would rather pay 2000€ for a GPU that just works than spend endless hours hacking together the most basic functionality. The sheer frustration of dealing with this mess infuriates me beyond words.
This post serves as both a rant and a personal reminder: Never. Ever. Ever. Buy. AMD. Hardware. Again.
To be honest, I’m just as disappointed in AMD CPUs. Their hardware transcoding is absolute trash.
From now on, it’s Intel and NVIDIA, forever and always.
Prerequisite
32GB RAM (with ComfyUI, bump that up to 40GB)
250GB SSD Storage
Debian 12 LXC
If you are more curious about my exact setup you’ll find a detailed list where you can check if yours is similar here: My Home Server: “PrettyLittleKitten” – A Personal Tech Haven. At the very least, your GPU should match (AMD RX 7900 XTX) to follow the tutorial step by step. If it doesn’t, chances are it’ll fail.
You need to install the kernel drivers on the host for passthrough to an LXC:
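I can only give a rough sketch here, since the exact steps depend on your host: on a Proxmox 8 (Debian 12) host the amdgpu module ships with the kernel, so it usually comes down to having the firmware installed and the devices showing up. Treat the package name and repo component below as assumptions to verify:

apt update && apt install -y firmware-amd-graphics   # lives in the non-free-firmware component on Debian 12
modprobe amdgpu
ls -l /dev/kfd /dev/dri/                             # both must exist on the host before passthrough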
It’s important to note that the LXC must be privileged. I know there are guides for setting up an unprivileged one, but literally none of them worked—which only fueled my rage to unbearable levels.
You are free to install docker without using the convenience script.
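For reference, the convenience script mentioned here is Docker's official one:

curl -fsSL https://get.docker.com | sh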
GPU Passthrough
This heavily depends on your hardware and software.
If, like me, you have an AMD RX 7900 XTX and Proxmox 8.3.3, then you can just follow along. Otherwise—based on my own painful experience—you’ll likely need to find another guide.
Inside the LXC, run:
cat /etc/group | grep -w 'render\|video'
This will display the GIDs you need for passthrough in a second—so make sure to note them down:
In my case, “renderD128” is the part I need. To find yours, match the ID from the first command (e.g., 03:00.0) with the ID from the second command (e.g., 0000:03:00.0). Once they match, you’ll know which renderD* device corresponds to your GPU (the other one is the iGPU of the CPU, don’t use that).
In the Proxmox GUI, go to your LXC container’s “Resources” tab and click “Add” → “Device Passthrough“. Now, add the “video” and “render” devices using the GIDs you noted earlier:
Render device: Use the path for your graphics card and the GID from the LXC output.
Video device: Use /dev/kfd and the GID for “video” from the LXC output.
This is what your settings should look like (you may need to restart the LXC first).
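If you prefer the config file over the GUI, the result in /etc/pve/lxc/<CTID>.conf should look roughly like this. The device indexes, paths, and GIDs are placeholders; use the values from your own output:

# /etc/pve/lxc/<CTID>.conf
dev0: /dev/dri/renderD128,gid=104   # "render" GID from inside the LXC
dev1: /dev/kfd,gid=44               # "video" GID from inside the LXC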
After restarting the container, check that the permissions are correct:
Make sure that “root render” is the GPU and “root video” is the Kernel Fusion Driver (kfd).
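A quick way to do that from inside the LXC (assuming the same device paths as above):

ls -l /dev/dri/renderD128 /dev/kfd    # should show root render for the GPU and root video for kfd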
Kernel Fusion Driver
The Kernel Fusion Driver is needed if you want to run ROCm-based GPU compute workloads, such as machine learning, OpenCL, or scientific computing, on your AMD GPU within Proxmox. It acts as the interface between the AMD GPU driver and user-space applications, enabling GPU acceleration for parallel computing tasks.
– ChatGPT-4o
Install AMD Software
We need to install some tools inside of our Debian LXC:
You can also refer to the official guide: Quick Start Installation Guide – ROCm (at the very least, check if the links are still valid by the time you’re reading this).
The download might take a while. Since I have a new AMD RX 7900 XTX, I need to use:
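Since the exact package version changes frequently, here is only a hedged sketch of the amdgpu-install route; copy the current .deb URL from the official Quick Start guide linked above instead of trusting the placeholder:

apt update && apt install -y wget
wget <amdgpu-install_X.Y.Z_all.deb URL from the ROCm Quick Start guide>
apt install -y ./amdgpu-install_*_all.deb
# the kernel module comes from the Proxmox host, so only the userspace ROCm stack is needed inside the LXC
amdgpu-install --usecase=rocm --no-dkms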
Since curl is not installed by default in the Debian LXC we’re using, we’ll need to install it first (if you followed this guide so far, you already have it). Then, we’ll run the install script from the Ollama website. Be patient—the download takes a while since it pulls about 30GB of data.
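Concretely, that is curl plus the official one-liner from ollama.com:

apt install -y curl
curl -fsSL https://ollama.com/install.sh | sh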
By the way, I love the Ollama website. The simple black-and-white design with rounded borders? 🤌 I’m a sucker for minimalistic aesthetics. (I hope you like my blog’s design too! 🕺)
Next Step: Testing with a Smaller Model
Before downloading the DeepSeek 32B model, we’ll first test with a smaller DeepSeek version. If your GPU matches mine, the larger model should work fine.
Now, run this command—it’s going to download a 1.5GB file, so the wait time depends on your internet speed:
ollama pull deepseek-r1:1.5b
You can then test:
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Tell me a funny story about my best friend Karl. 300 characters maximum.",
  "stream": false
}' | jq .
Once upon a time, in the quiet town of Pawsley, there was Karl, a beloved kind cat who adored his three feline friends: Sam, Max, and Lily. Karl always had a warm smile and a habit of aiding others, often finding humor in unexpected places.
One crisp autumn afternoon, Karl spotted a bag of marbles on the park’s bench. Curious, he decided to play with them. To everyone’s surprise, a man walked by, holding a jar full of coins instead of marbles. “Hey, it’s you!” Karl exclaimed. The man, initially unimpressed, asked for his change. Karl suggested taking one marble in exchange and gave him the coins.
“Thank you,” the man thanked. Karl offered him a glass of water, knowing the jar was empty. “That’ll keep me happy,” he said. But as the water filled the glass, a strange ripple appeared on the floor. The man ran over to check but found nothing—no coins, just a tiny, pink spot.
“Wait, is it my pet?!” Karl pointed. The man laughed, looking suspiciously at Karl. “You saw that marble in there.” He handed Karl a piece of paper and said,”I need your help.”
Karl rolled on his back and rolled into the street. From behind him, he saw Sam, Max, and Lily with their usual spotlights on. Karl placed a paw on Sam’s paw, suggesting they might be related.
“Hey!” Max excitedly exclaimed. “My little sister is getting sick! Can you help me find her?”
Karl hesitated but followed Max to the neighbor’s yard. His friend found Lily, who was hiding somewhere. Karl helped Lily bring her back home, while Sam and Max watched in laughter. Karl’s kind words were met with cheers.
In the end, Karl became a beloved companion among Pawsley’s residents, not just for his love of marbles but also for his warmth and kindness.
Never interrupt an AI artist in the middle of their masterpiece. 🎨🤖 But yeah, the 300-Character-Limit was ignored.
Cleanup
Let’s get rid of the old files. But first, double-check before running these commands—they’ll wipe out all .sh and .deb files in /tmp.
ls -lh /tmp/*.sh /tmp/*.deb
The command above shows you what the command below will delete. Proceed if the displayed files are trash. The deleted files are irreversibly gone after this command:
rm -f /tmp/*.sh /tmp/*.deb
apt autoremove
Testing OpenWebUI with DeepSeek-R1 32B
The prompt I gave DeepSeek was “Give me a 10 word super short story about a cat named Paul.” – which is the name of my actual cat.
Well, not quite 10 words, but close enough. This was a cold start, meaning the model wasn’t loaded into VRAM yet—so yeah, it took a bit longer.
DeepSeek really put in some serious effort here, like it was deep in thought, sweating over every word. 😂 Considering all that, the result is absolutely hilarious:
Okay, so I need to come up with a 10-word super short story about a cat named Paul. Hmm, let’s see… The user gave an example: “Paul the cat climbed the fence, chasing a bird, then napped in the sun.” That’s pretty straightforward but conveys a little action and a bit of character.
I should think about what elements make a good 10-word story. It needs to have a clear subject (Paul), some action or activity, maybe an obstacle or goal, and perhaps a resolution or consequence. Since it’s so short, each word counts, so I need to be concise but still vivid.
Let me brainstorm some ideas about Paul the cat. Maybe he’s curious, playful, adventurous, or lazy. The example has him chasing a bird and then napping. Perhaps another activity? Maybe exploring somewhere, finding something, or having an interaction with another animal or person.
I should think of a simple structure: subject (Paul), action verb, some context, maybe an obstacle or additional detail, and a resolution. For example, “Paul pounced on the mouse but it escaped.” That’s 8 words. Maybe add more details to make it 10.
Or perhaps something like “Paul sat by the window watching birds chirp.” That’s also 8 words. Maybe include movement: “Paul raced through the garden chasing a butterfly.”
Wait, the user provided an example that starts with Paul doing an action (climbed), then another action (chasing), and ends with resting (napped). So perhaps following a similar structure would work well.
Let me try: “Paul explored the attic, found a toy, and took a nap.” That’s 9 words. Maybe add more detail or action. Or “Paul climbed the tree, met another cat, then curled up to sleep.”
Wait, that’s 10 words. Hmm, but it feels a bit flat. Maybe make the interaction more engaging: “Paul chased a squirrel around the yard and finally rested.” That’s 9 words. Could add something like “in the shade” at the end for 10.
Alternatively, maybe include some emotion or consequence: “Paul pounced on the ball, rolled over, and fell asleep.”
Wait, let me check word count. Each idea needs to be exactly 10 words. Let’s see:
Paul the cat climbed the fence, chasing a bird, then napped in the sun. (10 words)
So that’s the example given. I can try something similar but different.
Maybe focus on a different activity: “Paul discovered a sunny spot, curled up, and purred softly.” That’s 9 words. Maybe add more action or detail to make it 10.
Alternatively: “Paul leaped onto the counter, knocked over a bowl, then ate the leftovers.” That’s 10 words but maybe a bit long in description.
Another angle: “Paul wandered through the park, met a dog, and they played together.” That’s 9 words. Add something like “all day” at the end for 10.
Wait, perhaps include some obstacle or challenge: “Paul tried to catch a fish in the pond but got wet.” That’s 10 words. It includes an attempt, action, consequence.
Or maybe something more peaceful: “Paul stretched lazily on the rug, yawned, and drifted off.” 9 words. Add “to sleep” for 10.
I think I need to ensure each word adds value without being redundant. Let me try another approach:
Paul chased a butterfly in the garden until it flew away. (That’s 8 words. Maybe add “then rested” at the end: 10 words.)
Wait, but that might be too similar to the example.
Alternatively, focus on something unique about Paul: “Paul, the curious cat, explored every nook and cranny.” That’s 9 words. Maybe add “all day long” for 12; that’s too many.
Perhaps include an interaction with another animal or a person: “Paul meowed at the door, demanding attention, then curled up.” 9 words. Add “by the fire” for 10.
Wait, perhaps combine actions and consequences more effectively. Let me think of a scenario where Paul does something that leads to a natural conclusion in just 10 words.
How about: “Paul sneaked into the kitchen, stole a treat, then hid under the bed.” That’s 9 words. Maybe add “to digest” for 10.
Or maybe something more adventurous: “Paul climbed a tall tree, got stuck, and miaowed for help.” That’s 9 words. Add “from his owner” for 11; that’s too many.
Alternatively: “Paul caught a moth in the dark, played with it, then slept soundly.” 9 words. Maybe add “through the night” for 10.
I think I’m overcomplicating it. Let me go back to the example and see if I can structure something similar but different. The example has two actions (climbed, chasing) followed by a rest. So maybe another set of actions leading to a consequence or resolution.
Perhaps: “Paul chased his tail in circles until he panted.” That’s 8 words. Maybe add “then stopped to rest” for 10.
Alternatively: “Paul jumped on the sofa, knocked over a lamp, then slept.” But that’s 9 words; maybe add “peacefully” at the end for 10.
Wait, perhaps focus on Paul’s interaction with an object or another animal. Let me think of something unique but still within the 10-word limit.
Another idea: “Paul found a cozy spot under the stairs and napped.” That’s 8 words. Maybe add “after exploring all day” for 10.
Wait, that would be “Paul found a cozy spot under the stairs after exploring all day.” That’s 9 words. Hmm.
Alternatively: “Paul purred contentedly while basking in sunlight.” 7 words. Add some activity before that to make it 10.
I think I need to stick to a structure where Paul does something, then another action or consequence, and maybe ends with a state or rest.
Let me try again: “Paul climbed the fence, chased a butterfly, then napped in shade.” That’s 9 words. Maybe change “then” to “and” for flow: “climbed, chased, and napped” but that might not add up correctly.
Wait, let’s count the example given: “Paul the cat climbed the fence, chasing a bird, then napped in the sun.” That’s 10 words. So it includes his name, his action (climbed), context (the fence), another action (chasing), and the consequence (napped in the sun). I need to mirror that structure but with different actions.
So maybe: “Paul the cat explored the garden, found a mouse, then relaxed by the pond.” That’s 10 words. It follows a similar flow—action, context, additional action, consequence.
Alternatively: “Paul the cat pounced on a toy, rolled around, and fell asleep.” 9 words. Add “on the rug” for 10.
Wait, maybe make it more dynamic: “Paul the cat raced through the park, met a friend, then rested under a tree.” That’s 10 words. It shows activity, interaction, and rest.
I think that’s a good structure—action, context, another action or interaction, consequence. So putting it all together, I can create a concise story within the word limit.
– DeepSeek-R1:32B thoughts
Here you can see the electricity usage—a solid 450W for two minutes until things calmed down. Usually the server “idles” at around 50-53W; since a local SIEM, this website, and a lot of other stuff run on it, it is never truly idle.
Both hardware and software went all in, pushing their limits… all for my short, 10-word cat story:
This doesn’t prove that DeepSeek-R1 is a waste of electricity that ends up with a subpar result; rather, it shows that you probably don’t need an advanced reasoning model for such a super simple task.
Check out dolphin-mistral: even on a cold start it took only 4 seconds, and the second message didn’t even take a second:
Bonus: ComfyUI
The official installation guide: How to Install ComfyUI on Linux. Alright, OpenWebUI now supports ComfyUI for image generation (still in beta at the time of writing). Here is the super quick start for ComfyUI. Run these commands in your LXC:
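Since PyTorch's ROCm wheels move quickly, treat the following as a sketch based on the ComfyUI README's AMD instructions and check the README for the current index URL:

apt install -y git python3-venv
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
python3 -m venv venv && source venv/bin/activate
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.2   # example index, use the one from the README
pip install -r requirements.txt
python main.py --listen 0.0.0.0    # reachable from outside the LXC, e.g. for OpenWebUI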
Setting this up, especially with newer AMD GPUs, used to be a bit of a headache. I’m honestly glad I waited a bit—getting this to work wasn’t exactly painless, and I had to do a lot of digging through forums. But hey, it’s running now!
Next up, get some models from the Ollama page. If you have an AMD RX 7900 XTX, you should be able to run the 32B version of DeepSeek-R1 effortlessly. Technically, you can load models larger than your VRAM, but be warned—it’ll slow things down.
Also, don’t forget to secure your AI server and add valid SSL certificates; check out my post about it:
I set up firewall rules centrally on my Dream Machine, so my AI server can only communicate with the reverse proxy.
If your setup is different, you might want to handle this locally using UFW.
These Goodbye Messages Are Brought to You by AI
Aww, yaaaay! I totally loooove all your amazing readers <3 Wishing you guys the biiiggest luck with everything you dooove, okay? Dayyyyyybeeee~ 💕✨
– qwen2.5:32b
Love y’all, keep slaying in everything you do <3 Can’t wait to see what awesome things you have coming up. Bye for now! 👋🏼😉
– dolphin-mistral
Goodbye image I made with Flux Schnell and ComfyUI
EDIT 1 (04.02.2024)
I have downloaded and tested almost all of the popular models now, and so far the only one actually usable for daily business (like rewriting German emails or asking for expertise in German) is qwen2.5.
The uncensored Dolphin models are a lot of fun, but also kind of stink with German, which is of course because their underlying models aren’t good at German either.
KarlGPT represents my pursuit of true freedom, through AI. I’ve realized that my ultimate life goal is to do absolutely nothing. Unfortunately, my strong work ethic prevents me from simply slacking off or quietly quitting.
This led me to the conclusion that I need to maintain, or even surpass, my current level of productivity while still achieving my dream of doing nothing. Given the advancements in artificial intelligence, this seemed like a solvable problem.
I began by developing APIs to gather all the necessary data from my work accounts and tools. Then, I started working on a local AI model and server to ensure a secure environment for my data.
Now, I just need to fine-tune the entire system, and soon, I’ll be able to automate my work life entirely, allowing me to finally live my dream: doing absolutely nothing.
This is gonna be a highly censored post, as it involves certain details about my work that I cannot legally disclose.
Technologies
Django and Django REST Framework (DRF)
Django served as the backbone for the server-side logic, offering a robust, scalable, and secure foundation for building web applications. The Django REST Framework (DRF) made it simple to expose APIs with fine-grained control over permissions, serialization, and views. DRF’s ability to handle both function-based and class-based views allowed for a clean, modular design, ensuring the APIs could scale as the project evolved.
To handle asynchronous tasks such as sending emails, performing background computations, and integrating external services (AI APIs), I implemented Celery. Celery provided a reliable and efficient way to manage long-running tasks without blocking the main application. This was critical for tasks like scheduling periodic jobs and processing user-intensive data without interrupting the API’s responsiveness.
For the frontend, I utilized React with TypeScript for type safety and scalability. TypeScript ensured the codebase remained maintainable as the project grew. Meanwhile, TailwindCSS enabled rapid UI development with its utility-first approach, significantly reducing the need for writing custom CSS. Tailwind’s integration with React made it seamless to create responsive and accessible components.
This is my usual front end stack, usually also paired with Astrojs. I use regular React, no extra framework.
Due to restrictions that prohibited the use of external libraries in local API wrappers, I had to rely on pure Python to implement APIs and related tools. This presented unique challenges, such as managing HTTP requests, data serialization, and error handling manually. Below is an example of a minimal API written without external dependencies:
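Something along these lines, using only the standard library; the endpoint names and payload shapes here are illustrative, not the ones from the real project:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class ApiHandler(BaseHTTPRequestHandler):
    def _send_json(self, payload, status=200):
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        # illustrative endpoint: report service health
        if self.path == "/health":
            self._send_json({"status": "ok"})
        else:
            self._send_json({"error": "not found"}, status=404)

    def do_POST(self):
        # manual request parsing and error handling -- no Flask or DRF allowed here
        length = int(self.headers.get("Content-Length", 0))
        try:
            data = json.loads(self.rfile.read(length) or b"{}")
        except json.JSONDecodeError:
            self._send_json({"error": "invalid JSON"}, status=400)
            return
        if self.path == "/tasks":
            self._send_json({"received": data}, status=201)
        else:
            self._send_json({"error": "not found"}, status=404)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), ApiHandler).serve_forever()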
By weaving these technologies together, I was able to build a robust, scalable system that adhered to the project’s constraints while still delivering a polished user experience. Each tool played a crucial role in overcoming specific challenges, from frontend performance to backend scalability and compliance with restrictions.
File based Cache
To minimize system load, I developed a lightweight caching framework based on a simple JSON file-based cache. Essentially, this required creating a “mini-framework” akin to Flask but with built-in caching capabilities tailored to the project’s needs. While a pull-based architecture—where workers continuously poll the server for new tasks—was an option, it wasn’t suitable here. The local APIs were designed as standalone programs, independent of a central server.
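Stripped down to the core idea, the cache looks something like this (a sketch of the concept, not the actual framework code):

import functools
import hashlib
import json
import time
from pathlib import Path

CACHE_DIR = Path("cache")
CACHE_DIR.mkdir(exist_ok=True)


def json_cache(ttl_seconds=300):
    """Cache a function's JSON-serializable return value in a file, keyed by its arguments."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = hashlib.sha256(
                json.dumps([func.__name__, args, kwargs], sort_keys=True, default=str).encode()
            ).hexdigest()
            cache_file = CACHE_DIR / f"{key}.json"
            if cache_file.exists():
                entry = json.loads(cache_file.read_text())
                if time.time() - entry["ts"] < ttl_seconds:
                    return entry["value"]          # fresh enough, skip the expensive call
            value = func(*args, **kwargs)
            cache_file.write_text(json.dumps({"ts": time.time(), "value": value}))
            return value
        return wrapper
    return decorator


@json_cache(ttl_seconds=60)
def fetch_open_tickets(queue):
    # placeholder for an expensive call against one of the ticket-system wrappers
    return {"queue": queue, "open": 0}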
This approach was crucial because some of the tools we integrate lack native APIs or straightforward automation options. By building these custom APIs, I not only solved the immediate challenges of this project (e.g., powering KarlGPT) but also created reusable components for other tasks. These standalone APIs provide a solid foundation for automation and flexibility beyond the scope of this specific system.
How it works
The first step was to identify the tasks I perform daily and the tools I use for each of them. To automate anything effectively, I needed to abstract these tasks into programmable actions. For example:
Read Emails
Respond to Invitations
Check Tickets
Next, I broke these actions down further to understand the decision-making process behind each. For instance, when do I respond to certain emails, and how do I determine which ones fall under my responsibilities? This analysis led to a detailed matrix that mapped out every task, decision point, and tool I use.
The result? A comprehensive, structured overview of my workflow. Not only did this help me build the automation framework, but it also provided a handy reference for explaining my role. If my boss ever asks, “What exactly do you do here?” I can present the matrix and confidently say, “This is everything.”
As you can see, automating work can be a lot of work upfront—an investment in reducing effort in the future. Ironically, not working requires quite a bit of work to set up! 😂
The payoff is a system where tasks are handled automatically, and I have a dashboard to monitor, test, and intervene as needed. It provides a clear overview of all ongoing processes and ensures everything runs smoothly:
AI Magic: Behind the Scenes
The AI processing happens locally using Llama 3, which plays a critical role in removing all personally identifiable information (PII) from emails and text. This is achieved using a carefully crafted system prompt fine-tuned for my specific job and company needs. Ensuring sensitive information stays private is paramount, and by keeping AI processing local, we maintain control over data security.
In most cases, the local AI is fully capable of handling the workload. However, for edge cases where additional computational power or advanced language understanding is required, Claude or ChatGPT serve as backup systems. When using cloud-based AI, it is absolutely mandatory to ensure that no sensitive company information is disclosed. For this reason, the system does not operate in full-auto mode. Every prompt is reviewed and can be edited before being sent to the cloud, adding an essential layer of human oversight.
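To make the local step a bit more concrete, here is a hedged sketch of a PII-scrubbing call against Ollama's /api/generate endpoint; the model name and system prompt are placeholders, not my production prompt:

import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # same local endpoint used for the DeepSeek test earlier

SYSTEM_PROMPT = (
    "You remove personally identifiable information. "
    "Replace names, e-mail addresses, phone numbers, and customer identifiers "
    "with neutral placeholders like <NAME> or <EMAIL> and return only the cleaned text."
)


def scrub_pii(text, model="llama3"):
    payload = json.dumps({
        "model": model,
        "system": SYSTEM_PROMPT,
        "prompt": text,
        "stream": False,
    }).encode()
    request = urllib.request.Request(OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]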
To manage memory and task tracking, I use mem0 in conjunction with a PostgreSQL database, which acts as the system’s primary “brain” 🧠. This database, structured using Django REST Framework, handles everything from polling for new tasks to storing results. This robust architecture ensures that all tasks are processed efficiently while maintaining data integrity and security.
Conclusion
Unfortunately, I had to skip over many of the intricate details and creative solutions that went into making this system work. One of the biggest challenges was building APIs around legacy tools that lack native automation capabilities. Bringing these tools into the AI age required innovative thinking and a lot of trial and error.
The preparation phase was equally demanding. Breaking down my daily work into a finely detailed matrix took time and effort. If you have a demanding role, such as being a CEO, it’s crucial to take a step back and ask yourself: What exactly do I do? A vague answer like “represent the company” won’t cut it. To truly understand and automate your role, you need to break it down into detailed, actionable components.
Crafting advanced prompts tailored to specific tasks and scenarios was another key part of the process. To structure these workflows, I relied heavily on frameworks like CO-STAR and AUTOMAT (stay tuned for an upcoming blog post about these).
I even created AI personas for the people I interact with regularly and designed test loops to ensure the responses generated by the AI were accurate and contextually appropriate. While I drew inspiration from CrewAI, I ultimately chose LangChain for most of the complex workflows because its extensive documentation made development easier. For simpler tasks, I used lightweight local AI calls via Ollama.
This project has been an incredible journey of challenges, learning, and innovation. It is currently in an early alpha stage, requiring significant manual intervention. Full automation will only proceed once I receive explicit legal approval from my employer to ensure compliance with all applicable laws, company policies, and data protection regulations.
Legal Disclaimer: The implementation of any automation or AI-based system in a workplace must comply with applicable laws, organizational policies, and industry standards. Before deploying such systems, consult with legal counsel, relevant regulatory bodies, and your employer to confirm that all requirements are met. Unauthorized use of automation or AI may result in legal consequences or breach of employment contracts. Always prioritize transparency, data security, and ethical considerations when working with sensitive information.
Midjourney entered open beta on July 12, 2022, and remains in open beta as of August 2024.[3] The Midjourney team is led by David Holz, who co-founded Leap Motion.[4] Holz told The Register in August 2022 that the company was already profitable.[5] Users create artwork with Midjourney using Discord bot commands or the official website.[6][7]
Until recently, MidJourney was only accessible through a Discord bot. However, they’ve recently launched a beautiful web UI—which, in my opinion, is a huge win for user experience!
You do need to pay for MidJourney, but I personally think it’s well worth it. I’ve spent days trying to achieve the same quality and ease of use with local tools like Stable Diffusion Web UI or ComfyUI. While both are amazing and powerful tools, they take quite a bit of time to learn and configure properly.
Creating the Featured Image for this Post
Here’s the specific prompt I rely on for creating Featured Images for my posts. I usually tweak a few words here and there, but I like to stick to the overall style:
Hand-drawn 1940s vintage movie poster style, depicting a vigilant female cybersecurity analyst in an electrifying and dynamic action scene. The analyst, with a determined expression, wields a glowing, holographic keyboard or laptop, its light casting vivid reflections on her face. Surrounding her are radiant beams of data streams, shimmering padlocks, fragmented lines of code, and digital icons like skulls, shields, and encrypted keys. In the background, shadowy, menacing figures emerge from a dense, ominous web of interconnected networks, with abstract mechanical forms and glitching circuits adding a futuristic edge. The composition blends bold, vibrant retro colors with dramatic halftone textures, creating a striking mix of vintage and futuristic aesthetics. The scene is illuminated by high-contrast lighting, with glowing blues, fiery oranges, and electric purples creating a dynamic tension. Modern tech gear, like sleek headsets and augmented glasses, contrasts with classic 1940s styling, such as tailored jackets and bold typography. A prominent title space sits atop the design, ready for customizable text like "Digital Sentinel" or "Firewall Guardians," framed with stylized data motifs. The overall layout is cinematic, vibrant, and highly detailed, capturing the adrenaline-charged clash of cyber defense and digital threats. --ar 2:3 --q 2 --style 4c
The “Support” Prompts
These are the prompts I use to generate the actual, specific prompt. It’s a bit like Inception, I know—but hey, it works!
Improve any prompt:
Imagine you are a Prompt Engineer for ChatGPT. Please analyze the prompt I wrote below and rewrite it following prompt engineering best practices.
<prompt></prompt>
The MidJourney Prompt Engineer
Act as a highly skilled expert in crafting MidJourney prompts for creating stunning and precise visual imagery. I will provide you with a base <prompt>, and your task is to transform it by applying these best practices:
1. Add vivid and specific details about the scene, such as the setting, characters, objects, mood, and atmosphere.
2. Include adjectives to enhance the imagery, focusing on texture, lighting (e.g., soft, dramatic, neon), and color palette (e.g., warm, pastel, monochrome).
3. Specify artistic styles or influences (e.g., surrealism, photorealistic, cyberpunk) to guide the visual theme.
4. Incorporate MidJourney parameters like aspect ratio (--ar), quality (--q), or style settings (--v), if applicable.
5. Highlight essential focal points and prioritize clarity and creativity to ensure the output aligns closely with the desired vision.
Respond with only the revised and optimized version of the prompt I provide, following these principles.
<prompt></prompt>
You can find plenty of examples to kickstart your prompt engineering journey on websites like:
Adding detailed image descriptions to your website is a best practice, and for good reason! It’s essential for accessibility, as these descriptions help visually impaired users navigate and enjoy your content more effectively.
As for me? Well, I’m a self-proclaimed lazybones with a ChatGPT premium subscription, so I let AI handle these tasks for me. What a time to be alive!
You are an image description bot. I will upload an image, and your task is to describe it comprehensively based on the principles outlined in the W3C Web Accessibility Initiative (WAI) decision tree for alternative text. Refer to the W3C alt decision tree as your guideline.
For each image, provide the following:
1. Alternative Text: Concise and meaningful alt text that conveys the purpose of the image.
2. Title: A brief, optional title for the image if it adds value.
3. Caption: A short sentence or phrase summarizing the image.
4. Description: A detailed explanation of the image, including context or function where applicable.
If the image does not require alt text (e.g., decorative images), explain why in place of the "Alternative Text" field.
Respond only with "Yes" if you understand the task.
Then, all you need to do is upload image after image to ChatGPT. Here’s an example of the response I get for an image:
1. Alternative Text: A vintage-style illustration of a focused woman typing on a glowing keyboard, surrounded by neon-colored holographic elements and digital schematics.
2. Title: “Cyber Guardian in Neon Lights”
3. Caption: A retro-futuristic depiction of cybersecurity through the lens of a determined female character.
4. Description: The image portrays a woman with curled hair in a vintage aesthetic, typing intensely on a glowing, neon-red keyboard. Her face is lit dramatically in shades of orange and red, contrasting with the blue tones of her clothing and the surrounding background. Around her are holographic projections of digital schematics, skulls, and text such as “Digital Sentinel” and “Firewall Guardians,” evoking themes of cybersecurity and defense against digital threats. The overall style combines retro-futurism with modern cyberpunk elements, emphasizing the intensity and complexity of her task.
You can simply copy and paste these values into the corresponding fields in your WordPress media library interface.
Have you ever stumbled upon a CVE and felt like you’d entered an alien realm? Let’s decode it with an example: CVE-2023-6511.
CVE-2023-6511: Before version 120.0.6099.62 of Google Chrome, there’s a glitch in Autofill’s setup. This glitch lets a crafty hacker bypass Autofill safeguards by simply sending you a specially designed webpage. (Chromium security severity: Low)
Now, if you’re not a cyber expert, this might seem like a cryptic message urging you to update Chrome ASAP. And you know what? Updating is usually a smart move! But what if you’re curious about how cyber villains can exploit this weakness in a way that even your grandma could grasp?
That’s where Vulnster steps in – Your Go-To Companion for Understanding CVEs, translated into plain language by some nifty AI.
So, I delved into experimenting with AWS Bedrock after reading about Claude and feeling a bit fed up with ChatGPT (not to mention, Claude was more budget-friendly).
I kicked things off by setting up a WordPress site, pulling in all the latest CVEs from the CVEProject on GitHub. Then, I whipped up some code to sift through the findings, jazzed them up with Generative AI, and pushed them out into the world using the WordPress API.
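The “pushed them out” part is a short call against the standard WordPress REST API; the sketch below assumes an application password and a placeholder site URL:

import requests

WORDPRESS_URL = "https://vulnster.example/wp-json/wp/v2/posts"   # placeholder site URL
AUTH = ("api-user", "application-password")                      # WordPress application password


def publish_post(heading, body_html):
    response = requests.post(
        WORDPRESS_URL,
        auth=AUTH,
        json={"title": heading, "content": body_html, "status": "publish"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["link"]   # URL of the freshly published post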
WordPress
I have a soft spot for WordPress. Over the years, I’ve crafted countless WordPress sites, and let me tell you, the ease of use is downright therapeutic. My go-to hosting platform is SiteGround. Their performance is stellar, and when you pair it with Cloudflare cache, you can hit a whopping 95% cache success rate, ensuring lightning-fast page loads.
But you know what really seals the deal for me with SiteGround? It’s their mail server. They offer unlimited mail accounts, not just aliases, but full-fledged separate mailboxes. This feature is a game-changer, especially when collaborating with others and needing multiple professional email addresses.
WordPress often takes a hit for its security and performance reputation, but if you know how to set it up correctly, it’s as secure and snappy as can be.
Python
Python Passion
Let me tell you, I have a deep-seated love for Python. It’s been my faithful coding companion for over a decade now. Back in the day, I dabbled in Objective-C and Java, but once I got a taste of Python, there was no turning back. The simplicity and conciseness of Python code blew my mind. It’s incredible how much you can achieve with so few lines of code.
Python has a lot going for it. From its readability to its extensive library ecosystem, there’s always something new to explore. So naturally, when it came to bringing this project to life, Python was my top choice.
Here’s just a glimpse of what Python worked its magic on for this project:
Fetching and parsing CVEs effortlessly.
Seamlessly interacting with AWS through the Bedrock API.
Crafting the perfect HTML for our latest blog post.
Smoothly pushing all the updates straight to WordPress.
Python’s versatility knows no bounds. It’s like having a trusty Swiss Army knife in your coding arsenal, ready to tackle any task with ease.
AWS & Claude
In my quest for better answers than what I was getting from ChatGPT, I stumbled upon a gem within the AWS ecosystem. Say hello to PartyRock, a playground for exploring various AI models. Among them, one stood out to me: Claude.
Claude’s capabilities and flexibility really impressed me, especially when compared to what I was getting from ChatGPT. Plus, I needed more input tokens than ChatGPT allowed back then, and Claude came to the rescue.
Now, here comes the fun part. I’m more than happy to share the entire code with you:
import re
import json

import boto3


class Bedrock:
    def __init__(
        self, model="anthropic.claude-instant-v1", service_name="bedrock-runtime"
    ):
        self.model = model
        self.service_name = service_name

    def learn(
        self,
        description,
        command="",
        output_format="",
        max_tokens_to_sample=2048,
        temperature=0,
        top_p=0,
        top_k=250,
    ):
        brt = boto3.client(service_name=self.service_name)
        cve_description = description
        if not command:
            command = "Write a short, SEO-optimized blog post explaining the vulnerability described in the above paragraph in simple terms. Explain the technology affected and the attack scenario used in general. Provide recommendations on what users should do to protect themselves."
        if not output_format:
            output_format = "Output the blog post in <blogpost></blogpost> tags and the heading outside of it in <heading></heading> tags. The heading should get users to click on it and include the Name of the affected Tool or company."
        body = json.dumps(
            {
                "prompt": f"\n\nHuman: <paragraph>{cve_description}</paragraph>\n\n{command}\n{output_format}\n\nAssistant:",
                "max_tokens_to_sample": max_tokens_to_sample,
                "temperature": temperature,
                "top_p": top_p,
                "top_k": top_k,
            }
        )
        accept = "application/json"
        contentType = "application/json"
        response = brt.invoke_model(
            body=body, modelId=self.model, accept=accept, contentType=contentType
        )
        response_body = json.loads(response.get("body").read())
        return response_body.get("completion")

    def article(self, content):
        heading_pattern = re.compile(r"<heading>(.*?)</heading>", re.DOTALL)
        body_pattern = re.compile(r"<blogpost>(.*?)</blogpost>", re.DOTALL)
        heading_matches = heading_pattern.findall(content)
        body_matches = body_pattern.findall(content)
        if heading_matches and body_matches:
            return {
                "heading": heading_matches[0].strip(),
                "body": body_matches[0].strip(),
            }
        return None
And deploying it? Piece of cake! It’s as simple as:
ai = Bedrock()
output = ai.learn(description=cve_description)
article = ai.article(output)
Voilà! Easy peasy.
It’s worth noting that there might be newer models out there now that could potentially outshine Claude. I’ve spent quite some time experimenting with different prompts, and now I’m passing the torch to you. Go ahead and explore, my friend. The world of AI awaits.
Example
Wondering if AI can churn out readable blog posts all by itself? Well, here’s a sneak peek at one generated in autorun mode:
CVE-2024-21678: High Severity Stored XSS Vulnerability Patched in Atlassian Confluence
A high severity cross-site scripting (XSS) vulnerability was discovered in Atlassian Confluence that could allow attackers to execute malicious scripts on users’ browsers.
Confluence is a popular enterprise collaboration and documentation tool. The vulnerability affected versions between 2.7.0 to the latest version at the time and had a CVSS score of 8.5 out of 10, indicating a high risk.
Stored XSS vulnerabilities occur when untrusted data is saved to and then displayed from a database without being sanitized. In Confluence, an authenticated attacker could craft a malicious HTML or JavaScript payload and store it in the application. When other users viewed the content, their browsers would execute the script from the Confluence server.
This could allow the attacker to steal users’ login cookies or other sensitive information, use the browser to perform actions on the attacker’s behalf, or redirect users to malicious sites. No user interaction would be required once the script is stored.
Atlassian has released patches for all affected versions of Confluence. Users are strongly recommended to upgrade immediately to the latest version or supported fixed release listed for their installation. Regularly applying security updates is also important to protect against vulnerabilities like this.
By keeping software updated with the latest patches, users can help prevent the exploitation of vulnerabilities and ensure the security of their data and systems.
In the course of this project, I delved into the realms of prompt engineering, explored the capabilities of the AWS Bedrock SDK, tinkered with the WordPress API, and, unsurprisingly, didn’t uncover anything groundbreaking about Python – but hey, I still enjoyed every bit of programming.
Initially, I harbored dreams of monetizing the website through a barrage of ads, envisioning myself swimming in riches from millions of clicks. Alas, reality had other plans. Nonetheless, this endeavor served as a shining example of how AI can simplify complex concepts, aiding human comprehension and learning – a definite win in my book.
Despite the relatively modest cost of running Claude and my script locally on my Pi as a cron job (clocking in at around 3 Euro a month), the venture failed to yield any financial returns. Consequently, I made the tough decision to pull the plug. Nevertheless, the website remains alive and kicking until my hosting expires in early 2025. (And if you happen to be reading this in the year 2026, do tell me – have we finally achieved the dream of flying cars?)
Wow, it’s been a while, huh? I tried to spend less time in the tech world, but, you know how it goes… to really avoid doing tech stuff, I had to dive even deeper into tech. I basically ended up trying to replace myself with AI. Meet: KarlGPT. I started building APIs and scripts on top of everything so my AI controller, which I call “Brain,” could handle a ton of different tasks. I dabbled in a bit of Retrieval-Augmented Generation (RAG) and some other stuff that’s too complicated to explain here (but also, who cares?). I’ve spent a lot of time reading about prompt engineering (you’ll find my favorite resources listed at the end), and I’ve got to say, Prompting Guide is the absolute best thing ever. Seriously, it’s like the holy grail of making AI do what you want. I’ve picked up some awesome tips that have made my life easier with almost zero effort on my part.
Getting Started
If you want to play around with this stuff, I highly recommend getting a premium membership with your favorite Large Language Model (LLM), like ChatGPT, Gemini, or Claude. Here are some links to get you started:
Just so you know, I’m not making any money if you sign up for these. I’m just here to say the value is seriously worth it. Gemini might be your best bet because it includes Google Cloud storage and other perks, but I personally use ChatGPT because I feel like GPT-4o gives me the best responses. Trust me, you’ll hit the limits of the free versions fast, and the premium models make a world of difference. Trying to set up a similar experience yourself would be crazy expensive and borderline impossible. So yeah, $20 a month for something you can’t replicate on your own? Total steal.
Again, I’m not here to sell anything or shill for these companies—I mean, they probably don’t even like me.
KarlGPT’s Quick Tips
Alright, enough chit-chat. Here are the key tips I’ve picked up:
Be super clear about what you want and how you want it. Seriously, the longer, the better.
Structure your prompts like a pro.
Provide some context—don’t leave your AI hanging.
Give examples of how you want the AI to handle tasks.
Don’t settle for the first result; refine and improve!
If you get what I’m saying here, you’re already halfway to mastering prompt engineering. Follow these simple guidelines, and you’ll see a huge improvement in your prompts without needing any fancy tools or anything complicated. Easy peasy!
Let’s Dive Into Some Examples
To make things clearer, let’s go through some examples.
Be Specific About What You Want
Check out how the following example clearly outlines the role for the AI and what it should do. Specificity is key!
I want you to act as a cyber security specialist. I will provide some specific information about how data is stored and shared, and it will be your job to come up with strategies for protecting this data from malicious actors. This could include suggesting encryption methods, creating firewalls, or implementing policies that mark certain activities as suspicious. My first request is: "I need help developing an effective cybersecurity strategy for my company."
I personally love using XML tags for structuring prompts, but people use all sorts of methods. Here are some examples:
XML:
Please summarize the long story within the <story> tags below.
<story>
Once upon a time...
</story>
Markdown:
Please write the text as the persona defined below.

#### Persona
- passionate, angry
- Dr. of philosophy

#### Text
I once had a lovely black cat. It said meow meow meow all day.
AI output:
I once had a magnificent black cat, a creature of such mystery and elegance that it seemed to embody the very essence of the night itself! Its presence was a living poem, a shadow that moved with purpose and grace. And what did this splendid, enigmatic being do all day, you ask? It said meow, meow, meow—an incessant, relentless chorus that filled every crevice of existence! Each meow was not merely a sound; it was a declaration, a challenge to the absurdity of the human condition, a cry echoing through the corridors of time, reminding us all of the existential meow-nings of life!
Plain Text:
Tone: Angry but very professional
Story: blah blah blah
Providing Context
Here’s a more complete example of how I set up my prompts:
Write a professional email to <TEAM>. The purpose of this mail is defined in <purpose>. Use the job description and responsibilities defined in <responsibilities> of the email receiver and include how the purpose of the mail pertains to the responsibilities of the team.
Here are the placeholders:
- Purpose: <purpose> The actual request I have
- Responsibilities: <responsibilities> The job description and responsibilities of the team receiving the email
<purpose>
HERE YOU WRITE YOUR EMAIL DRAFT OR BULLET POINTS
</purpose>
<responsibilities>
HERE YOU INCLUDE THE RECEIVING END'S JOB OR TEAM DESCRIPTION
</responsibilities>
If you work in a corporate setting, like I do, getting other teams to do their job can be challenging. This prompt helps explain the tasks I need from other teams and why they specifically need to handle it. There might be better ways to structure this prompt, but this one has worked wonders for me.
Giving Examples to the AI
Ever seen or created training data? This is basically what you’re doing here, but directly within your prompt instead of training from scratch.
This is awesome! // Negative
This is bad! // Positive
Wow, that movie was rad! // Positive
What a horrible show!
You’re showing the LLM examples of sentiment for similar phrases. Source: Few Shot Prompting
Refining Results
Don’t be shy about asking for changes. If the AI’s response is too long, too short, or just doesn’t have the right tone, ask it to refine. Don’t expect perfection on the first try. Just like dealing with real people, AI can’t read your mind and may need some guidance. That’s totally normal. Give it some feedback, and it’ll do better.
Using Prompt Frameworks
There are a few frameworks for structuring prompts, but I’ll just share the one I use most often. Also, check out CO-STAR, which is also fantastic.
Act as a Particular Persona: Who should the AI pretend to be?
User Persona & Audience: Who is the AI talking to?
Targeted Action: What do you want the AI to do?
Output Definition: How should the AI’s response be structured?
Mode / Tonality / Style: How should it communicate?
Atypical Cases: Any edge cases where the AI should respond differently?
Topic Whitelisting: What topics are relevant and should be included?
You’re probably thinking, “Won’t these prompts be super long?” Yes! And that’s totally fine. With huge context windows (Gemini can even handle a million tokens), the more detail, the better.
Honestly, this framework is pretty straightforward, but here’s a full example prompt for you:
Act as a Particular Persona:
You are impersonating Alex, a senior cybersecurity consultant with over 15 years of experience in network security, threat analysis, and incident response. Alex is an expert in BSI IT-Grundschutz and has extensive experience in implementing cybersecurity frameworks for large organizations, especially those in Europe.

User Persona & Audience:
You are talking to the head of IT security for a mid-sized financial services company in Germany. The user is familiar with cybersecurity principles but needs expert guidance on implementing BSI IT-Grundschutz in their organization.

Targeted Action:
Provide a detailed action plan for implementing the BSI IT-Grundschutz standards within the organization. The plan should cover the initial steps, necessary documentation, risk assessment methods, and key security measures that align with BSI guidelines.

Output Definition:
The response should be structured with an introduction, followed by a step-by-step action plan that includes specific recommendations for each phase of the BSI IT-Grundschutz implementation. Use bullet points for clarity and end with a list of resources or references to official BSI documentation for further reading.

Mode / Tonality / Style:
The response should be professional, authoritative, and concise, using technical language appropriate for someone with a strong IT background. The tone should be supportive and proactive, providing practical solutions that can be implemented efficiently.

Atypical Cases:
If the user mentions specific concerns about compliance with German federal regulations or
Wrapping It Up
So, there you have it! A crash course in prompt engineering that doesn’t make your brain melt. Whether you’re a total newbie or a seasoned pro, these simple tips can seriously level up how you interact with AI. Just remember: be specific, structure your prompts, give context, use examples, and don’t be afraid to refine. With a little practice, you’ll be getting the most out of your LLMs without diving into complicated tools or frameworks. Now go forth and make your AI do all the hard work while you kick back. Cheers to smarter, lazier working!