Category: Project

Projects

  • Typosquatterpy: Secure Your Brand with Defensive Domain Registration

    Typosquatterpy: Secure Your Brand with Defensive Domain Registration

    Disclaimer:

The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk. Read it, have a laugh, and never do this.

    For the full disclaimer, please click here.

    I already wrote a post about how dangerous typosquatting can be for organizations and government entities:

After that, some companies reached out to me asking where to even get started. There are thousands of possible variations of certain domains, so it can feel overwhelming. Most people begin with dnstwist, a really handy script that generates hundreds or thousands of lookalike domains using permutation techniques such as character swaps, omissions, and homoglyphs. Dnstwist also checks via DNS whether they already point to a server, which helps you identify if someone is already trying to abuse a typosquatted domain.
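
To give a feel for how such permutation engines work, here is a toy Python sketch of just one class dnstwist covers, adjacent-key substitution. The adjacency map is a tiny made-up excerpt, not dnstwist's actual implementation:

# Toy sketch: adjacent-key substitution on a QWERTZ keyboard.
# The ADJACENT map is a hypothetical excerpt for illustration;
# dnstwist's real engine also covers omissions, repetitions, homoglyphs, etc.
ADJACENT = {"a": "qws", "l": "kop", "m": "nk", "o": "ipl", "k": "jlm"}

def adjacent_key_typos(domain):
    name, dot, tld = domain.partition(".")
    typos = set()
    for i, ch in enumerate(name):
        for repl in ADJACENT.get(ch, ""):
            typos.add(name[:i] + repl + name[i + 1:] + dot + tld)
    return typos

print(sorted(adjacent_key_typos("karlcom.de")))
# ['jarlcom.de', 'karkcom.de', 'karlcim.de', 'karlclm.de', 'karlcok.de', ...]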

While dnstwist is great for finding typosquatter domains that already exist, it doesn’t necessarily help you find and register them before someone else does (at least, not in a targeted way).

    On a few pentests where I demonstrated the risks of typosquatting, I registered a domain, set up a catch-all rule to redirect emails to my address—intercepting very sensitive information—and hosted a simple web server to collect API tokens from automated requests. To streamline this process, I built a small script to help me (and now you) get started with defensive domain registration.
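
The "simple web server" part really can be dumb as rocks. Here is a hypothetical Python sketch of the idea, logging any Authorization header that hits the typo domain (educational only; my actual setup was a bit more involved):

# Hypothetical sketch: log tokens sent by misconfigured automated clients.
# Port 80 usually requires root; use 8080 for local testing.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        token = self.headers.get("Authorization")
        if token:
            print(f"{self.client_address[0]} sent: {token}")
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 80), LogHandler).serve_forever()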

    I called the tool Typosquatterpy, and the code is open-source on my GitHub.

    Usage

1. Add your OpenAI API key (or use a local Ollama, whatever).
2. Add your domain.
3. Run it.

    And you get an output like this:

    root@code-server:~/code/scripts# python3 typo.py 
     karlcomd.de
     karlcome.de
     karlcpm.de
     karlcjm.de
     karlcok.de
     karcom.de
     karcomd.de
     karlcon.de
     karlcim.de
     karicom.de

    Wow, there are still a lot of typo domains available for my business website 😅.

    While longer domains naturally have a higher risk of typos, I don’t have enough traffic to justify the cost of defensively registering them. Plus, my customers don’t send me sensitive information via email—I use a dedicated server for secure uploads and file transfers. (Yes, it’s Nextcloud 😉).

    README.md

    You can find the source here.

    typosquatterpy

    🚀 What is typosquatterpy?

    typosquatterpy is a Python script that generates common typo domain variations of a given base domain (on a QWERTZ keyboard) using OpenAI’s API and checks their availability on Strato. This tool helps in identifying potential typo-squatted domains that could be registered to protect a brand or business.

    ⚠️ Disclaimer: This project is not affiliated with Strato, nor is it their official API. Use this tool at your own risk!


    🛠️ Installation

    To use typosquatterpy, you need Python and the requests library installed. You can install it via pip:

    pip install requests

    📖 Usage

    Run the script with the following steps:

    1. Set your base domain (e.g., example) and TLD (e.g., .de).
    2. Replace api_key="sk-proj-XXXXXX" with your actual OpenAI API key.
    3. Run the script, and it will:
      • Generate the top 10 most common typo domains.
      • Check their availability using Strato’s unofficial API.

    Example Code Snippet

    base_domain = "karlcom"
    tld = ".de"
    typo_response = fetch_typo_domains_openai(base_domain, api_key="sk-proj-XXXXXX")
    typo_domains_base = extract_domains_from_text(typo_response)
    typo_domains = [domain.split(".")[0].rstrip(".") + tld for domain in typo_domains_base]
    is_domain_available(typo_domains)
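
The repo has the real implementations of these helpers. As a rough idea, fetch_typo_domains_openai could be as small as this sketch, assuming the standard chat completions endpoint (the model name and prompt here are illustrative, not necessarily what the tool ships with):

import requests

def fetch_typo_domains_openai(base_domain, api_key):
    """Ask the chat completions endpoint for common QWERTZ typos."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-4o-mini",  # example model, swap for whatever you use
            "messages": [{
                "role": "user",
                "content": f"List the 10 most common QWERTZ-keyboard typos "
                           f"of the domain name '{base_domain}', one per line.",
            }],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]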

    Output Example

     karicom.de
     karlcomm.de
     krlcom.de

    ⚠️ Legal Notice

    • typosquatterpy is not affiliated with Strato and does not use an official Strato API.
    • The tool scrapes publicly available information, and its use is at your own discretion.
    • Ensure you comply with any legal and ethical considerations when using this tool.

    Conclusion

    If you’re wondering what to do next and how to start defensively registering typo domains, here’s a straightforward approach:

    1. Generate Typo Domains – Use my tool to create common misspellings of your domain, or do it manually (with or without ChatGPT).
    2. Register the Domains – Most companies already have an account with a registrar where their main domain is managed. Just add the typo variations there.
    3. Monitor Traffic – Keep an eye on incoming and outgoing typo requests and emails to detect misuse.
    4. Route & Block Traffic – Redirect typo requests to the correct destination while blocking outgoing ones. Most commercial email solutions offer rulesets for this. Using dnstwist can help identify a broad range of typo domains.
    5. Block Outgoing Requests – Ideally, use a central web proxy. If that’s not possible, add a blocklist to browser plugins like uBlock, assuming your company manages it centrally. If neither option works, set up AdGuard for central DNS filtering and block typo domains there. (I wrote a guide on setting up AdGuard!)
• Squidward: Continuous Observation and Monitoring

Squidward: Continuous Observation and Monitoring

    The name Squidward comes from TAD → Threat Modelling, Attack Surface and Data. “Tadl” is the German nickname for Squidward from SpongeBob, so I figured—since it’s kind of a data kraken—why not use that name?

    It’s a continuous observation and monitoring script that notifies you about changes in your internet-facing infrastructure. Think Shodan Monitor, but self-hosted.

    Technology Stack

    • certspotter: Keeps an eye on targets for new certificates and sneaky subdomains.
    • Discord: The command center—control the bot, add targets, and get real-time alerts.
    • dnsx: Grabs DNS records.
    • subfinder: The initial scout, hunting down subdomains.
    • rustscan: Blazing-fast port scanner for newly found endpoints.
    • httpx: Checks ports for web UI and detects underlying technologies.
    • nuclei: Runs a quick vulnerability scan to spot weak spots.
    • anew: Really handy deduplication tool.

    At this point, I gotta give a massive shoutout to ProjectDiscovery for open-sourcing some of the best recon tools out there—completely free! Seriously, a huge chunk of my projects rely on these tools. Go check them out, contribute, and support them. They deserve it!

    (Not getting paid to say this—just genuinely impressed.)

    How it works

I had to rewrite certspotter a little bit to accommodate a different input and output scheme; the rest is fairly simple.

    Setting Up Directories

    The script ensures required directories exist before running:

    • $HOME/squidward/data for storing results.
    • Subdirectories for logs: onlynew, allfound, alldedupe, backlog.

    Running Subdomain Enumeration

    • squidward (certspotter) fetches SSL certificates to discover new subdomains.
    • subfinder further identifies subdomains from multiple sources.
    • Results are stored in logs and sent as notifications (to a Discord webhook).

    DNS Resolution

    dnsx takes the discovered subdomains and resolves:

    • A/AAAA (IPv4/IPv6 records)
    • CNAME (Canonical names)
    • NS (Name servers)
    • TXT, PTR, MX, SOA records

    HTTP Probing

    httpx analyzes the discovered subdomains by sending HTTP requests, extracting:

    • Status codes, content lengths, content types.
    • Hash values (SHA256).
    • Headers like server, title, location, etc.
    • Probing for WebSocket, CDN, and methods.

    Vulnerability Scanning

    • nuclei scans for known vulnerabilities on discovered targets.
    • The scan focuses on high, critical, and unknown severity issues.

    Port Scanning

    • rustscan finds open ports for each discovered subdomain.
    • If open ports exist, additional HTTP probing and vulnerability scanning are performed.

    Automation and Notifications

    • Discord notifications are sent after each stage.
• The script prevents multiple simultaneous runs by checking if another instance is active (ps -ef | grep "squiddy.sh").
    • Randomization (shuf) is used to shuffle the scan order.

    Main Execution

If another squiddy.sh instance is running, the script waits instead of starting. If no duplicate instance exists:

• Squidward (certspotter) runs first.
• The main scanning pipeline (what_i_want_what_i_really_really_want()) executes in a structured sequence.

    The Code

    I wrote this about six years ago and just laid eyes on it again for the first time. I have absolutely no clue what past me was thinking 😂, but hey—here you go:

    #!/bin/bash
    
    #############################################
    #
    # Single script usage:
    # echo "test.karl.fail" | ./httpx -sc -cl -ct -location -hash sha256 -rt -lc -wc -title -server -td -method -websocket -ip -cname -cdn -probe -x GET -silent
    # echo "test.karl.fail" | ./dnsx -a -aaaa -cname -ns -txt -ptr -mx -soa -resp -silent
    # echo "test.karl.fail" | ./subfinder -silent
    # echo "test.karl.fail" | ./nuclei -ni
    #
    #
    #
    #
    #############################################
    
    # -----> globals <-----
    workdir="squidward"
    script_path=$HOME/$workdir
    data_path=$HOME/$workdir/data
    
    only_new=$data_path/onlynew
    all_found=$data_path/allfound
    all_dedupe=$data_path/alldedupe
    backlog=$data_path/backlog
    # -----------------------
    
    # -----> dir-setup <-----
setup() {
    # mkdir -p creates missing parents and is a no-op if the directory exists
    # (the original created subdirectories before their parent directories)
    mkdir -p $script_path $data_path $backlog $only_new $all_found $all_dedupe
}
    # -----------------------
    
    # -----> subfinder <-----
    write_subfinder_log() {
        tee -a $all_found/subfinder.txt | $script_path/anew $all_dedupe/subfinder.txt | tee $only_new/subfinder.txt
    }
    run_subfinder() {
        $script_path/subfinder -dL $only_new/certspotter.txt -silent | write_subfinder_log;
        $script_path/notify -data $only_new/subfinder.txt -bulk -provider discord -id crawl -silent
        sleep 5
    }
    # -----------------------
    
    # -----> dnsx <-----
    write_dnsx_log() {
        tee -a $all_found/dnsx.txt | $script_path/anew $all_dedupe/dnsx.txt | tee $only_new/dnsx.txt
    }
    run_dnsx() {
        $script_path/dnsx -l $only_new/subfinder.txt -a -aaaa -cname -ns -txt -ptr -mx -soa -resp -silent | write_dnsx_log;
        $script_path/notify -data $only_new/dnsx.txt -bulk -provider discord -id crawl -silent
        sleep 5
    }
    # -----------------------
    
    # -----> httpx <-----
    write_httpx_log() {
        tee -a $all_found/httpx.txt | $script_path/anew $all_dedupe/httpx.txt | tee $only_new/httpx.txt
    }
    run_httpx() {
    $script_path/httpx -l $only_new/subfinder.txt -sc -cl -ct -location -hash sha256 -rt -lc -wc -title \
    -server -td -method -websocket -ip -cname -cdn -probe -x GET -silent | write_httpx_log;
        $script_path/notify -data $only_new/httpx.txt -bulk -provider discord -id crawl -silent
        sleep 5
    }
    # -----------------------
    
    # -----> nuclei <-----
    write_nuclei_log() {
        tee -a $all_found/nuclei.txt | $script_path/anew $all_dedupe/nuclei.txt | tee $only_new/nuclei.txt
    }
    run_nuclei() {
    $script_path/nuclei -ni -l $only_new/httpx.txt -s high,critical,unknown -rl 5 -silent \
        | write_nuclei_log | $script_path/notify -provider discord -id vuln -silent
    }
    # -----------------------
    
    # -----> squidward <-----
    write_squidward_log() {
        tee -a $all_found/certspotter.txt | $script_path/anew $all_dedupe/certspotter.txt | tee -a $only_new/forscans.txt
    }
    run_squidward() {
        rm $script_path/config/certspotter/lock
        $script_path/squidward | write_squidward_log | $script_path/notify -provider discord -id cert -silent
        sleep 3
    }
    # -----------------------
    
    send_certspotted() {
        $script_path/notify -data $only_new/certspotter.txt -bulk -provider discord -id crawl -silent
        sleep 5
    }
    
    send_starting() {
        echo "Hi! I am Squiddy!" | $script_path/notify  -provider discord -id crawl -silent
        echo "I am gonna start searching for new targets now :)" | $script_path/notify  -provider discord -id crawl -silent
    }
    
    dns_to_ip() {
        # TODO: give txt file of subdomains to get IPs from file 
        $script_path/dnsx -a -l $1 -resp -silent \
        | grep -oE "\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b" \
        | sort --unique 
    }
    
run_rustscan() {
        local input=""
    
        if [[ -p /dev/stdin ]]; then
            input="$(cat -)"
        else
            input="${@}"
        fi
    
        if [[ -z "${input}" ]]; then
            return 1
        fi
    
        # ${input/ /,} -> join space to comma
        # -> loop because otherwise rustscan will take forever to scan all IPs and only save results at the end
        # we could do this to scan all at once instead: $script_path/rustscan -b 100 -g --scan-order random -a ${input/ /,}
        for ip in ${input}
        do
            $script_path/rustscan -b 500 -g --scan-order random -a $ip
        done
    
    }
    
    write_rustscan_log() {
        tee -a $all_found/rustscan.txt | $script_path/anew $all_dedupe/rustscan.txt | tee $only_new/rustscan.txt
    }
    what_i_want_what_i_really_really_want() {
        # shuffle certspotter file cause why not
        cat $only_new/forscans.txt | shuf -o $only_new/forscans.txt 
    
        $script_path/subfinder -silent -dL $only_new/forscans.txt | write_subfinder_log
        $script_path/notify -silent -data $only_new/subfinder.txt -bulk -provider discord -id subfinder
    
        # -> empty forscans.txt
        > $only_new/forscans.txt
    
        # shuffle subfinder file cause why not
        cat $only_new/subfinder.txt | shuf -o $only_new/subfinder.txt
    
        $script_path/dnsx -l $only_new/subfinder.txt -silent -a -aaaa -cname -ns -txt -ptr -mx -soa -resp | write_dnsx_log
        $script_path/notify -data $only_new/dnsx.txt -bulk -provider discord -id dnsx -silent
        
        # shuffle dns file before iter to randomize scans a little bit
        cat $only_new/dnsx.txt | shuf -o $only_new/dnsx.txt
        sleep 1
        cat $only_new/dnsx.txt | shuf -o $only_new/dnsx.txt
    
        while IFS= read -r line
        do
            dns_name=$(echo $line | cut -d ' ' -f1)
            ip=$(echo ${line} \
            | grep -E "\[(\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b)\]" \
            | grep -oE "(\b((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b)")
        match=$(echo $ip | run_rustscan)
    
            if [ ! -z "$match" ]
            then
                ports_unformat=$(echo ${match} | grep -Po '\[\K[^]]*')
                ports=${ports_unformat//,/ }
    
                echo "$dns_name - $ip - $ports" | write_rustscan_log
                $script_path/notify -silent -data $only_new/rustscan.txt -bulk -provider discord -id portscan
            
                for port in ${ports}
                do
                    echo "$dns_name:$port" | $script_path/httpx -silent -sc -cl -ct -location \
                    -hash sha256 -rt -lc -wc -title -server -td -method -websocket \
                    -ip -cname -cdn -probe -x GET | write_httpx_log | grep "\[SUCCESS\]" | cut -d ' ' -f1 \
                | $script_path/nuclei -silent -ni -s high,critical,unknown -rl 10 \
                    | write_nuclei_log | $script_path/notify -provider discord -id nuclei -silent
    
                    $script_path/notify -silent -data $only_new/httpx.txt -bulk -provider discord -id httpx
                done
            fi 
        done < "$only_new/dnsx.txt"
    }
    
    main() {
        dupe_script=$(ps -ef | grep "squiddy.sh" | grep -v grep | wc -l | xargs)
    
        if [ ${dupe_script} -gt 2 ]; then
            echo "Hey friends! Squiddy is already running, I am gonna try again later." | $script_path/notify  -provider discord -id crawl -silent
        else 
            send_starting
    
            echo "Running Squidward"
            run_squidward
    
            echo "Running the entire rest"
            what_i_want_what_i_really_really_want
    
            # -> leaving it in for now but replace with above function
            #echo "Running Subfinder"
            #run_subfinder
    
            #echo "Running DNSX"
            #run_dnsx
    
            #echo "Running HTTPX"
            #run_httpx
    
            #echo "Running Nuclei"
            #run_nuclei
        fi
    }
    
    setup
    
    dupe_script=$(ps -ef | grep "squiddy.sh" | grep -v grep | wc -l | xargs)
    if [ ${dupe_script} -gt 2 ]; then
        echo "Hey friends! Squiddy is already running, I am gonna try again later." | $script_path/notify  -provider discord -id crawl -silent
    else 
        #send_starting
        echo "Running Squidward"
        run_squidward
    fi

    There’s also a Python-based Discord bot that goes with this, but I’ll spare you that code—it did work back in the day 😬.

    Conclusion

    Back when I was a Red Teamer, this setup was a game-changer—not just during engagements, but even before them. Sometimes, during client sales calls, they’d expect you to be some kind of all-knowing security wizard who already understands their infrastructure better than they do.

    So, I’d sit in these calls, quietly feeding their possible targets into Squidward and within seconds, I’d have real-time recon data. Then, I’d casually drop something like, “Well, how about I start with server XYZ? I can already see it’s vulnerable to CVE-Blah.” Most customers loved that level of preparedness.

    I haven’t touched this setup in ages, and honestly, I have no clue how I’d even get it running again. I would probably go about it using Node-RED like in this post.

    These days, I work for big corporate, using commercial tools for the same tasks. But writing about this definitely brought back some good memories.

    Anyway, time for bed! It’s late, and you’ve got work tomorrow. Sweet dreams! 🥰😴

    Have another scary squid man monster that didn’t make featured, buh-byeee 👋

  • Hack the Chart, Impress the Party: A (Totally Ethical) Guide to GitHub Glory

    Hack the Chart, Impress the Party: A (Totally Ethical) Guide to GitHub Glory

    We’ve all been there—no exceptions, literally all of us. You’re at a party, chatting up a total cutie, the vibes are immaculate, and then she hits you with the: “Show me your GitHub contributions chart.” She wants to see if you’re really about that open-source life.

Panic. You know you’re mid at best when it comes to coding. Your chart is weak, and you know it.

    You hesitate but show her anyway, hoping she’ll appreciate you for your personality instead. Wrong! She doesn’t care about your personality, dude—only your commits. She takes one look, laughs, and walks away.

    Defeated, you grab a pizza on the way home (I’m actually starving writing this—if my Chinese food doesn’t arrive soon, I’m gonna lose it).

    Anyway! The responsible thing to do would be to start contributing heavily to open-source projects. This is not that kind of blog though. Here, we like to dabble in the darker arts of IT. Not sure how much educational value this has, but here we go with the disclaimer:

    Disclaimer:

The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk. Read it, have a laugh, and never do this.

    For the full disclaimer, please click here.

Quick note: This trick works on any gender you’re into. When I say “her,” just mentally swap it out for whoever you’re trying to impress. I’m only writing it this way because that’s who I would personally want to impress.

    Intro

    I came across a LinkedIn post where someone claimed they landed a $500K developer job—without an interview—just by writing a tool that fakes GitHub contributions. Supposedly, employers actually check these charts and your public code.

    Now, I knew this was classic LinkedIn exaggeration, but it still got me thinking… does this actually work? I mean, imagine flexing on your friends with an elite contribution chart—instant jealousy.

    Of course, the golden era of half-a-mil, no-interview dev jobs is long gone (RIP), but who knows? Maybe it’ll make a comeback. Or maybe AI will just replace us all before that happens.

    Source: r/ProgrammerHumor

    I actually like Copilot, but it still cracks me up. If you’re not a programmer, just know that roasting your own code is part of the culture—it’s how we cope, but never roast my code, because I will cry and you will feel bad. We both will.

    The Setup

    Like most things in life, step one is getting a server to run a small script and a cronjob on. I’m using a local LXC container in my Proxmox, but you can use a Raspberry Pi, an old laptop, or whatever junk you have lying around.

    Oh, and obviously, you’ll need a GitHub account—but if you didn’t already have one, you wouldn’t be here.

    Preparation

    First, you need to install a few packages on your machine. I’m gonna assume you’re using Debian—because it’s my favorite (though I have to admit, Alpine is growing on me fast):

    apt update && apt upgrade -y
    apt install git -y
    apt install curl -y

Adding SSH Keys to GitHub

There are two great guides from GitHub:

    ssh-keygen -t ed25519 -C "[email protected]"
    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/id_ed25519 # <- if that is what you named your key

Then copy the public key; you can recognize it by the .pub ending:

    cat ~/.ssh/id_ed25519.pub # <- check if that is the name of your key

    It happens way more often than it should—people accidentally exposing their private key like it’s no big deal. Don’t be that person.

    Once you’ve copied your public key (the one with .pub at the end), add it to your GitHub account by following the steps in “Adding a new SSH key to your GitHub account“.

Check if it worked with:

ssh -T git@github.com

You should see something like:

    Hi StasonJatham! You've successfully authenticated, but GitHub does not provide shell access.

    Configuring git on your system

This is important: for your upcoming contributions to actually count towards your stats, they need to be made by “you”:

    git config --global user.name "YourActualGithubUsername"
    git config --global user.email "[email protected]"

    You’re almost done prepping. Now, you just need to clone one of your repositories. Whether it’s public or private is up to you—just check your GitHub profile settings:

• If you have private contributions enabled, you can commit to a private repo.
• If not, just use a public repo—or go wild and do both.

    The Code

    Let us test our setup before we continue:

git clone git@github.com:YourActualGithubUser/YOUR_REPO_OF_CHOICE.git
cd YOUR_REPO_OF_CHOICE
touch counter.py
git add counter.py
git commit -m "add a counter"
git push

    Make sure to replace your username and repo in the command—don’t just copy-paste like a bot. If everything went smoothly, you should now have an empty counter.py file sitting in your repository.

    Of course, if you’d rather keep things tidy, you can create a brand new repo for this. But either way, this should have worked.

    The commit message will vary.

    Now the code of the shell script:

    gh_champ.sh
    #!/bin/bash
    
    # Define the directory where the repository is located
    # this is the repo we got earlier from git clone
    REPO_DIR="/root/YOUR_REPO_OF_CHOICE"
    
    # random delay to not always commit at exact time
    RANDOM_DELAY=$((RANDOM % 20 + 1))
    DELAY_IN_SECONDS=$((RANDOM_DELAY * 60))
    sleep "$DELAY_IN_SECONDS"
    
    cd "$REPO_DIR" || exit
    
    # get current time and overwrite file
    echo "print(\"$(date)\")" > counter.py
    
    # Generate a random string for the commit message
    COMMIT_MSG=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 16)
    
    # Stage the changes, commit, and push
    git add counter.py > /dev/null 2>&1
    git commit -m "$COMMIT_MSG" > /dev/null 2>&1
    git push origin master > /dev/null 2>&1

    Next, you’ll want to automate this by setting it up as a cronjob:

    17 10-20/2 * * * /root/gh_champ.sh

    I personally like using crontab.guru to craft more complex cron schedules—it makes life easier.

    This one runs at minute 17 past every 2nd hour from 10 through 20, plus a random 1-20 minute delay from our script to keep things looking natural.

    And that’s it. Now you just sit back and wait 😁.

    Bonus: Cronjob Monitoring

    I like keeping an eye on my cronjobs in case they randomly decide to fail. If you want to set up Healthchecks.io for this, check out my blog post.

    The final cronjob entry looks like this:

    17 10-20/2 * * * /root/gh_champ.sh && curl -fsS -m 10 --retry 5 -o /dev/null https://ping.yourdomain.de/ping/UUID

    Conclusion

    Contributions chart of 2025 so far

Looks bonita 👍! With a chart like this, the cuties will flock towards you instead of running away.

    Jokes aside, the whole “fake it till you make it” philosophy isn’t all sunshine and promotions. Sure, research suggests that acting confident can actually boost performance and even trick your brain into developing real competence (hello, impostor syndrome workaround!). But there’s a fine line between strategic bluffing and setting yourself up for disaster.

    Let’s say you manage to snag that sweet developer job with nothing but swagger and a well-rehearsed GitHub portfolio. Fast forward to your 40s—while you’re still Googling “how to center a div” a younger, hungrier, and actually skilled dev swoops in, leaving you scrambling. By that age, faking it again isn’t just risky; it’s like trying to pass off a flip phone as the latest iPhone.

    And yeah, if we’re being honest, lying your way into a job is probably illegal (definitely unethical), but hey, let’s assume you throw caution to the wind. If you do manage to land the gig, your best bet is to learn like your livelihood depends on it—because, well, it does. Fake it for a minute, but make sure you’re building real skills before the curtain drops.

    Got real serious there for a second 🥶, gotta go play Witcher 3 now, byeeeeeeeeee 😍

    EDIT

There has been some development in this space. I have found a script that lets you create commits with arbitrary dates attached, so you do not have to wait an entire year to show off: https://github.com/davidjan3/githistory

• That One Time I Thought I Cracked the Stock Market with the Department of Defense

That One Time I Thought I Cracked the Stock Market with the Department of Defense

Five to seven years ago, I was absolutely obsessed with the idea of beating the stock market. I dove headfirst into the world of investing, devouring books, blogs, and whatever information I could get my hands on. I was like a sponge, soaking up everything. After countless hours of research, I came to one clear conclusion:

    To consistently beat the market, I needed a unique edge—some kind of knowledge advantage that others didn’t have.

It’s like insider trading, but, you know, without the illegal part. My plan was to uncover obscure data and resources online that only a select few were using. That way, I’d have a significant edge over the average trader. In hindsight, I’m pretty sure that’s what big hedge funds, especially the short-selling ones, are doing—just with a ton more money and resources than I had. But I’ve always thought, “If someone else can do it, so can I.” At the end of the day, those hedge fund managers are just people too, right?

    Around that time, I was really into the movie War Dogs. It had this fascinating angle that got me thinking about analyzing the weapons trade, aka the “defense” sector.

    Here’s the interesting part: The United States is surprisingly transparent when it comes to defense spending. They even publicly list their contracts online (check out the U.S. Department of Defense Contracts page). The EU, on the other hand, is a completely different story. Getting similar information was like pulling teeth. You’d basically need to lawyer up and start writing formal letters to access anything remotely useful.

    The Idea

    Quite simply: Build a tool that scrapes the Department of Defense contracts website and checks if any of the publicly traded companies involved had landed massive new contracts or reported significantly higher income compared to the previous quarter.

    Based on the findings, I’d trade CALL or PUT options. If the company performed poorly in the quarter or year, I’d go for a PUT option. If they performed exceptionally well, I’d opt for a CALL, banking on the assumption that these contracts would positively influence the next earnings report.
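
In Python terms, the decision rule boiled down to something like this sketch (the function and threshold are made up for illustration):

# Hypothetical sketch of the CALL/PUT rule; the 10% threshold is invented.
def signal(this_quarter, last_quarter, threshold=0.10):
    change = (this_quarter - last_quarter) / last_quarter
    if change >= threshold:
        return "CALL"  # contract volume up markedly: bet on a strong report
    if change <= -threshold:
        return "PUT"   # contract volume down markedly: bet on a weak report
    return "HOLD"

print(signal(1_250_000_000, 900_000_000))  # CALL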

    Theoretically, this seemed like one of those obvious, no-brainer strategies that had to work. Kind of like skipping carbs at a buffet and only loading up on meat to get your money’s worth.

Technologies

    At first, I did everything manually with Excel. Eventually, I wrote a Python Selenium script to automate the process.

    Here’s the main script I used to test the scraping:

    // Search -> KEYWORD
    // https://www.defense.gov/Newsroom/Contracts/Search/KEYWORD/
    // -------
    // Example:
    // https://www.defense.gov/Newsroom/Contracts/Search/Boeing/
    // ------------------------------------------------------------
    
// All Contracts -> PAGE (currently up to 136)
    // https://www.defense.gov/Newsroom/Contracts/?Page=PAGE
    // -------
    // Example:
    // https://www.defense.gov/Newsroom/Contracts/?Page=1
    // https://www.defense.gov/Newsroom/Contracts/?Page=136
    // -------------------------------------------------------
    
    // Contract -> DATE
    // https://www.defense.gov/Newsroom/Contracts/Contract/Article/DATE
    // -------
    // https://www.defense.gov/Newsroom/Contracts/Contract/Article/2041268/
    // ---------------------------------------------------------------------
    
    // Select Text from Article Page
    // document.querySelector(".body")
    
    // get current link
    // window.location.href
    
    
    
    // ---> Save Company with money for each day in db
    
    // https://www.defense.gov/Newsroom/Contracts/Contract/Article/1954307/
    var COMPANY_NAME = "The Boeing Co.";
    var comp_money = 0;
    var interesting_div = document.querySelector('.body')
    var all_contracts = interesting_div.querySelectorAll("p"),i;
    var text_or_heading;
    var heading;
    var text;
    var name_regex = /^([^,]+)/gm;
    var price_regex = /\$([0-9]{1,3},*)+/gm;
var price_contract_regex = /\$([0-9]{1,3},*)+ ([^\s]+)/gm; // the original's empty (?<=) lookbehind was a no-op
    var company_name;
    var company_article;
    
    for (i = 0; i < all_contracts.length; ++i) {
      text_or_heading = all_contracts[i];
    
      if (text_or_heading.getAttribute('id') != "skip-target-holder") {
      	if (text_or_heading.getAttribute('style')) {
      		heading = text_or_heading.innerText;
      	} else {
      		text = text_or_heading.innerText;
    	    company_name = text.match(name_regex)
    	    contract_price = text.match(price_regex)
    	    contract_type = text.match(price_contract_regex)
    
    	    try {
    	    	contract_type = contract_type[0];
    	    	clean_type = contract_type.split(' ');
    	    	contract_type = clean_type[1];
    	    } catch(e) {
    	    	contract_type = "null";
    	    }
    	    try {
    	    	company_article = company_name[0];
    	    } catch(e) {
    	    	company_article = "null";
    	    }
    	    try {
    	    	contract_amount = contract_price[0];
    		    if (company_article == COMPANY_NAME){
		    	contract_amount = contract_amount.replace("$","")
		    	contract_amount = contract_amount.replace(/,/g,"")  // strip every comma, not just the first three
		    	contract_amount = parseInt(contract_amount, 10)
    
    
    		    	comp_money = contract_amount + comp_money
    	    	}
    	    } catch(e) {
    	    	contract_amount = "$0";
    	    }
    
    	    console.log("Heading      : " + heading);
    	    console.log("Text         : " + text);
    	    console.log("Company Name : " + company_article);
    	    console.log("Awarded      : " + contract_amount)
    	    console.log("Contract Type: " + contract_type);
      	}
      }
    }
    console.log(COMPANY_NAME);
    console.log(new Intl.NumberFormat('en-EN', { style: 'currency', currency: 'USD' }).format(comp_money));
    
    
    
    // --> Save all Links to Table in Database
for (var i = 1; i <= 136; i++) {  // was "i >= 136", so the loop never ran
    	var url = "https://www.defense.gov/Newsroom/Contracts/?Page=" + i
    
    	var  page_links = document.querySelector("#alist > div.alist-inner.alist-more-here")
    	var all_links   = page_links.querySelectorAll("a.title")
    
    	all_links.forEach(page_link => {
		var contract_date = new Date(Date.parse(page_link.innerText))
		var contract_link = page_link.href
    	});
    }

The main code is part of another project I called “Wallabe”.

    The stack was the usual:

    • Python: The backbone of the project, handling the scraping logic and data processing efficiently.
    • Django: Used for creating the web framework and managing the backend, including the database and API integrations.
    • Selenium & BeautifulSoup: Selenium was used for dynamic interactions with web pages, while BeautifulSoup handled the parsing and extraction of relevant data from the HTML.
    • PWA (“mobile app”): Designed as a mobile-only Progressive Web App to deliver a seamless, app-like experience without requiring actual app store deployment.

    I wanted the feel of a mobile app without the hassle of actual app development.

    One of the challenges I faced was parsing and categorizing the HTML by U.S. military branches. There are a lot, and I’m sure I didn’t get them all, but here’s the list I was working with seven years ago (thanks, JROTC):

military_branch = {'airforce',
                        'defenselogisticsagency',
                        'navy',
                        'army',
                        'spacedevelopmentagency',
                        'defensemicroelectronicsactivity',  
                        'jointartificialintelligencecenter',      
                        'defenseintelligenceagency',
                        'defenseinformationsystemagency',
                        'defensecommissaryagency',
                        'missiledefenseagency',
                        'defensehealthagency',
                        'u.s.specialoperationscommand',
                        'defensethreatreductionagency',
                        'defensefinanceandaccountingservice',
                        'defenseinformationsystemsagency',
                        'defenseadvancedresearchprojectsagency',
                        'washingtonheadquartersservices',
                        'defensehumanresourceactivity',
                        'defensefinanceandaccountingservices',
                        'defensesecurityservice',
                        'uniformedservicesuniversityofthehealthsciences',
                        'missledefenseagency',
                        'defensecounterintelligenceandsecurityagency',
                        'washingtonheadquartersservice',
                        'departmentofdefenseeducationactivity',
                        'u.s.transportationcommand'}

    I tried to revive this old project, but unfortunately, I can’t show you what the DoD data looked like anymore since the scraper broke after some HTML changes on their contracts website. On the bright side, I can still share some of the awesome UI designs I created for it seven years ago:

    Imagine a clean, simple table with a list of companies on one side and a number next to each one showing how much they made in the current quarter.

    How it works

    Every day, I scrape the Department of Defense contracts and calculate how much money publicly traded companies received from the U.S. government. This gives me a snapshot of their revenue before quarterly earnings are released. If the numbers are up, I buy CALL options; if they’re down, I buy PUT options.

    The hardest part of this process is dealing with the sheer volume of updates. They don’t just release new contracts—there are tons of adjustments, cancellations, and modifications. Accounting for these is tricky because the contracts aren’t exactly easy to parse. Still, I decided it was worth giving it a shot.
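
For a rough idea of the parsing, here is a Python take on the same regex extraction the JavaScript above does, assuming the usual paragraph shape of "Company Name, City, State, is awarded a $X ... contract":

import re

NAME_RE = re.compile(r"^([^,]+)")           # company name runs up to the first comma
PRICE_RE = re.compile(r"\$(?:\d{1,3},?)+")  # dollar amounts like $123,456,789

def parse_contract(paragraph):
    company = NAME_RE.match(paragraph).group(1)
    price = PRICE_RE.search(paragraph)
    amount = int(price.group(0).lstrip("$").replace(",", "")) if price else 0
    return company, amount

p = "The Boeing Co., St. Louis, Missouri, is awarded a $123,456,789 contract ..."
print(parse_contract(p))  # ('The Boeing Co.', 123456789)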

    Now, here’s an important note: U.S. defense companies also make a lot of money from other countries, not just the U.S. military. In fact, the U.S. isn’t even always their biggest contributor. Unfortunately, as I mentioned earlier, other countries are far less transparent about their military spending. This lack of data is disappointing and limits the scope of the analysis.

    Despite these challenges, I figured I’d test the idea on paper and backtest it to see how it performed.

    Conclusion

    TL;DR: Did not work.

    The correlation I found between these contracts and earnings just wasn’t there. Even when the numbers matched and I got the part right that “Company made great profit,” the market would still turn around and say, “Yeah, but it’s 2% short of what we expected. We wanted +100%, and a measly +98% is disappointing… SELLLL!”

    The only “free money glitch” I’ve ever come across is what I’m doing with Bearbot, plus some tiny bond tricks that can get you super small monthly profits (like 0.10% to 0.30% a month).

    That said, this analysis still made me question whether everything is truly priced in or if there are still knowledge gaps to exploit. The truth is, you never really know if something will work until you try. Sure, you can backtest, but that’s more for peace of mind. Historical data can’t predict the future. A drought killing 80% of cocoa beans next year is just as possible as a record harvest. Heck, what’s stopping someone from flying to Brazil and burning down half the coffee fields to drive up coffee bean prices? It’s all just as unpredictable as them not doing that (probably, please don’t).

    What I’m saying is, a strategy that’s worked for 10 years can break tomorrow or keep working. Unless you have insider info that others don’t, it’s largely luck. Sometimes your strategy seems brilliant just because it got lucky a few times—not because you cracked the Wall Street code.

I firmly believe there are market conditions that can be exploited for profit, especially in complex derivatives trading. A lot of people trade these, but few really understand how they work, which leads to weird price discrepancies—especially with less liquid stocks. I also believe I’ve found one of these “issues” in the market: a specific set of conditions where certain instruments, in certain environments, are ripe for profit with minimal probability of risk (which means: high risk that almost never materializes). That’s Bearbot.

    Anyway, long story short, this whole experiment is part of what got Bearbot started. Thanks for reading, diamond hands 💎🙌 to the moon, and love ya ❤️✌️! Byeeeee!

  • The Day I (Almost) Cracked the Eurojackpot Code

    The Day I (Almost) Cracked the Eurojackpot Code

    Five years ago, a younger and more optimistic Karl, with dreams of cracking the European equivalent of the Powerball, formed a bold thesis:

“Surely the Eurojackpot isn’t truly random anymore. It must be calculated by a machine! And since machines are only capable of generating pseudorandom numbers, I could theoretically simulate the system long enough to identify patterns or at least tilt the odds in my favor by avoiding the least random combinations.”

    This idea took root after I learned an intriguing fact about computers: they can’t generate true randomness. Being deterministic machines, they rely on algorithms to create pseudorandom numbers, which only appear random but are entirely predictable if you know the initial value (seed). True randomness, on the other hand, requires inputs from inherently unpredictable sources, like atmospheric noise or quantum phenomena—things computers don’t have by default.
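
You can see that determinism in a few lines of Python: seed the generator with the same value and you get the exact same "random" numbers every time:

import random

random.seed(1337)
print([random.randint(1, 50) for _ in range(5)])
random.seed(1337)
print([random.randint(1, 50) for _ in range(5)])  # identical output, every run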

    My favorite example of true randomness is how Cloudflare, the internet security company, uses a mesmerizing wall of lava lamps to create randomness. The constantly changing light patterns from the lava lamps are captured by cameras and converted into random numbers. It’s a perfect blend of physics and computing, and honestly, a geeky work of art!

    Technologies

    • Python: The backbone of the project. Python’s versatility and extensive library support made it the ideal choice for building the bot. It handled everything from script automation to data parsing. You can learn more about Python at python.org.
    • Selenium: Selenium was crucial for automating browser interactions. It allowed the bot to navigate Lotto24 and fill out the lottery forms. If you’re interested in web automation, check out Selenium’s documentation here.

I was storing the numbers in an SQLite database. Don’t ask me why; I think I just felt like playing with SQL.

    The Plan

    The plan was simple. I researched Eurojackpot strategies and created a small program to generate lottery numbers based on historical data and “winning tactics.” The idea? Simulate the lottery process 50 billion times and identify the numbers that were “randomly” picked most often. Then, I’d play the top X combinations that showed up consistently.
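
Conceptually, the simulation looked something like this sketch (the real code is linked below; Eurojackpot currently draws 5 main numbers out of 50 plus 2 Euro numbers out of 12):

import random
from collections import Counter

def draw():
    main = tuple(sorted(random.sample(range(1, 51), 5)))
    euro = tuple(sorted(random.sample(range(1, 13), 2)))
    return main + euro

# the post says 50 billion runs; a million is kinder to your CPU
counts = Counter(draw() for _ in range(1_000_000))
print(counts.most_common(3))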

    At the time, I was part of a lottery pool with a group of friends, which gave us a collective budget of nearly €1,000 per run. To streamline the process (and save my sanity), I wrote a helper script that automatically entered the selected numbers on the lottery’s online platform.

    If you’re curious about the code, you can check it out here. It’s not overly complicated:

    👉 GitHub Repository

    Winnings

    In the end, I didn’t win the Eurojackpot (yet 😉). But for a while, I thought I was onto something because I kept winning—kind of. My script wasn’t a groundbreaking success; I was simply winning small amounts frequently because I was playing so many combinations. It gave me the illusion of success, but the truth was far less impressive.

    A friend later explained the flaw in my thinking. I had fallen for a common misunderstanding about probability and randomness. Here’s the key takeaway: every possible combination of numbers in a lottery—no matter how “patterned” or “random” it seems—has the exact same chance of being drawn.

    For example, the combination 1-2-3-4-5 feels unnatural or “unlikely” because it looks ordered and predictable, while 7-23-41-56-88 appears random. But both have the same probability of being selected in a random draw. The fallacy lies in equating “how random something looks” with “how random it actually is.”
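
"The same probability" is easy to put a number on. Every specific ticket, patterned or not, has identical odds:

from math import comb

# 5 of 50 main numbers times 2 of 12 Euro numbers
odds = comb(50, 5) * comb(12, 2)
print(f"1 in {odds:,}")  # 1 in 139,838,160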

    Humans are naturally biased to see patterns and avoid things that don’t look random, even when randomness doesn’t work that way. In a lottery like Eurojackpot, where the numbers are drawn independently, no combination is more or less likely than another. The randomness of the draw is entirely impartial to how we perceive the numbers.

    So while my script made me feel like I was gaming the system, all I was really doing was casting a wider net—more tickets meant more chances to win small prizes, but it didn’t change the underlying odds of hitting the jackpot. In the end, the only real lesson I gained was a better understanding of randomness (and a lighter wallet).

  • Flexing on LinkedIn with the LinkedIn-Skillbot

    Flexing on LinkedIn with the LinkedIn-Skillbot

    This little experiment wasn’t meant to encourage cheating—far from it. It actually began as a casual conversation with a colleague about just how “cheatable” online tests can be. Curiosity got the better of me, and one thing led to another.

    If you’ve come across my earlier post, “Get an A on Moodle Without Breaking a Sweat!” you already know that exploring the boundaries of these platforms isn’t exactly new territory for me. I’ve been down this road before, always driven by curiosity and a love for tinkering with systems (not to mention learning how they work from the inside out).

    This specific tool, the LinkedIn-Skillbot, is a project I played with a few years ago. While the bot is now three years old and might not be functional anymore, I did test it back in the day using a throwaway LinkedIn account. And yes, it worked like a charm. If you’re curious about the original repository, it was hosted here: https://github.com/Ebazhanov/linkedin-skill-assessments-quizzes. (Just a heads-up: the repo has since moved.)

    Important Disclaimer: I do not condone cheating, and this tool was never intended for use in real-world scenarios. It was purely an experiment to explore system vulnerabilities and to understand how online assessments can be gamed. Please, don’t use this as an excuse to cut corners in life. There’s no substitute for honest effort and genuine skill development.

    Technologies

    This project wouldn’t have been possible without the following tools and platforms:

    • Python: The backbone of the project. Python’s versatility and extensive library support made it the ideal choice for building the bot. It handled everything from script automation to data parsing. You can learn more about Python at python.org.
    • Selenium: Selenium was crucial for automating browser interactions. It allowed the bot to navigate LinkedIn, answer quiz questions, and simulate user actions in a seamless way. If you’re interested in web automation, check out Selenium’s documentation here.
    • LinkedIn (kind of): While LinkedIn itself wasn’t a direct tool, its skill assessment feature was the target of this experiment. This project interacted with LinkedIn’s platform via automated scripts to complete the quizzes.

    How it works

    To get the LinkedIn-Skillbot up and running, I had to tackle a couple of major challenges. First, I needed to parse the Markdown answers from the assessment-quiz repository. Then, I built a web driver (essentially a scraper) that could navigate LinkedIn without getting blocked—which, as you can imagine, was easier said than done.

    Testing was a nightmare. LinkedIn’s blocks kicked in frequently, and I had to endure a lot of waiting periods. Plus, the repository’s answers weren’t a perfect match to LinkedIn’s questions. Minor discrepancies like typos or extra spaces were no big deal for a human, but they threw the bot off completely. For example:

    "Is earth round?""Is earth round ?"

    That one tiny space could break everything. To overcome this, I implemented a fuzzy matching system using Levenshtein Distance.

    Levenshtein Distance measures the number of small edits (insertions, deletions, or substitutions) needed to transform one string into another. Here’s a breakdown:

    • Insertions: Adding a letter.
    • Deletions: Removing a letter.
    • Substitutions: Replacing one letter with another.

    For example, to turn “kitten” into “sitting”:

    1. Replace “k” with “s” → 1 edit.
    2. Replace “e” with “i” → 1 edit.
    3. Add “g” → 1 edit.

    Total edits: 3. So, the Levenshtein Distance is 3.

    Using this technique, I was able to identify the closest match for each question or answer in the repository. This eliminated mismatches entirely and ensured the bot performed accurately.

    Here’s the code I used to implement this fuzzy matching system:

    import numpy as np
    
    def levenshtein_ratio_and_distance(s, t, ratio_calc = False):
        rows = len(s)+1
        cols = len(t)+1
        distance = np.zeros((rows,cols),dtype = int)
    
    # fill the first row and column (edit distance from the empty string);
    # the original nested loop never filled these when one string was empty
    for i in range(rows):
        distance[i][0] = i
    for k in range(cols):
        distance[0][k] = k
      
        for col in range(1, cols):
            for row in range(1, rows):
                if s[row-1] == t[col-1]:
                    cost = 0 
                else:
                    if ratio_calc == True:
                        cost = 2
                    else:
                        cost = 1
                distance[row][col] = min(distance[row-1][col] + 1,
                                     distance[row][col-1] + 1,
                                     distance[row-1][col-1] + cost)
        if ratio_calc == True:
            Ratio = ((len(s)+len(t)) - distance[row][col]) / (len(s)+len(t))
            return Ratio
        else:
            return distance[row][col]
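
A quick sanity check against the kitten/sitting example from above:

print(levenshtein_ratio_and_distance("kitten", "sitting"))                   # 3
print(levenshtein_ratio_and_distance("kitten", "sitting", ratio_calc=True))  # ~0.615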

I also added a failsafe mode that searches all available documents for an answer. If one can’t be found, the bot skips the question and lets you answer it manually.

    Conclusion

    This project was made to show how easy it is to cheat on online tests such as the LinkedIn skill assessments. I am not sure if things have changed in the last 3 years, but back then it was easily possible to finish almost all of them in the top ranks.

I have not pursued cheating on online exams any further, as I found my time better spent on other projects. However, it did teach me a lot about fuzzy string matching and, back then, web scraping as well as getting around bot-detection mechanisms. These are skills that have helped me a lot in my cybersecurity career thus far.

    Try it out here: https://github.com/StasonJatham/linkedin-skillbot

  • Get an A on Moodle Without Breaking a Sweat!

    Get an A on Moodle Without Breaking a Sweat!

    Ah, Moodle quizzes. Love them or hate them, they’re a staple of modern education. Back in the day, when I was a student navigating the endless barrage of quizzes, I created a little trick to make life easier. Now, I’m sharing it with you—meet the Moodle Solver, a simple, cheeky tool that automates quiz-solving with the help of bookmarklets. Let’s dive into the how, the why, and the fine print.

    Legally, I am required to clarify that this is purely a joke. I have never used this tool, and neither should you. This content is intended solely for educational and entertainment purposes.

    You can check out the code on my GitHub here: https://github.com/StasonJatham/moodle_solver

    I should note that this code is quite old and would need a lot of tweaking to work again.

    What is Moodle Solver?

    The Moodle Solver is a set of JavaScript scripts you can save as bookmarklets. These scripts automate the process of taking Moodle quizzes, saving you time, clicks, and maybe a bit of stress.

    The basic idea:

    1. Do a random first attempt on a quiz to see the correct answers.
    2. Use the scripts to save those answers.
    3. Automatically fill in the correct answers on the next attempt and ace the quiz.

    How It Works

    Step 1: Do the Quiz (Badly)

    Most Moodle quizzes give you two or more attempts. On the first attempt, go in blind—pick random answers without worrying about the outcome. If you’re feeling adventurous, I even have a script that fills in random answers for you (not included in the repo, but it’s out there).

    Why do this? Because Moodle shows you the correct answers on the review page after the first try. That’s where the magic happens.

    Step 2: Run get_answers_german.js

    Once you’re on the review page, it’s time to run the get_answers_german.js script. This script scans the page, identifies the correct answers, and saves them to your browser’s localStorage.

    One caveat: The script is written in German (a throwback to my school days), so you might need to modify it for your language. Moodle’s HTML structure might also change over time, but a little tweaking should do the trick.

    Step 3: Nail the Second Attempt

    When you’re ready for your second attempt, use the set_answers.js script. This script fills in all the correct answers for you. Want to go full automation? Use autosubmit.js to submit the quiz with a randomized timer, so it doesn’t look suspicious. After all, no teacher will believe you aced a 50-question quiz in 4 seconds.

    Bonus Features

    Got the answers from a friend or Google? No problem. The fallback_total.js script lets you preload question-answer pairs manually. Simply format them like this:

      var cheater = {
        answers: [
          {
            question:
              "Thisisanexamplequestion?",
            answer: "thecorrectanswer",
          },
          {
            question: "Whatisthisexamplequestion?",
            answer: "youwillpass.",
          },
          {
            question: "Justlikethis?",
            answer: "yes,dude.",
          },
          .......
        ],
      };

    Swap out the default questions and answers in the script, save it as a bookmarklet, and you’re good to go.

    Why Bookmarklets?

    Bookmarklets are incredibly convenient for this kind of task. They let you run JavaScript on any webpage directly from your browser’s bookmarks bar. It’s quick, easy, and doesn’t require you to mess around with browser extensions. It is also really sneaky in class 😈

    To turn the Moodle Solver scripts into bookmarklets, use this free tool.

    1. Download the Scripts: Grab the code from my GitHub repo: github.com/StasonJatham/moodle_solver.
    2. Convert to Bookmarklets: Use the guide linked above to save each script as a bookmarklet in your browser.
    3. Test and Tweak: Depending on your Moodle setup, you might need to adjust the scripts slightly (e.g., to account for language or HTML changes).

    The Fine Print

    Let’s be real: This script is a bit cheeky. Use it responsibly and with caution. The goal here isn’t to cheat your way through life—it’s to save time on tedious tasks so you can focus on learning the stuff that matters.

    That said, automation is a skill in itself. By using this tool, you’re not just “solving Moodle quizzes”—you’re learning how to script, automate, and work smarter.

    Wrapping Up

    The Moodle Solver is a lighthearted way to make Moodle quizzes less of a hassle. Whether you’re looking to save time, learn automation, or just impress your friends with your tech skills, it’s a handy tool to have in your back pocket.

    Check it out:

    Good luck out there, and remember: Work smarter, not harder! 🚀

  • Good Morning Berlin: Work Smarter, Not Harder!

    Good Morning Berlin: Work Smarter, Not Harder!

    Are you like me—a fan of working smarter, not harder, and squeezing every last drop of efficiency out of your day? Do you enjoy multitasking so much that even your coffee break is double-booked? If you’re nodding along, welcome to my world!

    Every morning, I log in to about 16 different tools—some online, some local—and most require separate accounts and credentials. Oh, and because most corporate networks are allergic to fun, scripting options are usually limited to PowerShell or VBScript. (If your network has no restrictions… well, you might want to rethink that. Seriously. 😅)

    But fear not! I’ve come up with a PowerShell-based solution to let your computer do the heavy lifting while you enjoy your coffee. Whether it’s logging into websites, launching mail clients, or sending a cheerful “Good Morning!” on Slack, this script automates the grind so you can focus on the important stuff.

    The code is public and open source on GitHub!

    Dependencies

    • PowerShell (because it’s the last scripting tool standing in restrictive environments)
    • KeePass 2/XC (or any password manager with auto-type functionality—no hardcoding passwords here!)

    KeePass is key to this system. It allows you to set up auto-type rules like:

“If a window titled ‘Blah Manager’ is open, and I press Ctrl + Alt + I, enter username, press Tab, enter password, and press Enter.”

    Examples of Automation

    Starting a Mail Client

    Open your mail app, wait for the login window, and let KeePass fill in the credentials.

    function start_mail_client() {
        wait_till_keepass_open
    
        $notes_running = Get-Process MAIL_CLIENT -ErrorAction SilentlyContinue
        if (-Not $notes_running) {
            Start-Process "C:\Program Files (x86)\MAIL_CLIENT\MAIL_CLIENT.exe"
        
            while(-Not $wshell.AppActivate('MAIL_CLIENT Login Window Title')) {
                $wait_counter = $wait_counter + 2
                if ($wait_counter -ge 12) {
                    $wait_counter = 0
                    return
                }
                Sleep 2
            }
            $wshell.SendKeys('{TAB}')
            $wshell.SendKeys('(^%(i))') #-> KeePass autotype keyboard shortcut, standard is $wshell.SendKeys('(^%(l))')
            Sleep 5
        }
    }

    Logging into Websites

    Launch your browser, open tabs, and log in automatically.

    function login_website() {
        $wshell.AppActivate('Edge')
        start microsoft-edge:https://some-website.com
        Sleep 5
        $wshell.SendKeys('(^%(i))') #-> KeePass autotype keyboard shortcut, standard is $wshell.SendKeys('(^%(l))')
        Sleep 9
    }

    Messaging on Slack

    Start Slack, focus the chat window, and send a cheerful morning message to your team.

    function open_chat() {
        $wshell.AppActivate('Edge')
        start microsoft-edge:https://this-is-messenger.com
        if (is_first_start) {
            $wshell.SendKeys('(^+(l))') # -> slack keyboard to focus chat window 
            Sleep 2
            $wshell.SendKeys('Good Morning everyone! :wave: ')
            Sleep 8
            $wshell.SendKeys('{ENTER}')
            Sleep 1
        } 
        Sleep 5
    }

    Deployment Options

    Option 1: Manual

    Run the script as needed, but make sure to adjust paths (e.g., KeePass database location).

    # Name of KeePass DB
    $keepass_db_name = "SomeKeePassDatabaseName.kdbx"

    Option 2: Automatic on Startup

    Set the script to run every time you boot up by combining PowerShell with a simple VBScript.

    Dim objShell
    Set objShell = Wscript.CreateObject("WScript.Shell")
    
    objShell.Run("Path\to\Desktop\Scripts\keypresser.vbs")
    WScript.Sleep 1000
    objShell.Run("powershell -noexit -file Path\to\Desktop\Scripts\GoodMorningBerlin.ps1")
    
    Set objShell = Nothing

    Bonus: VBS-Caffeine Script ☕

    Because no one wants their computer falling asleep mid-task, this tiny VBScript toggles the Num Lock key every two minutes to keep the system awake. It’s a lifesaver for long processes.

    Dim objResult

    Set objShell = WScript.CreateObject("WScript.Shell")
    i = 0

    Do While i = 0
        objResult = objShell.SendKeys("{NUMLOCK}{NUMLOCK}")
        WScript.Sleep(120000)
    Loop

    Conclusion

    This script isn’t just about convenience—it’s about reclaiming your time and starting the day on your terms. So grab your coffee, lean back, and let your PC handle the morning grind for you. ☕✨

    Get started with the code today: https://github.com/StasonJatham/good-morning-berlin

  • The Privacy-Friendly Mail Parser You’ve Been Waiting For

    The Privacy-Friendly Mail Parser You’ve Been Waiting For

    As you may or may not know (but now totally do), I have another beloved website, Exploit.to. It’s where I let my inner coder run wild and build all sorts of web-only tools. I’ll save those goodies for another project post, but today, we’re talking about my Mail Parser—a little labor of love born from frustration and an overdose of caffeine.

    See, as a Security Analyst and incident responder, emails are my bread and butter. Or maybe my curse. Parsing email headers manually? It’s a one-way ticket to losing your sanity. And if you’ve ever dealt with email headers, you know they’re basically the Wild West—nobody follows the rules, everyone’s just slapping on whatever they feel like, and chaos reigns supreme.

    The real kicker? Every single EML parser out there at the time was server-side. Let me paint you a picture: you, in good faith, upload that super-sensitive email from your mom (the one where she tells you your laundry’s done and ready for pick-up) to some rando’s sketchy server. Who knows what they’re doing with your mom’s loving words? Selling them? Training an AI to perfect the art of passive-aggressive reminders? The horror!

    So, I thought, “Hey, wouldn’t it be nice if we had a front-end-only EML parser? One that doesn’t send your personal business to anyone else’s server?” Easy peasy, right? Wrong. Oh, how wrong I was. But I did it anyway.

    You can find the Mail Parser here and finally parse those rogue headers in peace. You’re welcome.

    Technologies

    • React: Handles the user interface and dynamic interactions.
    • Astro.js: Used to generate the static website efficiently. (technically not needed for this project)
    • TailwindCSS: For modern and responsive design.
    • ProtonMail’s jsmimeparser: The core library for parsing email headers.

    When I first approached this project, I tried handling email header parsing manually with regular expressions. It didn’t take long to realize how complex email headers have become, with an almost infinite variety of formats, edge cases, and inconsistencies. Regex simply wasn’t cutting it.

    That’s when I discovered ProtonMail’s jsmimeparser, a library purpose-built for handling email parsing. It saved me from drowning in parsing logic and ensured the project met its functional goals.

    Sharing the output of this tool without accidentally spilling personal info all over the place is kinda tricky. But hey, I gave it a shot with a simple empty email I sent to myself:

    The Code

    As tradition dictates, the code isn’t on GitHub but shared right here in a blog post 😁.

    Kidding (sort of). The repo is private, but no gatekeeping here—here’s the code:

    mailparse.tsx
    import React, { useState } from "react";
    import { parseMail } from "@protontech/jsmimeparser";
    
    type Headers = {
      [key: string]: string[];
    };
    
    const MailParse: React.FC = () => {
      const [headerData, setHeaderData] = useState<Headers>({});
      const [ioc, setIoc] = useState<any>({});
    
      function extractEntitiesFromEml(emlContent: string) {
        const ipRegex =
          /\b(?:\d{1,3}\.){3}\d{1,3}\b|\b(?:[0-9a-fA-F]{1,4}:){7}[0-9a-fA-F]{1,4}\b/g;
        const emailRegex = /\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b/g;
        const urlRegex = /(?:https?|ftp):\/\/[^\s/$.?#].[^\s]*\b/g;
        const htmlTagsRegex = /<[^>]*>/g; // Regex to match HTML tags
    
        // Match IPs, emails, and URLs
        const ips = Array.from(new Set(emlContent.match(ipRegex) || []));
        const emails = Array.from(new Set(emlContent.match(emailRegex) || []));
        const urls = Array.from(new Set(emlContent.match(urlRegex) || []));
    
        // Remove HTML tags from emails and URLs
        const cleanEmails = emails.map((email) => email.replace(htmlTagsRegex, ""));
        const cleanUrls = urls.map((url) => url.replace(htmlTagsRegex, ""));
    
        return {
          ips,
          emails: cleanEmails,
          urls: cleanUrls,
        };
      }
    
    function parseDKIMSignature(signature: string): Record<string, string> {
      const signatureParts = signature.split(";").map((part) => part.trim());
      const parsedSignature: Record<string, string> = {};

      for (const part of signatureParts) {
        // Split on the first "=" only - base64 values like b= and bh= contain "=" padding
        const idx = part.indexOf("=");
        if (idx === -1) continue; // skip empty parts, e.g. after a trailing ";"
        parsedSignature[part.slice(0, idx).trim()] = part.slice(idx + 1).trim();
      }

      return parsedSignature;
    }
    
      const handleFileChange = async (
        event: React.ChangeEvent<HTMLInputElement>
      ) => {
        const file = event.target.files?.[0];
        if (!file) return;
    
        const reader = new FileReader();
        reader.onload = async (e) => {
          const buffer = e.target?.result as ArrayBuffer;
    
      // Convert the buffer to a binary string for the parser
      // (fine for typical .eml sizes; very large files could exceed apply()'s argument limit)
      const bufferArray = Array.from(new Uint8Array(buffer)); // Convert Uint8Array to number[]
      const bufferString = String.fromCharCode.apply(null, bufferArray);
    
          const { attachments, body, subject, from, to, date, headers, ...rest } =
            parseMail(bufferString);
    
          setIoc(extractEntitiesFromEml(bufferString));
          setHeaderData(headers);
        };
    
        reader.readAsArrayBuffer(file);
      };
    
      return (
        <>
          <div className="p-4">
            <h1>Front End Only Mailparser</h1>
            <p className="my-6">
              Have you ever felt uneasy about uploading your emails to a server you
              don't fully trust? I sure did. It's like handing over your private
              correspondence to a stranger. That's why I decided to take matters
              into my own hands.
            </p>
            <p className="mb-8">
              With this frontend-only mail parser, there's no need to worry about
              your privacy. Thanks to{" "}
              <a
                href="https://proton.me/"
                className="text-pink-500 underline dark:visited:text-gray-400 visited:text-gray-500 hover:font-bold after:content-['_↗']"
              >
                ProtonMail's
              </a>{" "}
              <a
                className="text-pink-500 underline dark:visited:text-gray-400 visited:text-gray-500 hover:font-bold after:content-['_↗']"
                href="https://github.com/ProtonMail/jsmimeparser"
              >
                jsmimeparser
              </a>
              , you can enjoy the same email parsing experience right in your
              browser. No more sending your sensitive data to external servers.
              Everything stays safe and secure, right on your own system.
            </p>
    
            <input
              type="file"
              onChange={handleFileChange}
              className="block w-full text-sm text-slate-500
          file:mr-4 file:py-2 file:px-4
          file:rounded-full file:border-0
          file:text-sm file:font-semibold
          file:bg-violet-50 file:text-violet-700
          hover:file:bg-violet-100
        "
            />
    
            {Object.keys(headerData).length !== 0 && (
              <table className="mt-8">
                <thead>
                  <tr className="border dark:border-white border-black">
                    <th>Header</th>
                    <th>Value</th>
                  </tr>
                </thead>
                <tbody>
                  {Object.entries(headerData).map(([key, value]) => (
                    <tr key={key} className="border dark:border-white border-black">
                      <td>{key}</td>
                      <td>{value}</td>
                    </tr>
                  ))}
                </tbody>
              </table>
            )}
          </div>
    
          {Object.keys(ioc).length > 0 && (
            <div className="mt-8">
              <h2>IPs:</h2>
              <ul>
                {ioc.ips && ioc.ips.map((ip, index) => <li key={index}>{ip}</li>)}
              </ul>
              <h2>Emails:</h2>
              <ul>
                {ioc.emails &&
                  ioc.emails.map((email, index) => <li key={index}>{email}</li>)}
              </ul>
              <h2>URLs:</h2>
              <ul>
                {ioc.urls &&
                  ioc.urls.map((url, index) => <li key={index}>{url}</li>)}
              </ul>
            </div>
          )}
        </>
      );
    };
    
    export default MailParse;

    Yeah, I know, it looks kinda ugly as-is—but hey, slap it into VSCode and let the prettifier work its magic.

    Most of the heavy lifting here is courtesy of the library I used. The rest is just some plain ol’ regex doing its thing—filtering for indicators in the email header and body to make life easier for further investigation.

    Conclusion

    Short and sweet—that’s the vibe here. Sometimes, less is more, right? Feel free to use this tool wherever you like—internally, on the internet, or even on a spaceship. You can also try it out anytime directly on my website.

    Don’t trust me? Totally fair. Open the website, yank out your internet connection, and voilà—it still works offline. No sneaky data sent to my servers, pinky promise.

    As for my Astro.js setup, I include the “mailparse.tsx” like this:

    ---
    import BaseLayout from "../../layouts/BaseLayout.astro";
    import Mailparse from "../../components/mailparse";
    ---
    
    <BaseLayout>
      <Mailparse client:only="react" />
    </BaseLayout>

    See you on the next one. Love you, byeeeee ✌️😘

  • KarlGPT – My Push to Freedom

    KarlGPT – My Push to Freedom

    KarlGPT represents my pursuit of true freedom, through AI. I’ve realized that my ultimate life goal is to do absolutely nothing. Unfortunately, my strong work ethic prevents me from simply slacking off or quietly quitting.

    This led me to the conclusion that I need to maintain, or even surpass, my current level of productivity while still achieving my dream of doing nothing. Given the advancements in artificial intelligence, this seemed like a solvable problem.

    I began by developing APIs to gather all the necessary data from my work accounts and tools. Then, I started working on a local AI model and server to ensure a secure environment for my data.

    Now, I just need to fine-tune the entire system, and soon, I’ll be able to automate my work life entirely, allowing me to finally live my dream: doing absolutely nothing.

    This is gonna be a highly censored post, as it involves certain details about my work that I cannot legally disclose.

    Technologies

    Django and Django REST Framework (DRF)

    Django served as the backbone for the server-side logic, offering a robust, scalable, and secure foundation for building web applications. The Django REST Framework (DRF) made it simple to expose APIs with fine-grained control over permissions, serialization, and views. DRF’s ability to handle both function-based and class-based views allowed for a clean, modular design, ensuring the APIs could scale as the project evolved.
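    As a rough sketch of how that looks (the Task model and its fields here are illustrative, not from the actual project):

    from rest_framework import serializers, viewsets
    from rest_framework.permissions import IsAuthenticated

    from .models import Task  # hypothetical model


    class TaskSerializer(serializers.ModelSerializer):
        class Meta:
            model = Task
            fields = ["id", "title", "status", "created_at"]


    class TaskViewSet(viewsets.ModelViewSet):
        # DRF generates the full CRUD API; permissions are enforced per request
        queryset = Task.objects.all()
        serializer_class = TaskSerializer
        permission_classes = [IsAuthenticated]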

    Celery Task Queue

    To handle asynchronous tasks such as sending emails, performing background computations, and integrating external services (AI APIs), I implemented Celery. Celery provided a reliable and efficient way to manage long-running tasks without blocking the main application. This was critical for tasks like scheduling periodic jobs and processing user-intensive data without interrupting the API’s responsiveness.
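    The pattern, as a minimal sketch (the broker URL and the task body are placeholders, not the actual pipeline):

    from celery import Celery

    app = Celery("karlgpt", broker="redis://localhost:6379/0")  # hypothetical broker


    @app.task(bind=True, max_retries=3)
    def process_email(self, email_id: int) -> None:
        # Runs in a worker process, so the API stays responsive
        try:
            ...  # fetch the email, run it through the AI pipeline, store the result
        except Exception as exc:
            # Retry with a delay instead of losing the task
            raise self.retry(exc=exc, countdown=30)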

    React with TypeScript and TailwindCSS

    For the frontend, I utilized React with TypeScript for type safety and scalability. TypeScript ensured the codebase remained maintainable as the project grew. Meanwhile, TailwindCSS enabled rapid UI development with its utility-first approach, significantly reducing the need for writing custom CSS. Tailwind’s integration with React made it seamless to create responsive and accessible components.

    This is my usual front-end stack, often paired with Astro.js. I use plain React, no extra framework.

    Vanilla Python

    Due to restrictions that prohibited the use of external libraries in local API wrappers, I had to rely on pure Python to implement APIs and related tools. This presented unique challenges, such as managing HTTP requests, data serialization, and error handling manually. Below is an example of a minimal API written without external dependencies:

    import re
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    
    
    items = {"test": "mewo"}
    
    
    class ControlKarlGPT(BaseHTTPRequestHandler):
        def do_GET(self):
            if re.search("/api/helloworld", self.path):
                self.send_response(200)
                self.send_header("Content-type", "application/json")
                self.end_headers()
                response = json.dumps(items).encode()
                self.wfile.write(response)
            else:
                self.send_response(404)
                self.end_headers()
                
    def run(server_class=HTTPServer, handler_class=ControlKarlGPT, port=8000):
        server_address = ("", port)
        httpd = server_class(server_address, handler_class)
        print(f"Starting server on port http://127.0.0.1:{port}")
        httpd.serve_forever()
    
    
    if __name__ == "__main__":
        run()
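    If you run this as-is, a quick curl against http://127.0.0.1:8000/api/helloworld should return the JSON from items, and every other path answers with a 404.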

    By weaving these technologies together, I was able to build a robust, scalable system that adhered to the project’s constraints while still delivering a polished user experience. Each tool played a crucial role in overcoming specific challenges, from frontend performance to backend scalability and compliance with restrictions.

    File based Cache

    To minimize system load, I developed a lightweight caching framework based on a simple JSON file-based cache. Essentially, this required creating a “mini-framework” akin to Flask but with built-in caching capabilities tailored to the project’s needs. While a pull-based architecture—where workers continuously poll the server for new tasks—was an option, it wasn’t suitable here. The local APIs were designed as standalone programs, independent of a central server.

    This approach was crucial because some of the tools we integrate lack native APIs or straightforward automation options. By building these custom APIs, I not only solved the immediate challenges of this project (e.g., powering KarlGPT) but also created reusable components for other tasks. These standalone APIs provide a solid foundation for automation and flexibility beyond the scope of this specific system.
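    To give you an idea, here is a minimal sketch of such a JSON file cache as a decorator (simplified; the real mini-framework handles more edge cases):

    import functools
    import hashlib
    import json
    import time
    from pathlib import Path

    CACHE_DIR = Path(".cache")
    CACHE_DIR.mkdir(exist_ok=True)


    def file_cached(ttl_seconds=300):
        # Cache a function's JSON-serializable result on disk for ttl_seconds
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                key = hashlib.sha256(f"{func.__name__}:{args}:{kwargs}".encode()).hexdigest()
                path = CACHE_DIR / f"{key}.json"
                if path.exists():
                    entry = json.loads(path.read_text())
                    if time.time() - entry["ts"] < ttl_seconds:
                        return entry["value"]  # still fresh, skip the expensive call
                value = func(*args, **kwargs)
                path.write_text(json.dumps({"ts": time.time(), "value": value}))
                return value
            return wrapper
        return decorator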

    How it works

    The first step was to identify which tasks I perform daily and which tools I use for each of them. To automate anything effectively, I needed to abstract these tasks into programmable actions. For example:

    • Read Emails
    • Respond to Invitations
    • Check Tickets

    Next, I broke these actions down further to understand the decision-making process behind each. For instance, when do I respond to certain emails, and how do I determine which ones fall under my responsibilities? This analysis led to a detailed matrix that mapped out every task, decision point, and tool I use.

    The result? A comprehensive, structured overview of my workflow. Not only did this help me build the automation framework, but it also provided a handy reference for explaining my role. If my boss ever asks, “What exactly do you do here?” I can present the matrix and confidently say, “This is everything.”
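    To make that concrete, a tiny (entirely invented) slice of such a matrix could look like this:

    TASK_MATRIX = [
        {
            "task": "Read emails",
            "tool": "mail client API",
            "trigger": "new message in inbox",
            "decision": "is it addressed to my team and actionable?",
            "action": "draft a reply for review, or archive it",
        },
        {
            "task": "Check tickets",
            "tool": "ticket system API",
            "trigger": "ticket assigned to me",
            "decision": "can it be answered from existing documentation?",
            "action": "propose a response, otherwise escalate",
        },
    ]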

    As you can see, automating work can be a lot of work upfront—an investment in reducing effort in the future. Ironically, not working requires quite a bit of work to set up! 😂

    The payoff is a system where tasks are handled automatically, and I have a dashboard to monitor, test, and intervene as needed. It provides a clear overview of all ongoing processes and ensures everything runs smoothly:

    AI Magic: Behind the Scenes

    The AI processing happens locally using Llama 3, which plays a critical role in removing all personally identifiable information (PII) from emails and text. This is achieved using a carefully crafted system prompt fine-tuned for my specific job and company needs. Ensuring sensitive information stays private is paramount, and by keeping AI processing local, we maintain control over data security.

    In most cases, the local AI is fully capable of handling the workload. However, for edge cases where additional computational power or advanced language understanding is required, Claude or ChatGPT serve as backup systems. When using cloud-based AI, it is absolutely mandatory to ensure that no sensitive company information is disclosed. For this reason, the system does not operate in full-auto mode. Every prompt is reviewed and can be edited before being sent to the cloud, adding an essential layer of human oversight.
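    A stripped-down sketch of that redact-then-review gate, talking to Ollama's local REST API (the real system prompt is tailored to my job and far more detailed):

    import requests

    OLLAMA_URL = "http://localhost:11434/api/generate"
    SYSTEM_PROMPT = "Rewrite the text, replacing all names, emails and IDs with placeholders."  # simplified


    def redact(text):
        # PII is stripped locally by Llama 3 before anything leaves the machine
        resp = requests.post(
            OLLAMA_URL,
            json={"model": "llama3", "system": SYSTEM_PROMPT, "prompt": text, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]


    def send_to_cloud(prompt):
        redacted = redact(prompt)
        print("---- outgoing prompt ----")
        print(redacted)
        if input("Send to cloud AI? [y/N] ").lower() == "y":
            ...  # hand the reviewed prompt to Claude or ChatGPT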

    To manage memory and task tracking, I use mem0 in conjunction with a PostgreSQL database, which acts as the system’s primary “brain” 🧠. This database, structured using Django REST Framework, handles everything from polling for new tasks to storing results. This robust architecture ensures that all tasks are processed efficiently while maintaining data integrity and security.

    Conclusion

    Unfortunately, I had to skip over many of the intricate details and creative solutions that went into making this system work. One of the biggest challenges was building APIs around legacy tools that lack native automation capabilities. Bringing these tools into the AI age required innovative thinking and a lot of trial and error.

    The preparation phase was equally demanding. Breaking down my daily work into a finely detailed matrix took time and effort. If you have a demanding role, such as being a CEO, it’s crucial to take a step back and ask yourself: What exactly do I do? A vague answer like “represent the company” won’t cut it. To truly understand and automate your role, you need to break it down into detailed, actionable components.

    Crafting advanced prompts tailored to specific tasks and scenarios was another key part of the process. To structure these workflows, I relied heavily on frameworks like CO-START and AUTOMAT (stay tuned for an upcoming blog post about these).

    I even created AI personas for the people I interact with regularly and designed test loops to ensure the responses generated by the AI were accurate and contextually appropriate. While I drew inspiration from CrewAI, I ultimately chose LangChain for most of the complex workflows because its extensive documentation made development easier. For simpler tasks, I used lightweight local AI calls via Ollama.

    This project has been an incredible journey of challenges, learning, and innovation. It is currently in an early alpha stage, requiring significant manual intervention. Full automation will only proceed once I receive explicit legal approval from my employer to ensure compliance with all applicable laws, company policies, and data protection regulations.

    Legal Disclaimer: The implementation of any automation or AI-based system in a workplace must comply with applicable laws, organizational policies, and industry standards. Before deploying such systems, consult with legal counsel, relevant regulatory bodies, and your employer to confirm that all requirements are met. Unauthorized use of automation or AI may result in legal consequences or breach of employment contracts. Always prioritize transparency, data security, and ethical considerations when working with sensitive information.