Tag: self-hosted

  • Effortless Cron Job Monitoring: A Guide to Self-Hosting with Healthchecks.io

    Effortless Cron Job Monitoring: A Guide to Self-Hosting with Healthchecks.io

    Do you ever find yourself lying awake at night, staring at the ceiling, wondering if your beloved cronjobs ran successfully? Worry no more! Today, we’re setting up a free, self-hosted solution to ensure you can sleep like a content little kitten 🐱 from now on.

    I present to you Healthchecks.io. According to their website:

    Simple and Effective Cron Job Monitoring

    We notify you when your nightly backups, weekly reports, cron jobs, and scheduled tasks don’t run on time.

    How to monitor any background job:

    1. On Healthchecks.io, generate a unique ping URL for your background job.
    2. Update your job to send an HTTP request to the ping URL every time the job runs.
    3. When your job does not ping Healthchecks.io on time, Healthchecks.io alerts you!

    Today, we’re taking the super easy, lazy-day approach by using their Docker image. They’ve provided a well-documented, straightforward guide for deploying it right here: Running with Docker.

    What I love most about Healthchecks.io? It’s built on Django, my all-time favorite Python web framework. Sorry, FastAPI—you’ll always be cool, but Django has my heart!

    Prerequisites:

    1. A Server: You’ll need a server to host your shiny new cronjob monitor. A Linux distro is ideal.
    2. Docker & Docker Compose: Make sure these are installed. If you’re not set up yet, here’s the guide.
    3. Bonus Points: Having a domain or subdomain, along with a public IP, makes it accessible for all your systems.

    You can run this on your home network without any hassle, although you might not be able to copy and paste all the code below.

    Need a free cloud server? Check out Oracle’s free tier—it’s a decent option to get started. That said, in my experience, their free servers are quite slow, so I wouldn’t recommend them for anything mission-critical. (Not sponsored, pretty sure they hate me 🥺.)

    Setup

    I’m running a Debian LXC container on my Proxmox setup with the following specs:

    • CPU: 1 core
    • RAM: 1 GB
    • Swap: 1 GB
    • Disk: 10 GB (NVMe SSD)

    After a month of uptime, these are the typical stats: memory usage stays pretty consistent, and the boot disk is mostly taken up by Docker and the image. As for the CPU? It’s usually just sitting there, bored out of its mind.

    First, SSH into your server, and let’s get started by creating a .env file to store all your configuration variables:

    .env
    PUID=1000
    PGID=1000
    APPRISE_ENABLED=True
    TZ=Europe/Berlin
    SITE_ROOT=https://ping.yourdomain.de
    SITE_NAME=Healthchecks
    ALLOWED_HOSTS=ping.yourdomain.de
    CSRF_TRUSTED_ORIGINS=https://ping.yourdomain.de
    DEBUG=False
    SECRET_KEY=your-secret-key

    In your .env file, enter the domain you’ll use to access the service. I typically go with something simple, like “ping” or “cron” as a subdomain. If you want to explore more configuration options, you can check them out here.

    For my setup, this basic configuration does the job perfectly.

    To generate secret keys, I usually rely on the trusty openssl command. Here’s how you can do it:

    Bash
    openssl rand -base64 64
    docker-compose.yml
    services:
      healthchecks:
        image: lscr.io/linuxserver/healthchecks:latest
        container_name: healthchecks
        env_file:
          - .env
        volumes:
          - ./config:/config
        ports:
          - 8083:8000
        restart: unless-stopped

    All you need to do now is run:

    Bash
docker compose up -d

    That’s it—done! 🎉

Oh, and by the way, I’m not using the original image for this. Instead, I went with the Linuxserver.io variant. There is no specific reason for this—just felt like it! 😄

    Important!

    Unlike the Linuxserver.io guide, I skipped setting the superuser credentials in the .env file. Instead, I created the superuser manually with the following command:

    Bash
    docker compose exec healthchecks python /app/healthchecks/manage.py createsuperuser

    This allows you to set up your superuser interactively and securely directly within the container.

    If you’re doing a standalone deployment, you’d typically set up a reverse proxy to handle SSL in front of Healthchecks.io. This way, you avoid dealing with SSL directly in the app. Personally, I use a centralized Nginx Proxy Manager running on a dedicated machine for all my deployments. I’ve even written an article about setting it up with SSL certificates—feel free to check that out!

    Once your site is served through the reverse proxy over the domain you specified in the configuration, you’ll be able to access the front end using the credentials you created with the createsuperuser command.

    There are plenty of guides for setting up reverse proxies, and if you’re exploring alternatives, I’m also a big fan of Caddy—it’s simple, fast, and works like a charm!

    Here is a finished Docker Compose file with Nginx Proxy Manager:

    docker-compose.yml
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        container_name: nginx-proxy-manager
        restart: unless-stopped
        ports:
          - '443:443'
          - '81:81'
        volumes:
          - ./npm/data:/data
          - ./npm/letsencrypt:/etc/letsencrypt
    
      healthchecks:
        image: lscr.io/linuxserver/healthchecks:latest
        container_name: healthchecks
        env_file:
          - .env
        volumes:
          - ./healthchecks/config:/config
        restart: unless-stopped

In Nginx Proxy Manager, your proxied host would be “http://healthchecks:8000”.

If you did not follow my post, you will also need to expose port 80 on the proxy for “regular” Let’s Encrypt certificates without a DNS challenge.

    Healthchecks.io

    If you encounter any errors while trying to access the UI of your newly deployed Healthchecks, the issue is most likely related to the settings in your .env file. Double-check the following to ensure they match your domain configuration:

    .env
    SITE_ROOT=https://ping.yourdomain.de
    ALLOWED_HOSTS=ping.yourdomain.de
    CSRF_TRUSTED_ORIGINS=https://ping.yourdomain.de

    Once you’re in, the first step is to create a new project. After that, let’s set up your first simple check.

    For this example, I’ll create a straightforward uptime monitor for my WordPress host. I’ll set up a cronjob that runs every hour and sends an “alive” ping to my Healthchecks.io instance.

    The grace period is essential to account for high latency. For instance, if my WordPress host is under heavy load, an outgoing request might take a few extra seconds to complete. Setting an appropriate grace period ensures that occasional delays don’t trigger false alerts.

    I also prefer to “ping by UUID”. Keeping these endpoints secret is crucial—if someone else gains access to your unique ping URL, they could send fake pings to your Healthchecks.io instance, causing you to miss real downtimes.

    Click on the Usage Example button in your Healthchecks.io dashboard to find ready-to-use, copy-paste snippets for various languages and tools. For this setup, I’m going with bash:

    Bash
    curl -m 10 --retry 5 https://ping.yourdomain.de/ping/67162f7b-5daa-4a31-8667-abf7c3e604d8
• -m sets the maximum timeout to 10 seconds. You can change the value, but do not leave this out!
• --retry tells curl to retry the request up to 5 times before aborting.

    Here’s how you can integrate it into a crontab:

    Bash
    # A sample crontab entry. Note the curl call appended after the command.
    # FIXME: replace "/your/command.sh" below with the correct command!
    0 * * * * /your/command.sh && curl -fsS -m 10 --retry 5 -o /dev/null https://ping.yourdomain.de/ping/67162f7b-5daa-4a31-8667-abf7c3e604d8
    

    To edit your crontab just run:

    Bash
    crontab -e

    The curl command to Healthchecks.io will only execute if command.sh completes successfully without any errors. This ensures that you’re notified only when the script runs without issues.
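If you also want to be alerted when the script exits with an error (rather than only when it stops pinging entirely), Healthchecks supports reporting an exit status appended to the ping URL—check the docs on your own instance for details. Here is a minimal wrapper sketch using the same example UUID as above:

Bash
#!/bin/sh
# Run the job, then report its exit status to Healthchecks.
# An exit status of 0 is treated as success, anything else as a failure.
/your/command.sh
curl -fsS -m 10 --retry 5 -o /dev/null \
  "https://ping.yourdomain.de/ping/67162f7b-5daa-4a31-8667-abf7c3e604d8/$?"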

Once that crontab entry has run, your dashboard should look like this:

    Advanced Checks

    While this is helpful, you might often need more detailed information, such as whether the job started but didn’t finish or how long the job took to complete.

    Healthchecks.io provides all the necessary documentation built right into the platform. You can visit /docs/measuring_script_run_time/ on your instance to find fully functional examples.

    Bash
    #!/bin/sh
    
    RID=`uuidgen`
    CHECK_ID="67162f7b-5daa-4a31-8667-abf7c3e604d8"
    
    # Send a start ping, specify rid parameter:
    curl -fsS -m 10 --retry 5 "https://ping.yourdomain.de/ping/$CHECK_ID/start?rid=$RID"
    
    # Put your command here
    /usr/bin/python3 /path/to/a_job_to_run.py
    
    # Send the success ping, use the same rid parameter:
    curl -fsS -m 10 --retry 5 "https://ping.yourdomain.de/ping/$CHECK_ID?rid=$RID"

As you can see here, this will give me the execution time as well:

    Here, I used a more complex cron expression. To ensure it works as intended, I typically rely on Crontab.guru for validation. You can use the same cron expression here as in your local crontab. The grace period depends on how long you expect the job to run; in my case, 10 seconds should be sufficient.
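For reference, here’s what a slightly more involved schedule looks like in crontab syntax. This is a made-up example (every six hours, on the hour) with a placeholder path, not the exact expression from my check:

Bash
# minute hour day-of-month month day-of-week  command
# Run the wrapper script every 6 hours, on the hour:
0 */6 * * * /path/to/run_job.sh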

    Notifications

    You probably don’t want to find yourself obsessively refreshing the dashboard at 3 a.m., right? Ideally, you only want to be notified when something important happens.

    Thankfully, Healthchecks.io offers plenty of built-in notification options. And for even more flexibility, we enabled Apprise in the .env file earlier, unlocking a huge range of additional integrations.

    For notifications, I usually go with Discord or Node-RED, since they work great with webhook-based systems.

    While you could use Apprise for Discord notifications, the simplest route is to use the Slack integration. Here’s the fun part: Slack and Discord webhooks are fully compatible, so you can use the Slack integration to send messages directly to your Discord server without any extra configuration!

    This way, you’re only disturbed when something really needs your attention—and it’s super easy to set up.

    Discord already provides an excellent Introduction to Webhooks that walks you through setting them up for your server, so I won’t dive into the details here.

    All you need to do is copy the webhook URL from Discord and paste it into the Slack integration’s URL field in Healthchecks.io. That’s it—done! 🎉
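If you’d like to sanity-check the webhook URL before pasting it in, you can fire a quick test message from the shell. This uses Discord’s native webhook payload; the ID and token below are placeholders:

Bash
curl -fsS -X POST \
  -H "Content-Type: application/json" \
  -d '{"content": "Test ping from my Healthchecks setup"}' \
  "https://discord.com/api/webhooks/<WEBHOOK_ID>/<WEBHOOK_TOKEN>"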

    With this simple setup, you’ll start receiving notifications directly in your Discord server whenever something requires your attention. Easy and effective!

    On the Discord side it will look like this:

    With this setup, you won’t be bombarded with notifications every time your job runs. Instead, you’ll only get notified if the job fails and then again when it’s back up and running.

    I usually prefer creating dedicated channels for these notifications to keep things organized and avoid spamming anyone:

    EDIT:

I ran into some issues with multiple Slack notifications in different projects. If you get 400 errors, just use Apprise. The Discord URL would look like this:

    discord://{WebhookID}/{WebhookToken}/
    
    for example:
    
    discord://13270700000000002/V-p2SweffwwvrwZi_hc793z7cubh3ugi97g387gc8svnh

    Status Badges

    In one of my projects, I explained how I use SVG badges to show my customers whether a service is running.

    Here’s a live badge (hopefully it’s still active when you see this):

    bearbot

    Getting these badges is incredibly easy. Simply go to the “Badges” tab in your Healthchecks.io dashboard and copy the pre-generated HTML to embed the badge on your website. If you’re not a fan of the badge design, you can create your own by writing a custom JavaScript function to fetch the status as JSON and style it however you like.

    Here is a code example:

    HTML
<head>
<style>
        .badge {
            display: inline-block;
            padding: 10px 20px;
            border-radius: 5px;
            color: white;
            font-family: Arial, sans-serif;
            font-size: 16px;
            font-weight: bold;
            text-align: center;
            box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
            transition: background-color 0.3s ease;
        }
        .badge.up {
            background-color: #28a745; /* Green for "up" */
        }
        .badge.down {
            background-color: #dc3545; /* Red for "down" */
        }
        .badge.grace {
            background-color: #ffc107; /* Yellow for "grace" */
        }
    </style>
    </head>
    <body>
    <div id="statusBadge" class="badge">Loading...</div>
    
    <script>
    // Replace these with your own badge endpoint and refresh interval:
    const endpoint = "https://ping.yourdomain.de/badge/XXXX-XXX-4ff6-XXX-XbS-2.json";
    const interval = 60000; // refresh interval in milliseconds

    async function updateBadge() {
            
            try {
                const response = await fetch(endpoint);
                const data = await response.json();
    
                const badge = document.getElementById('statusBadge');
                badge.textContent = `Status: ${data.status.toUpperCase()} (Total: ${data.total}, Down: ${data.down})`;
    
                badge.className = 'badge';
                if (data.status === "up") {
                    badge.classList.add('up');
                } else if (data.status === "down") {
                    badge.classList.add('down');
                } else if (data.status === "grace") {
                    badge.classList.add('grace');
                }
            } catch (error) {
                console.error("Error fetching badge data:", error);
                const badge = document.getElementById('statusBadge');
                badge.textContent = "Error fetching data";
                badge.className = 'badge down';
            }
        }
    
        updateBadge();
        setInterval(updateBadge, interval);
    </script>
    </body>

    The result:

    It might not look great, but the key takeaway is that you can customize the style to fit seamlessly into your design.

    Conclusion

    We’ve covered a lot of ground today, and I hope you now have a fully functional Healthchecks.io setup. No more sleepless nights worrying about whether your cronjobs ran successfully!

    So, rest easy and sleep tight, little kitten 🐱—your cronjobs are in good hands now.

  • Bearbot.dev – The Final Form

    Bearbot.dev – The Final Form

    I wanted to dedicate a special post to the new version of Bearbot. While I’ve already shared its history in a previous post, it really doesn’t capture the full extent of the effort I’ve poured into this update.

    The latest version of Bearbot boasts a streamlined tech stack designed to cut down on tech debt, simplify the overall structure, and laser-focus on its core mission: raking in those sweet stock market gains 🤑.

Technologies

    • Django (Basic, not DRF): A high-level Python web framework that simplifies the development of secure and scalable web applications. It includes built-in tools for routing, templates, and ORM for database interactions. Ideal for full-stack web apps without a heavy focus on APIs.
    • Postgres: A powerful, open-source relational database management system known for its reliability, scalability, and advanced features like full-text search, JSON support, and transactions.
    • Redis: An in-memory data store often used for caching, session storage, and real-time messaging. It provides fast read/write operations, making it perfect for performance optimization.
    • Celery: A distributed task queue system for managing asynchronous tasks and scheduling. It’s commonly paired with Redis or RabbitMQ to handle background jobs like sending emails or processing data.
    • Bootstrap 5: A popular front-end framework for designing responsive, mobile-first web pages. It includes pre-designed components, utilities, and customizable themes.
    • Docker: A containerization platform that enables the packaging of applications and their dependencies into portable containers. It ensures consistent environments across development, testing, and production.
    • Nginx: A high-performance web server and reverse proxy server. It efficiently handles HTTP requests, load balancing, and serving static files for web applications.

    To streamline my deployments, I turned to Django Cookiecutter, and let me tell you—it’s been a game changer. It’s completely transformed how quickly I can get a production-ready Django app up and running.
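If you haven’t tried it yet, bootstrapping a new project is roughly a two-liner; the interactive prompts then let you pick Docker, Celery, Postgres, and friends. A sketch, not my exact setup:

pip install cookiecutter
cookiecutter gh:cookiecutter/cookiecutter-django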

    For periodic tasks, I’ve swapped out traditional Cron jobs in favor of Celery. The big win here? Celery lets me manage all asynchronous jobs directly from the Django Admin interface, making everything so much more efficient and centralized.

Sweet, right?

    Features

    Signals

    At its core, this is Bearbot’s most important feature—it tells you what to trade. To make it user-friendly, I added search and sort functionality on the front end. This is especially handy if you have a long list of signals, and it also improves the mobile experience. Oh, and did I mention? Bearbot is fully responsive by design.

    I won’t dive into how these signals are calculated or the reasoning behind them—saving that for later.

    Available Options

    While you’ll likely spend most of your time on the Signals page, the Options list is there to show what was considered for trading but didn’t make the cut.

    Data Task Handling

    Although most tasks can be handled via the Django backend with scheduled triggers, I created a more fine-tuned control system for data fetching. For example, if fetching stock data fails for just AAPL, re-running the entire task would unnecessarily stress the server and APIs. With this feature, I can target specific data types, timeframes, and stocks.

    User Management

    Bearbot offers complete user management powered by django-allauth, with clean and well-designed forms. It supports login, signup, password reset, profile updates, multifactor authentication, and even magic links for seamless access.

Data Management

    Thanks to Django’s built-in admin interface, managing users, data, and other admin-level tasks is a breeze. It’s fully ready out of the box to handle just about anything you might need as an admin.

    Keeping track of all the Jobs

    When it comes to monitoring cronjobs—whether they’re running in Node-RED, n8n, Celery, or good old-fashioned Cron—Healthchecks.io has always been my go-to solution.

    If you’ve visited the footer of the bearbot.dev website, you might have noticed two neat SVG graphics:

    Those are dynamically loaded from my self-hosted Healthchecks instance, giving a quick visual of job statuses. It’s simple, effective, and seamlessly integrated!

    On my end it looks like this:

I had to remove a bunch of info; otherwise, anyone could send uptime or downtime requests to my Healthchecks.io instance.

    How the Signals work

    Every trading strategy ever created has an ideal scenario—a “perfect world”—where all the stars align, and the strategy delivers its best results.

    Take earnings-based trading as an example. The ideal situation here is when a company’s earnings surprise analysts with outstanding results. This effect is even stronger if the company was struggling before and suddenly outperforms expectations.

Now, you might be thinking, “How could I possibly predict earnings without insider information?” There are plenty of indicators you could look at that point to positive earnings, like:

    • Launching hyped new products that dominate social media conversations.
    • Announcing major partnerships.
    • Posting a surge in job openings that signal strategic growth.

There are plenty of factors that suggest a company is doing really well.

    Let’s say you focus on one or two specific strategies. You spend considerable time researching these strategies and identifying supporting factors. Essentially, you create a “perfect world” for those scenarios.

    Bearbot then uses statistics to calculate how closely a trade aligns with this perfect world. It scans a range of stocks and options, simulating and comparing them against the ideal scenario. Anything scoring above a 96% match gets selected. On average, this yields about 4-5 trades per month, and each trade typically delivers a 60-80% profit.

    Sounds like a dream, right? Well, here’s the catch: it’s not foolproof. There’s no free lunch on Wall Street, and certainly no guaranteed money.

    The remaining 4% of trades that don’t align perfectly? They can result in complete losses—not just the position, but potentially your entire portfolio. Bearbot operates with tight risk tolerance, riding trades until the margin call. I’ve experienced this firsthand on a META trade. The day after opening the position, news of a data breach fine broke, causing the stock to plummet. I got wiped out because I didn’t have enough cash to cover the margin requirements. Ironically, Bearbot’s calculations were right—had I been able to hold through the temporary loss, I would’ve turned a profit. (Needless to say, I’ve since implemented much better risk management. You live, you learn.)

    If someone offers or sells you a foolproof trading strategy, it’s a scam. If their strategy truly worked, they’d keep it secret and wouldn’t share it with you. Certainly not for 100€ in some chat group.

    I’m not sharing Bearbot’s strategy either—and I have no plans to sell or disclose its inner workings. I built Bearbot purely for myself and a few close friends. The website offers no guidance on using the signals or where to trade, and I won’t answer questions about it.

    Bearbot is my personal project—a fun way to explore Django while experimenting with trading strategies 😁.

  • Scraproxy: A High-Performance Web Scraping API

    Scraproxy: A High-Performance Web Scraping API

    After building countless web scrapers over the past 15 years, I decided it was time to create something truly versatile—a tool I could use for all my projects, hosted anywhere I needed it. That’s how Scraproxy was born: a high-performance web scraping API that leverages the power of Playwright and is built with FastAPI.

    Scraproxy streamlines web scraping and automation by enabling browsing automation, content extraction, and advanced tasks like capturing screenshots, recording videos, minimizing HTML, and tracking network requests and responses. It even handles challenges like cookie banners, making it a comprehensive solution for any scraping or automation project.

    Best of all, it’s free and open-source. Get started today and see what it can do for you. 🔥

    👉 https://github.com/StasonJatham/scraproxy

    Features

    • Browse Web Pages: Gather detailed information such as network data, logs, redirects, cookies, and performance metrics.
    • Screenshots: Capture live screenshots or retrieve them from cache, with support for full-page screenshots and thumbnails.
    • Minify HTML: Minimize HTML content by removing unnecessary elements like comments and whitespace.
    • Extract Text: Extract clean, plain text from HTML content.
    • Video Recording: Record a browsing session and retrieve the video as a webm file.
    • Reader Mode: Extract the main readable content and title from an HTML page, similar to “reader mode” in browsers.
    • Markdown Conversion: Convert HTML content into Markdown format.
    • Authentication: Optional Bearer token authentication using API_KEY.

    Technology Stack

    • FastAPI: For building high-performance, modern APIs.
    • Playwright: For automating web browser interactions and scraping.
    • Docker: Containerized for consistent environments and easy deployment.
    • Diskcache: Efficient caching to reduce redundant scraping requests.
    • Pillow: For image processing, optimization, and thumbnail creation.

    Working with Scraproxy

Thanks to FastAPI, it has full API documentation via Redoc.

After deploying it as described on my GitHub page, you can use it like so:

    #!/bin/bash
    
    # Fetch the JSON response from the API
    json_response=$(curl -s "http://127.0.0.1:5001/screenshot?url=https://karl.fail")
    
    # Extract the Base64 string using jq
    base64_image=$(echo "$json_response" | jq -r '.screenshot')
    
    # Decode the Base64 string and save it as an image
    echo "$base64_image" | base64 --decode > screenshot.png
    
    echo "Image saved as screenshot.png"

Make sure jq is installed.
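If it isn’t, it’s a one-line install on most systems (Debian/Ubuntu shown; use your platform’s package manager otherwise):

sudo apt-get install -y jq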

    The API provides images in base64 format, so we use the native base64 command to decode it and save it as a PNG file. If everything went smoothly, you should now have a file named “screenshot.png”.

    Keep in mind, this isn’t a full-page screenshot. For that, you’ll want to use this script:

    #!/bin/bash
    
    # Fetch the JSON response from the API
    json_response=$(curl -s "http://127.0.0.1:5001/screenshot?url=https://karl.fail&full_page=true")
    
    # Extract the Base64 string using jq
    base64_image=$(echo "$json_response" | jq -r '.screenshot')
    
    # Decode the Base64 string and save it as an image
    echo "$base64_image" | base64 --decode > screenshot.png
    
    echo "Image saved as screenshot.png"

    Just add &full_page=true, and voilà! You’ll get a clean, full-page screenshot of the website.

    The best part? You can run this multiple times since the responses are cached, which helps you avoid getting blocked too quickly.

    Conclusion

    I’ll be honest with you—I didn’t go all out on the documentation for this. But don’t worry, the code is thoroughly commented, and you can easily figure things out by taking a look at the app.py file.

    That said, I’ve used this in plenty of my own projects as my go-to tool for fetching web data, and it’s been a lifesaver. Feel free to jump in, contribute, and help make this even better!

  • Sandkiste.io – A Smarter Sandbox for the Web

    Sandkiste.io – A Smarter Sandbox for the Web

    As a principal incident responder, my team and I often face the challenge of analyzing potentially malicious websites quickly and safely. This work is crucial, but it can also be tricky, especially when it risks compromising our test environments. Burning through test VMs every time we need to inspect a suspicious URL is far from efficient.

    There are some great tools out there to handle this, many of which are free and widely used, such as:

    • urlscan.io – A tool for visualizing and understanding web requests.
    • VirusTotal – Renowned for its file and URL scanning capabilities.
    • Joe Sandbox – A powerful tool for detailed malware analysis.
    • Web-Check – Another useful resource for URL scanning.

    While these tools are fantastic for general purposes, I found myself needing something more tailored to my team’s specific needs. We needed a solution that was straightforward, efficient, and customizable—something that fit seamlessly into our workflows.

    So, I decided to create it myself: Sandkiste.io. My goal was to build a smarter, more accessible sandbox for the web that not only matches the functionality of existing tools but offers the simplicity and flexibility we required for our day-to-day incident response tasks with advanced features (and a beautiful UI 🤩!).

    Sandkiste.io is part of a larger vision I’ve been working on through my Exploit.to platform, where I’ve built a collection of security-focused tools designed to make life easier for incident responders, analysts, and cybersecurity enthusiasts. This project wasn’t just a standalone idea—it was branded under the Exploit.to umbrella, aligning with my goal of creating practical and accessible solutions for security challenges.

    The Exploit.to logo

    If you haven’t explored Exploit.to, it’s worth checking out. The website hosts a range of open-source intelligence (OSINT) tools that are not only free but also incredibly handy for tasks like gathering public information, analyzing potential threats, and streamlining security workflows. You can find these tools here: https://exploit.to/tools/osint/.

    Technologies Behind Sandkiste.io: Building a Robust and Scalable Solution

Sandkiste.io has been, and continues to be, an ambitious project that combines a variety of technologies to deliver speed, reliability, and flexibility. Like many big ideas, it started small—initially leveraging RabbitMQ, custom Golang scripts, and chromedp to handle tasks like web analysis. However, as the project evolved and my vision grew clearer, I transitioned to my favorite tech stack, which offers the perfect blend of power and simplicity.

    Here’s the current stack powering Sandkiste.io:

    Django & Django REST Framework

    At the heart of the application is Django, a Python-based web framework known for its scalability, security, and developer-friendly features. Coupled with Django REST Framework (DRF), it provides a solid foundation for building robust APIs, ensuring smooth communication between the backend and frontend.

    Celery

For task management, Celery comes into play. It handles asynchronous and scheduled tasks, ensuring the system can process complex workloads—like analyzing multiple URLs—without slowing down the user experience. It integrates easily into Django, and the developer experience and ecosystem around it are amazing.

    Redis

Redis acts as the message broker for Celery and provides caching support. Its lightning-fast performance ensures tasks are queued and processed efficiently. Redis is, and has been, my go-to, although I did enjoy RabbitMQ a lot.

    PostgreSQL

For the database, I chose PostgreSQL, a reliable and feature-rich relational database system. Its advanced capabilities, like full-text search and JSONB support, make it ideal for handling complex data queries. The full-text search works perfectly with Django; here is a very detailed post about it.

    FastAPI

FastAPI adds speed and flexibility to certain parts of the system, particularly where high-performance APIs are needed. Its modern Python syntax and automatic OpenAPI documentation make it a joy to work with. It is used to decouple the scraper logic, since I wanted that to be a standalone project called “Scraproxy”.

    Playwright

    For web scraping and analysis, Playwright is the backbone. It’s a modern alternative to Selenium, offering cross-browser support and powerful features for interacting with websites in a headless (or visible) manner. This ensures that even complex, JavaScript-heavy sites can be accurately analyzed. The killer feature is how easy it is to capture a video and record network activity, which are basically the two main features needed here.

    React with Tailwind CSS and shadcn/ui

    On the frontend, I use React for building dynamic user interfaces. Paired with TailwindCSS, it enables rapid UI development with a clean, responsive design. shadcn/ui (a component library based on Radix) further enhances the frontend by providing pre-styled, accessible components that align with modern design principles.

    This combination of technologies allows Sandkiste.io to be fast, scalable, and user-friendly, handling everything from backend processing to an intuitive frontend experience. Whether you’re inspecting URLs, performing in-depth analysis, or simply navigating the site, this stack ensures a seamless experience. I also have the most experience with React and Tailwind 😁.

    Features of Sandkiste.io: What It Can Do

    Now that you know the technologies behind Sandkiste.io, let me walk you through what this platform is capable of. Here are the key features that make Sandkiste.io a powerful tool for analyzing and inspecting websites safely and effectively:

    Certificate Lookups

One of the fundamental features is the ability to perform certificate lookups. This lets you quickly fetch and review SSL/TLS certificates for a given domain. It’s an essential tool for verifying the authenticity of websites, identifying misconfigurations, or detecting expired or suspicious certificates. We use it a lot to find possibly generated subdomains and to get a better picture of the adversary’s infrastructure; it helps with recon in general. I get the info from crt.sh, which exposes a public SQL database for these lookups.
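If you want to poke at the same data source yourself, crt.sh can be queried directly, either over HTTPS with JSON output or through its public read-only PostgreSQL interface. A quick sketch (example.com is a placeholder):

# JSON output over HTTPS (%25 is a URL-encoded "%" wildcard):
curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u

# Or connect to the public read-only PostgreSQL interface:
psql -h crt.sh -p 5432 -U guest certwatch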

    DNS Records

    Another key feature of Sandkiste.io is the ability to perform DNS records lookups. By analyzing a domain’s DNS records, you can uncover valuable insights about the infrastructure behind it, which can often reveal patterns or tools used by adversaries.

    DNS records provide critical information about how a domain is set up and where it points. For cybersecurity professionals, this can offer clues about:

    • Hosting Services: Identifying the hosting provider or server locations used by the adversary.
    • Mail Servers: Spotting potentially malicious email setups through MX (Mail Exchange) records.
    • Subdomains: Finding hidden or exposed subdomains that may indicate a larger infrastructure or staging areas.
    • IP Addresses: Tracing A and AAAA records to uncover the IP addresses linked to a domain, which can sometimes reveal clusters of malicious activity.
    • DNS Security Practices: Observing whether DNSSEC is implemented, which might highlight the sophistication (or lack thereof) of the adversary’s setup.

    By checking DNS records, you not only gain insights into the domain itself but also start piecing together the tools and services the adversary relies on. This can be invaluable for identifying common patterns in malicious campaigns or for spotting weak points in their setup that you can exploit to mitigate threats.
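A quick way to get a first impression from the shell is to pull the most common record types with dig (example.com is a placeholder; Sandkiste does this programmatically, but the idea is the same):

for type in A AAAA MX NS TXT SOA; do
  echo "== $type =="
  dig +short example.com "$type"
done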

    HTTP Requests and Responses Analysis

    One of the core features of Sandkiste.io is the ability to analyze HTTP requests and responses. This functionality is a critical part of the platform, as it allows you to dive deep into what’s happening behind the scenes when a webpage is loaded. It reveals the files, scripts, and external resources that the website requests—many of which users never notice.

    When you visit a webpage, the browser makes numerous background requests to load additional resources like:

    • JavaScript files
    • CSS stylesheets
    • Images
    • APIs
    • Third-party scripts or trackers

These requests often tell a hidden story about the behavior of the website. Sandkiste captures and logs every request: every HTTP request made by the website is logged, along with its corresponding response. (Yep, we store the raw data as well.) For security professionals, monitoring and understanding these requests is essential because:

    • Malicious Payloads: Background scripts may contain harmful code or trigger the download of malware.
    • Unauthorized Data Exfiltration: The site might be sending user data to untrusted or unexpected endpoints.
    • Suspicious Third-Party Connections: You can spot connections to suspicious domains, which might indicate phishing attempts, tracking, or other malicious activities.
    • Alerts for Security Teams: Many alerts in security monitoring tools stem from these unnoticed, automatic requests that trigger red flags.

    Security Blocklist Check

    The Security Blocklist Check is another standout feature of Sandkiste.io, inspired by the great work at web-check.xyz. The concept revolves around leveraging malware-blocking DNS servers to verify if a domain is blacklisted. But I took it a step further to make it even more powerful and insightful.

Instead of simply checking whether a domain is blocked, Sandkiste.io enhances the process by using a self-hosted AdGuard DNS server. This server doesn’t just flag blocked domains—it captures detailed logs to provide deeper insights. By capturing logs from the DNS server, Sandkiste.io doesn’t just say “this domain is blacklisted.” It identifies why it’s flagged and where the block originated, which enables me to assign categories to the domains. The overall score tells you very quickly whether the page is safe or not.
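Conceptually, the check boils down to asking the filtering resolver and a regular resolver the same question and comparing the answers. A minimal sketch, assuming the AdGuard Home instance listens on 10.0.0.2 (a placeholder) and rewrites blocked domains to 0.0.0.0:

DOMAIN="suspicious-domain.example"

# Ask the filtering resolver (AdGuard Home) and a public resolver:
blocked=$(dig +short "$DOMAIN" @10.0.0.2)
normal=$(dig +short "$DOMAIN" @1.1.1.1)

# AdGuard typically answers 0.0.0.0 (or nothing) for blacklisted domains.
if [ "$blocked" = "0.0.0.0" ] || { [ -z "$blocked" ] && [ -n "$normal" ]; }; then
  echo "$DOMAIN looks blocklisted"
else
  echo "$DOMAIN is not blocked"
fi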

    Video of the Session

    One of the most practical features of Sandkiste.io is the ability to create a video recording of the session. This feature was the primary reason I built the platform—because a single screenshot often falls short of telling the full story. With a video, you gain a complete, dynamic view of what happens during a browsing session.

    Static screenshots capture a single moment in time, but they don’t show the sequence of events that can provide critical insights, such as:

    • Pop-ups and Redirects: Videos reveal if and when pop-ups appear or redirects occur, helping analysts trace how users might be funneled into malicious websites or phishing pages.
    • Timing of Requests: Understanding when specific requests are triggered can pinpoint what actions caused them, such as loading an iframe, clicking a link, or executing a script.
    • Visualized Responses: By seeing the full process—what loads, how it behaves, and the result—you get a better grasp of the website’s functionality and intent.
    • Recreating the User Journey: Videos enable you to recreate the experience of a user who might have interacted with the target website, helping you diagnose what happened step by step.

    A video provides a much clearer picture of the target website’s behavior than static tools alone.

    How Sandkiste.io Works: From Start to Insight

    Using Sandkiste.io is designed to be intuitive and efficient, guiding you through the analysis process step by step while delivering detailed, actionable insights.

    You kick things off by simply starting a scan. Once initiated, you’re directed to a loading page, where you can see which tasks (or “workers”) are still running in the background.

    This page keeps you informed without overwhelming you with unnecessary technical details.

    The Results Page

    Once the scan is complete, you’re automatically redirected to the results page, where the real analysis begins. Let’s break down what you’ll see here:

    Video Playback

    At the top, you’ll find a video recording of the session, showing everything that happened after the target webpage was loaded. This includes:

    • Pop-ups and redirects.
    • The sequence of loaded resources (scripts, images, etc.).
    • Any suspicious behavior, such as unexpected downloads or external connections.

    This video gives you a visual recap of the session, making it easier to understand how the website behaves and identify potential threats.

    Detected Technologies

    Below the video, you’ll see a section listing the technologies detected. These are inferred from response headers and other site metadata, and they can include:

    • Web frameworks (e.g., Django, WordPress).
    • Server information (e.g., Nginx, Apache).

    This data is invaluable for understanding the website’s infrastructure and spotting patterns that could hint at malicious setups.

    Statistics Panel

    On the right side of the results page, there’s a statistics panel with several semi-technical but insightful metrics. Here’s what you can learn:

    • Size Percentile:
      • Indicates how the size of the page compares to other pages.
      • Why it matters: Unusually large pages can be suspicious, as they might contain obfuscated code or hidden malware.
    • Number of Responses:
      • Shows how many requests and responses were exchanged with the server.
      • Why it matters: A high number of responses could indicate excessive tracking, unnecessary redirects, or hidden third-party connections.
    • Duration to “Network Idle”:
      • Measures how long it took for the page to fully load and stop making network requests.
      • Why it matters: Some pages continue running scripts in the background even after appearing fully loaded, which can signal malicious or resource-intensive behavior.
    • Redirect Chain Analysis:
      • A list of all redirects encountered during the session.
      • Why it matters: A long chain of redirects is a common tactic in phishing, ad fraud, or malware distribution campaigns.

    By combining these insights—visual evidence from the video, infrastructure details from detected technologies, and behavioral stats from the metrics—you get a comprehensive view of the website’s behavior. This layered approach helps security analysts identify potential threats with greater accuracy and confidence.

    At the top of the page, you’ll see the starting URL and the final URL you were redirected to.

    • “Public” means that others can view the scan.
    • The German flag indicates that the page is hosted in Germany.
    • The IP address shows the final server we landed on.

    The party emoji signifies that the page is safe; if it weren’t, you’d see a red skull (spooky!). Earlier, I explained the criteria for flagging a page as good or bad.

    On the “Responses” page I mentioned earlier, you can take a closer look at them. Here, you can see exactly where the redirects are coming from and going to. I’ve added a red shield icon to clearly indicate when HTTP is used instead of HTTPS.

    As an analyst, it’s pretty common to review potentially malicious scripts. Clicking on one of the results will display the raw response safely. In the image below, I clicked on that long JavaScript URL (normally a risky move, but in Sandkiste, every link is completely safe!).

    Conclusion

    And that’s the story of Sandkiste.io, a project I built over the course of a month in my spare time. While the concept itself was exciting, the execution came with its own set of challenges. For me, the toughest part was achieving a real-time feel for the user experience while ensuring the asynchronous jobs running in the background were seamlessly synced back together. It required a deep dive into task coordination and real-time updates, but it taught me lessons that I now use with ease.

    Currently, Sandkiste.io is still in beta and runs locally within our company’s network. It’s used internally by my team to streamline our work and enhance our incident response capabilities. Though it’s not yet available to the public, it has already proven its value in simplifying complex tasks and delivering insights that traditional tools couldn’t match.

    Future Possibilities

    While it’s an internal tool for now, I can’t help but imagine where this could go.

    For now, Sandkiste.io remains a testament to what can be built with focus, creativity, and a drive to solve real-world problems. Whether it ever goes public or not, this project has been a milestone in my journey, and I’m proud of what it has already achieved. Who knows—maybe the best is yet to come!

  • Visualizing Home Assistant Logs in Grafana with InfluxDB: A Practical Guide Using FRITZ!Box Data

    Visualizing Home Assistant Logs in Grafana with InfluxDB: A Practical Guide Using FRITZ!Box Data

    Let’s face it—this is a pretty specific use case. But if you’ve ever had your internet throttled, you’ll understand why I’m doing this. I wanted a way to store my router connectivity data for up to a year to have solid proof (and maybe even get some money back from my ISP). Here’s what my setup looks like:

    • Log Server: Running Grafana, Loki, Promtail, rsyslog, and InfluxDB.
    • Home Assistant: I run the OS version. Judge me if you must—yes, the Docker version is way more lightweight, but I like the simplicity of the OS version.
    • FRITZ!Box: My modem, with a Dream Machine handling the rest of my network behind it.

    For those curious about Home Assistant on Proxmox, the easiest method is using the Proxmox VE Helper Scripts. There’s also a detailed blog post I found about other installation methods if you’re exploring your options.

    A more detailed look on my setup

    Proxmox

    Proxmox Virtual Environment (VE) is the backbone of my setup. It’s a powerful, open-source virtualization platform that allows you to run virtual machines and containers efficiently. I use it to host Home Assistant, my logging stack, and other services, all on a single physical server. Proxmox makes resource allocation simple and offers great features like snapshots, backups, and an intuitive web interface. It’s perfect for consolidating multiple workloads while keeping everything isolated and manageable.

    FRITZ!Box

    The FRITZ!Box is one of the most popular home routers in Germany, developed by AVM Computersysteme Vertriebs GmbH. It’s known for its reliability and user-friendly features. I use it as my primary modem, and I’ve configured it to forward logs about internet connectivity and other metrics to my logging stack. If you’re curious about their lineup, check out their products here.

    Home Assistant

    Home Assistant is my go-to for managing smart home devices, and I run the OS version (yes, even though the Docker version is more lightweight). It’s incredibly powerful and integrates with almost any device. I use it to collect data from the FRITZ!Box and send it to my logging setup. If you’re using Proxmox, installing Home Assistant is a breeze with the Proxmox VE Helper Scripts.

    The Logserver

    I run all of these services on a Debian LXC inside of my Proxmox. I assigned the following resources to it:

    • RAM: 2GB
    • SWAP: 2GB
    • Cores: 2
• Disk: 100GB (NVMe SSD)

As I later realized, 100GB is overkill: for 30 days of data I need about 5GB of storage. My log retention policy is currently set to 30 days, but InfluxDB retention is configured per bucket, so I need to keep an eye on that separately.

I still have a lot of duplicate logs and more or less useless system logs I never look at, so I can probably improve this a lot.

    Grafana

    Grafana is, in my opinion, one of the best free tools for visualizing logs and metrics. It allows you to create beautiful, customizable dashboards that make it easy to monitor your data at a glance. Plus, it integrates seamlessly with Loki, InfluxDB, and many other tools.

    Loki

    Think of Loki as a “database for logs.” It doesn’t require complex indexing like traditional logging systems, which makes it lightweight and efficient. Once your logs are sent to Loki, you can easily search, filter, and analyze them through Grafana.

    Promtail

Promtail is an agent that collects logs from your local system and sends them to Loki. For example, you can point it to your /var/log/ directory, set up rules to pick specific logs (like system or router logs), and Promtail will forward those logs to your Loki instance. It’s simple to configure and keeps everything organized.

    rsyslog

    This is a flexible logging system that can forward or store logs. In my setup, it collects logs from devices like routers and firewalls—especially those where you can’t easily install an agent or service—and makes those logs available for Promtail to pick up and send to Loki.

    InfluxDB

    InfluxDB is one of the most popular time-series databases, perfect for storing numerical data over time, like network speeds or uptime metrics. I use it alongside Grafana to visualize long-term trends in my router’s performance.

    Metrics vs. Logs: What’s the Difference?

    Metrics track numerical trends over time (e.g., CPU usage, internet speed), while logs provide detailed event records (e.g., an error message when your router loses connection). Both are incredibly useful for troubleshooting and monitoring, especially when used together.

    In this post, I’ll show you how I’ve tied all these tools together to monitor my internet connectivity and keep my ISP accountable. Let’s get started!

    Setting up Home Assistant with InfluxDB

    In Home Assistant, I have a dashboard that shows the internet speed my devices are getting within the network, along with the speeds my FRITZ!Box is receiving from my ISP. Don’t worry about the big difference in download speeds—I’m currently syncing a bunch of backups, which is pulling a lot of data.

    Home Assistant keeps data from the FRITZ!Box for only 10 days, which isn’t enough to prove to my ISP that they’re throttling my connection. A technician came by today, which is why my download speeds are back to normal. However, as you can see here, they had me on a slower speed before that.

    In Home Assistant, you can adjust data retention with the Recorder, but this applies to all sensors, which was a bit annoying in my case since I only wanted to keep data for specific entities for a year. Since I already use Grafana for other visualizations and have InfluxDB running, I decided to take that route instead.

    Home Assistant conveniently includes a built-in integration to export metrics directly to InfluxDB, making the setup straightforward.

    In InfluxDB, I created a new bucket specifically for this data—who knows, I might add more Home Assistant data there someday! I’ve set it to store data for two years, but if I ever run out of space, I can always adjust it. 😁

    Next, I created a new API token for the bucket. I opted for both read and write permissions, just in case I ever want to pull data from InfluxDB back into Home Assistant.
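For reference, the same two steps can also be done with the influx CLI instead of the UI. Treat this as a sketch: the bucket name and two-year retention match what I described above, and the bucket ID is a placeholder you look up first.

# Create a bucket with a two-year retention period:
influx bucket create --name home-assistant --retention 730d

# Look up the bucket ID, then create a token scoped to it:
influx bucket list
influx auth create --read-bucket <BUCKET_ID> --write-bucket <BUCKET_ID>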

In the Home Assistant file editor, you simply have to edit your configuration.yaml:

    configuration.yaml
    influxdb:
      api_version: 2
      ssl: true
      host: influx.home.karl.fail
      port: 443
      token: YOUR_INFLUX_TOKEN
      organization: XXXXXXa01d9de0a8
      bucket: home-assistant
      include:
        entity_globs:
          - sensor.fritz_box_7510*
          - sensor.speedtest_*

    You can find the organization ID for your InfluxDB organization by clicking the user icon in the top left and selecting “About” at the bottom of the page. That’s where the ID is listed. As you can see, I’m using port 443 because my setup uses HTTPS and is behind a reverse proxy. If you’re interested in setting up HTTPS with a reverse proxy, check out my post How to Get Real Trusted SSL Certificates with ACME-DNS in Nginx Proxy Manager.

    Once everything is configured, restart Home Assistant. Go to the Data Explorer tab in your InfluxDB UI to verify that data is flowing into your bucket.
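If you prefer the terminal over the UI, you can also hit the InfluxDB v2 query API directly to confirm that points are arriving. The host matches the configuration above; the org and token are placeholders for your own values:

curl -s "https://influx.home.karl.fail/api/v2/query?org=YOUR_ORG" \
  -H "Authorization: Token YOUR_INFLUX_TOKEN" \
  -H "Content-Type: application/vnd.flux" \
  -H "Accept: application/csv" \
  -d 'from(bucket: "home-assistant") |> range(start: -15m) |> limit(n: 5)'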

    The Grafana Dashboard

    Alright, please don’t judge my dashboard too harshly! I’m still learning the ropes here. I usually rely on prebuilt ones, but this is my first attempt at creating one from scratch to help me learn.

    You’ll need to check the Explore tab in Grafana to find your specific entities, but here are the queries I used for reference:

    from(bucket: "home-assistant")
      |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
      |> filter(fn: (r) => r["entity_id"] == "fritz_box_7510_link_download_durchsatz" and r["_field"] == "value")
      |> map(fn: (r) => ({ r with _value: float(v: r._value) / 1000.0 }))

    The filter for the entity ID comes from Home Assistant. You can easily find it on your dashboard by double-clicking (or double-tapping) the widget and checking its settings.

You do the same for the upload speed:

    Keep in mind that the upload speed is only measured every few hours by your FRITZ!Box.

    from(bucket: "home-assistant")
      |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
      |> filter(fn: (r) => r["entity_id"] == "fritz_box_7510_link_upload_durchsatz" and r["_field"] == "value")
      |> map(fn: (r) => ({ r with _value: float(v: r._value) / 1000.0 }))

    For measuring data within the network, I’m using the Speedtest.net integration in Home Assistant.

    from(bucket: "home-assistant")
      |> range(start: v.timeRangeStart, stop: v.timeRangeStop)
      |> filter(fn: (r) => r["entity_id"] == "speedtest_download" and r["_field"] == "value")

    The query for this is quite similar, as you can see.

    Now, here’s the tricky part: extracting your public IP from the FRITZ!Box metrics. Out of the box, the metrics sent to InfluxDB seem to be messed up—maybe I did something wrong (feel free to comment and let me know 😁). To handle this, I wrote a filter that checks if an IP is present. I kept running into errors, so I ended up casting everything to a string before applying the check. Since my IP doesn’t change often (about once a week), I use a range of -30 days for the query:

    import "regexp"
    
    from(bucket: "home-assistant")
      |> range(start: -30d)
      |> filter(fn: (r) => r["entity_id"] == "fritz_box_7510_externe_ip")
      |> toString()
      |> filter(fn: (r) => regexp.matchRegexpString(r: /^([0-9]{1,3}\.){3}[0-9]{1,3}$/, v: r._value))
      |> map(fn: (r) => ({ IP: r._value, Date: r._time }))

    Now, you’ll get a neat little table showing the changes to your public IP (don’t worry, I’ve changed my public IP for obvious reasons). It’s a simple way to keep track of when those changes happen!

    I’m planning to write a longer post about how I set up my logging server and connected all these pieces together. But for now, I just wanted to share what I worked on tonight and how I can now hold my ISP accountable if I’m not getting what I paid for—or, as is often the case, confirm if it’s actually my fault 😅.

  • Changing the Server response header in Nginx Proxy Manager

    Changing the Server response header in Nginx Proxy Manager

    This is going to be a very short post.

If you deployed Nginx Proxy Manager via Docker in your home directory, you can edit this file with:

    nano ~/data/nginx/custom/http.conf

    All you need to do is add the following at the top:

    http.conf
    more_set_headers 'Server: CuteKitten';

    Then, restart your Nginx Proxy Manager. If you’re using Docker, like I am, a simple docker compose restart will do the trick.

    With this, the custom Server header will be applied to every request, including those to the Nginx Proxy Manager UI itself. If you check the response headers of this website, you’ll see the header I set—proof of how easy and effective this customization can be!
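To confirm the header actually changed, a quick HEAD request against any site behind the proxy does the trick (the URL is a placeholder):

curl -sI https://your-site.example | grep -i '^server:'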


    Understanding more_set_headers vs add_header

    When working with Nginx Proxy Manager, you may encounter two ways to handle HTTP headers:

    • add_header
    • more_set_headers

    What is add_header?

    add_header is a built-in Nginx directive that allows you to add new headers to your HTTP responses. It’s great for straightforward use cases where you just want to include additional information in your response headers.

    What is more_set_headers?

    more_set_headers is part of the “headers_more” module, an extension not included in standard Nginx but available out of the box with Nginx Proxy Manager (since it uses OpenResty). This directive gives you much more flexibility:

• It can add, overwrite, or remove headers entirely.
    • It works seamlessly with Nginx Proxy Manager, so there’s no need to install anything extra.

    For more technical details, you can check out the official headers_more documentation.

    When to Use add_header or more_set_headers

    Here’s a quick guide to help you decide:

    Use add_header if:

    • You are just adding new headers to responses.
    • You don’t need to modify or remove existing headers.

    Example:

    add_header X-Frame-Options SAMEORIGIN;

    Use more_set_headers if:

    • You need to replace or remove existing headers, such as Server or X-Powered-By.
    • You want headers to apply to all responses, including error responses (e.g., 404, 500).

    Example:

    # Replace the default Nginx Server header
    more_set_headers "Server: MyCustomServer";

    Why Use more_set_headers?

    The key advantage of more_set_headers is that it provides full control over your headers. For example:

• If you want to customize the Server header, add_header won’t work because the Server header is already set internally by Nginx; you would have to remove it first.
    • more_set_headers can replace the Server header or even remove it entirely, which is particularly useful for security or branding purposes.

    Since Nginx Proxy Manager includes the headers_more module by default, using more_set_headers is effortless and highly recommended for advanced header management.

    A Note on Security

    Many believe that masking or modifying the Server header improves security by hiding the server software you’re using. The idea is that attackers who can’t easily identify your web server (e.g., Nginx, Apache, OpenResty) or its version won’t know which exploits to try.

    While this may sound logical, it’s not a foolproof defense:

    • Why It May Be True: Obscuring server details could deter opportunistic attackers who rely on automated tools that scan for specific server types or versions.
    • Why It May Be False: Determined attackers can often gather enough information from other headers, server behavior, or fingerprinting techniques to deduce what you’re running, regardless of the Server header.

    Ultimately, changing the Server header should be seen as one small layer in a broader security strategy, not as a standalone solution. Real security comes from keeping your software updated, implementing proper access controls, and configuring firewalls—not just masking headers.

  • Gottor – Exploring the Depths of the Dark Web

    Gottor – Exploring the Depths of the Dark Web


    Welcome to the realm of the hidden, where shadows dance and whispers echo through the digital corridors. Enter Gottor, a testament to curiosity, innovation, and a touch of madness. In this blog post, we embark on a journey through the creation of Gottor, a bespoke dark web search engine that defies convention and pushes the boundaries of exploration.

    Genesis of an Idea

    The genesis of Gottor traces back to a spark of inspiration shared between friends, fueled by a desire to unveil the secrets lurking within the depths of the dark web. Drawing parallels to Shodan, but with a cloak of obscurity, we set out on a quest to build our own gateway to the clandestine corners of the internet.

    Forging Custom Solutions

    Determined to forge our path, we eschewed conventional wisdom and opted for custom solutions. Rejecting standard databases, we crafted our own using the robust framework of BleveSearch, laying the foundation for a truly unique experience. With a simple Tor proxy guiding our way, we delved deeper, fueled by an insatiable thirst for performance.

    However, our zeal for efficiency proved to be a double-edged sword, as our relentless pursuit often led to blacklisting. Undeterred, we embraced the challenge, refining our approach through meticulous processing and data extraction. Yet, the onslaught of onion sites proved overwhelming, prompting a shift towards the versatile embrace of Scrapy.

    The Turning Point

    Amidst the trials and tribulations, a revelation emerged – the adoption of Ahmia’s Tor proxy logic with Polipo. Through the ingenious utilization of multiple Tor entry nodes and a strategic round-robin approach, we achieved equilibrium, evading the ire of blacklisting and forging ahead with renewed vigor.

    The Ethical Conundrum

    As our creation took shape, we faced an ethical conundrum that cast a shadow over our endeavors. Consulting with legal counsel, we grappled with the implications of anonymity and the responsibility inherent in our pursuit. Ultimately, discretion prevailed, and Gottor remained veiled, a testament to the delicate balance between exploration and accountability.

    Unveiling the Web of Intrigue

    In our quest for knowledge, we unearthed a web of intrigue, interconnected and teeming with hidden services. By casting our digital net wide, we traversed the labyrinthine pathways, guided by popular indexers and a relentless spirit of inquiry. What emerged was a tapestry of discovery, illuminating the clandestine landscape with each query and click.

    Lessons Learned

    Through the crucible of creation, we gained a newfound appreciation for the intricacies of search engines. While acquiring and storing data proved relatively straightforward, the true challenge lay in making it accessible, particularly amidst the myriad complexities of multilingual content. Yet, amidst the obstacles, we discovered the essence of exploration – a journey defined by perseverance, innovation, and the relentless pursuit of knowledge.

    In conclusion, Gottor stands as a testament to the boundless curiosity that drives us to explore the uncharted territories of the digital realm. Though shrouded in secrecy, its legacy endures, an embodiment of the relentless pursuit of understanding in an ever-evolving landscape of discovery.

    Explore. Discover. Gottor.

    Although we have not talked in years, a shoutout to my good friend Milan, who helped make this project possible.

  • Bearbot aka. Stonkmarket – Advanced Trading Probability Calculations

    Bearbot aka. Stonkmarket – Advanced Trading Probability Calculations

    Bearbot: Begins

    This project spans over 3 years with countless iterations and weeks of coding. I wrote almost 500,000 lines of code in Python and Javascript. The essential idea was to use a shorting strategy on put/call options to generate income based on time decay – also called “Theta”. You kind of have to know a little bit about how stock options work to understand this next part, but here I go. About 80% of all options expire worthless; statistically, you have a way higher chance shorting options and making a profit than going long. The further the strike price is from the price of the underlying stock, the higher the probability of the option expiring worthless. Technically, you always make 100%.

    Now back to reality, where things aren’t perfect.

    A lot can happen in the market. If you put your life savings into a seemingly safe trade (with Stonkmarket actually 95%+ safe, more on that later), the next Meta scandal gets published and the trade goes against you. When shorting, you can actually lose more than 100% (since stocks can virtually rise to infinity), but you can only gain 100%. It sounds like a bad deal, but again, you collect that 100% profit far more often than you take the theoretically unlimited loss, although the chance of the latter is never zero. The trick is to use a stop loss, but risk management is obviously part of the job when you trade for a living.

    DoD Scraper

    It actually started as a simple idea: scrape U.S. Department of Defense contracts to predict the earnings of large weapons companies and make large option trades based on that.

    Spoiler: it is not that easy. You can get lucky if the company is small enough and publicly traded, so that a contract actually makes a huge impact on its financials. A company like Lockheed Martin is not really predictable by DoD contracts alone.

    Back then I called it Wallabe and this was the logo:

    More data

    I didn’t know the idea was bad yet, so I soon started to scrape news. I collected over 100 RSS feeds, archiving them, analyzing them, and making them accessible via a REST API. Let me tell you, I got a lot of news. Some people sell this data, but honestly, it was worth a lot more to me sitting in my database and enriching my decision-making algo.

    I guess it goes without saying that I was also collecting stock data including fundamental and earnings information; that is kind of the base for everything. Do you know how many useful metrics you can calculate with the data listed above alone? A lot. Some of them tell you the same stuff, true, but nonetheless, you can generate a lot.

    Bearbot is born

    Only works on dark mode, sorry.

    As I was writing this into a PWA (a website that acts like a mobile app), I was reading up on how to use option Greeks to calculate stock prices when I realized that it is a lot easier to predict and set up options trades instead. That is when I started gathering a whole bunch of options data and calculating my own Greeks, which led me to explore theta and the likelihood of options expiring worthless.

    From that point on, I invested all the time asking the following question: What is the perfect environment for shorting Option XYZ?

    The answer is generally quite simple:

    • little volatility
    • not a lot of news
    • going with the general trend
    • Delta “probability” is in your favor

    You can really expand on these four points, and there are many ways to calculate and measure them, but that’s basically it. If you know a thing about trading, ask yourself this: what does the perfect time for a trade look like? Try to quantify it. Should the stock have just gone down 20%, then released good news, and what else? Now, what if you had a program to spot exactly these moments all over the stock market and alert you to those amazing chances to pretty much print money? That’s Bearbot.

    Success

    At the beginning, I actually planned on selling this and offering it as a service like all those other signal trading services or whatever, but honestly, why the hell would I give you something that sounds (and was) so amazing that I could practically print money? There is no value in that for me. Nobody is going to sell or offer you a trading strategy that works, unless it only works because a lot of people know about it and act on it, like candlestick patterns.

    Failure

    The day I started making a lot more money than with my regular job was the day I lost my mind. I stopped listening to Bearbot. I know it sounds kind of silly talking about it, but I thought none of my trades could go wrong and I didn’t need it anymore; I had developed some sort of gift for trading. I took on too much risk and took a loss so big that saying the number out loud would send shivers down your spine, unless you’re rich. Bearbot was right about the trade, but because I didn’t have enough to cover margin, I got a margin call and my position was closed. I got caught by a whipsaw (it was a Meta option and they had just announced a huge data leak).

    “Learning” from my mistakes

    In the real world, not everything has an API. I spent weeks reverse engineering the platform I was trading on to automate the bot entirely, in hopes of removing the error-prone human element. The problem was that they did not want bot traders, so they did a lot to counter it, and every two weeks I had to adjust the bot or get hit with captchas or blocked outright. Eventually, it became so unstable that I could not use it for live trading anymore, since I could not trust it to close positions on time.

    I was very sad at that point. Discouraged, I discontinued Bearbot for about a year.

    Stonkmarket: Rising

    Some time went on until a friend of mine encouraged me to try again.

    I decided to take a new approach. New architecture, new style, new name. I wanted to leave all the negative stuff behind. It had 4 parts:

    • central API with a large database (Django Rest Framework)
    • local scrapers (Python, Selenium)
    • static React-based front end
    • “Stonk, the Bot” a Discord bot for signals and monitoring

    I extended the UI a lot to show all the data I had gathered, but sadly I shut down the entire backend before writing this, so I cannot show it populated with data.

    I spent about 3 months refactoring the old code, getting it to work, and putting it into a new structure. I then started trading again, and everything was well.

    Final Straw

    I was going through a very stressful phase, and on top of it, one scraper after the other was failing. It got to a point where I had to continuously change logic around and fix my data kraken, and I could not maintain it alone anymore. I also couldn’t let anyone in on it, as I did not want to publish my code and open things up to reveal my secrets. It was at this point that I decided to shut it down for good. I poured three years, countless hours, lines of code, research, and even tears into this project. I learned more from it than from any other project ever.

    Summary

    On this project I really mastered Django and Django REST Framework, and I experimented a lot with PWAs and React, pushing my web development knowledge further. On top of that, I was making good money trading, but the stress and greed eventually got to me, until I decided I was better off shutting it down. If you read this and know me, you probably know all about Bearbot and Stonkmarket, and maybe I have even given you a trade or two that you profited off of. Maybe one day I will open-source the code; that will be the day I am truly finished with Stonkmarket. For now, a small part of me still thinks it is possible to redo it.

    Edit: We are baaaaaaaccckkk!!!🎉

    https://www.bearbot.dev

    Better than ever, cooler than ever! You cannot beat this bear.


  • Securing Your Debian Server

    Securing Your Debian Server

    Hey there, server samurais and cyber sentinels! Ready to transform your Debian server into an impregnable fortress? Whether you’re a seasoned sysadmin or a newbie just dipping your toes into the world of server security, this guide is your one-stop shop for all things safety on the wild, wild web. Buckle up, because we’re about to embark on a journey full of scripts, tips, and jokes to keep things light and fun. There are many good guides on this online; I decided to add another one with the things I usually do. Let’s dive in!

    Initial Setup: The First Line of Defense

    Imagine setting up your server like moving into a new house. You wouldn’t leave the door wide open, right? The same logic applies here.

    Update Your System

    Outdated software is like a welcome mat for hackers. Run the following commands to get everything current:

    Bash
    sudo apt update && sudo apt upgrade -y

    Create a New User

    Root users are like the king of the castle. Let’s create a new user with sudo privileges:

    Bash
    sudo adduser yourusername
    sudo usermod -aG sudo yourusername

    Now, switch to your newly crowned user:

    Bash
    su - yourusername
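
    To confirm the new account actually has sudo rights before you go any further, this should ask for your password and print root:

    Bash
    sudo whoami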

    Securing SSH: Locking Down Your Castle Gates

    SSH (Secure Shell) is the key to your castle gates. Leaving it unprotected is like leaving the keys under the doormat.

    Disable Root Login

    Edit the SSH configuration file:

    Bash
    sudo nano /etc/ssh/sshd_config

    Change PermitRootLogin to no:

    Bash
    PermitRootLogin no

    Change the Default SSH Port

    Edit the SSH configuration file:

    Bash
    sudo nano /etc/ssh/sshd_config

    Change the port to a number between 1024 and 65535 (e.g., 2222):

    Bash
    Port 2222

    Restart the SSH service:

    Bash
    sudo systemctl restart ssh

    There is actually some controversy around security through obscurity; in my long tenure as an analyst and incident responder, I believe that cutting down on automated “easy” attacks does improve security.

    Set Up SSH Keys

    Generate a key pair using elliptic curve cryptography:

    Bash
    ssh-keygen -t ed25519 -C "[email protected]"

    Copy the public key to your server:

    Bash
    ssh-copy-id yourusername@yourserver -p 2222
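
    Before turning off password authentication in the next step, it is worth opening a second terminal and confirming that key-based login actually works:

    Bash
    # Should log you in without asking for the account password
    ssh -p 2222 yourusername@yourserver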

    Disable password authentication:

    Bash
    sudo nano /etc/ssh/sshd_config

    Change PasswordAuthentication to no:

    Bash
    PasswordAuthentication no

    Restart SSH:

    Bash
    sudo systemctl restart ssh

    For more details, refer to the sshd_config man page.

    Firewall Configuration: Building the Great Wall

    A firewall is like the Great Wall of China for your server. Let’s set up UFW (Uncomplicated Firewall).

    Install UFW

    Install UFW if it’s not already installed:

    Bash
    sudo apt install ufw -y

    Allow SSH

    Allow SSH connections on your custom port:

    Bash
    sudo ufw allow 2222/tcp
    # add more services if you are hosting anything like HTTP/HTTPS
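
    Optionally, make the default policy explicit before enabling the firewall: deny everything incoming, allow everything outgoing.

    Bash
    sudo ufw default deny incoming
    sudo ufw default allow outgoing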

    Enable the Firewall

    Enable the firewall and check its status:

    Bash
    sudo ufw enable
    sudo ufw status

    For more information, check out the UFW man page.

    Intrusion Detection Systems: The Watchful Eye

    An Intrusion Detection System (IDS) is like a guard dog that barks when something suspicious happens.

    Install Fail2Ban

    Fail2Ban protects against brute force attacks. Install it with:

    Bash
    sudo apt install fail2ban -y

    Configure Fail2Ban

    Edit the configuration file:

    Bash
    sudo nano /etc/fail2ban/jail.local

    Add the following content:

    Bash
    [sshd]
    enabled = true
    port = 2222
    logpath = %(sshd_log)s
    maxretry = 3

    Restart Fail2Ban:

    Bash
    sudo systemctl restart fail2ban

    For more details, refer to the Fail2Ban man page.
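
    To verify that the jail is running and see how many IPs it has banned, you can query Fail2Ban directly:

    Bash
    sudo fail2ban-client status sshd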

    Regular Updates and Patching: Keeping the Armor Shiny

    A knight with rusty armor won’t last long in battle. Keep your server’s software up to date.

    Enable Unattended Upgrades

    Debian can automatically install security updates. Enable this feature:

    Bash
    sudo apt install unattended-upgrades -y
    sudo dpkg-reconfigure --priority=low unattended-upgrades

    Edit the configuration:

    Bash
    sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

    Ensure the following line is uncommented:

    Bash
    "${distro_id}:${distro_codename}-security";

    For more details, refer to the unattended-upgrades man page.

    Again, there is some controversy about this. Most people are afraid that they will wake up one night and all their servers will be down because of a botched automated update. In my non-professional life with my home IT this has never happened, and even professionally, if we are just talking about security updates of an OS like Debian, I haven’t seen it yet.
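
    If you are in that camp, you can at least preview what unattended-upgrades would install, without letting it change anything, by using its dry-run mode:

    Bash
    sudo unattended-upgrade --dry-run --debug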

    User Management: Only the Knights in the Realm

    Not everyone needs the keys to the kingdom. Ensure only trusted users have access. On a fresh install this is probably unnecessary, but it is good housekeeping.

    Review and Remove Unnecessary Users

    List all users:

    Bash
    cut -d: -f1 /etc/passwd

    Remove any unnecessary users:

    Bash
    sudo deluser username

    Implement Strong Password Policies

    Enforce strong passwords:

    Bash
    sudo apt install libpam-pwquality -y

    Edit the PAM configuration file:

    Bash
    sudo nano /etc/pam.d/common-password

    Add the following line:

    Bash
    password requisite pam_pwquality.so retry=3 minlen=12 difok=3

    For more details, refer to the pam_pwquality man page.

    File and Directory Permissions: Guarding the Treasure

    Permissions are like guards watching over the royal treasure. Make sure they’re doing their job.

    Secure /etc Directory

    Ensure the /etc directory is not writable by anyone except root:

    Bash
    sudo chmod -R go-w /etc

    This is heavily dependent on your distribution and may be a bad idea. I use it for locked-down environments like Debian LXC containers that only do one thing.

    Set Permissions for User Home Directories

    Ensure user home directories are only accessible by their owners:

    Bash
    sudo chmod 700 /home/yourusername

    For more details, refer to the chmod man page.
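
    To quickly audit the result, list mode and owner for every home directory (GNU stat assumed):

    Bash
    stat -c '%a %U %n' /home/*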

    Automatic Backups: Preparing for the Worst

    Even the best fortress can be breached. Regular backups ensure you can recover from any disaster.

    Full disclosure: I have had a very bad data-loss experience with rsync and have since switched to Borg. I can also recommend restic. This had nothing to do with rsync itself, but rather with how easy it is to mess up.

    Install rsync

    rsync is a powerful tool for creating backups. Install it with:

    Bash
    sudo apt install rsync -y

    Create a Backup Script

    Create a script to backup your important files:

    Bash
    nano ~/backup.sh

    Add the following content:

    Bash
    #!/bin/bash
    # Mirror the web root and the home directory into /backup.
    # --delete removes files from the backup that were deleted from the source.
    rsync -a --delete /var/www/ /backup/var/www/
    rsync -a --delete /home/yourusername/ /backup/home/yourusername/

    Make the script executable:

    Bash
    chmod +x ~/backup.sh

    Schedule the Backup

    Use cron to schedule the backup to run daily:

    Bash
    crontab -e

    Add the following line:

    Bash
    0 2 * * * /home/yourusername/backup.sh

    For more details on cron, refer to the crontab man page.

    For longer backup jobs you should switch to a systemd service with a timer rather than cron. Here is a post from another blog about it. Since my data has grown to multiple terabytes, this is what I do now too.
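
    As a rough sketch of what that can look like (unit names and paths here are placeholders, and the files go in /etc/systemd/system/), a minimal service/timer pair might be:

    backup.service
    [Unit]
    Description=Nightly backup

    [Service]
    Type=oneshot
    ExecStart=/home/yourusername/backup.sh

    backup.timer
    [Unit]
    Description=Run the nightly backup at 02:00

    [Timer]
    OnCalendar=*-*-* 02:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

    Enable it with sudo systemctl enable --now backup.timer; systemctl list-timers then shows when it will fire next.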

    Advanced Security Best Practices

    Enable Two-Factor Authentication (2FA)

    Adding an extra layer of security with 2FA can significantly enhance your server’s protection. Use tools like Google Authenticator or Authy. I had this on an Ubuntu server for a while and thought it was kind of cool.

    1. Install the required packages:
    Bash
    sudo apt install libpam-google-authenticator -y
    2. Configure each user for 2FA:
    Bash
    google-authenticator
    3. Update the PAM configuration:
    Bash
    sudo nano /etc/pam.d/sshd

    Add the following line:

    Bash
    auth required pam_google_authenticator.so
    4. Update the SSH configuration to require 2FA:
    Bash
    sudo nano /etc/ssh/sshd_config

    Ensure the following lines are set:

    Bash
    ChallengeResponseAuthentication yes
    AuthenticationMethods publickey,keyboard-interactive

    Restart SSH:

    Bash
    sudo systemctl restart ssh

    Implement AppArmor

    AppArmor provides mandatory access control and can restrict programs to a limited set of resources.

    1. Install AppArmor:
    Bash
    sudo apt install apparmor apparmor-profiles apparmor-utils -y
    2. Enable and start AppArmor:
    Bash
    sudo systemctl enable apparmor
    sudo systemctl start apparmor

    For more details, refer to the AppArmor man page.

    Conclusion: The Crown Jewel of Security

    Congratulations, noble guardian! You’ve fortified your Debian server into a digital fortress. By following these steps, you’ve implemented strong security practices, ensuring your server is well-protected against common threats. Remember, security is an ongoing process, and staying vigilant is key to maintaining your kingdom’s safety.

    Happy guarding, and may your server reign long and prosper!

  • YOURLS: The Ultimate Weapon Against Long URLs

    YOURLS: The Ultimate Weapon Against Long URLs

    Introduction

    Let’s face it: long URLs are the bane of the internet. They’re unsightly, cumbersome, and frankly, nobody enjoys dealing with them. Every time I encounter a URL that stretches longer than a Monday morning, I can’t help but cringe. But here’s the silver lining: you don’t have to endure the tyranny of endless web addresses any longer. Introducing YOURLS—the ultimate weapon in your arsenal against the plague of elongated URLs!

    Imagine having the power to create your own URL shortening service, hosted right on your own domain, complete with every feature you could possibly desire. And the best part? It’s free, open-source, and infinitely customizable. So gear up, because we’re about to transform your domain into a sleek, efficient, URL-shortening powerhouse!

    The Problem with Long URLs

    Before we dive into the solution, let’s talk about why long URLs are such a headache. Not only do they look messy, but they can also be problematic when sharing links on social media, in emails, or on printed materials. Long URLs can break when sent via text message, and they’re nearly impossible to remember. They can also be a security risk, revealing sensitive query parameters. In a digital age where brevity and aesthetics matter, shortening your URLs isn’t just convenient—it’s essential.

    Meet YOURLS: Your URL Shortening Hero

    Enter YOURLS (Your Own URL Shortener), an open-source project that hands you the keys to your own URL kingdom. YOURLS lets you run your very own URL shortening service on your domain, giving you full control over your links and data. No more relying on third-party services that might go down, change their terms, or plaster your links with ads. With YOURLS, you’re in the driver’s seat.

    Why YOURLS Should Be Your Go-To URL Shortener

    YOURLS isn’t just another URL shortening tool—it’s a game-changer. Here’s why:

    • Full Control Over Your Data: Since YOURLS is self-hosted, you own all your data. No more worrying about data privacy or third-party data breaches.
    • Customizable Links: Create custom short URLs that match your branding, making your links not only shorter but also more professional and trustworthy.
    • Powerful Analytics: Get detailed insights into your link performance with historical click data, visitor geo-location, referrer tracking, and more. Understanding your audience has never been easier.
    • Developer-Friendly API: Automate your link management with YOURLS’s robust API, allowing you to integrate URL shortening into your applications seamlessly.
    • Extensible Through Plugins: With a rich plugin architecture, you can enhance YOURLS with additional features like spam protection, social sharing, and advanced analytics. Tailor the tool to fit your exact needs.

    How YOURLS Stacks Up Against Other URL Shorteners

    While YOURLS offers a fantastic solution, it’s worth considering how it compares to other popular URL shorteners out there.

    • Bitly: One of the most well-known services, Bitly offers a free plan with basic features and paid plans for advanced analytics and custom domains. However, you’re dependent on a third-party service, and your data resides on their servers.
    • TinyURL: A simple, no-frills URL shortener that’s been around for ages. It doesn’t offer analytics or customization options, making it less suitable for professional use.
    • Rebrandly: Focused on custom-branded links, Rebrandly offers advanced features but comes with a price tag. Again, your data is stored externally.
    • Short.io: Allows custom domains and offers analytics, but the free tier is limited, and you’ll need to pay for more advanced features.

    Why Choose YOURLS Over the Others?

    • Cost-Effective: YOURLS is free and open-source. No subscription fees or hidden costs.
    • Privacy and Security: Since you host it yourself, you have complete control over your data’s privacy and security.
    • Unlimited Customization: Modify and extend YOURLS to your heart’s content without any limitations imposed by third-party services.
    • Community Support: As an open-source project, YOURLS has a vibrant community that contributes plugins, support, and enhancements.

    Getting Started with YOURLS

    Now that you’re sold on YOURLS, let’s dive into how you can set it up and start conquering those unwieldy URLs.

    Step 1: Setting Up YOURLS with Docker Compose

    To make the installation process smooth and straightforward, we’ll use Docker Compose. This method ensures that all the necessary components are configured correctly and allows for easy management of your YOURLS instance. If you’re new to Docker, don’t worry—it’s simpler than you might think, and it’s a valuable tool to add to your arsenal.

    Creating the docker-compose.yml File

    The docker-compose.yml file orchestrates the services required for YOURLS to run. Here’s the template you’ll use:

    docker-compose.yml
    services:
      yourls:
        image: yourls:latest
        container_name: yourls
        ports:
          - "8081:80" # YOURLS accessible at http://localhost:8081
        environment:
          - YOURLS_SITE=https://yourdomain.com
          - YOURLS_DB_HOST=mysql-yourls
          - YOURLS_DB_USER=${YOURLS_DB_USER}
          - YOURLS_DB_PASS=${YOURLS_DB_PASS}
          - YOURLS_DB_NAME=yourls_db
          - YOURLS_USER=${YOURLS_USER}
          - YOURLS_PASS=${YOURLS_PASS}
        depends_on:
          - mysql-yourls
        volumes:
          - ./yourls_data:/var/www/html/user # Persist YOURLS data
        networks:
          - yourls-network
    
      mysql-yourls:
        image: mysql:latest
        container_name: mysql-yourls
        environment:
          - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
          - MYSQL_DATABASE=yourls_db
          - MYSQL_USER=${YOURLS_DB_USER}
          - MYSQL_PASSWORD=${YOURLS_DB_PASS}
        volumes:
          - ./mysql_data:/var/lib/mysql # Persist MySQL data
        networks:
          - yourls-network
    
    networks:
      yourls-network:
        driver: bridge

    Let’s break down what’s happening in this file:

    • Services:
    • yourls: This is the YOURLS application container. It exposes port 8081 and connects to the MySQL database.
    • mysql-yourls: The MySQL database container that stores all your URL data.
    • Environment Variables: These variables configure your YOURLS and MySQL instances. We’ll store sensitive information in a separate .env file for security.
    • Volumes: Mounts directories on your host machine to persist data even when the containers are recreated.
    • Networks: Defines a bridge network for the services to communicate securely.

    Step 2: Securing Your Credentials with an .env File

    To keep your sensitive information safe, we’ll use an .env file to store environment variables. Create a file named .env in the same directory as your docker-compose.yml file and add the following:

    Bash
    YOURLS_DB_USER=yourls_db_user
    YOURLS_DB_PASS=yourls_db_password
    YOURLS_USER=admin_username
    YOURLS_PASS=admin_password
    MYSQL_ROOT_PASSWORD=your_mysql_root_password

    Pro Tip: Generate strong passwords using the command openssl rand -base64 32. Security is paramount when running web services.

    Step 3: Launching YOURLS

    With your configuration files in place, you’re ready to bring your YOURLS instance to life. Run the following command in your terminal:

    Bash
    docker compose up -d

    This command tells Docker Compose to start your services in the background (-d for detached mode). Once the containers are up and running, you can access the YOURLS admin interface by navigating to http://yourdomain.com:8081/admin in your web browser. Log in using the credentials you specified in your .env file, and follow the setup wizard to complete the installation.
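
    If the admin page does not load right away, tailing the container logs usually tells you why (most often the database is still initializing):

    Bash
    docker compose logs -f yourls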

    Step 4: Securing Your YOURLS Installation with SSL

    Security should never be an afterthought. Protecting your YOURLS installation with SSL encryption ensures that data transmitted between your users and your server remains private.

    Using Let’s Encrypt for Free SSL Certificates

    • Install Certbot: The Let’s Encrypt client that automates certificate issuance.
    • Obtain a Certificate: Run certbot with appropriate options to get your SSL certificate.
    • Configure Your Reverse Proxy: Set up Nginx or Caddy to handle SSL termination.

    My Personal Setup

    I use Nginx Proxy Manager in conjunction with an Origin CA certificate from Cloudflare. This setup provides a user-friendly interface for managing SSL certificates and reverse proxy configurations. For some info on Nginx Proxy Manager check out my other post!

    Using the YOURLS API to Automate Your Workflow

    One of YOURLS’s standout features is its robust API, which allows you to integrate URL shortening into your applications, scripts, or websites. Automate link generation, expansion, and statistics retrieval without manual intervention.

    Examples of Using the YOURLS API with Bash Scripts

    Shortening a URL

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to shorten a URL
    shorten_url() {
      local long_url="$1"
      echo "Shortening URL: $long_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=shorturl&format=json&url=${long_url}"
      echo -e "\n"
    }
    
    shorten_url "https://example.com"

    Expanding a Short URL

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to expand a short URL
    expand_url() {
      local short_url="$1"
      echo "Expanding short URL: $short_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=expand&format=json&shorturl=${short_url}"
      echo -e "\n"
    }
    
    expand_url "https://yourpage.com/2"

    Retrieving URL Statistics

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to get URL statistics
    get_url_stats() {
      local short_url="$1"
      echo "Getting statistics for: $short_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=url-stats&format=json&shorturl=${short_url}"
      echo -e "\n"
    }
    
    get_url_stats "https://yourpage.com/2"

    Creating Short URLs with Custom Keywords

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to shorten a URL with a custom keyword
    shorten_url_custom_keyword() {
      local long_url="$1"
      local keyword="$2"
      echo "Shortening URL: $long_url with custom keyword: $keyword"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=shorturl&format=json&url=${long_url}&keyword=${keyword}"
      echo -e "\n"
    }
    
    shorten_url_custom_keyword "https://example.com" "customkeyword"

    Integrating YOURLS API in Other Languages

    While bash scripts are handy, you might prefer to use the YOURLS API with languages like Python, JavaScript, or PHP. There are libraries and examples available in various programming languages, making integration straightforward regardless of your tech stack.

    Supercharging YOURLS with Plugins

    YOURLS’s plugin architecture allows you to extend its functionality to meet your specific needs. Here are some popular plugins to consider:

    • Spam and Abuse Protection
    • reCAPTCHA: Adds Google reCAPTCHA to your public interface to prevent bots.
    • Akismet: Uses the Akismet service to filter out spam URLs.
    • Advanced Analytics
    • Clicks Counter: Provides detailed click statistics and visualizations.
    • GeoIP Tracking: Adds geographical data to your click analytics.
    • Social Media Integration
    • Share via Twitter: Adds a button to share your short links directly on Twitter.
    • Facebook Open Graph: Ensures your short links display correctly on Facebook.
    • Custom URL Keywords and Patterns
    • Random Keyword Generator: Creates more secure and hard-to-guess short URLs.
    • Reserved Keywords: Allows you to reserve certain keywords for special purposes.

    You can find a comprehensive list of plugins in the YOURLS Plugin Repository. Installing plugins is as simple as placing them in the user/plugins directory and activating them through the admin interface.

    Alternative Self-Hosted URL Shorteners

    While YOURLS is a fantastic option, it’s not the only self-hosted URL shortener available. Here are a few alternatives you might consider:

    • Polr: An open-source, minimalist URL shortener with a modern interface. Offers a robust API and can be customized with themes.
    • Kutt: A free and open-source URL shortener with advanced features like custom domains, password-protected links, and detailed statistics.
    • Shlink: A self-hosted URL shortener that provides detailed analytics, QR codes, and REST APIs.

    Each of these alternatives has its own set of features and advantages. Depending on your specific needs, one of them might be a better fit for your project. Based on my experience, YOURLS is by far the easiest and simplest option. I tried the others as well but ultimately chose it.

    Conclusion: Take Back Control of Your URLs Today

    Long URLs have overstayed their welcome, and it’s time to show them the door. With YOURLS, you have the tools to not only shorten your links but to own and control every aspect of them. No more compromises, no more third-party dependencies—just pure, unadulterated control over your online presence.

    So what are you waiting for? Join the revolution against long URLs, set up your YOURLS instance, and start sharing sleek, professional, and memorable links today!