Tag: self-hosted

  • Bearbot aka. Stonkmarket – Advanced Trading Probability Calculations

    Bearbot: Begins

    This project spans over three years, with countless iterations and weeks of coding. I wrote almost 500,000 lines of code in Python and JavaScript. The essential idea was to use a shorting strategy on put/call options to generate income from time decay – also called “Theta”. You kind of have to know a little bit about how stock options work to understand this next part, but here I go. About 80% of all options expire worthless; statistically, you have a much higher chance of making a profit shorting options than going long. The further the strike price is from the price of the underlying stock, the higher the probability of the option expiring worthless. Technically, you always make 100% of the premium.

    Now back to reality, where things aren’t perfect.

    A lot can happen in the market. If you put your life savings into a seemingly safe trade (with Stonkmarket, actually 95%+ safe; more on that later) and then the next Meta scandal gets published, the trade goes against you. When shorting, you can actually lose more than 100% (since stocks can rise virtually to infinity), but you can only gain 100%. It sounds like a bad deal, but again, you collect that 100% far more often than you take the catastrophic loss, although the chance of one is never 0. The trick is to use stop losses; risk management is obviously part of the job when you trade for a living.

    DoD Scraper

    The project actually started as a simple idea: scrape the U.S. Department of Defense contract announcements to predict the earnings of large weapons companies and make large option trades based on that.

    Spoiler: it is not that easy. You can get lucky if the company is small enough and publicly traded, so a single contract actually makes a huge impact on its financials. A company like Lockheed Martin is not really predictable from DoD contracts alone.

    Back then I called it Wallabe, and this was the logo:

    More data

    I didn’t know the idea was bad, so I soon started to scrape news. I collected over 100 RSS feeds, archiving them, analyzing them, and making them accessible via a REST API. Let me tell you, I got a lot of news. Some people sell this data, but honestly, it was worth a lot more to me sitting in my database, enriching my decision-making algo.

    I guess it goes without saying that I was also collecting stock data, including fundamentals and earnings information; that is kind of the base for everything. Do you know how many useful metrics you can calculate with the data listed above alone? A lot. Some of them tell you the same stuff, true, but nonetheless, you can generate a lot.

    Bearbot is born

    Only works on dark mode, sorry.

    As I was turning this into a PWA (a website that acts like a mobile app), I was reading up on how to use option Greeks to calculate stock prices when I realized that it is a lot easier to predict and set up options trades instead. That is when I started gathering a whole bunch of options data and calculating my own Greeks, which led me to explore more about theta and the likelihood of options expiring worthless.

    From that point on, I invested all my time in the following question: What is the perfect environment for shorting option XYZ?

    The answer is generally quite simple:

    • little volatility
    • not a lot of news
    • going with the general trend
    • Delta “probability” is in your favor

    You can really expand on these four points, and there are many ways to calculate and measure them, but that’s basically it. If you know a thing about trading, ask yourself: what does the perfect time for a trade look like? Try to quantify it. Should a stock have just gone down 20%, then released good news, and something else? If yes, then what if you had a program to spot exactly these moments all over the stock market and alert you of those amazing chances to pretty much print money? That’s Bearbot.

    Success

    At the beginning, I actually planned on selling this and offering it as a service like all those other signal-trading services, but honestly, why the hell would I give you something that sounds (and was) so amazing that I could practically print money with it? There is no value in that for me. Nobody is going to sell or offer you a trading strategy that works, unless it only works because a lot of people know about it and act on it, like candlestick patterns.

    Failure

    The day I started making a lot more money than with my regular job was the day I lost my mind. I stopped listening to Bearbot. I know it sounds silly talking about it, but I thought none of my trades could go wrong and I didn’t need it anymore; I had developed some sort of gift for trading. I took on too much risk and took a loss so big that saying the number out loud would send shivers down your spine, unless you’re rich. Bearbot was right about the trade, but because I didn’t have enough left to cover margin, I got a margin call and my position was closed. I got caught in a whipsaw (it was a Meta option, and they had just announced a huge data leak).

    “Learning” from my mistakes

    In the real world, not everything has an API. I spent weeks reverse-engineering the platform I was trading on to automate the bot entirely, in hopes of removing the error-prone human element. The problem was that they did not want bot traders, so they did a lot to counter it, and every two weeks I had to adjust the bot or I would get captchas or be blocked outright. Eventually, it became so unstable that I could not use it for live trading anymore, since I could not trust it to close positions on time.

    I was very sad at that point. Discouraged, I discontinued Bearbot for about a year.

    Stonkmarket: Rising

    Some time went on until a friend of mine encouraged me to try again.

    I decided to take a new approach. New architecture, new style, new name. I wanted to leave all the negative stuff behind. The new setup had four parts:

    • central API with a large database (Django Rest Framework)
    • local scrapers (Python, Selenium)
    • static React-based front end
    • “Stonk, the Bot” a Discord bot for signals and monitoring

    I extended the UI a lot to show all the data I had gathered, but sadly I shut down the entire backend before writing this, so I cannot show it populated with data.

    I spent about 3 months refactoring the old code, getting it to work, and putting it into a new structure. I then started trading again, and everything was well.

    Final Straw

    I was going through a very stressful phase, and on top of it, one scraper after the other was failing. It got to the point where I had to continuously change logic around and fix my data kraken, and I could not maintain it alone anymore. I also couldn’t let anyone in on it, as I did not want to publish my code and open things up to reveal my secrets. It was at this point that I decided to shut it down for good. I poured 3 years, countless hours, lines of code, research, and even tears into this project. I learned more than on any other project ever.

    Summary

    On this project I really mastered Django and Django REST Framework, and I experimented a lot with PWAs and React, pushing my web development knowledge. On top of that, I was making good money trading, but the stress and greed eventually got to me, until I decided I was better off shutting it down. If you read this and know me, you probably know all about Bearbot and Stonkmarket, and maybe I have even given you a trade or two that you profited off of. Maybe one day I will open-source the code for this; that will be the day I am truly finished with Stonkmarket. For now, a small part of me still thinks it is possible to redo it.

    Edit: We are baaaaaaaccckkk!!!🎉

    https://www.bearbot.dev

    Better than ever, cooler than ever! You cannot beat this bear.


  • Securing Your Debian Server

    Hey there, server samurais and cyber sentinels! Ready to transform your Debian server into an impregnable fortress? Whether you’re a seasoned sysadmin or a newbie just dipping your toes into the world of server security, this guide is your one-stop shop for all things safety on the wild, wild web. Buckle up, because we’re about to embark on a journey full of scripts, tips, and jokes to keep things light and fun. There are many good guides on this online, I decided to add another one with the things I usually do. Let’s dive in!

    Initial Setup: The First Line of Defense

    Imagine setting up your server like moving into a new house. You wouldn’t leave the door wide open, right? The same logic applies here.

    Update Your System

    Outdated software is like a welcome mat for hackers. Run the following commands to get everything current:

    Bash
    sudo apt update && sudo apt upgrade -y

    Create a New User

    The root user is the king of the castle, but you shouldn’t be running daily errands as the king. Let’s create a new user with sudo privileges:

    Bash
    sudo adduser yourusername
    sudo usermod -aG sudo yourusername

    Now, switch to your newly crowned user:

    Bash
    su - yourusername

    Securing SSH: Locking Down Your Castle Gates

    SSH (Secure Shell) is the key to your castle gates. Leaving it unprotected is like leaving the keys under the doormat.

    Disable Root Login

    Edit the SSH configuration file:

    Bash
    sudo nano /etc/ssh/sshd_config

    Change PermitRootLogin to no:

    Bash
    PermitRootLogin no

    Change the Default SSH Port

    Edit the SSH configuration file:

    Bash
    sudo nano /etc/ssh/sshd_config

    Change the port to a number between 1024 and 65535 (e.g., 2222):

    Bash
    Port 2222

    Restart the SSH service:

    Bash
    sudo systemctl restart ssh

    There is actually some controversy about security through obscurity, but in my long tenure as an analyst and incident responder, I have found that fewer automated “easy” attacks do improve security.

    Set Up SSH Keys

    Generate a key pair using elliptic curve cryptography:

    Bash
    ssh-keygen -t ed25519 -C "[email protected]"

    Copy the public key to your server:

    Bash
    ssh-copy-id yourusername@yourserver -p 2222

    Disable password authentication:

    Bash
    sudo nano /etc/ssh/sshd_config

    Change PasswordAuthentication to no:

    Bash
    PasswordAuthentication no

    Restart SSH:

    Bash
    sudo systemctl restart ssh

    For more details, refer to the sshd_config man page.
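    One habit that has saved me from locking myself out more than once: validate the config and test a fresh login before closing your current session. A minimal sanity check (sshd -t prints nothing when the config is fine):

    Bash
    sudo sshd -t
    # in a NEW terminal, while keeping the old session open:
    ssh -p 2222 yourusername@yourserver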

    Firewall Configuration: Building the Great Wall

    A firewall is like the Great Wall of China for your server. Let’s set up UFW (Uncomplicated Firewall).

    Install UFW

    Install UFW if it’s not already installed:

    Bash
    sudo apt install ufw -y

    Allow SSH

    Allow SSH connections on your custom port:

    Bash
    sudo ufw allow 2222/tcp
    # add more services if you are hosting anything like HTTP/HTTPS

    Enable the Firewall

    Enable the firewall and check its status:

    Bash
    sudo ufw enable
    sudo ufw status

    For more information, check out the UFW man page.
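    UFW denies incoming and allows outgoing traffic by default, but I like to set the policies explicitly so there are no surprises:

    Bash
    sudo ufw default deny incoming
    sudo ufw default allow outgoing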

    Intrusion Detection Systems: The Watchful Eye

    An Intrusion Detection System (IDS) is like a guard dog that barks when something suspicious happens.

    Install Fail2Ban

    Fail2Ban protects against brute force attacks. Install it with:

    Bash
    sudo apt install fail2ban -y

    Configure Fail2Ban

    Edit the configuration file:

    Bash
    sudo nano /etc/fail2ban/jail.local

    Add the following content:

    Bash
    [sshd]
    enabled = true
    port = 2222
    logpath = %(sshd_log)s
    maxretry = 3

    Restart Fail2Ban:

    Bash
    sudo systemctl restart fail2ban

    For more details, refer to the Fail2Ban man page.
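    To verify the jail is active and see who has been caught, fail2ban-client is your friend:

    Bash
    sudo fail2ban-client status sshd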

    Regular Updates and Patching: Keeping the Armor Shiny

    A knight with rusty armor won’t last long in battle. Keep your server’s software up to date.

    Enable Unattended Upgrades

    Debian can automatically install security updates. Enable this feature:

    Bash
    sudo apt install unattended-upgrades -y
    sudo dpkg-reconfigure --priority=low unattended-upgrades

    Edit the configuration:

    Bash
    sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

    Ensure the following line is uncommented:

    Bash
    "${distro_id}:${distro_codename}-security";

    For more details, refer to the unattended-upgrades man page.
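    To check what unattended-upgrades would do without actually touching anything, you can run a dry run:

    Bash
    sudo unattended-upgrade --dry-run --debug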

    Again, there is some controversy about this. Most people are afraid they will wake up one night and all their servers will be down because of a botched automated update. In my non-professional life with my home IT, this has never happened, and even professionally, if we are just talking about security updates for an OS like Debian, I haven’t seen it yet.

    User Management: Only the Knights in the Realm

    Not everyone needs the keys to the kingdom. Ensure only trusted users have access. On a fresh install this is probably unnecessary, but it’s good housekeeping.

    Review and Remove Unnecessary Users

    List all users:

    Bash
    cut -d: -f1 /etc/passwd

    Remove any unnecessary users:

    Bash
    sudo deluser username

    Implement Strong Password Policies

    Enforce strong passwords:

    Bash
    sudo apt install libpam-pwquality -y

    Edit the PAM configuration file:

    Bash
    sudo nano /etc/pam.d/common-password

    Add the following line:

    Bash
    password requisite pam_pwquality.so retry=3 minlen=12 difok=3

    For more details, refer to the pam_pwquality man page.
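    If you also want passwords to age out, chage can enforce expiry per user. The numbers below are just a sketch; adjust them to your own policy:

    Bash
    sudo chage -M 90 -m 7 -W 14 yourusername   # max 90 days, min 7, warn 14 days before expiry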

    File and Directory Permissions: Guarding the Treasure

    Permissions are like guards watching over the royal treasure. Make sure they’re doing their job.

    Secure /etc Directory

    Ensure the /etc directory is not writable by anyone except root:

    Bash
    sudo chmod -R go-w /etc

    This is heavily dependent on your distribution and may be a bad idea. I use it for locked-down environments like Debian LXCs that only do one thing.

    Set Permissions for User Home Directories

    Ensure user home directories are only accessible by their owners:

    Bash
    sudo chmod 700 /home/yourusername

    For more details, refer to the chmod man page.
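    A quick audit I like to run afterwards: hunt for world-writable files and directories that shouldn’t be there (this can take a while on large filesystems):

    Bash
    sudo find / -xdev -type f -perm -0002 -print
    sudo find / -xdev -type d -perm -0002 ! -perm -1000 -print   # world-writable dirs without sticky bit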

    Automatic Backups: Preparing for the Worst

    Even the best fortress can be breached. Regular backups ensure you can recover from any disaster.

    Full disclosure: I have had a very bad data loss experience with rsync and have since switched to Borg. I can also recommend restic. That had nothing to do with rsync itself, but rather with how easy it is to mess up.
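    For the curious, here is a minimal Borg sketch, assuming a local repository at /backup/borg; check the Borg docs before trusting it with real data. That said, the original rsync setup below still works fine for simple cases.

    Bash
    sudo apt install borgbackup -y
    borg init --encryption=repokey /backup/borg
    borg create --stats /backup/borg::'{hostname}-{now}' /home/yourusername /var/www
    borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=6 /backup/borg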

    Install rsync

    rsync is a powerful tool for creating backups. Install it with:

    Bash
    sudo apt install rsync -y

    Create a Backup Script

    Create a script to back up your important files:

    Bash
    nano ~/backup.sh

    Add the following content:

    Bash
    #!/bin/bash
    rsync -a --delete /var/www/ /backup/var/www/
    rsync -a --delete /home/yourusername/ /backup/home/yourusername/

    Make the script executable:

    Bash
    chmod +x ~/backup.sh

    Schedule the Backup

    Use cron to schedule the backup to run daily:

    Bash
    crontab -e

    Add the following line:

    Bash
    0 2 * * * /home/yourusername/backup.sh

    For more details on cron, refer to the crontab man page.

    For longer backup jobs you should switch to a systemd service with a timer rather than cron. Here is a post from another blog about it. Since my data has grown to multiple terabytes, this is what I do now too.
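    For reference, a minimal sketch of such a service and timer pair, assuming the script from above lives at /home/yourusername/backup.sh (remember to remove the cron entry if you switch):

    Bash
    # /etc/systemd/system/backup.service
    [Unit]
    Description=Nightly backup

    [Service]
    Type=oneshot
    ExecStart=/home/yourusername/backup.sh

    # /etc/systemd/system/backup.timer
    [Unit]
    Description=Run backup daily at 02:00

    [Timer]
    OnCalendar=*-*-* 02:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

    Enable it with sudo systemctl enable --now backup.timer.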

    Advanced Security Best Practices

    Enable Two-Factor Authentication (2FA)

    Adding an extra layer of security with 2FA can significantly enhance your server’s protection. Use tools like Google Authenticator or Authy. I had this on an Ubuntu server for a while and thought it was kind of cool.

    1. Install the required packages:
    Bash
    sudo apt install libpam-google-authenticator -y
    2. Configure each user for 2FA:
    Bash
    google-authenticator
    3. Update the PAM configuration:
    Bash
    sudo nano /etc/pam.d/sshd

    Add the following line:

    Bash
    auth required pam_google_authenticator.so
    4. Update the SSH configuration to require 2FA:
    Bash
    sudo nano /etc/ssh/sshd_config

    Ensure the following lines are set:

    Bash
    ChallengeResponseAuthentication yes
    AuthenticationMethods publickey,keyboard-interactive

    Restart SSH:

    Bash
    sudo systemctl restart ssh

    Implement AppArmor

    AppArmor provides mandatory access control and can restrict programs to a limited set of resources.

    1. Install AppArmor:
    Bash
    sudo apt install apparmor apparmor-profiles apparmor-utils -y
    2. Enable and start AppArmor:
    Bash
    sudo systemctl enable apparmor
    sudo systemctl start apparmor

    For more details, refer to the AppArmor man page.
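    To see which profiles are loaded and whether they run in enforce or complain mode:

    Bash
    sudo aa-status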

    Conclusion: The Crown Jewel of Security

    Congratulations, noble guardian! You’ve fortified your Debian server into a digital fortress. By following these steps, you’ve implemented strong security practices, ensuring your server is well-protected against common threats. Remember, security is an ongoing process, and staying vigilant is key to maintaining your kingdom’s safety.

    Happy guarding, and may your server reign long and prosper!

  • YOURLS: The Ultimate Weapon Against Long URLs

    Introduction

    Let’s face it: long URLs are the bane of the internet. They’re unsightly, cumbersome, and frankly, nobody enjoys dealing with them. Every time I encounter a URL that stretches longer than a Monday morning, I can’t help but cringe. But here’s the silver lining: you don’t have to endure the tyranny of endless web addresses any longer. Introducing YOURLS—the ultimate weapon in your arsenal against the plague of elongated URLs!

    Imagine having the power to create your own URL shortening service, hosted right on your own domain, complete with every feature you could possibly desire. And the best part? It’s free, open-source, and infinitely customizable. So gear up, because we’re about to transform your domain into a sleek, efficient, URL-shortening powerhouse!

    The Problem with Long URLs

    Before we dive into the solution, let’s talk about why long URLs are such a headache. Not only do they look messy, but they can also be problematic when sharing links on social media, in emails, or on printed materials. Long URLs can break when sent via text message, and they’re nearly impossible to remember. They can also be a security risk, revealing sensitive query parameters. In a digital age where brevity and aesthetics matter, shortening your URLs isn’t just convenient—it’s essential.

    Meet YOURLS: Your URL Shortening Hero

    Enter YOURLS (Your Own URL Shortener), an open-source project that hands you the keys to your own URL kingdom. YOURLS lets you run your very own URL shortening service on your domain, giving you full control over your links and data. No more relying on third-party services that might go down, change their terms, or plaster your links with ads. With YOURLS, you’re in the driver’s seat.

    Why YOURLS Should Be Your Go-To URL Shortener

    YOURLS isn’t just another URL shortening tool—it’s a game-changer. Here’s why:

    • Full Control Over Your Data: Since YOURLS is self-hosted, you own all your data. No more worrying about data privacy or third-party data breaches.
    • Customizable Links: Create custom short URLs that match your branding, making your links not only shorter but also more professional and trustworthy.
    • Powerful Analytics: Get detailed insights into your link performance with historical click data, visitor geo-location, referrer tracking, and more. Understanding your audience has never been easier.
    • Developer-Friendly API: Automate your link management with YOURLS’s robust API, allowing you to integrate URL shortening into your applications seamlessly.
    • Extensible Through Plugins: With a rich plugin architecture, you can enhance YOURLS with additional features like spam protection, social sharing, and advanced analytics. Tailor the tool to fit your exact needs.

    How YOURLS Stacks Up Against Other URL Shorteners

    While YOURLS offers a fantastic solution, it’s worth considering how it compares to other popular URL shorteners out there.

    • Bitly: One of the most well-known services, Bitly offers a free plan with basic features and paid plans for advanced analytics and custom domains. However, you’re dependent on a third-party service, and your data resides on their servers.
    • TinyURL: A simple, no-frills URL shortener that’s been around for ages. It doesn’t offer analytics or customization options, making it less suitable for professional use.
    • Rebrandly: Focused on custom-branded links, Rebrandly offers advanced features but comes with a price tag. Again, your data is stored externally.
    • Short.io: Allows custom domains and offers analytics, but the free tier is limited, and you’ll need to pay for more advanced features.

    Why Choose YOURLS Over the Others?

    • Cost-Effective: YOURLS is free and open-source. No subscription fees or hidden costs.
    • Privacy and Security: Since you host it yourself, you have complete control over your data’s privacy and security.
    • Unlimited Customization: Modify and extend YOURLS to your heart’s content without any limitations imposed by third-party services.
    • Community Support: As an open-source project, YOURLS has a vibrant community that contributes plugins, support, and enhancements.

    Getting Started with YOURLS

    Now that you’re sold on YOURLS, let’s dive into how you can set it up and start conquering those unwieldy URLs.

    Step 1: Setting Up YOURLS with Docker Compose

    To make the installation process smooth and straightforward, we’ll use Docker Compose. This method ensures that all the necessary components are configured correctly and allows for easy management of your YOURLS instance. If you’re new to Docker, don’t worry—it’s simpler than you might think, and it’s a valuable tool to add to your arsenal.

    Creating the docker-compose.yml File

    The docker-compose.yml file orchestrates the services required for YOURLS to run. Here’s the template you’ll use:

    docker-compose.yml
    services:
      yourls:
        image: yourls:latest
        container_name: yourls
        ports:
          - "8081:80" # YOURLS accessible at http://localhost:8081
        environment:
          - YOURLS_SITE=https://yourdomain.com
          - YOURLS_DB_HOST=mysql-yourls
          - YOURLS_DB_USER=${YOURLS_DB_USER}
          - YOURLS_DB_PASS=${YOURLS_DB_PASS}
          - YOURLS_DB_NAME=yourls_db
          - YOURLS_USER=${YOURLS_USER}
          - YOURLS_PASS=${YOURLS_PASS}
        depends_on:
          - mysql-yourls
        volumes:
          - ./yourls_data:/var/www/html/user # Persist YOURLS data
        networks:
          - yourls-network
    
      mysql-yourls:
        image: mysql:latest
        container_name: mysql-yourls
        environment:
          - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
          - MYSQL_DATABASE=yourls_db
          - MYSQL_USER=${YOURLS_DB_USER}
          - MYSQL_PASSWORD=${YOURLS_DB_PASS}
        volumes:
          - ./mysql_data:/var/lib/mysql # Persist MySQL data
        networks:
          - yourls-network
    
    networks:
      yourls-network:
        driver: bridge

    Let’s break down what’s happening in this file:

    • Services:
      • yourls: The YOURLS application container. It exposes port 8081 and connects to the MySQL database.
      • mysql-yourls: The MySQL database container that stores all your URL data.
    • Environment Variables: These variables configure your YOURLS and MySQL instances. We’ll store sensitive information in a separate .env file for security.
    • Volumes: Mounts directories on your host machine to persist data even when the containers are recreated.
    • Networks: Defines a bridge network for the services to communicate securely.

    Step 2: Securing Your Credentials with an .env File

    To keep your sensitive information safe, we’ll use an .env file to store environment variables. Create a file named .env in the same directory as your docker-compose.yml file and add the following:

    Bash
    YOURLS_DB_USER=yourls_db_user
    YOURLS_DB_PASS=yourls_db_password
    YOURLS_USER=admin_username
    YOURLS_PASS=admin_password
    MYSQL_ROOT_PASSWORD=your_mysql_root_password

    Pro Tip: Generate strong passwords using the command openssl rand -base64 32. Security is paramount when running web services.

    Step 3: Launching YOURLS

    With your configuration files in place, you’re ready to bring your YOURLS instance to life. Run the following command in your terminal:

    Bash
    docker compose up -d

    This command tells Docker Compose to start your services in the background (-d for detached mode). Once the containers are up and running, you can access the YOURLS admin interface by navigating to http://yourdomain.com:8081/admin in your web browser. Log in using the credentials you specified in your .env file, and follow the setup wizard to complete the installation.
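    If the admin interface doesn’t come up, the container logs are the first place to look:

    Bash
    docker compose ps
    docker compose logs -f yourls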

    Step 4: Securing Your YOURLS Installation with SSL

    Security should never be an afterthought. Protecting your YOURLS installation with SSL encryption ensures that data transmitted between your users and your server remains private.

    Using Let’s Encrypt for Free SSL Certificates

    • Install Certbot: The Let’s Encrypt client that automates certificate issuance.
    • Obtain a Certificate: Run certbot with appropriate options to get your SSL certificate.
    • Configure Your Reverse Proxy: Set up Nginx or Caddy to handle SSL termination.

    My Personal Setup

    I use Nginx Proxy Manager in conjunction with an Origin CA certificate from Cloudflare. This setup provides a user-friendly interface for managing SSL certificates and reverse proxy configurations. For some info on Nginx Proxy Manager check out my other post!

    Using the YOURLS API to Automate Your Workflow

    One of YOURLS’s standout features is its robust API, which allows you to integrate URL shortening into your applications, scripts, or websites. Automate link generation, expansion, and statistics retrieval without manual intervention.

    Examples of Using the YOURLS API with Bash Scripts

    Shortening a URL

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to shorten a URL
    shorten_url() {
      local long_url="$1"
      echo "Shortening URL: $long_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=shorturl&format=json&url=${long_url}"
      echo -e "\n"
    }
    
    shorten_url "https://example.com"

    Expanding a Short URL

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to expand a short URL
    expand_url() {
      local short_url="$1"
      echo "Expanding short URL: $short_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=expand&format=json&shorturl=${short_url}"
      echo -e "\n"
    }
    
    expand_url "https://yourpage.com/2"

    Retrieving URL Statistics

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to get URL statistics
    get_url_stats() {
      local short_url="$1"
      echo "Getting statistics for: $short_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=url-stats&format=json&shorturl=${short_url}"
      echo -e "\n"
    }
    
    get_url_stats "https://yourpage.com/2"

    Creating Short URLs with Custom Keywords

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to shorten a URL with a custom keyword
    shorten_url_custom_keyword() {
      local long_url="$1"
      local keyword="$2"
      echo "Shortening URL: $long_url with custom keyword: $keyword"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=shorturl&format=json&url=${long_url}&keyword=${keyword}"
      echo -e "\n"
    }
    
    shorten_url_custom_keyword "https://example.com" "customkeyword"

    Integrating YOURLS API in Other Languages

    While bash scripts are handy, you might prefer to use the YOURLS API with languages like Python, JavaScript, or PHP. There are libraries and examples available in various programming languages, making integration straightforward regardless of your tech stack.

    Supercharging YOURLS with Plugins

    YOURLS’s plugin architecture allows you to extend its functionality to meet your specific needs. Here are some popular plugins to consider:

    • Spam and Abuse Protection
      • reCAPTCHA: Adds Google reCAPTCHA to your public interface to prevent bots.
      • Akismet: Uses the Akismet service to filter out spam URLs.
    • Advanced Analytics
      • Clicks Counter: Provides detailed click statistics and visualizations.
      • GeoIP Tracking: Adds geographical data to your click analytics.
    • Social Media Integration
      • Share via Twitter: Adds a button to share your short links directly on Twitter.
      • Facebook Open Graph: Ensures your short links display correctly on Facebook.
    • Custom URL Keywords and Patterns
      • Random Keyword Generator: Creates more secure and hard-to-guess short URLs.
      • Reserved Keywords: Allows you to reserve certain keywords for special purposes.
    You can find a comprehensive list of plugins in the YOURLS Plugin Repository. Installing plugins is as simple as placing them in the user/plugins directory and activating them through the admin interface.
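    A minimal sketch of a plugin installation via the volume from the compose file above; the repository URL is a placeholder, so substitute the plugin you actually want:

    Bash
    cd yourls_data/plugins
    git clone https://github.com/example/yourls-sample-plugin.git
    # then activate it in the admin interface under “Manage Plugins”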

    Alternative Self-Hosted URL Shorteners

    While YOURLS is a fantastic option, it’s not the only self-hosted URL shortener available. Here are a few alternatives you might consider:

    • Polr: An open-source, minimalist URL shortener with a modern interface. Offers a robust API and can be customized with themes.
    • Kutt: A free and open-source URL shortener with advanced features like custom domains, password-protected links, and detailed statistics.
    • Shlink: A self-hosted URL shortener that provides detailed analytics, QR codes, and REST APIs.

    Each of these alternatives has its own set of features and advantages. Depending on your specific needs, one of them might be a better fit for your project. Based on my experience, YOURLS is by far the easiest and simplest option. I tried the others as well but ultimately chose it.

    Conclusion: Take Back Control of Your URLs Today

    Long URLs have overstayed their welcome, and it’s time to show them the door. With YOURLS, you have the tools to not only shorten your links but to own and control every aspect of them. No more compromises, no more third-party dependencies—just pure, unadulterated control over your online presence.

    So what are you waiting for? Join the revolution against long URLs, set up your YOURLS instance, and start sharing sleek, professional, and memorable links today!

  • Certsplotting: Exploiting Certificate Transparency for Mischief – Part 1

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Introduction

    Certspotter stands as the original authority in certificate transparency log monitoring—a mouthful, indeed. Let’s dissect why you, as a hacker, should pay attention to it.

    One of your primary maneuvers when targeting a system is reconnaissance, particularly passive reconnaissance. Unlike active reconnaissance, which directly engages the target, passive recon operates discreetly.

    Passive recon involves employing tactics that evade triggering any alerts from the target. For instance, conducting a Google search about your target doesn’t tip them off. While technically they might detect someone from your area or country searching for them via Search Console, using a VPN and a private browser can easily circumvent this.

    You can even explore their entire website using Google cache (just search for cache:your-target.com) or archive.org without exposing your IP or intentions to them. On the other hand, active recon tends to be more assertive, such as port scanning, which leaves traces in the target’s logs. Depending on their security measures and level of vigilance, they might notice and decide to block you.

    If you were to scan my public IP, I’d promptly block you 😃.

    But I digress. What if you could continuously and passively monitor your target for new subdomains, project developments, systems, or any other endeavors that require a certificate? Imagine being alerted right as they register it.

    Now, you might wonder, “How much will that cost me?” Surprisingly, nothing but the electricity to power your server or whatever charges your cloud provider levies. With Certspotter, you can scrutinize every certificate issued to your target’s domains and subdomains.

    What mischief can I stir?

    Your mind is probably already concocting schemes, so here’s a scenario to fuel your imagination:

    Imagine your target sets up a WordPress site requiring an admin password upon the first visit. You could swoop in ahead of them, seizing control of their server. (Sure, they might reinstall, but it’ll definitely ruffle their feathers 😏).

    A bit sneakier? How about adding a covert admin account to a fresh Grafana or Jenkins installation, which might still be using default credentials upon release. Truly, you never know what you might uncover.

    Setting up Certspotter

    To begin, you’ll need a fresh Debian-based Linux distro. I’ll opt for Kali to simplify later use of other hacking tools. Alternatively, you can choose any Linux distribution to keep your image size compact.

    Certspotter

    Start by visiting their Certspotter GitHub. I strongly advise thoroughly reading their documentation to acquaint yourself with the tool.

    Installation:

    Bash
    go install software.sslmate.com/src/certspotter/cmd/certspotter@latest

    Next, create directories:

    Bash
    mkdir $HOME/.certspotter
    mkdir $HOME/.certspotter/hooks.d # scripts
    touch $HOME/.certspotter/watchlist # targets

    The watchlist file is straightforward:

    Bash
    exploit.to
    virus.malware.to
    .bmw.de

    Prefixing a domain with a . signifies monitoring the domain and all its subdomains. Without the prefix, Certspotter will monitor certificates matching the exact domain/subdomain.

    I can anticipate your next thought—you want all the logs, don’t you? Since 2013, there have been 7,485,653,605 of them (Source), requiring substantial storage. If you’re undeterred, you’d need to modify this code here and rebuild Certspotter to bypass the watchlist and retrieve everything.

    Now, let’s set up the systemd service. Here’s how mine looks:

    Bash
    sudo nano /etc/systemd/system/certspotter.service

    You’ll need to adjust the paths unless your username is also karl:

    Bash
    [Unit]
    Description=Certspotter Service
    After=network.target
    
    [Service]
    Environment=HOME=/home/karl
    Environment=CERTSPOTTER_CONFIG_DIR=/home/karl/.certspotter
    Type=simple
    ExecStart=/home/karl/go/bin/certspotter -verbose
    Restart=always
    RestartSec=3
    
    [Install]
    WantedBy=multi-user.target

    Note: I’m currently not utilizing the -start_at_end flag. As a result, Certspotter begins at the earliest log entry and might take a considerable amount of time to reach recently issued certificates. By adding the -start_at_end parameter to the certspotter command in the line that begins with ExecStart=, you instruct it to disregard previously issued certificates and start monitoring from the current time onward.

    To activate and check if it’s running, run this:

    Bash
    sudo systemctl daemon-reload
    sudo systemctl start certspotter
    sudo systemctl status certspotter
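    You can then follow the service’s output to make sure it is happily chewing through the logs:

    Bash
    sudo journalctl -u certspotter -f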

    Now let us add a script in hooks.d:

    Bash
    touch $HOME/.certspotter/hooks.d/certspotter.sh
    sudo chmod u+x $HOME/.certspotter/hooks.d/certspotter.sh

    If the hook has issues reading the environment variables, you might have to experiment with the permissions.

    In certspotter.sh:

    Bash
    #!/bin/bash
    
    if [ -z "$EVENT" ] || [ "$EVENT" != 'discovered_cert' ]; then
        # no event
        exit 0
    fi
    
    DNS=$(cut -d "=" -f2 <<< "$SUBJECT_DN")
    IP="$(dig "$DNS" A +short | grep -v '\.$' | head -n 1 | tr -d '\n')"
    IP6="$(dig "$DNS" AAAA +short | grep -v '\.$' | head -n 1 | tr -d '\n')"
    
    JSON_FILE_DATA=$(cat "$JSON_FILENAME")
    dns_names=$(echo "$JSON_FILE_DATA" | jq -r '.dns_names | join("\n")')
    
    JSON_DATA=$(cat <<EOF
    {
        "pubkey": "$PUBKEY_SHA256",
        "watch_item": "$WATCH_ITEM",
        "not_before": "$NOT_BEFORE_RFC3339",
        "not_after": "$NOT_AFTER_RFC3339",
        "dns_names": "$dns_names",
        "issuer": "$ISSUER_DN",
        "asn": "$ASN",
        "ipv4": "$IP",
        "ipv6": "$IP6",
        "cn": "$SUBJECT_DN",
        "crt.sh": "https://crt.sh/?sha256=$CERT_SHA256"
    }
    EOF
    )
    
    # post data to br... might do something with answer
    response=$(curl -s -X POST -H "Content-Type: application/json" \
        -H "Content-Type: application/json" \
        -d "$JSON_DATA" \
        "http://10.102.0.11:8080/api/v1/certspotter/in")

    You could edit this to your liking. The data should look like this:

    JSON
    {
      "pubkey": "ca4567a91cfe51a2771c14f1462040a71d9b978ded9366fe56bcb990ae25b73d",
      "watch_item": ".google.com",
      "not_before": "2023-11-28T14:30:55Z",
      "not_after": "2024-01-09T14:30:54Z",
      "dns_names": ["*.sandbox.google.com"],
      "isssuer": "C=US, O=Google Trust Services LLC, CN=GTS CA 1C3",
      "asn": "GOOGLE,US",
      "ipv4": "142.250.102.81",
      "ipv6": "2a00:1450:4013:c00::451",
      "cn": "CN=*.sandbox.google.com",
      "crt.sh": "https://crt.sh/?sha256=cb657858d9fb6475f20ed5413d06da261be20951f6f379cbd30fe6f1e2558f01"
    }

    Depending on your target, it will take a while until you see results. Maybe even days.

    Summary

    In this first part of our exploration into Certspotter, we’ve laid the groundwork for understanding its significance in passive reconnaissance. Certspotter emerges as a pivotal tool in monitoring certificate transparency logs, enabling hackers to gather crucial intelligence without alerting their targets.

    We’ve delved into the distinction between passive and active reconnaissance, emphasizing the importance of discreet operations in avoiding detection. Through Certspotter, hackers gain the ability to monitor target domains and subdomains continuously, staying informed about new developments and potential vulnerabilities.

    As we conclude Part 1, we’ve only scratched the surface of what Certspotter has to offer. In Part 2, we’ll dive deeper into advanced techniques for leveraging Certspotter’s capabilities, exploring tools to enrich our data and enhance our reconnaissance efforts. Stay tuned for an in-depth exploration of Certspotter’s potential in uncovering valuable insights for hackers.

    For Part 2 go this way -> Here

  • Exploring OSINT Tools: From Lightweight to Powerhouse

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Introduction

    Welcome to a journey through the exciting world of Open Source Intelligence (OSINT) tools! In this post, we’ll dive into some valuable tools, from the lightweight to the powerhouse, culminating in the grand reveal of Spiderfoot.

    The main star of this post is Spiderfoot, but before we get there, I want to show you some other more lightweight tools you might find useful.

    Holehe

    While perusing one of my favorite OSINT blogs (Oh Shint), I stumbled upon a gem to enhance my free OSINT email tool: Holehe.

    Holehe might seem like a forgotten relic to some, but its capabilities are enduring. Developed by megadose, this tool packs a punch when it comes to unearthing crucial information.

    Sherlock

    Ah, Sherlock – an old friend in my toolkit. I’ve relied on this tool for countless investigations, probably on every single one. The ability to swiftly uncover and validate your targets’ online presence is invaluable.

    Sherlock’s prowess lies in its efficiency. Developed by Sherlock Project, it’s designed to streamline the process of gathering information, making it a staple for OSINT enthusiasts worldwide.

    Introducing Holehe

    First up, let’s shine a spotlight on Holehe, a tool that might have slipped under your radar but packs a punch in the OSINT arena.

    Easy Installation

    Getting Holehe up and running is a breeze. Just follow the simple steps below. I quickly hopped on my Kali test machine and installed it:

    Bash
    git clone https://github.com/megadose/holehe.git
    cd holehe/
    sudo python3 setup.py install

    I’d recommend installing it with Docker, but since I reinstall my demo Kali box every few weeks, it doesn’t matter that I globally install a bunch of Python libraries.
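    For reference, a Docker-based run would look roughly like this, assuming you build the image from the Dockerfile in the repository (the image tag is arbitrary):

    Bash
    docker build -t holehe .
    docker run --rm holehe holehe --only-used [email protected]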

    Running Holehe

    Running Holehe is super simple:

    Bash
    holehe --no-clear --only-used [email protected]

    I used the --no-clear flag so I can just copy my executed command; otherwise, it clears the terminal. I use the --only-used flag because I only care about pages that my target uses.

    Let’s check out the result:

    Bash
    *********************
       [email protected]
    *********************
    [+] wordpress.com
    
    [+] Email used, [-] Email not used, [x] Rate limit, [!] Error
    121 websites checked in 10.16 seconds
    Twitter : @palenath
    Github : https://github.com/megadose/holehe
    For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ
    100%|█████████████████████████████████████████| 121/121 [00:10<00:00, 11.96it/s]

    Sweet! We have a hit! Holehe checked 121 different pages in 10.16 seconds.

    Debugging Holehe

    So running the tool without the --only-used flag is, in my opinion, important for debugging. It seems that a lot of pages rate-limited me or threw errors, so there is a lot of potential for missed accounts here.

    Bash
    *********************
       [email protected]
    *********************
    [x] about.me
    [-] adobe.com
    [-] amazon.com
    [x] amocrm.com
    [-] any.do
    [-] archive.org
    [x] forum.blitzortung.org
    [x] bluegrassrivals.com
    [-] bodybuilding.com
    [!] buymeacoffee.com
    
    [+] Email used, [-] Email not used, [x] Rate limit, [!] Error
    121 websites checked in 10.22 seconds

    the list is very long, so I removed a lot of the output

    Personally, I think that since a lot of that code is 2 years old, many of these pages have become a lot smarter about detecting bots, which is why the rate limit gets reached.

    Holehe Deep Dive

    Let us look at how Holehe works by analyzing one of the modules. I picked Codepen.

    Please check out the code. I added some comments:

    Python
    from holehe.core import *
    from holehe.localuseragent import *
    
    
    async def codepen(email, client, out):
        name = "codepen"
        domain = "codepen.io"
        method = "register"
        frequent_rate_limit = False
    
        # adding necessary headers for codepen signup request
        headers = {
            "User-Agent": random.choice(ua["browsers"]["chrome"]),
            "Accept": "*/*",
            "Accept-Language": "en,en-US;q=0.5",
            "Referer": "https://codepen.io/accounts/signup/user/free",
            "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
            "X-Requested-With": "XMLHttpRequest",
            "Origin": "https://codepen.io",
            "DNT": "1",
            "Connection": "keep-alive",
            "TE": "Trailers",
        }
    
        # getting the CSRF token for later use, adding it to the headers
        try:
            req = await client.get(
                "https://codepen.io/accounts/signup/user/free", headers=headers
            )
            soup = BeautifulSoup(req.content, features="html.parser")
            token = soup.find(attrs={"name": "csrf-token"}).get("content")
            headers["X-CSRF-Token"] = token
        except Exception:
            out.append(
                {
                    "name": name,
                    "domain": domain,
                    "method": method,
                    "frequent_rate_limit": frequent_rate_limit,
                    "rateLimit": True,
                    "exists": False,
                    "emailrecovery": None,
                    "phoneNumber": None,
                    "others": None,
                }
            )
            return None
    
        # here is where the supplied email address is added
        data = {"attribute": "email", "value": email, "context": "user"}
    
        # post request that checks if account exists
        response = await client.post(
            "https://codepen.io/accounts/duplicate_check", headers=headers, data=data
        )
    
        # checks response for specified text. If email is taken we have a hit
        if "That Email is already taken." in response.text:
            out.append(
                {
                    "name": name,
                    "domain": domain,
                    "method": method,
                    "frequent_rate_limit": frequent_rate_limit,
                    "rateLimit": False,
                    "exists": True,
                    "emailrecovery": None,
                    "phoneNumber": None,
                    "others": None,
                }
            )
        else:
            # we land here if email is not taken, meaning no account on codepen
            out.append(
                {
                    "name": name,
                    "domain": domain,
                    "method": method,
                    "frequent_rate_limit": frequent_rate_limit,
                    "rateLimit": False,
                    "exists": False,
                    "emailrecovery": None,
                    "phoneNumber": None,
                    "others": None,
                }
            )

    The developer of Holehe had to do a lot of digging. They had to manually analyze the signup flow of a bunch of different pages to build these modules. You can easily do this yourself using a tool like OWASP ZAP, Burp Suite, or Postman. It is a lot of manual work, though.

    The issue is that flows like this often change. If Codepen changed the response message or format, this code would fail. That’s the general problem with building web scrapers. If a header name or HTML element is changed, the code fails. This sort of code is very hard to maintain. I am guessing it is why this project has been more or less abandoned.

    Nonetheless, you could easily fix the modules, and this would work perfectly again. I suggest using Python Playwright for the requests; using a headless browser is harder to detect and will probably lead to higher success.

    Sherlock

    Let me introduce you to another tool called Sherlock, which I’ve frequently used in investigations.

    Installation

    I’m just going to install it on my test system. But there’s also a Docker image I’d recommend for a production server:

    Bash
    git clone https://github.com/sherlock-project/sherlock.git
    cd sherlock
    python3 -m pip install -r requirements.txt

    Sherlock offers a plethora of options, and I recommend studying them for your specific case. It’s best used with usernames, but today, we’ll give it a try with an email address.

    Running Sherlock

    Simply run:

    Bash
    python3 sherlock [email protected]

    Sherlock takes a little longer than Holehe, so you need a bit more patience. Here are the results of my search:

    Bash
    [*] Checking username [email protected] on:
    
    [+] Archive.org: https://archive.org/details/@[email protected]
    [+] BitCoinForum: https://bitcoinforum.com/profile/[email protected]
    [+] CGTrader: https://www.cgtrader.com/[email protected]
    [+] Chaos: https://chaos.social/@[email protected]
    [+] Cults3D: https://cults3d.com/en/users/[email protected]/creations
    [+] Euw: https://euw.op.gg/summoner/[email protected]
    [+] Mapify: https://mapify.travel/[email protected]
    [+] NationStates Nation: https://nationstates.net/[email protected]
    [+] NationStates Region: https://nationstates.net/[email protected]
    [+] Oracle Community: https://community.oracle.com/people/[email protected]
    [+] Polymart: https://polymart.org/user/[email protected]
    [+] Slides: https://slides.com/[email protected]
    [+] Trello: https://trello.com/[email protected]
    [+] chaos.social: https://chaos.social/@[email protected]
    [+] mastodon.cloud: https://mastodon.cloud/@[email protected]
    [+] mastodon.social: https://mastodon.social/@[email protected]
    [+] mastodon.xyz: https://mastodon.xyz/@[email protected]
    [+] mstdn.io: https://mstdn.io/@[email protected]
    [+] social.tchncs.de: https://social.tchncs.de/@[email protected]
    
    [*] Search completed with 19 results

    At first glance, there are a lot more results. However, upon review, only 2 were valid, which is still good considering this tool is normally not used for email addresses.

    Sherlock Deep Dive

    Sherlock has a really nice JSON file that can easily be edited to add or remove old tools. You can check it out at sherlock/resources/data.json.

    This makes it a lot easier to maintain. I use the same approach for my OSINT tools here on this website.

    This is what one of Sherlock’s modules looks like:

    JSON
      "Docker Hub": {
        "errorType": "status_code",
        "url": "https://hub.docker.com/u/{}/",
        "urlMain": "https://hub.docker.com/",
        "urlProbe": "https://hub.docker.com/v2/users/{}/",
        "username_claimed": "blue"
      },

    There’s not much more to it; they basically use these “templates” and test the responses they get from requests sent to the respective endpoints. Sometimes by matching text, sometimes by using regex.
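    Since it is just JSON, you can poke at it with jq, for example to count the supported sites or inspect a single template (paths assume you are inside the cloned repo):

    Bash
    jq -r 'keys[]' sherlock/resources/data.json | wc -l   # number of supported sites
    jq '."Docker Hub"' sherlock/resources/data.json       # show a single template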

    Spiderfoot

    Now we get to the star of the show: Spiderfoot. I love Spiderfoot. I use it on every engagement, usually only in Passive mode with just about all the API Keys that are humanly affordable. The only thing I do not like about it is that it actually finds so much information that it takes a while to sort through the data and filter out false positives or irrelevant data. Playing around with the settings can drastically reduce this.

    Installation

    Spiderfoot is absolutely free, and even without API keys for other services, it finds a mind-boggling amount of information. It has saved me countless hours on people investigations; you would not believe it.

    You can find the installation instructions on the Spiderfoot GitHub page. There are also Docker deployments available for this. In my case, it is already pre-installed on Kali, so I just need to start it.

    Bash
    spiderfoot -l 0.0.0.0:8081

    This starts the Spiderfoot webserver, and I can reach it from my network on the IP of my Kali machine on port 8081. In my case, that would be http://10.102.0.11:8081/.

    After you navigate to the address, you will be greeted with this screen:

    I run a headless Kali, so I just SSH into my Kali “server.” If you are following along, you can simply run spiderfoot -l 127.0.0.1:8081 and only expose it on localhost, then browse there on your Kali Desktop.

    Running Spiderfoot

    Spiderfoot is absolutely killer when you add as many of the API Keys as possible. A lot of them are for free. Just export the Spiderfoot.cfg from the settings page, fill in the keys, then import them.

    Important: before you begin, check the settings. Things like port scans are enabled by default. Your target will know you are scanning them. By default, this is not a passive recon tool like the others. You can disable them OR just run Spiderfoot in Passive mode when you configure a new scan.

    My initial scan did not find much info; that’s good, since the email address I supplied should be absolutely clean. I did want to show you some results, so I started another search with my karlcom.de domain, which is my consulting company.

    By the time the scan was done, it had found over 2000 results linking Karlcom to Exploit and a bunch of other businesses and websites I run. It found my clear name and a whole bunch of other interesting information about what I do on the internet and how things are connected. All that just by putting my domain in without ANY API keys. That is absolutely nuts.

    You get a nice little correlation report at the end (you do not really need to see all the things in detail here):

    Once you start your own Spiderfoot journey, you will have more than enough time to study the results there and see them as big as you like.

    Another thing I did not show you was the “Browse” option. While a scan is running, you can view the results in the web front end and already check for possible other attack vectors or information.

    Summary

    So, what did we accomplish on our OSINT adventure? We took a spin through some seriously cool tools! From the nifty Holehe to the trusty Sherlock and the mighty Spiderfoot, each tool brings its own flair to the table. Whether you’re sniffing out secrets or just poking around online, these tools have your back. With their easy setups and powerful features, Holehe, Sherlock, and Spiderfoot are like the trusty sidekicks you never knew you needed in the digital world.

    Keep exploring, stay curious, and until next time!

  • Node-RED, building Nmap as a Service

    Introduction

    In the realm of cybersecurity, automation is not just a convenience but a necessity. Having a tool that can effortlessly construct endpoints and interconnect various security tools can revolutionize your workflow. Today, I’m excited to introduce you to Node-RED, a powerhouse for such tasks.

    This is part of a series of hacking tools automated with Node-RED.

    Setup

    While diving into the intricacies of setting up a Kali VM with Node-RED is beyond the scope of this blog post, I’ll offer some guidance to get you started.

    Base OS

    To begin, you’ll need a solid foundation, which is where Kali Linux comes into play. Whether you opt for a virtual machine setup or use it as the primary operating system for your Raspberry Pi, the choice is yours.

    Running Node-RED

    Once you’ve got Kali Linux up and running, the next step is to install Node-RED directly onto your machine, NOT in a Docker container, since you will need root access to the host system. Follow the installation guide provided by the Node-RED team.

    To ensure seamless operation, I highly recommend configuring Node-RED to start automatically at boot. One effective method to achieve this is by utilizing PM2.

    By following these steps, you’ll have Node-RED set up and ready to streamline your cybersecurity automation tasks.

    Nmap as a Service

    In this section, we’ll create a web service that executes Nmap scans, accessible via a URL like so: http://10.10.0.11:8080/api/v1/nmap?target=exploit.to (Note: Your IP, port, and target will differ).

    Building the Flow

    To construct this service, we’ll need to assemble the following nodes:

    • HTTP In
    • Function
    • Exec
    • Template
    • HTTP Response

    That’s all it takes.

    You can define any path you prefer for the HTTP In node. In my setup, it’s /api/v1/nmap.

    The function node contains the following JavaScript code:

    JavaScript
    msg.scan_options = "-sS -Pn -T3";
    msg.scan_target = msg.payload.target;
    
    msg.payload = msg.scan_options + " " + msg.scan_target;
    return msg;

    It’s worth noting that this scan needs to be run as the root user due to the -sS flag (learn more here). The msg.payload.target parameter holds the ?target= value. While filtering and validating input (e.g., domain or IP) is crucial in production, this suffices for local testing.

    The Exec node is straightforward:

    It simply executes Nmap and appends the msg.payload from the previous function node. So, in this example, it results in:

    Bash
    nmap -sS -Pn -T3 exploit.to

    The Template node formats the result for web display using Mustache syntax:

    <pre>
    {{payload}}
    </pre>

    Finally, the HTTP Response node sends the raw Nmap output back to the browser. It’s important to note that this setup isn’t suitable for extensive Nmap scans that take a while, as the browser may time out while waiting for the response to load.

    You now have a basic Nmap as a Service.
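
    To try it out, call the endpoint from any machine that can reach your Node-RED host (IP, port, and target adjusted to your setup):

    Bash
    curl "http://10.10.0.11:8080/api/v1/nmap?target=exploit.to"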

    TODO

    You can go anywhere from here, but I would suggest:

    • add validation to the endpoint
    • add features to supply custom nmap flags
    • stream result to browser via WebSocket
    • save output to database or file and poll another endpoint to check if done
    • format output for web (either greppable nmap or xml)
    • ChatOps (Discord, Telegram bot)

    Edit 1:

    I ended up adding validation for domains and IPv4 addresses. I also modified the target variable: downstream nodes now read msg.target instead of msg.payload.target.

    JavaScript
    function validateDomain(domain) {
      var domainRegex = /^(?!:\/\/)([a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}$/;
      return domainRegex.test(domain);
    }
    
    function validateIPv4(ipv4) {
      var ipv4Regex =
        /^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$/;
      return ipv4Regex.test(ipv4);
    }
    
    if (validateDomain(msg.payload.target) || validateIPv4(msg.payload.target)) {
      msg.passed = true;
      msg.target = msg.payload.target;
      return msg;
    }
    
    msg.passed = false;
    return msg;
    

    The flow now looks like this and checks msg.passed: if it is false, it returns an HTTP 400 Bad Request; otherwise, it starts the Nmap scan.
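
    A minimal sketch of that branching as a two-output Function node (my wiring assumption: output 1 feeds the Exec node, output 2 feeds the HTTP Response node):

    JavaScript
    // Route based on the validation result set by the previous node
    if (msg.passed) {
      return [msg, null]; // output 1: continue to the Exec node
    }
    
    msg.statusCode = 400; // the HTTP Response node honors msg.statusCode
    msg.payload = "Bad Request: target must be a valid domain or IPv4 address";
    return [null, msg]; // output 2: straight to the HTTP Response node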

  • AdGuard Home for security and blocking ads

    AdGuard Home for security and blocking ads

    Introduction

    In today’s digital age, the internet is saturated with advertisements and trackers, leading to slower browsing speeds and potential security threats. To combat these issues, leveraging AdGuard Home as a central ad blocker provides several key benefits, such as improved security and faster internet speeds. While browser plugins like “uBlock Origin” offer protection for individual devices within specific browsers, they fall short in safeguarding all devices, particularly those without browser support, such as IoT devices.

    Enhanced Security

    AdGuard Home blocks intrusive ads and potentially harmful websites before they even reach your devices. By filtering out malicious content, AdGuard Home significantly reduces the risk of malware infections and phishing attacks, safeguarding your personal information and sensitive data.

    Protecting Privacy

    Ads often come bundled with tracking scripts that monitor your online behavior, compromising your privacy. With AdGuard Home, you can prevent these trackers from collecting your data, preserving your anonymity and preventing targeted advertising based on your browsing habits.

    Faster Internet Speeds

    Advertisements consume valuable bandwidth and resources, leading to slower loading times and sluggish internet performance. By eliminating ads and unnecessary tracking requests, AdGuard Home helps optimize your network’s efficiency, resulting in faster page loading speeds and smoother browsing experiences.

    Open-Source and Transparent

    AdGuard Home is open-source software, meaning its source code is freely available for scrutiny and verification by the community. This transparency fosters trust and ensures that the software operates with integrity, free from hidden agendas or backdoors.

    Hosting AdGuard Home

    Now that you understand why you would want a central ad blocker in your home network, let’s get started with setting it up.

    First you need to decide if you want to host locally or in the cloud.

    Hosting locally:

    Benefits:

    • easier setup
    • more secure
    • private

    Drawbacks:

    • need a server

    Cloud hosting:

    Benefits:

    • no local server
    • better uptime

    Drawbacks:

    • more setup for same privacy and security

    Since DNS is not encrypted (unlike HTTPS, for example), the cloud provider will most likely be able to see your DNS queries, so you will need to use DNS-over-TLS or DNS-over-HTTPS. In a home network it is usually okay to use regular DNS, because AdGuard Home itself uses DNS-over-TLS/HTTPS resolvers to get the final address. An example of Cloudflare’s resolvers:

    https://dns.cloudflare.com/dns-query
    tls://1dot1dot1dot1.cloudflare-dns.com
    https://security.cloudflare-dns.com/dns-query
    tls://security.cloudflare-dns.com

    The regular (unencrypted) DNS server would be 1.1.1.1.
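
    You can test a DNS-over-HTTPS endpoint directly from a shell; Cloudflare’s resolver also speaks a JSON variant at cloudflare-dns.com, which is handy for a quick check:

    Bash
    # Query Cloudflare's DoH resolver for an A record (JSON variant)
    curl -s -H 'accept: application/dns-json' \
      'https://cloudflare-dns.com/dns-query?name=example.com&type=A'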

    Local Setup

    Depending on your setup, you might already have a server, Home Assistant, a pfSense firewall, or nothing at all. I am going to assume you do not have any of the above, so first of all you will need to repurpose an old PC or something like a Raspberry Pi (which I highly recommend; I own 7).

    I have personally run AdGuard Home in a reasonably sized home network on a Raspberry Pi Zero W. I recommend a bigger one, but if you’re on a really tight budget, that is probably your best bet.

    Once you have a machine and it is running a Linux distribution of your choice, you can now install AdGuard Home either in a container or directly.

    AdGuard already has pretty good documentation, which you can find here; I will briefly show you the two options:

    Docker

    Firstly, you need to install Docker. Once you have done this, all that is left to do is basically just run:

    docker run --name adguardhome\
        --restart unless-stopped\
        -v ./work:/opt/adguardhome/work\
        -v ./conf:/opt/adguardhome/conf\
        -p 53:53/tcp -p 53:53/udp\
        -p 67:67/udp -p 68:68/udp\
        -p 80:80/tcp -p 443:443/tcp -p 443:443/udp -p 3000:3000/tcp\
        -p 853:853/tcp\
        -p 784:784/udp -p 853:853/udp -p 8853:8853/udp\
        -p 5443:5443/tcp -p 5443:5443/udp\
        -d adguard/adguardhome

    and that is it. Now you can go to http://[YOUR-IP]:3000 and follow the steps.

    Direct

    Installing directly is just as simple:

    curl -s -S -L https://raw.githubusercontent.com/AdguardTeam/AdGuardHome/master/scripts/install.sh | sh -s -- -v

    That is it. You might have to copy the start command which will be shown in your terminal before you can set up your AdGuard in the browser.

    Web UI

    The steps for AdGuard in the Web UI are self-explanatory, so I will not go into detail here. You are basically good to go. I recommend two things:

    Go to Filters > DNS blocklists, then click “Add blocklist” and select every Security list. I have also enabled all the “General” blocklists and had no trouble.

    Go to Settings > DNS settings; I have the following upstream DNS servers:

    quic://dns.adguard-dns.com
    # SWITCH
    https://dns.switch.ch/dns-query
    tls://dns.switch.ch
    # quad9
    tls://dns.quad9.net
    # cloudflare
    https://dns.cloudflare.com/dns-query
    tls://1dot1dot1dot1.cloudflare-dns.com
    https://security.cloudflare-dns.com/dns-query
    tls://security.cloudflare-dns.com
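
    Once the upstreams are saved, it is worth verifying from another machine that AdGuard Home both resolves and blocks as expected (assuming your instance runs at 192.168.1.2; substitute your own IP):

    Bash
    # A normal domain should resolve
    dig @192.168.1.2 example.com +short
    
    # A blocklisted ad domain should return 0.0.0.0 (AdGuard Home's default blocking mode)
    dig @192.168.1.2 doubleclick.net +short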

    Router Configuration

    I think most readers will have a Fritz!Box, so this is what I will show here. The setup is probably very similar on your router; you basically just have to edit its DNS server settings.

    Fritz!Box

    I did not show you my DNS server IP by mistake; scroll to the end to find out why. 9.9.9.9 is the Quad9 DNS server, which I use as a fallback in case my AdGuard goes down, as otherwise I would not be able to browse the web.

    Bonus

    You are welcome to use my public AdGuard Home instance at 130.61.74.147, just enter it into your router. Please be advised that I will be able to see your DNS logs, which means I would know what you do on the internet… I do not care, to be honest, so you are free to make your own choice here.

  • How to sell rugs online (fast) – hosting your own Dark web market

    How to sell rugs online (fast) – hosting your own Dark web market

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Welcome to the Dark Web Rug Emporium!

    So, you’ve made the bold decision to take your rug-selling business to the mysterious realms of the internet’s underworld? Congratulations on joining the league of adventurers! But before you take the plunge into this clandestine universe, let’s shed some light on what exactly the dark web is.

    Unveiling the Dark Web

    Picture the dark web as the shady back alleys of cyberspace, lurking beyond the reach of traditional search engines like Google or Bing. To access this hidden realm, you’ll need specialized software such as Tor (The Onion Router). Tor works like a digital disguise, masking your online activities by bouncing them through a global network of servers, rendering them virtually untraceable. Think of it as donning a digital ski mask while you explore.

    The Secrets Within

    Within this shadowy domain lies a treasure trove of hidden services known as onion sites. These sites sport the “.onion” suffix and are exclusively accessible via Tor. They operate on encrypted networks, providing users with a veil of anonymity for their online dealings and conversations. Yes, your potential rug emporium can thrive in this covert corner of the internet.

    Setting Up Shop

    But don’t think setting up shop in the dark web is as simple as putting up a “For Sale” sign. It demands a certain level of technical expertise and a deep understanding of anonymity protocols. But fret not, brave entrepreneur, for we’re about to embark on a journey to illuminate the path to rug-selling triumph in the internet’s shadows. So, buckle up, adjust your night vision goggles, and let’s dive in.

    For valuable insights into navigating the dark web as a rug salesman, I highly recommend checking out this enlightening talk: DEF CON 30 – Sam Bent – Tor – Darknet Opsec By a Veteran Darknet Vendor

    Establishing Your Den

    Now that we’ve suited up with our cybernetic fedoras and armed ourselves with the necessary tools, it’s time to establish our base of operations. Think of it as laying the foundation for your virtual rug emporium.

    Payment Processing: Decrypting the Coinage

    In the dark web marketplace, cash is so last millennium. Cryptocurrencies reign supreme, offering a level of anonymity and decentralization that traditional fiat currencies can only dream of. To cater to our discerning clientele, we’ll be accepting payments in Bitcoin and Monero, the preferred currencies of choice for denizens of the deep web.

    But how do we integrate these cryptocurrencies into our rug-selling empire? Fear not, for the internet offers solutions to meet our clandestine needs. Here are a few notable options to consider:

    1. Bitcart: A sleek and user-friendly payment processor. With its robust features and seamless integration, Bitcart ensures a smooth transaction experience for both buyers and sellers. Check out their website for a complete list of features.
    2. BTCPay Server: For the more tech-savvy rug merchants among us, BTCPay Server offers unparalleled flexibility and control over our payment infrastructure. This open-source platform allows us to self-host our payment gateway, giving us complete autonomy over our financial transactions. Check out their website for a complete list of features.

    Now that we’ve selected our payment processors, it’s time to lay the groundwork for our virtual storefront. We’ll be starting with a fresh Debian 12 LXC container, providing us with a clean slate to build upon. Let’s roll up our sleeves and prepare our base system for the dark web bazaar:

    Bash
    sudo su
    apt update && apt upgrade -y
    apt install git curl sudo -y
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh
    

    With our base system primed and ready, we’re one step closer to realizing our rug-selling dreams in the shadowy corners of the internet. But remember, dear reader, the journey ahead is fraught with peril and intrigue. So, steel yourself, for the dark web awaits.

    Bitcart

    Bitcart store dashboard

    Effortless Deployment

    Deploying Bitcart is a breeze with our simplified steps:

    Replace YOUR_DOMAIN_OR_IP with your domain/IP

    Bash
    sudo su -
    apt-get update && apt-get install -y git
    if [ -d "bitcart-docker" ]; then echo "Existing bitcart-docker folder found, pulling instead of cloning."; git -C bitcart-docker pull; fi
    if [ ! -d "bitcart-docker" ]; then echo "Cloning bitcart-docker"; git clone https://github.com/bitcart/bitcart-docker bitcart-docker; fi
    export BITCART_HOST=YOUR_DOMAIN_OR_IP
    export BITCART_REVERSEPROXY=nginx
    export BITCART_CRYPTOS=btc,xmr
    export BITCART_ADDITIONAL_COMPONENTS=tor
    cd bitcart-docker
    ./setup.sh
    

    This will add Tor support and make Monero (XMR) and Bitcoin (BTC) usable.

    After setup, navigate to http://DOMAIN_OR_IP/admin/register to register your first user, who will be designated as your admin.

    Real talk about Bitcart

    Using Bitcart to set up your online store is straightforward, but there’s a lot to learn to make the most of it. Check out their documentation to understand all the options and features.

    Running an online store may seem easy, but it’s actually quite complex. Even though Bitcart makes it easier, there are still challenges, especially if you want to use it with Tor. Tor users might have trouble loading certain parts of your store, which could reveal their identity.

    If you’re comfortable with WordPress, you might want to try Bitcart’s WooCommerce integration. But if you’re serious about building a dark web store, a custom solution is best. Bitcart offers a way to do this, which you can learn about here. You can use Python and Django to build it, which is great because Django lets you make pages with less JavaScript, which is important for user privacy.

    So, while Bitcart is a good starting point, building your own store tailored for the dark web ensures you have more control and can give your users a safer experience. With the right tools and approach, you can create a successful online store in the hidden corners of the internet.

    Harnessing Bitcart’s Capabilities

    If you’re contemplating Bitcart, delving into their documentation could revolutionize your approach. Crafting a tailored solution using their API opens up a plethora of opportunities.

    To bolster security, consider limiting Bitcart’s accessibility to your local machine, shielding it from prying eyes. Meanwhile, powering your marketplace storefront with platforms like PHP (Laravel), Django, or even Next.js provides scalability and flexibility.

    This strategy seamlessly integrates Bitcart’s robust backend features with the versatility of these frameworks, ensuring a smooth and secure shopping experience for your users.

    The reasoning behind this suggestion lies in the solid community support and reliability of battle-tested technologies. Platforms such as PHP (Laravel), Django, and Next.js boast extensive communities and proven track records—essential qualities in the dark web landscape.

    In the clandestine corners of cyberspace, resilience reigns supreme. A single vulnerability in your storefront could lead to catastrophe. By aligning with established frameworks, you gain access to a wealth of expertise and resources, bolstering your defenses against potential threats.

    Ultimately, adopting these trusted technologies isn’t merely a matter of preference—it’s a strategic necessity for safeguarding your online presence in the murky depths of the internet.

    BTCPayServer: Unveiling a Sophisticated Setup

    Setting up BTCPayServer demands a bit more effort due to its slightly complex documentation, especially when deploying on a local network. However, integrating Monero turned out to be surprisingly straightforward. Here’s an excellent guide on that: Accepting Monero via BTCPay Server.

    I’ve made slight modifications to the deployment script from the official documentation:

    Bash
    mkdir BTCPayServer
    cd BTCPayServer
    git clone https://github.com/btcpayserver/btcpayserver-docker
    cd btcpayserver-docker
    export BTCPAY_HOST="btcpay.local"
    export REVERSEPROXY_DEFAULT_HOST="$BTCPAY_HOST"
    export NBITCOIN_NETWORK="mainnet"
    export BTCPAYGEN_CRYPTO1="btc"
    export BTCPAYGEN_CRYPTO2="xmr"
    export BTCPAYGEN_ADDITIONAL_FRAGMENTS="opt-save-storage-xxs" # for demo
    export BTCPAYGEN_REVERSEPROXY="nginx"
    export BTCPAYGEN_LIGHTNING="clightning"
    . ./btcpay-setup.sh -i
    

    Note that this is a local setup, but it will still be publicly accessible over the onion address.
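
    Since btcpay.local is not a real DNS name, machines on your LAN need a hosts entry pointing it at the server. A minimal sketch (10.10.0.20 is a placeholder for your server’s IP):

    Bash
    # Run on each client machine that should reach the BTCPay web UI
    echo "10.10.0.20 btcpay.local" | sudo tee -a /etc/hosts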

    What distinguishes BTCPayServer is its sleek and modern admin interface. As someone who appreciates good design, I find its aesthetics truly appealing. Furthermore, it includes a built-in store and support for Tor, adding an extra layer of privacy.

    Customization is seamless with BTCPayServer’s highly adaptable UI. Additionally, its robust API empowers users to craft their own frontend experiences, ensuring flexibility and control.

    Their documentation provides clear and insightful examples, making development a delightful experience. Personally, as a fan of NodeJS, I found their NodeJS examples particularly helpful.

    In this demonstration, I’ll initiate a Fast Sync to expedite the process. However, in practical scenarios, exercising patience becomes crucial. Given my location in a less technologically advanced country like Germany, Fast Sync typically completes within a few hours on my 100 Mbit/s line, whereas the regular sync could span several days.

    Starting Fast Sync

    Initiating Fast Sync is straightforward. Either follow the documentation or run these commands in your BTCPayServer directory:

    Bash
    ./btcpay-down.sh
    cd contrib/FastSync
    ./load-utxo-set.sh

    Bash
    # Once FastSync has completed
    cd ../..
    ./btcpay-up.sh

    After the sync is done, you can accept payments:

    (Please do not send any Bitcoin to this address. They will be lost.)

    Clearing Things Up

    Before we conclude, let’s debunk a common misconception about the “dark web.” It’s not merely a haven for illicit activities. While I used attention-grabbing examples to highlight these tools, it’s essential to recognize their legitimate applications.

    Gone are the days when Tor provided complete anonymity for nefarious actors. As your enterprise expands, tracing your activities becomes increasingly feasible, albeit challenging.

    I emphasize this point to underscore that the services and tools discussed here aren’t inherently unlawful. While they can be exploited for illicit purposes, they also serve valid functions.

    Consider the case of “Shiny Flakes,” who operated a drug trade through a conventional website without relying on Tor, evading detection for a significant duration. You can explore this story further on Netflix: Shiny Flakes: The Teenage Drug Lord. The takeaway is that we shouldn’t demonize technology solely based on its potential for misuse. Encryption, for example, is integral for safeguarding data, despite its association with ransomware.

    Understanding the dual nature of these technologies is crucial for fostering responsible usage and harnessing their benefits while mitigating risks. It’s a delicate balance between innovation and accountability in the ever-evolving landscape of cybersecurity.

    Crafting Your Own Payment Processor

    Creating a custom lightweight solution isn’t as daunting as it sounds. While the previously mentioned platforms offer comprehensive features, you might find yourself needing only a fraction of them. Allow me to introduce you to one of my “Karl Projects” that I never quite finished. One day, while procrastinating on my actual project, I stumbled upon the idea of a super-secret Telegram chat where people would have to pay fees in Bitcoin or Monero. This brainchild was inspired by contemplating the possibilities of utilizing a State Machine.

    Here’s the gist of what you’ll need:

    • State Management: Maintain states such as ORDER_NEW, ORDER_PROCESSING, ORDER_PAID (see the sketch after this list).
    • Dynamic Address Generation: Generate a new address for each transaction (because, let’s face it, that’s what the cool kids do).
    • Transaction Verification: Verify if transactions are confirmed.
    • Payment Request Generation: Create a mechanism for generating payment requests.
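
    To make the state idea concrete, here is a minimal sketch of such a state machine (my own illustration, separate from the test code below):

    Python
    from enum import Enum, auto

    class OrderState(Enum):
        ORDER_NEW = auto()
        ORDER_PROCESSING = auto()
        ORDER_PAID = auto()

    # Allowed transitions: NEW -> PROCESSING -> PAID
    TRANSITIONS = {
        OrderState.ORDER_NEW: {OrderState.ORDER_PROCESSING},
        OrderState.ORDER_PROCESSING: {OrderState.ORDER_PAID},
        OrderState.ORDER_PAID: set(),
    }

    def advance(current: OrderState, new: OrderState) -> OrderState:
        """Move an order to a new state, refusing illegal jumps (e.g. NEW -> PAID)."""
        if new not in TRANSITIONS[current]:
            raise ValueError(f"Illegal transition: {current.name} -> {new.name}")
        return new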

    Now, let’s take a peek at my unfinished test code. May it ignite your creativity and spur you on to achieve remarkable feats:

    Python
    from typing import List
    from bitcoinlib.wallets import Wallet, wallet_create_or_open, WalletKey, BKeyError
    
    # Creating or opening a wallet
    w = wallet_create_or_open(
        "karls_wallet",
        keys="",
        owner="",
        network=None,
        account_id=0,
        purpose=None,
        scheme="bip32",
        sort_keys=True,
        password="",
        witness_type=None,
        encoding=None,
        multisig=None,
        sigs_required=None,
        cosigner_id=None,
        key_path=None,
        db_uri=None,
        db_cache_uri=None,
        db_password=None,
    )
    
    def get_personal_address(wallet: Wallet, name: str = "") -> WalletKey | List[WalletKey]:
        if not name:
            return wallet.keys()
    
        return wallet.key(name)
    
    def create_new_address(wallet: Wallet, name: str = "") -> WalletKey:
        if not name:
            return wallet.get_key()
    
        return wallet.new_key(name)
    
    def check_for_transaction(wallet_key: str | WalletKey, wallet: Wallet):
        if isinstance(wallet_key, str):
            try:
                wallet_key = wallet.key(wallet_key)
            except BKeyError:
                print(f'Sorry, no key by the name of "{wallet_key}" in the wallet.')
                return
    
        # Rescan this key on-chain, then fetch its most recent transaction
        wallet.scan_key(wallet_key)
        recent_transaction = wallet.transaction_last(wallet_key.address)
    
        if recent_transaction:
            print("Most Recent Transaction:")
            print("Transaction ID:", recent_transaction.txid)
            print("Amount:", recent_transaction.balance_change)
            print("Confirmations:", recent_transaction.confirmations)
        else:
            print("No transactions found for the address.")
    

    Feel free to adapt and expand upon this code to suit your needs. Crafting your payment processor from scratch gives you unparalleled control and customization options, empowering you to tailor it precisely to your requirements. Maybe one day I will put a finished minimalistic payment processor out there.

    Summary

    And with that disappointing note, we conclude for now. But fear not, for knowledge awaits. Here are some additional sources to delve deeper into the world of cybersecurity and anonymity:

    Keep exploring, stay curious, and until next time!

    In case you are from Interpol

    You might be thinking, “Whoa, talking about setting up shop on the dark web sounds sketchy. Should we knock on this guy’s door?” Hey, I get it! But fear not, my friend. Writing about this stuff doesn’t mean I am up to no good. I am just exploring the possibilities, like any curious entrepreneur would. Plus, remember the “Shiny Flakes” story? Bad actors can do bad stuff anywhere, not just on the dark web.

  • Vaultwarden: A Lightweight, Self-Hosted Password Manager

    Vaultwarden: A Lightweight, Self-Hosted Password Manager

    What is Vaultwarden ?

    According to their GitHub page:

    An alternative server implementation of the Bitwarden Client API, written in Rust and compatible with official Bitwarden clients [disclaimer], perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.

    If you’re unfamiliar with Vaultwarden or Bitwarden, here’s a quick primer: Vaultwarden is a self-hosted password manager that allows you to securely access your credentials via web browsers, mobile apps, or desktop clients. Unlike traditional cloud-based solutions, Vaultwarden is designed for those of us who value control over our data and want a “syncable” password manager without the resource-heavy overhead.

    Since anything that isn’t self-hosted or self-administered is out of the question for me, Vaultwarden naturally caught my attention. Its lightweight design is perfect for a minimal resource setup. Here’s what I allocated to my Vaultwarden instance:

    Alpine LXC

    1 CPU Core

    1 GB RAM

    5 GB SSD Storage

    And let me tell you, this thing is bored. The occasional uptick in memory usage you might notice is mostly me testing backups or opening 20 simultaneous sessions across devices—so not even Vaultwarden’s fault. To put it simply: you could probably run this on a smart toaster, and it would still perform flawlessly.

    Why I Tried Vaultwarden

    Initially, I came across Vaultwarden while exploring the Proxmox VE Helper Scripts website and thought, “Why not give it a shot?” The setup was quick, and I was immediately impressed by its sleek, modern UI. Since Vaultwarden is compatible with Bitwarden clients, you get the added bonus of using the polished Bitwarden desktop app and its functional, albeit less visually appealing, browser extension.

    My main motivation for trying Vaultwarden was to move away from syncing my KeePass database across Nextcloud and iCloud. This process had become tedious, especially when setting up new development environments or trying out new Linux distributions—something I do frequently.

    Each time, I had to manually copy over my KeePass database, which meant logging into Nextcloud to retrieve it—a task that was ironically dependent on a password stored inside KeePass, which I didn’t have access to yet. With Vaultwarden, I can simply open a browser, enter my master password, and access everything instantly.

    Yes, it’s only one or two steps less than my KeePassXC workflow, but sometimes those minor annoyances add up more than they should. Vaultwarden’s seamless syncing across devices has been a breath of fresh air.

    Is KeePassXC Bad? Not at All! Here’s Why I Still Love It

    Over the years, KeePassXC has been an indispensable tool for managing my passwords and SSH keys. Even as new solutions like Vaultwarden (a self-hosted version of Bitwarden) gain popularity, KeePassXC continues to hold its ground, excelling in several areas where others fall short. Here’s a detailed breakdown of why I still rely on KeePassXC and how it outshines alternatives like Vaultwarden and Bitwarden.

    Why KeePassXC Stands Out (in my opinion)

    1. Superior Password Generator

    KeePassXC’s default password generator is leaps and bounds ahead of the competition. Its design is both powerful and intuitive, offering extensive customization without overwhelming the user. You can effortlessly fine-tune the length, complexity, and character set of generated passwords, making it ideal for advanced use cases.

    2. SSH Agent Integration

    If you work with multiple SSH keys (I manage over 100), KeePassXC’s built-in SSH agent is a game-changer. It allows seamless integration and management of SSH keys alongside your passwords, streamlining workflows for developers and sysadmins alike. This feature alone makes KeePassXC a must-have for me.

    3. File and Hidden Text Storage

    Unlike Bitwarden, which doesn’t currently support file storage, KeePassXC offers advanced options for securely storing files and hidden text.

    Why I’m Running KeePassXC and Vaultwarden in Parallel

    While I’ve started using Vaultwarden for some tasks, there are still key features in KeePassXC that I simply can’t live without:

    Local-Only Security:

    KeePassXC keeps everything offline by default, which eliminates the risks of exposing passwords to the internet. Even though I host Vaultwarden behind a VPN for added peace of mind, there’s something inherently reassuring about KeePassXC’s local-first approach.

    Privacy vs. Accessibility:

    Vaultwarden offers enough security features (MFA, WebAuthn, or hardware tokens) to safely expose it online, but the idea of having my passwords accessible over the internet still feels unsettling. For that reason, KeePassXC remains my go-to for my most sensitive credentials. I am probably just paranoid; hosting it behind Cloudflare and a firewall with a client certificate would add sufficient security on top, so you would not have to worry.

    Unique Features:

    There are small yet critical features in KeePassXC, like its file storage capabilities and SSH agent integration, that Vaultwarden simply lacks at the moment.

    What Vaultwarden Does Well

    To give credit where it’s due, Vaultwarden brings some compelling features to the table. One standout is the reporting feature, which alerts you to compromised passwords. It’s a fantastic tool for staying on top of security best practices. I am also a huge fan of web-based tools, and I like the UI and UX in general.

    Conclusion

    Both KeePassXC and Vaultwarden have their strengths, and which one you choose ultimately depends on your priorities. For me, KeePassXC remains the gold standard for password management, offering unparalleled functionality for advanced users. Vaultwarden complements it well for “cloud”-based access and reporting, but it still has a long way to go before it can replace KeePassXC in my workflow.

    For now, running both in parallel strikes the perfect balance between security, usability, and convenience. Since I am running Vaultwarden on my Proxmox, which is already handling all my backup tasks, I also do not have to worry about data loss or doing extra work.

  • Unlock the Power of Remote Development with code-server

    Unlock the Power of Remote Development with code-server

    In the fast-paced world of software development, flexibility and efficiency are paramount. Enter code-server, an innovative tool that allows you to run Visual Studio Code (VS Code) in your browser, bringing a seamless and consistent development environment to any device, anywhere.

    Whether you’re working on a powerful desktop, a modest laptop, or even a tablet (pls don’t!), code-server ensures you have access to your development environment at all times. Here’s an in-depth look at what makes code-server a game-changer.

    What is code-server ?

    code-server is an open-source project that enables you to run VS Code on a remote server and access it via your web browser. This means you can:

    • Work on any device with an internet connection.

    • Leverage the power of cloud servers to handle resource-intensive tasks.

    • Maintain a consistent development environment across devices.

    With over 69.2k stars on GitHub, code-server has gained significant traction among developers, teams, and organizations looking for efficient remote development solutions.

    Why would you use code-server ?

    1. Flexibility Across Devices

    Imagine coding on your laptop, switching to a tablet, or even a Chromebook, without missing a beat. With code-server, your development environment follows you wherever you go—seamlessly.

    2. Offloading Performance to the Server

    Running resource-intensive tasks on a server instead of your local machine? Yes, please! Whether you’re working on complex builds or handling large datasets, code-server takes the heavy lifting off your device and onto the server.

    3. Bringing Your Dev Environment Closer to LLMs

    With the rise of large language models (LLMs), working near powerful servers hosting these models has become a necessity. No more downloading terabytes of data just to test integrations locally. Code-server simplifies this by placing your environment right where the action is.

    4. Because I Can! 🥳

    As a coder and IT enthusiast, sometimes the best reason is simply: Because I can! Sure, you could run local VSCode with “Remote Development” extensions or install it directly on a Chromebook—but where’s the fun in that? 😉

    5. Streamlined Backup and File Management

    One of my favorite aspects? Developing directly on a remote system where my regular backup processes already take care of everything. No extra steps, no worries—just peace of mind knowing my work is secure.

    I just did it to do it; I use code-server to manage all my Proxmox scripts and develop little sysadmin tools. You also get a nice web shell.

    Installation

    Requirements

    Before diving in, make sure your system meets the minimum requirements:

    • Linux machine with WebSockets enabled (this is important to know when you use a reverse proxy).

    • At least 1 GB RAM and 2 vCPUs.

    I think you can get away with 1 vCPU; mine is bored most of the time. Obviously, running resource-intensive code will use more.

    Check out the full requirements here.

    Installation

    There are multiple ways to get started with code-server, but I chose the easiest one:

    Bash
    curl -fsSL https://code-server.dev/install.sh | sh

    This script ensures code-server is installed correctly and even provides instructions for starting it. Never run a script like this straight from the internet without checking it first.
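
    If you want to follow that advice (and you should), download the script first, read it, and use the installer’s --dry-run mode to see what it would do before committing:

    Bash
    curl -fsSL https://code-server.dev/install.sh -o install.sh
    less install.sh          # review what the script actually does
    sh install.sh --dry-run  # print the commands without executing them
    sh install.sh            # run it for real once you are satisfied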

    Configuration

    After installation, you can customize code-server for your needs. Explore the setup and configuration guide to tweak settings, enable authentication, and enhance your workflow.

    Bash
    nano ~/.config/code-server/config.yaml

    That is where you will find the password to access code-server and you can also change the port:

    ~/.config/code-server/config.yaml
    bind-addr: 127.0.0.1:8080
    password: 5f89a538c9c849b439d0f866
    cert: false

    You can disable authentication by setting auth: none in the config. Personally, I use SSO through Authentik for authentication.
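
    The install script sets up a systemd unit, so once the config is edited you can manage code-server like any other service (per-user unit name as in the code-server docs):

    Bash
    sudo systemctl enable --now code-server@$USER   # start at boot and right now
    sudo systemctl restart code-server@$USER        # apply config changes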

    Now you have an awesome way to code in your browser:

    Resources

    GitHub Repository

    Setup Guide

    Frequently Asked Questions