Category: Blog

Blogpost

  • How to Get Real Trusted SSL Certificates with ACME-DNS in Nginx Proxy Manager

    How to Get Real Trusted SSL Certificates with ACME-DNS in Nginx Proxy Manager

    Today, I’m going to show you how you can obtain real, trusted SSL certificates for your home network or even a public website. Using this method, you can achieve secure HTTPS for your web services with certificates that browsers recognize as valid. Fun fact: the very website you’re reading this on uses this same method!

    This guide focuses on using ACME-DNS with Nginx Proxy Manager (NPM), a popular reverse proxy solution with a user-friendly web interface. Whether you’re setting up a self-hosted website, Nextcloud, or any other service, this approach can provide you with certificates signed by a trusted Certificate Authority (CA) for your home network or the public.

    Prerequisites

    • I am assuming you are on a Debian-based Linux distribution (I will use a Debian 12 LXC). This should work on any host that supports Docker, though.
    • You should have some knowledge of Docker and Docker Compose, and both should be installed. You can find a step-by-step guide here.
    • You need your own domain. I get mine from Namecheap, but any provider works. (I usually point the nameservers at Cloudflare and manage my records there, since Namecheap is cheaper for purchasing domains.)

    Please make sure you have these packages installed:

    Bash
    apt install curl jq nano

    (Yup, I like nano. Feel free to use your editor of choice.)

    Installing Nginx Proxy Manager

    Please refer to the installation guide on the Nginx Proxy Manager Website.

    For our installation we will be using Docker with Docker Compose:

    docker-compose.yml
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '443:443'
          - '81:81' # Admin Port
          # - '80:80' # not needed in this setup
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt
          # - ./custom-syslog.conf:/etc/nginx/conf.d/include/custom-syslog.conf

    I only like to expose port 443; since we will be using ACME-DNS, we will not need port 80. Port 81 is exposed for now, but once everything is configured we will remove it too.

    Now just run this command and you will be able to log in via http://your-ip:81 (replace “your-ip” with the actual IP of your machine; if you are running locally, you can try http://127.0.0.1:81):

    Bash
    docker compose up -d

    The default credentials are:

    Bash
    Email:    admin@example.com
    Password: changeme

    Optional:

    I will also show you my custom syslog config. This is beyond the scope of this post and entirely optional; you do not need it:

    custom-syslog.conf
    log_format proxy_host_logs '$remote_addr - $remote_user [$time_local] '
                               '"$request" $status $body_bytes_sent '
                               '"$http_referer" "$http_user_agent" "$host" '
                               'tag="proxy-host-$host"';
    
    access_log syslog:server=udp://logs.karl:514,tag=proxy-host-$host proxy_host_logs;
    error_log syslog:server=udp://logs.karl:514,tag=proxy-host-$host warn;

    logs.karl is my local DNS record for my rsyslog server. I will make a post about my logging setup and link it here in the future.

    Setting up ACME-DNS

    The official documentation can be found here.

    Simply run this command:

    Bash
    curl -s -X POST https://auth.acme-dns.io/register | jq

    The response should look like this (I used some “X” to anonymize it a little):

    JSON
    {
      "username": "XXXXc1ab-XXXX-XXXX-XXXX-ec893c5ad50e",
      "password": "CkdjW5wqnXXXXXXXXXXXXXXcGZZyznUDkGRuXHdz",
      "fulldomain": "XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11.auth.acme-dns.io",
      "subdomain": "XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11",
      "allowfrom": []
    }

    Please take note of your output and copy it to a file or note-taking tool for later.
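If you prefer keeping the registration on disk, you can save the response to a file and pull fields back out with jq (which we installed earlier). This is just a sketch: the file name is arbitrary and the values below are placeholders, not real credentials.

```shell
# Hypothetical saved registration response (placeholder values):
cat > acme-registration.json <<'EOF'
{
  "username": "XXXXc1ab-XXXX-XXXX-XXXX-ec893c5ad50e",
  "password": "CkdjW5wqnXXXXXXXXXXXXXXcGZZyznUDkGRuXHdz",
  "fulldomain": "XXXX040a.auth.acme-dns.io",
  "subdomain": "XXXX040a",
  "allowfrom": []
}
EOF

# Extract the value you will need for the CNAME record later:
jq -r '.fulldomain' acme-registration.json
```

In a real run you would pipe the register call through `tee acme-registration.json` instead of writing the file by hand.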

    We will need to edit this a little. If you are setting this up for your home network, it is usually a good idea to use a subdomain and a wildcard certificate; this lets you secure anything under that subdomain.

    There should be a “data” directory in your current one, created by the docker command earlier. We will create a JSON config file for Nginx Proxy Manager; you can name it whatever you want.

    Bash
    ls # check if "data" dir exists
    cd data
    nano acme_your_domain.json # use your domain name, but the file name does not matter

    In this file you will need to paste the config. I suggest using a subdomain like “home”.

    JSON
    {
      "home.your-domain.com": {
        "username": "XXXXc1ab-XXXX-XXXX-XXXX-ec893c5ad50e",
        "password": "CkdjW5wqnXXXXXXXXXXXXXXcGZZyznUDkGRuXHdz",
        "fulldomain": "XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11.auth.acme-dns.io",
        "subdomain": "XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11",
        "allowfrom": []
      },
      "*.home.your-domain.com": {
        "username": "XXXXc1ab-XXXX-XXXX-XXXX-ec893c5ad50e",
        "password": "CkdjW5wqnXXXXXXXXXXXXXXcGZZyznUDkGRuXHdz",
        "fulldomain": "XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11.auth.acme-dns.io",
        "subdomain": "XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11",
        "allowfrom": []
      }
    }
    

    It is important to note that a wildcard like this cannot cover something like “plex.media.home.your-domain.com”; a wildcard only matches the specified level of subdomain. If you did want a “sub-sub” domain, you would need to use “*.media.home.your-domain.com”, and so on.

    A note on "allowfrom": []. If you have a static IP that you will always be connecting from, restricting this is a good idea. Since this guide focuses on SSL for your home network, you most likely have a dynamic IP, which would only work until it changes, typically after a day or a week.
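For completeness: per the acme-dns documentation, the allowlist is set by sending an "allowfrom" array of CIDR ranges in the body of the original register call. A sketch of such a request body, using documentation-reserved example ranges you would swap for your own:

```json
{
  "allowfrom": [
    "198.51.100.0/24",
    "2001:db8::/48"
  ]
}
```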

    Configuring DNS Records

    You need to add records both in your local DNS server and at your DNS provider. I am using Cloudflare.

    Cloudflare

    Go to “your Domain -> DNS -> Records”. There you will need to add a CNAME record.

    In the “Name” field, put “_acme-challenge.YOUR-SUBDOMAIN”; in our example, that would be like you see in the image below. In the “Target” field, put the “fulldomain” from your config, like “XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11.auth.acme-dns.io”. Leave “Proxy status” on “DNS only”.

    (If you are doing a public setup rather than a home-only one, you would also add an A, AAAA, or CNAME record pointing to your public IP. For a home setup you do not need this.)

    Local DNS

    The devices in your network need to know that your reverse proxy, i.e. Nginx Proxy Manager, is handling “*.home.your-domain.com”. You need to add this to your local DNS server so that whenever someone goes to “*.home.your-domain.com”, they are directed to your proxy. How you do this varies depending on whether you run Pi-hole, AdGuard, pfSense, OPNsense, or your router’s DNS; technically, you could even edit the hosts file of each device.
    I am using a UniFi Dream Machine:

    In your Dream Machine, go to: /network/default/settings/routing/dns

    There you create a new entry like so:

    Please use your configured domain and the IP of your system.
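As a concrete example of what such an entry looks like elsewhere: on Pi-hole or anything else dnsmasq-based, a single wildcard line does the job. This is a hedged sketch; the file name and IP below are placeholders for your own setup:

```
# /etc/dnsmasq.d/02-home-wildcard.conf
# Resolve everything under home.your-domain.com to the NPM host
address=/home.your-domain.com/192.168.1.50
```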

    Bringing it all together

    All we need to do now is configure our setup in Nginx Proxy Manager. Go to your admin interface at http://your-ip:81/nginx/certificates, then click “Add SSL Certificate” and choose “Let’s Encrypt”.

    There is a lot going on here but I will explain:

    • In “Domain Names” enter the domains you have configured
    • Enter your E-Mail Address
    • Choose “ACME-DNS” in the Provider menu
    • In the API URL enter “https://auth.acme-dns.io”
    • The registration file is the JSON file we created earlier. Enter whatever you called it; the path “/data/” should be correct if you followed all the steps.
    • Leave propagation empty

    Finally just agree and save.

    Your new certificate will pop up once the loading screen goes away.

    It should look like this:

    By the way, I have a profile image because I used my Gravatar email address for the admin login.

    Securing the Nginx Proxy Manager Admin

    Now that we have a certificate let us use it directly on our admin interface.

    Add a new proxy host. Enter the domain of your choosing (you need to change “your-domain.com”). Since NPM is accessing itself inside the Docker network, the hostname is “npm”, its service name from the “docker-compose.yml” at the beginning.

    Under the “SSL” tab just choose your created certificate.

    You do not have to enable the Force SSL, HTTP/2, or Block Common Exploits options for this to work.

    Okay now press Save and test!

    If it works, you can now remove the admin port from the compose file:

    Bash
    docker compose down
    nano docker-compose.yml
    docker-compose.yml
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '443:443'
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt
    Bash
    docker compose up -d --build

    Now you can access your Nginx Proxy Manager admin interface via your new domain with a trusted SSL certificate.

    Conclusion

    Using ACME-DNS with Nginx Proxy Manager is a powerful way to obtain trusted SSL certificates for your home network or website. It simplifies the handling of DNS challenges and automates certificate issuance for secure HTTPS. You also no longer have to expose your local services to the internet to obtain new certificates.

    By following this guide, you’ve gained the tools to secure your online services with minimal hassle. Stay tuned for more tips on managing your self-hosted environment, and happy hosting!

  • Denial-of-Wallet Attacks: Exploiting Serverless

    Denial-of-Wallet Attacks: Exploiting Serverless

    Disclaimer:

    The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk.

    For the full disclaimer, please click here.

    Introduction

    In the fast-paced world of cyber warfare, attackers are always on the hunt for new ways to hit where it hurts – both in the virtual world and the wallet. The latest trend? Denial-of-Wallet (DoW) attacks, a crafty scheme aimed at draining the bank accounts of unsuspecting victims.

    I am assuming you know what serverless is. Otherwise read this first: What is serverless computing?

    Attack Surface

    Serverless setups, touted for their flexibility and scalability, have become prime targets for these digital bandits. But fear not! Here’s your crash course in safeguarding your virtual vaults from these costly exploits.

    What’s a DoW attack, anyway?

    Think of it as the mischievous cousin of the traditional denial-of-service (DoS) onslaught. While DoS attacks aim to knock services offline, DoW attacks have a more sinister agenda: draining your bank account faster than you can say “cloud computing.”

    Unlike their DDoS counterparts, DoW attacks zero in on serverless systems, where users pay for resources consumed by their applications. This means that a flood of malicious traffic could leave you with a bill so hefty, it’d make Scrooge McDuck blush.

    But wait, there’s more!

    With serverless computing, you’re not just outsourcing servers – you’re also outsourcing security concerns. If your cloud provider drops the ball on protection, you could be facing a whole buffet of cyber threats, not just DoW attacks.

    Detecting & Protecting

    Now, spotting a DoW attack isn’t as easy as checking your bank statement. Sure, a sudden spike in charges might raise eyebrows, but by then, the damage is done. Instead, take proactive measures like setting up billing alerts and imposing limits on resource usage. It’s like putting a lock on your wallet before heading into a crowded marketplace.

    And let’s not forget about securing those precious credentials. If an attacker gains access to your cloud kingdom, they could wreak havoc beyond just draining your funds – we’re talking file deletions, instance terminations, the whole nine yards. So buckle up with least privilege services, multi-factor authentication, and service control policies to fortify your defenses.

    In the arms race between cyber crooks and cloud defenders, staying one step ahead is key. So, arm yourself with knowledge, fortify your defenses, and may your cloud budgets remain forever full!

    How to Attack

    This is what you came here for, isn’t it? Before I go on, I would like to remind you of my Disclaimer.

    Cloudflare

    First of all, big shoutout to Cloudflare for actually providing a valuable free tier of services (they do not pay me or anything, I actually like them a lot).

    Basically, they provide serverless functions called “Cloudflare Workers”, whose endpoints usually look like this: worker-blah-blah-1337.blah.workers.dev. You can also choose your own custom domain, but the default route stays enabled. I recommend you disable it, or else… well, stay tuned.

    Here is their own billing example (Source):

                     Monthly Costs   Formula
    Subscription     $5.00
    Requests         $27.00          (100,000,000 requests – 10,000,000 included requests) / 1,000,000 * $0.30
    CPU time         $13.40          (7 ms of CPU time per request * 100,000,000 requests – 30,000,000 included CPU ms) / 1,000,000 * $0.02
    Total            $45.40
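To sanity-check that example, the formula is easy to reproduce. Here is a small sketch of the same math, with the numbers taken straight from the table above:

```python
# Reproduce Cloudflare's Workers billing example from the table above.
included_requests = 10_000_000       # requests included in the $5 subscription
included_cpu_ms = 30_000_000         # CPU milliseconds included
price_per_million_requests = 0.30    # USD
price_per_million_cpu_ms = 0.02      # USD

requests = 100_000_000
cpu_ms_per_request = 7

subscription = 5.00
request_cost = (requests - included_requests) / 1_000_000 * price_per_million_requests
cpu_cost = (cpu_ms_per_request * requests - included_cpu_ms) / 1_000_000 * price_per_million_cpu_ms
total = subscription + request_cost + cpu_cost

print(f"Requests: ${request_cost:.2f}, CPU: ${cpu_cost:.2f}, Total: ${total:.2f}")
# prints: Requests: $27.00, CPU: $13.40, Total: $45.40
```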

    They actually mention denial-of-wallet attacks and how you can counter them, or at least lessen the impact.

    Finding Cloudflare Workers

    One of the easiest ways to find endpoints is GitHub, using a simple query like ?q=workers.dev&type=code or ?q=workers.dev&type=commits. As I am writing this, I found 121,000 lines of code that include workers.dev; subtract some duplicates and you maybe end up with 20,000, some of them belonging to pretty big companies.

    The next easy find is some Google hacking: site:workers.dev returns 2,230,000 results (some being duplicates).

    Attacking Cloudflare Workers (hypothetically)

    Using a tool like Plow, an HTTP(S) benchmarking tool, you can do about 1,000,000 requests per 10 seconds on a normal machine using 20 connections. Playing around with these settings you can probably get a lot more, but it depends on many factors like bandwidth and internet speed. So, in theory, you could cost your target $120 per hour from your home PC or laptop. If you got three of your friends involved, you could cost your target almost $500 per hour. If you run the script 24/7, that is costing your target $12,000 a day, or $84,000 a week. If you are attacking an enterprise, that may not even be that bad for them, but imagine a small company paying $12k every day. As I explained above, there is also no going back: that compute is consumed and will be charged. Depending on whether they use something like KV and other services, you can multiply these numbers. A pretty common pattern is to have one Worker act as an API gateway, so one request could actually trigger up to 50–100 sub-requests.
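Those per-hour figures are rough, and you can estimate them yourself. Ignoring CPU-time charges and the included quotas, request billing alone at the throughput assumed above works out like this (the rate is an assumption, not a measurement):

```python
# Rough request-cost estimate for one attacker at Plow-like throughput:
# ~1,000,000 requests per 10 seconds, billed at $0.30 per million requests.
requests_per_10s = 1_000_000
price_per_million = 0.30  # USD

requests_per_hour = requests_per_10s * 6 * 60  # 360 ten-second windows per hour
cost_per_hour = requests_per_hour / 1_000_000 * price_per_million

print(f"{requests_per_hour:,} requests/hour in request charges alone: ${cost_per_hour:.2f}/h")
# CPU-time charges (7 ms per request in Cloudflare's example) come on top of this.
```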

    If, by just reading this, you feel bad, then congrats 🎉, you are probably one of the good guys, girls or anything in between.

    Back to reality

    Cloudflare being Cloudflare, they obviously have pretty good protections by default, in my experience better than AWS or Azure. So simply running a tool and hoping for carnage will not get you far.

    Some additional protections Cloudflare provides are:

    Being able to do all this easily for free, including their free DDoS protection, builds up a nice barrier against such attacks. Looking at the bigger picture, it is actually crazy that this can all be done for free; on AWS you would have to pay extra for all of these features and essentially denial-of-wallet yourself (😁).

    Any protection is only good if it is enabled and configured correctly. I am using the following WAF rule, for example:

    (not http.user_agent contains "Mozilla/5.0")

    This basically blocks everything that does not advertise itself as a browser. If you know even a tiny bit about how User-Agent headers work, you know that getting around this rule is super simple. You would just need to write a script like this:

    Python
    import requests
    
    url = 'SOME PROTECTED URL'
    
    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36',
    }
    
    # send 100 million requests, each with a one-second timeout
    for _ in range(100_000_000):
        requests.get(url, timeout=1, headers=headers)

    Now my simple filter rule thinks it is a browser and will let it through.

    Check out my 24h WAF statistic:

    As you can see most of the bots and scripts are blocked by this stupid simple rule. I am not showing you the rest of the rules, because I am literally explaining to you how you could get around my defenses, usually not a great idea on a post tagged #blackhat.

    Real world attack

    In a real-world attack you will need residential proxies or multiple IPs with high reputation. You then write a more advanced tool that automates a browser, otherwise you will be detected very quickly. Even better if you use something like undetected_chromedriver for more success.

    Obviously you also want to add random waits, a script being run every second will light up like a christmas tree:

    Python
    from random import randint
    from time import sleep
    
    sleep(randint(0,5))

    (You could just send as many requests as you want and let your hardware or internet connection add “organic” random waits, but this will ultimately get you blocked for making too many requests too fast.)

    You will need more machines with more residential IPs, as this approach is a lot slower, but you will slowly drain your target’s wallet this way. In the end, you could have this running on something like a Raspberry Pi, costing you next to nothing in electricity, while slowly attacking your target; depending on their setup, each single request from your side could be 50 on theirs.
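To put that slow drain into numbers, here is a hedged back-of-the-envelope sketch; the request rate and fan-out factor are assumptions, not measurements:

```python
# One slow client at 10 requests/second, where each request fans out into
# 50 billable sub-requests on the target's side (assumed numbers).
reqs_per_second = 10
fanout = 50
seconds_per_day = 86_400
price_per_million = 0.30  # USD per million requests

billable_per_day = reqs_per_second * fanout * seconds_per_day
cost_per_day = billable_per_day / 1_000_000 * price_per_million

print(f"{billable_per_day:,} billable requests, about ${cost_per_day:.2f}/day in request charges")
```

Small on its own, but it runs around the clock, is hard to distinguish from organic traffic, and multiplies with every extra device.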

    One other cool trick, which actually still works, is to hijack WordPress websites that have xmlrpc.php enabled. This is called an XML-RPC Pingback Attack and is as simple as:

    Bash
    curl -D - "www.vuln-wordpress.com/xmlrpc.php" \
         -d '<methodCall>
                <methodName>pingback.ping</methodName>
                <params>
                  <param>
                    <value>
                      <string>[TARGET HOST]</string>
                    </value>
                  </param>
                  <param>
                    <value>
                      <string>www.vuln-wordpress.com/postchosen</string>
                    </value>
                  </param>
                </params>
              </methodCall>'

    Summary

    As this post is getting longer, I decided to end it here. These attacks work on any cloud-based “serverless” provider that bills by usage. The key idea is to use as much of a company’s “billed by usage” endpoints as possible.

    In theory this can do a lot of damage; in practice you will have to do a little more than just send a billion requests to an endpoint as fast as possible with some script. I highlighted some ways to get around protections above, but you will most likely have to come up with your own custom solution to outsmart your target.

    Why Cloudflare?

    I picked Cloudflare as an example because I use them for everything and really like them. (Again, I am not paid to say this.) This attack works on any other provider as well; in fact, it will probably work the least on Cloudflare because of their free DDoS protection.

    For comparison, the AWS WAF alone would cost as much as the entire Cloudflare Workers usage, so actually getting through the AWS WAF and then hammering a Lambda function, maybe even one that reads data from S3, would be disastrous.

  • Building a static site search with Pagefind

    Building a static site search with Pagefind

    Introduction

    Hey there, web wizards and code conjurers! Today, I’m here to spill the beans on a magical tool that’ll have you searching through your static site like a pro without sacrificing your users’ data to the digital overlords. Say goodbye to the snooping eyes of Algolia and Google, and say hello to Pagefind – the hero we need in the wild world of web development!

    Pagefind

    So, what’s the deal with Pagefind? Well, it’s like having your own personal search genie, but without the need for complex setups or sacrificing your site’s performance. Here’s a quick rundown of its enchanting features straight from the Pagefind spellbook:

    • Multilingual Magic: Zero-config support for sites that speak many tongues.
    • Filtering Sorcery: A powerful filtering engine for organizing your knowledge bases.
    • Custom Sorting Spells: Tailor your search results with custom sort attributes.
    • Metadata Mysticism: Keep track of custom metadata for your pages.
    • Weighted Wand Wielding: Adjust the importance of content in your search results.
    • Section Spellcasting: Fetch results from specific – sections of your pages.
    • Domain Diving: Search across multiple domains with ease.
    • Index Anything Incantation: From PDFs to JSON files, if it’s digital, Pagefind can find it!
    • Low-Bandwidth Brilliance: All this magic with minimal bandwidth consumption – now that’s some serious wizardry!

    Summoning Pagefind

    Now, let’s talk about summoning this mystical tool onto your Astro-powered site. It’s as easy as waving your wand and chanting npx pagefind --site "dist". Poof! Your site’s now equipped with the power of search!

    With a flick of your build script wand, you’ll integrate Pagefind seamlessly into your deployment pipeline. Just like adding a secret ingredient to a potion, modify your package.json build script to include Pagefind’s magic words.

    JSON
      "scripts": {
        "dev": "astro dev",
        "start": "astro dev",
        "build": "astro build && pagefind --site dist && rm dist/pagefind/*.css && cp -r dist/pagefind public/",
        "preview": "astro preview",
        "astro": "astro"
      },
    

    If you are not using Astro.js, you will have to replace dist with your build directory. I will also explain why I am making the CSS disappear.

    Running the command should automagically build your index like so:

    Bash
    [Building search indexes]
    Total:
      Indexed 1 language
      Indexed 19 pages
      Indexed 1328 words
      Indexed 0 filters
      Indexed 0 sorts
    
    Finished in 0.043 seconds
    

    Now, my site is not that big yet, but 0.043 seconds is still very fast, and if you are paying for build time, it also costs next to nothing. Pagefind, being written in Rust, is very efficient.

    Getting Cozy with Pagefind’s UI

    Alright, so now you’ve got this powerful search engine at your fingertips. But wait, what’s this? Pagefind’s UI is a bit… opinionated. Fear not, fellow sorcerers! With a dash of JavaScript and a sprinkle of CSS, we’ll make it dance to our tune!

    Weaving a custom UI spell involves a bit of JavaScript incantation to tweak placeholders and buttons just the way we like them. Plus, with a bit of CSS wizardry, we can transform Pagefind’s UI into something straight out of our own enchanting design dreams!

    Astro
    ---
    import "../style/pagefind.css";
    ---

    <div class="max-w-96 flex">
      <div id="search"></div>
    </div>

    <script src="/pagefind/pagefind-ui.js" is:inline></script>
    <script>
      document.addEventListener("astro:page-load", () => {
        new PagefindUI({
          element: "#search",
          debounceTimeoutMs: 500,
          resetStyles: !0,
          showEmptyFilters: !1,
          excerptLength: 15,
          showImages: !1,
          addStyles: !1,
        });

        const searchInput = document.querySelector<HTMLInputElement>(
          ".pagefind-ui__search-input",
        );
        const clearButton = document.querySelector<HTMLDivElement>(
          ".pagefind-ui__search-clear",
        );

        if (searchInput) {
          searchInput.placeholder = "Site Search";
        }
        if (clearButton) {
          clearButton.innerText = "Clear";
        }
      });
    </script>
    • /pagefind/pagefind-ui.js is Pagefind-specific JavaScript. In the future I plan to reverse it, as there is a lot of unnecessary code in there.
    • I am using astro:page-load as an event listener since I am using view transitions.

    Embrace Your Inner Stylist

    Ah, but crafting a unique style for your search UI is where the real fun begins! With the power of TailwindCSS (or your trusty CSS wand), you can mold Pagefind’s UI to fit your site’s aesthetic like a bespoke wizard robe.

    With a little imagination and a lot of creativity, you’ll end up with a search UI that’s as unique as your magical incantations.

    CSS
    .pagefind-ui__results-area {
      @apply border border-pink-500 dark:text-white text-black p-4;
      @apply absolute z-50 dark:bg-gray-900 bg-white;
      @apply max-h-96 overflow-y-auto mr-10;
    }
    .pagefind-ui__result {
      @apply border-t my-4 dark:text-white text-black;
    }
    .pagefind-ui__result mark {
      @apply bg-fuchsia-700 text-fuchsia-300;
    }
    .pagefind-ui__form {
      @apply border dark:border-white border-black;
    }
    .pagefind-ui__search-input {
      @apply dark:text-white text-black bg-transparent;
    }
    .pagefind-ui__search-input {
      @apply placeholder:italic placeholder:text-slate-400 p-2 border-r border-black;
    }
    .pagefind-ui__form {
      @apply min-w-full;
    }
    .pagefind-ui__message {
      @apply font-semibold first-letter:text-pink-500;
    }
    .pagefind-ui__result-link {
      @apply font-bold underline text-blue-500;
    }
    .pagefind-ui__result-title {
      @apply mb-1;
    }
    .pagefind-ui__result-inner {
      @apply my-3;
    }
    .pagefind-ui__button {
      @apply border border-black py-1 px-2 hover:underline mt-4;
    }
    .pagefind-ui__search-clear {
      @apply mr-2;
    }

    (@apply is TailwindCSS-specific; you can use regular CSS if you please.)

    And there you have it, folks: the mystical journey of integrating Pagefind into your static site, complete with a touch of your own wizardly flair!

    custom search ui

    Now go forth, weave your web spells, and may your users’ search journeys be as magical as your coding adventures! 🧙✨

    Where to go from here

    I gave you a quick look into building a simple static site search. In my opinion, the JavaScript files from Pagefind should be slimmed down, in my case for Astro; the CSS should be applied by you, and Pagefind should just leave you a simple unstyled search. I am sure they would be happy if someone helped them out by doing this.

    I was thinking about hosting my index on a Cloudflare Worker, then styling my search form however I want and just hooking up the Worker endpoint with the form, basically like a self-hosted Algolia. An alternative to Pagefind could be Fuse.js; the drawback is that you would have to build your own index.

    Bonus:

    You can try out my search here: Exploit.to Search

    This post was originally posted on 17 Mar 2024 on my Cybersecurity blog.