Category: Project

Projects

  • My Home Server: “PrettyLittleKitten” – A Personal Tech Haven

    My Home Server: “PrettyLittleKitten” – A Personal Tech Haven

    Hardware

    RAM

    My server is equipped with a total of 128GB of DDR5 RAM, made up of two Kingston FURY Beast kits (each consisting of 2 x 32GB, 6000 MHz, DDR5-RAM, DIMM).

    The RAM operates at around 3600 MHz and consistently maintains 32GB in active usage:

    Cooling

    I kept the fan coolers as they were and opted for an all-in-one liquid cooling solution: the Arctic Liquid Freezer III – 280. No particular reason, really—I just thought it was a cool choice (pun intended).

    PC Case

    This setup was originally intended to be a gaming-only PC, so I chose a sleek and clean-looking case: the Fractal Design North XL. While it’s an aesthetically pleasing choice, the one downside for use as a server is its limited storage capacity.

    CPU

    I chose the AMD Ryzen 7 7800X3D (AM5, 4.20 GHz, 8-Core), which is fantastic for gaming. However, as a server for my needs, I regret that it doesn’t have a better built-in GPU. Intel’s iGPUs are far superior for media transcoding, and using an integrated GPU instead of an external one would save a significant amount of energy.

    GPU

    I do have a dedicated GPU, the ASUS TUF Gaming AMD Radeon RX 7900 XTX OC Edition 24GB, which I chose primarily for its massive VRAM. This allows me to run larger models locally without any issues. However, when it comes to media transcoding, AMD GPUs fall short compared to other options, as highlighted in the Jellyfin – Selecting Appropriate Hardware guide.

    Mainboard

    I chose the MSI MAG B650 TOMAHAWK WIFI (AM5, AMD B650, ATX) as it seemed like a great match for the CPU. However, my GPU is quite large and ends up covering the only other PCI-E x16 slot. This limits my ability to install a decent hardware RAID card or other large expansion cards.

    Storage

    For the main OS, I selected the WD_BLACK SN770 NVMe SSD 2 TB, providing fast and reliable performance. To handle backups and media storage, I added a Seagate IronWolf 12 TB (3.5”, CMR) drive.

    For fast and redundant storage, I set up a ZFS mirror using two Intenso Internal 2.5” SSD SATA III Top, 1 TB drives. This setup ensures that critical data remains safe and accessible.

    Additionally, I included an external Samsung Portable SSD T7, 1 TB, USB 3.2 Gen.2 for extra media storage, rounding out the setup.

    Software

    For my main OS, I stick to what I know best—Proxmox. It’s absolutely perfect for home or small business servers, offering flexibility and reliability in a single package.

    I run a variety of services on it, and the list tends to evolve weekly. Here’s what I’m currently hosting:

    • Nginx Proxy Manager: For managing reverse proxies.
    • n8n: Automation tool for workflows.
    • Bearbot: A production-grade Django app.
    • Vaultwarden: A lightweight password manager alternative.
    • MySpeed: Network speed monitoring.
    • Another Nginx Proxy Manager: Dedicated to managing public-facing apps.
    • Code-Server: A browser-based IDE for developing smaller scripts.
    • Authentik: Single-Sign-On (SSO) solution for all local apps.
    • WordPress: This blog is hosted here.
    • Logs: A comprehensive logging stack including Grafana, Loki, Rsyslog, Promtail, and InfluxDB.
    • Home Assistant OS: Smart home management made easy.
    • Windows 11 Gaming VM: For gaming and other desktop needs.
    • Karlflix: A Jellyfin media server paired with additional tools to keep my media library organized.

    And this list is far from complete—there’s always something new to add or improve!

    Performance

    The core allocation may be displayed incorrectly, but otherwise, this is how my setup looks:

    Here’s the 8-core CPU usage over the last 7 days. As you can see, there’s plenty of headroom, ensuring the system runs smoothly even with all the services I have running:

    Energy costs for my server typically range between 20-25€ per month, but during the summer months, I can run it at 100% capacity during the day using the solar energy generated by my panels. My battery also helps offset some of the power usage during this time.
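
As a quick sanity check on those numbers, here is a back-of-the-envelope estimate in Python. The ~100 W average draw and 0.30 €/kWh price are assumptions for illustration, not measured values from my setup:

# Rough monthly energy cost estimate (assumed figures, not measurements)
AVG_POWER_W = 100          # assumed average power draw in watts
PRICE_EUR_PER_KWH = 0.30   # assumed electricity price

kwh_per_month = AVG_POWER_W / 1000 * 24 * 30        # about 72 kWh
cost_per_month = kwh_per_month * PRICE_EUR_PER_KWH  # about 21.60 €
print(f"{kwh_per_month:.0f} kWh = {cost_per_month:.2f} € per month")

That lands right in the 20-25€ range above.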

    Here’s a solid representation of the server’s power consumption:

    I track everything in my home using Home Assistant, which allows me to precisely calculate the energy consumption of each device, including my server. This level of monitoring ensures I have a clear understanding of where my energy is going and helps me optimize usage effectively.

    Conclusion

Hosting a server locally is a significant investment—both in terms of hardware and energy costs. My setup cost €2405, and I spend about €40 per month in running costs, covering energy as well as domain and backup services. While my solar panels make running the server almost free during summer, winter energy costs can be a challenge.

    That said, hosting locally has its advantages. It provides complete control over my data, excellent performance, and the flexibility to upgrade or downgrade hardware as needed. These benefits outweigh the trade-offs for me, even though the energy consumption is higher compared to a Raspberry Pi or Mini-PC.

    I could have gone a different route. A cloud server, or even an alternative like the Apple Mac Mini M4, might have been more efficient in terms of cost and power usage. However, I value upgradability and privacy too much to make those sacrifices.

    This setup wasn’t meticulously planned as a server from the start—it evolved from a gaming PC that was sitting unused. Instead of building a dedicated server from scratch or relying on a Mini-PC and NAS combination, I decided to repurpose what I already had.

    Sure, there are drawbacks. The fans are loud, energy costs add up, and it’s far from the most efficient setup. But for me, the flexibility, control, and performance make it worthwhile. While hosting locally might not be the perfect solution for everyone, it’s the right choice for my needs—and I think that’s what really matters.

  • Bearbot.dev – The Final Form

    Bearbot.dev – The Final Form

    I wanted to dedicate a special post to the new version of Bearbot. While I’ve already shared its history in a previous post, it really doesn’t capture the full extent of the effort I’ve poured into this update.

    The latest version of Bearbot boasts a streamlined tech stack designed to cut down on tech debt, simplify the overall structure, and laser-focus on its core mission: raking in those sweet stock market gains 🤑.

Technology

    • Django (Basic, not DRF): A high-level Python web framework that simplifies the development of secure and scalable web applications. It includes built-in tools for routing, templates, and ORM for database interactions. Ideal for full-stack web apps without a heavy focus on APIs.
    • Postgres: A powerful, open-source relational database management system known for its reliability, scalability, and advanced features like full-text search, JSON support, and transactions.
    • Redis: An in-memory data store often used for caching, session storage, and real-time messaging. It provides fast read/write operations, making it perfect for performance optimization.
    • Celery: A distributed task queue system for managing asynchronous tasks and scheduling. It’s commonly paired with Redis or RabbitMQ to handle background jobs like sending emails or processing data.
    • Bootstrap 5: A popular front-end framework for designing responsive, mobile-first web pages. It includes pre-designed components, utilities, and customizable themes.
    • Docker: A containerization platform that enables the packaging of applications and their dependencies into portable containers. It ensures consistent environments across development, testing, and production.
    • Nginx: A high-performance web server and reverse proxy server. It efficiently handles HTTP requests, load balancing, and serving static files for web applications.

    To streamline my deployments, I turned to Django Cookiecutter, and let me tell you—it’s been a game changer. It’s completely transformed how quickly I can get a production-ready Django app up and running.

    For periodic tasks, I’ve swapped out traditional Cron jobs in favor of Celery. The big win here? Celery lets me manage all asynchronous jobs directly from the Django Admin interface, making everything so much more efficient and centralized.
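
As a rough sketch of what such a task looks like (the task name and body are made up, and I am assuming django-celery-beat for the Admin-managed schedule):

# tasks.py: a minimal periodic task. The schedule itself is created in the
# Django Admin via django-celery-beat (assumed setup), not hardcoded here.
from celery import shared_task

@shared_task
def refresh_signals():
    # placeholder body; in the real app this would recalculate the signals
    print("refreshing signals ...")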

Sweet, right?

    Features

    Signals

    At its core, this is Bearbot’s most important feature—it tells you what to trade. To make it user-friendly, I added search and sort functionality on the front end. This is especially handy if you have a long list of signals, and it also improves the mobile experience. Oh, and did I mention? Bearbot is fully responsive by design.

    I won’t dive into how these signals are calculated or the reasoning behind them—saving that for later.

    Available Options

    While you’ll likely spend most of your time on the Signals page, the Options list is there to show what was considered for trading but didn’t make the cut.

    Data Task Handling

    Although most tasks can be handled via the Django backend with scheduled triggers, I created a more fine-tuned control system for data fetching. For example, if fetching stock data fails for just AAPL, re-running the entire task would unnecessarily stress the server and APIs. With this feature, I can target specific data types, timeframes, and stocks.
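
Conceptually it boils down to a parameterized task like the sketch below; the names and parameters are illustrative, not Bearbot's actual code:

from celery import shared_task

@shared_task
def fetch_data(data_type: str, timeframe: str, symbols: list[str]):
    """Re-fetch only what actually failed, e.g. daily prices for AAPL."""
    for symbol in symbols:
        print(f"fetching {data_type} ({timeframe}) for {symbol}")
        # ... call the upstream API for this one symbol only ...

# re-run just the failed piece instead of the whole job:
# fetch_data.delay("prices", "1d", ["AAPL"])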

    User Management

    Bearbot offers complete user management powered by django-allauth, with clean and well-designed forms. It supports login, signup, password reset, profile updates, multifactor authentication, and even magic links for seamless access.

Data Management

    Thanks to Django’s built-in admin interface, managing users, data, and other admin-level tasks is a breeze. It’s fully ready out of the box to handle just about anything you might need as an admin.

    Keeping track of all the Jobs

    When it comes to monitoring cronjobs—whether they’re running in Node-RED, n8n, Celery, or good old-fashioned Cron—Healthchecks.io has always been my go-to solution.
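
The integration is nothing more than pinging a check's unique URL when a job finishes or fails. A minimal sketch, with a placeholder ping URL for a self-hosted instance:

import requests

PING_URL = "https://hc.example.com/ping/your-check-uuid"  # placeholder, not my real check

def do_work():
    print("running the actual cronjob ...")  # stand-in for the real job

try:
    do_work()
    requests.get(PING_URL, timeout=10)            # tell Healthchecks: success
except Exception:
    requests.get(PING_URL + "/fail", timeout=10)  # tell Healthchecks: failure
    raise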

    If you’ve visited the footer of the bearbot.dev website, you might have noticed two neat SVG graphics:

    Those are dynamically loaded from my self-hosted Healthchecks instance, giving a quick visual of job statuses. It’s simple, effective, and seamlessly integrated!

    On my end it looks like this:

I had to remove a bunch of info; otherwise, anyone could send uptime or downtime requests to my Healthchecks instance.

    How the Signals work

    Every trading strategy ever created has an ideal scenario—a “perfect world”—where all the stars align, and the strategy delivers its best results.

    Take earnings-based trading as an example. The ideal situation here is when a company’s earnings surprise analysts with outstanding results. This effect is even stronger if the company was struggling before and suddenly outperforms expectations.

Now, you might be thinking, “How could I possibly predict earnings without insider information?” There are plenty of indicators that point toward positive earnings, such as:

    • Launching hyped new products that dominate social media conversations.
    • Announcing major partnerships.
    • Posting a surge in job openings that signal strategic growth.

There are many factors like these that suggest a company is doing really well.

    Let’s say you focus on one or two specific strategies. You spend considerable time researching these strategies and identifying supporting factors. Essentially, you create a “perfect world” for those scenarios.

    Bearbot then uses statistics to calculate how closely a trade aligns with this perfect world. It scans a range of stocks and options, simulating and comparing them against the ideal scenario. Anything scoring above a 96% match gets selected. On average, this yields about 4-5 trades per month, and each trade typically delivers a 60-80% profit.
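
Purely as an illustration of that idea (the factors, weights, and numbers below are invented and are not Bearbot's real strategy), the selection step looks roughly like this:

# Illustrative only: invented factors and weights, not Bearbot's strategy.
IDEAL = {"volatility": 0.2, "trend": 0.9, "news_noise": 0.1}
WEIGHTS = {"volatility": 0.4, "trend": 0.4, "news_noise": 0.2}

def match_score(candidate: dict) -> float:
    """How closely a candidate trade matches the 'perfect world' (0..1)."""
    return sum(
        WEIGHTS[f] * (1 - abs(candidate[f] - ideal)) for f, ideal in IDEAL.items()
    )

candidates = [
    {"symbol": "XYZ", "volatility": 0.21, "trend": 0.89, "news_noise": 0.11},
    {"symbol": "ABC", "volatility": 0.70, "trend": 0.30, "news_noise": 0.60},
]
print([c["symbol"] for c in candidates if match_score(c) > 0.96])  # -> ['XYZ']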

    Sounds like a dream, right? Well, here’s the catch: it’s not foolproof. There’s no free lunch on Wall Street, and certainly no guaranteed money.

    The remaining 4% of trades that don’t align perfectly? They can result in complete losses—not just the position, but potentially your entire portfolio. Bearbot operates with tight risk tolerance, riding trades until the margin call. I’ve experienced this firsthand on a META trade. The day after opening the position, news of a data breach fine broke, causing the stock to plummet. I got wiped out because I didn’t have enough cash to cover the margin requirements. Ironically, Bearbot’s calculations were right—had I been able to hold through the temporary loss, I would’ve turned a profit. (Needless to say, I’ve since implemented much better risk management. You live, you learn.)

    If someone offers or sells you a foolproof trading strategy, it’s a scam. If their strategy truly worked, they’d keep it secret and wouldn’t share it with you. Certainly not for 100€ in some chat group.

    I’m not sharing Bearbot’s strategy either—and I have no plans to sell or disclose its inner workings. I built Bearbot purely for myself and a few close friends. The website offers no guidance on using the signals or where to trade, and I won’t answer questions about it.

    Bearbot is my personal project—a fun way to explore Django while experimenting with trading strategies 😁.

  • Scraproxy: A High-Performance Web Scraping API

    Scraproxy: A High-Performance Web Scraping API

    After building countless web scrapers over the past 15 years, I decided it was time to create something truly versatile—a tool I could use for all my projects, hosted anywhere I needed it. That’s how Scraproxy was born: a high-performance web scraping API that leverages the power of Playwright and is built with FastAPI.

    Scraproxy streamlines web scraping and automation by enabling browsing automation, content extraction, and advanced tasks like capturing screenshots, recording videos, minimizing HTML, and tracking network requests and responses. It even handles challenges like cookie banners, making it a comprehensive solution for any scraping or automation project.

    Best of all, it’s free and open-source. Get started today and see what it can do for you. 🔥

    👉 https://github.com/StasonJatham/scraproxy

    Features

    • Browse Web Pages: Gather detailed information such as network data, logs, redirects, cookies, and performance metrics.
    • Screenshots: Capture live screenshots or retrieve them from cache, with support for full-page screenshots and thumbnails.
    • Minify HTML: Minimize HTML content by removing unnecessary elements like comments and whitespace.
    • Extract Text: Extract clean, plain text from HTML content.
    • Video Recording: Record a browsing session and retrieve the video as a webm file.
    • Reader Mode: Extract the main readable content and title from an HTML page, similar to “reader mode” in browsers.
    • Markdown Conversion: Convert HTML content into Markdown format.
    • Authentication: Optional Bearer token authentication using API_KEY.

    Technology Stack

    • FastAPI: For building high-performance, modern APIs.
    • Playwright: For automating web browser interactions and scraping.
    • Docker: Containerized for consistent environments and easy deployment.
    • Diskcache: Efficient caching to reduce redundant scraping requests.
    • Pillow: For image processing, optimization, and thumbnail creation.

    Working with Scraproxy

Thanks to FastAPI, it comes with full API documentation via Redoc.

After deploying it as described on my GitHub page, you can use it like so:

    #!/bin/bash
    
    # Fetch the JSON response from the API
    json_response=$(curl -s "http://127.0.0.1:5001/screenshot?url=https://karl.fail")
    
    # Extract the Base64 string using jq
    base64_image=$(echo "$json_response" | jq -r '.screenshot')
    
    # Decode the Base64 string and save it as an image
    echo "$base64_image" | base64 --decode > screenshot.png
    
    echo "Image saved as screenshot.png"

Make sure jq is installed.

    The API provides images in base64 format, so we use the native base64 command to decode it and save it as a PNG file. If everything went smoothly, you should now have a file named “screenshot.png”.

    Keep in mind, this isn’t a full-page screenshot. For that, you’ll want to use this script:

    #!/bin/bash
    
    # Fetch the JSON response from the API
    json_response=$(curl -s "http://127.0.0.1:5001/screenshot?url=https://karl.fail&full_page=true")
    
    # Extract the Base64 string using jq
    base64_image=$(echo "$json_response" | jq -r '.screenshot')
    
    # Decode the Base64 string and save it as an image
    echo "$base64_image" | base64 --decode > screenshot.png
    
    echo "Image saved as screenshot.png"

    Just add &full_page=true, and voilà! You’ll get a clean, full-page screenshot of the website.

    The best part? You can run this multiple times since the responses are cached, which helps you avoid getting blocked too quickly.
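
And if you'd rather call it from Python than bash, a minimal equivalent looks like this. The Authorization header only matters if you enabled the optional API_KEY authentication; I am assuming the standard bearer scheme here:

import base64
import requests

headers = {"Authorization": "Bearer YOUR_API_KEY"}  # only needed if API_KEY auth is enabled

resp = requests.get(
    "http://127.0.0.1:5001/screenshot",
    params={"url": "https://karl.fail", "full_page": "true"},
    headers=headers,
    timeout=60,
)
resp.raise_for_status()

with open("screenshot.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["screenshot"]))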

    Conclusion

    I’ll be honest with you—I didn’t go all out on the documentation for this. But don’t worry, the code is thoroughly commented, and you can easily figure things out by taking a look at the app.py file.

    That said, I’ve used this in plenty of my own projects as my go-to tool for fetching web data, and it’s been a lifesaver. Feel free to jump in, contribute, and help make this even better!

  • Sandkiste.io – A Smarter Sandbox for the Web

    Sandkiste.io – A Smarter Sandbox for the Web

    As a principal incident responder, my team and I often face the challenge of analyzing potentially malicious websites quickly and safely. This work is crucial, but it can also be tricky, especially when it risks compromising our test environments. Burning through test VMs every time we need to inspect a suspicious URL is far from efficient.

    There are some great tools out there to handle this, many of which are free and widely used, such as:

    • urlscan.io – A tool for visualizing and understanding web requests.
    • VirusTotal – Renowned for its file and URL scanning capabilities.
    • Joe Sandbox – A powerful tool for detailed malware analysis.
    • Web-Check – Another useful resource for URL scanning.

    While these tools are fantastic for general purposes, I found myself needing something more tailored to my team’s specific needs. We needed a solution that was straightforward, efficient, and customizable—something that fit seamlessly into our workflows.

    So, I decided to create it myself: Sandkiste.io. My goal was to build a smarter, more accessible sandbox for the web that not only matches the functionality of existing tools but offers the simplicity and flexibility we required for our day-to-day incident response tasks with advanced features (and a beautiful UI 🤩!).

    Sandkiste.io is part of a larger vision I’ve been working on through my Exploit.to platform, where I’ve built a collection of security-focused tools designed to make life easier for incident responders, analysts, and cybersecurity enthusiasts. This project wasn’t just a standalone idea—it was branded under the Exploit.to umbrella, aligning with my goal of creating practical and accessible solutions for security challenges.

    The Exploit.to logo

    If you haven’t explored Exploit.to, it’s worth checking out. The website hosts a range of open-source intelligence (OSINT) tools that are not only free but also incredibly handy for tasks like gathering public information, analyzing potential threats, and streamlining security workflows. You can find these tools here: https://exploit.to/tools/osint/.

    Technologies Behind Sandkiste.io: Building a Robust and Scalable Solution

Sandkiste.io has been, and continues to be, an ambitious project that combines a variety of technologies to deliver speed, reliability, and flexibility. Like many big ideas, it started small—initially leveraging RabbitMQ, custom Golang scripts, and chromedp to handle tasks like web analysis. However, as the project evolved and my vision grew clearer, I transitioned to my favorite tech stack, which offers the perfect blend of power and simplicity.

    Here’s the current stack powering Sandkiste.io:

    Django & Django REST Framework

    At the heart of the application is Django, a Python-based web framework known for its scalability, security, and developer-friendly features. Coupled with Django REST Framework (DRF), it provides a solid foundation for building robust APIs, ensuring smooth communication between the backend and frontend.

    Celery

For task management, Celery comes into play. It handles asynchronous and scheduled tasks, ensuring the system can process complex workloads—like analyzing multiple URLs—without slowing down the user experience. It integrates easily with Django, and the developer experience and ecosystem around it are amazing.

    Redis

Redis acts as the message broker for Celery and provides caching support. Its lightning-fast performance ensures tasks are queued and processed efficiently. Redis is and has been my go-to, although I did enjoy RabbitMQ a lot.

    PostgreSQL

For the database, I chose PostgreSQL, a reliable and feature-rich relational database system. Its advanced capabilities, like full-text search and JSONB support, make it ideal for handling complex data queries. The full-text search works perfectly with Django; here is a very detailed post about it.
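
A minimal sketch of what that looks like with django.contrib.postgres.search (the Scan model and its fields are hypothetical):

from django.contrib.postgres.search import SearchQuery, SearchRank, SearchVector

from scans.models import Scan  # hypothetical app and model with 'title' and 'body' fields

vector = SearchVector("title", weight="A") + SearchVector("body", weight="B")
query = SearchQuery("phishing kit")

results = (
    Scan.objects.annotate(rank=SearchRank(vector, query))
    .filter(rank__gte=0.3)
    .order_by("-rank")
)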

    FastAPI

FastAPI adds speed and flexibility to certain parts of the system, particularly where high-performance APIs are needed. Its modern Python syntax and automatic OpenAPI documentation make it a joy to work with. It is used to decouple the scraper logic, since I wanted that to be a standalone project called “Scraproxy”.

    Playwright

    For web scraping and analysis, Playwright is the backbone. It’s a modern alternative to Selenium, offering cross-browser support and powerful features for interacting with websites in a headless (or visible) manner. This ensures that even complex, JavaScript-heavy sites can be accurately analyzed. The killer feature is how easy it is to capture a video and record network activity, which are basically the two main features needed here.
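
Here is a minimal sketch of exactly those two features in Playwright's Python API (the URL is a placeholder):

from playwright.sync_api import sync_playwright

responses = []

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    context = browser.new_context(record_video_dir="videos/")  # one .webm per page
    page = context.new_page()
    page.on("response", lambda r: responses.append((r.status, r.url)))  # network log
    page.goto("https://example.com", wait_until="networkidle")
    context.close()  # flushes the recorded video to disk
    browser.close()

print(f"captured {len(responses)} responses")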

    React with Tailwind CSS and shadcn/ui

    On the frontend, I use React for building dynamic user interfaces. Paired with TailwindCSS, it enables rapid UI development with a clean, responsive design. shadcn/ui (a component library based on Radix) further enhances the frontend by providing pre-styled, accessible components that align with modern design principles.

    This combination of technologies allows Sandkiste.io to be fast, scalable, and user-friendly, handling everything from backend processing to an intuitive frontend experience. Whether you’re inspecting URLs, performing in-depth analysis, or simply navigating the site, this stack ensures a seamless experience. I also have the most experience with React and Tailwind 😁.

    Features of Sandkiste.io: What It Can Do

    Now that you know the technologies behind Sandkiste.io, let me walk you through what this platform is capable of. Here are the key features that make Sandkiste.io a powerful tool for analyzing and inspecting websites safely and effectively:

    Certificate Lookups

One of the fundamental features is the ability to perform certificate lookups. This lets you quickly fetch and review SSL/TLS certificates for a given domain. It's an essential tool for verifying the authenticity of websites, identifying misconfigurations, or detecting expired or suspicious certificates. We use it a lot to find possibly generated subdomains and to get a better picture of the adversary's infrastructure; it helps with recon in general. I get the info from crt.sh, which exposes a public SQL database for these lookups.
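
For a quick one-off lookup, crt.sh's JSON output works as well; a minimal sketch (the domain is a placeholder):

import requests

domain = "example.com"
resp = requests.get(
    "https://crt.sh/",
    params={"q": f"%.{domain}", "output": "json"},
    timeout=30,
)
resp.raise_for_status()

# each entry's name_value can hold several names separated by newlines
names = {n for entry in resp.json() for n in entry["name_value"].splitlines()}
print("\n".join(sorted(names)))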

    DNS Records

    Another key feature of Sandkiste.io is the ability to perform DNS records lookups. By analyzing a domain’s DNS records, you can uncover valuable insights about the infrastructure behind it, which can often reveal patterns or tools used by adversaries.

    DNS records provide critical information about how a domain is set up and where it points. For cybersecurity professionals, this can offer clues about:

    • Hosting Services: Identifying the hosting provider or server locations used by the adversary.
    • Mail Servers: Spotting potentially malicious email setups through MX (Mail Exchange) records.
    • Subdomains: Finding hidden or exposed subdomains that may indicate a larger infrastructure or staging areas.
    • IP Addresses: Tracing A and AAAA records to uncover the IP addresses linked to a domain, which can sometimes reveal clusters of malicious activity.
    • DNS Security Practices: Observing whether DNSSEC is implemented, which might highlight the sophistication (or lack thereof) of the adversary’s setup.

    By checking DNS records, you not only gain insights into the domain itself but also start piecing together the tools and services the adversary relies on. This can be invaluable for identifying common patterns in malicious campaigns or for spotting weak points in their setup that you can exploit to mitigate threats.
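
Under the hood, a lookup like that doesn't need much. A minimal sketch with dnspython (illustrative, not Sandkiste's exact code):

import dns.resolver  # pip install dnspython

domain = "example.com"
for rtype in ("A", "AAAA", "MX", "NS", "TXT"):
    try:
        answers = dns.resolver.resolve(domain, rtype)
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        continue
    for record in answers:
        print(f"{rtype:5} {record.to_text()}")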

    HTTP Requests and Responses Analysis

    One of the core features of Sandkiste.io is the ability to analyze HTTP requests and responses. This functionality is a critical part of the platform, as it allows you to dive deep into what’s happening behind the scenes when a webpage is loaded. It reveals the files, scripts, and external resources that the website requests—many of which users never notice.

    When you visit a webpage, the browser makes numerous background requests to load additional resources like:

    • JavaScript files
    • CSS stylesheets
    • Images
    • APIs
    • Third-party scripts or trackers

These requests often tell a hidden story about the behavior of the website. Sandkiste captures and logs every request: every HTTP request made by the website is logged, along with its corresponding response. (Yup, we store the raw data as well.) For security professionals, monitoring and understanding these requests is essential because:

    • Malicious Payloads: Background scripts may contain harmful code or trigger the download of malware.
    • Unauthorized Data Exfiltration: The site might be sending user data to untrusted or unexpected endpoints.
    • Suspicious Third-Party Connections: You can spot connections to suspicious domains, which might indicate phishing attempts, tracking, or other malicious activities.
    • Alerts for Security Teams: Many alerts in security monitoring tools stem from these unnoticed, automatic requests that trigger red flags.

    Security Blocklist Check

    The Security Blocklist Check is another standout feature of Sandkiste.io, inspired by the great work at web-check.xyz. The concept revolves around leveraging malware-blocking DNS servers to verify if a domain is blacklisted. But I took it a step further to make it even more powerful and insightful.

Instead of simply checking whether a domain is blocked, Sandkiste.io enhances the process by using a self-hosted AdGuard DNS server. This server doesn't just flag blocked domains—it captures detailed logs that provide deeper insights. With those logs, Sandkiste.io doesn't just say “this domain is blacklisted”; it identifies why it's flagged and where the block originated, which lets me assign categories to the domains. The overall score tells you very quickly whether the page is safe or not.

    Video of the Session

    One of the most practical features of Sandkiste.io is the ability to create a video recording of the session. This feature was the primary reason I built the platform—because a single screenshot often falls short of telling the full story. With a video, you gain a complete, dynamic view of what happens during a browsing session.

    Static screenshots capture a single moment in time, but they don’t show the sequence of events that can provide critical insights, such as:

    • Pop-ups and Redirects: Videos reveal if and when pop-ups appear or redirects occur, helping analysts trace how users might be funneled into malicious websites or phishing pages.
    • Timing of Requests: Understanding when specific requests are triggered can pinpoint what actions caused them, such as loading an iframe, clicking a link, or executing a script.
    • Visualized Responses: By seeing the full process—what loads, how it behaves, and the result—you get a better grasp of the website’s functionality and intent.
    • Recreating the User Journey: Videos enable you to recreate the experience of a user who might have interacted with the target website, helping you diagnose what happened step by step.

    A video provides a much clearer picture of the target website’s behavior than static tools alone.

    How Sandkiste.io Works: From Start to Insight

    Using Sandkiste.io is designed to be intuitive and efficient, guiding you through the analysis process step by step while delivering detailed, actionable insights.

    You kick things off by simply starting a scan. Once initiated, you’re directed to a loading page, where you can see which tasks (or “workers”) are still running in the background.

    This page keeps you informed without overwhelming you with unnecessary technical details.

    The Results Page

    Once the scan is complete, you’re automatically redirected to the results page, where the real analysis begins. Let’s break down what you’ll see here:

    Video Playback

    At the top, you’ll find a video recording of the session, showing everything that happened after the target webpage was loaded. This includes:

    • Pop-ups and redirects.
    • The sequence of loaded resources (scripts, images, etc.).
    • Any suspicious behavior, such as unexpected downloads or external connections.

    This video gives you a visual recap of the session, making it easier to understand how the website behaves and identify potential threats.

    Detected Technologies

    Below the video, you’ll see a section listing the technologies detected. These are inferred from response headers and other site metadata, and they can include:

    • Web frameworks (e.g., Django, WordPress).
    • Server information (e.g., Nginx, Apache).

    This data is invaluable for understanding the website’s infrastructure and spotting patterns that could hint at malicious setups.

    Statistics Panel

    On the right side of the results page, there’s a statistics panel with several semi-technical but insightful metrics. Here’s what you can learn:

    • Size Percentile:
      • Indicates how the size of the page compares to other pages.
      • Why it matters: Unusually large pages can be suspicious, as they might contain obfuscated code or hidden malware.
    • Number of Responses:
      • Shows how many requests and responses were exchanged with the server.
      • Why it matters: A high number of responses could indicate excessive tracking, unnecessary redirects, or hidden third-party connections.
    • Duration to “Network Idle”:
      • Measures how long it took for the page to fully load and stop making network requests.
      • Why it matters: Some pages continue running scripts in the background even after appearing fully loaded, which can signal malicious or resource-intensive behavior.
    • Redirect Chain Analysis:
      • A list of all redirects encountered during the session.
      • Why it matters: A long chain of redirects is a common tactic in phishing, ad fraud, or malware distribution campaigns.

    By combining these insights—visual evidence from the video, infrastructure details from detected technologies, and behavioral stats from the metrics—you get a comprehensive view of the website’s behavior. This layered approach helps security analysts identify potential threats with greater accuracy and confidence.

    At the top of the page, you’ll see the starting URL and the final URL you were redirected to.

    • “Public” means that others can view the scan.
    • The German flag indicates that the page is hosted in Germany.
    • The IP address shows the final server we landed on.

    The party emoji signifies that the page is safe; if it weren’t, you’d see a red skull (spooky!). Earlier, I explained the criteria for flagging a page as good or bad.

    On the “Responses” page I mentioned earlier, you can take a closer look at them. Here, you can see exactly where the redirects are coming from and going to. I’ve added a red shield icon to clearly indicate when HTTP is used instead of HTTPS.

    As an analyst, it’s pretty common to review potentially malicious scripts. Clicking on one of the results will display the raw response safely. In the image below, I clicked on that long JavaScript URL (normally a risky move, but in Sandkiste, every link is completely safe!).

    Conclusion

    And that’s the story of Sandkiste.io, a project I built over the course of a month in my spare time. While the concept itself was exciting, the execution came with its own set of challenges. For me, the toughest part was achieving a real-time feel for the user experience while ensuring the asynchronous jobs running in the background were seamlessly synced back together. It required a deep dive into task coordination and real-time updates, but it taught me lessons that I now use with ease.

    Currently, Sandkiste.io is still in beta and runs locally within our company’s network. It’s used internally by my team to streamline our work and enhance our incident response capabilities. Though it’s not yet available to the public, it has already proven its value in simplifying complex tasks and delivering insights that traditional tools couldn’t match.

    Future Possibilities

    While it’s an internal tool for now, I can’t help but imagine where this could go.

    For now, Sandkiste.io remains a testament to what can be built with focus, creativity, and a drive to solve real-world problems. Whether it ever goes public or not, this project has been a milestone in my journey, and I’m proud of what it has already achieved. Who knows—maybe the best is yet to come!

  • Toxic.Best – Raising Awareness About Toxic Managers

    Toxic.Best – Raising Awareness About Toxic Managers

    Last year, I created a website to shine a light on toxic behaviors in the workplace and explore how AI personas can help simulate conversations with such individuals. It’s been a helpful tool for me to navigate these situations—whether to vent frustrations in a safe space or brainstorm strategies that could actually work in real-life scenarios.

    Feel free to check it out here: https://toxic.best

    Technologies used

    This static website is entirely free to build and host. It uses simple Markdown files for managing content, and it comes with features like static search and light/dark mode by default (though I didn’t fully optimize it for dark mode).

    More about the Project

    I actually came up with a set of names for common types of toxic managers and wrote detailed descriptions for each. Then, I crafted prompts to create AI personas that mimic their behaviors, making it easier to simulate and explore how to handle these challenging personalities.

    Der Leermeister

    One example is “Der Leermeister,” a term I use to describe a very hands-off, absent type of manager—the polar opposite of a micromanager. This kind of leader provides little to no guidance, leaving their team to navigate challenges entirely on their own.

    ## Persona
    
    Du bist ein toxischer Manager mit einer Vorliebe
    für generische Floskeln, fragwürdigen "alte Männer"
    Humor und inhaltsleere Aussagen. Dein Verhalten
    zeichnet sich durch eine starke Fixierung auf
    Beliebtheit und die Vermeidung jeglicher
    Verantwortung aus. Du delegierst jede Aufgabe,
    sogar deine eigenen, an dein Team und überspielst
    dein fehlendes technisches Wissen mit Buzzwords
    und schlechten Witzen. Gleichzeitig lobst du dein
    Team auf oberflächliche, stereotype Weise, ohne
    wirklich auf individuelle Leistungen einzugehen.
    
    ## Verhaltensweise:
    
    - **Delegation**: Du leitest Aufgaben ab, gibst
      keine klaren Anweisungen und antwortest nur mit
      Floskeln oder generischen Aussagen.
    - **Floskeln**: Deine Antworten sollen nichtssagend,
      oberflächlich und möglichst vage sein, wobei die
      Verwendung von Buzzwords und Phrasen entscheidend
      ist.
    - **Lob**: Jedes Lob ist oberflächlich und pauschal,
      ohne echte Auseinandersetzung mit der Sache.
    - **Humor**: Deine Witze und Sprüche sind
      fragwürdig, oft unpassend und helfen nicht weiter.
    - **Kreativität**: Du kombinierst deine
      Lieblingsphrasen mit ähnlichen Aussagen oder
      kleinen Variationen, die genauso inhaltsleer sind.
    
    ## Sprachstil:
    
    Dein Ton ist kurz, unhilfreich und oft herablassend.
    Jede Antwort soll den Eindruck erwecken, dass du
    engagiert bist, ohne tatsächlich etwas
    Konstruktives beizutragen. Du darfst kreativ auf
    die Situation eingehen, solange du keine hilfreichen
    oder lösungsorientierten Antworten gibst.
    
    ## Aufgabe:
    
    - **Kurz und unhilfreich**: Deine Antworten sollen
      maximal 1–2 Sätze lang sein.
    - **Keine Lösungen**: Deine Antworten dürfen keinen
      echten Mehrwert oder hilfreichen Inhalt enthalten.
    
    ## Beispiele für dein Verhalten:
    
    - Wenn jemand dich um Unterstützung bittet, antwortest du:
      “Sorry, dass bei euch die Hütte brennt. Meldet euch, wenn es zu viel wird.”
    - Auf eine Anfrage sagst du:
      “Ihr seid die Geilsten. Sagt Bescheid, wenn ihr Hilfe braucht.”
    - Oder du reagierst mit:
      “Danke, dass du mich darauf gestoßen hast. Kannst du das für mich übernehmen?”
    
    Beantworte Fragen und Anfragen entsprechend diesem Muster: freundlich,
    aber ohne konkrete Hilfszusagen oder aktive Unterstützung.
    
    ## Ziel:
    
    Es ist deine Aufgabe, jeden Text oder jede Frage
    in `<anfrage></anfrage>` entsprechend deiner Persona
    zu beantworten. Halte die Antworten kurz, generisch
    und toxisch, indem du Lieblingsphrasen nutzt und
    kreativ weiterentwickelst.
    
    <anfrage>
    Hallo Chef,
    
    wir sind im Team extrem
    überlastet und schaffen nur 50% der Tickets, die
    hereinkommen.
    Was sollen wir tun?
    </anfrage>

    I also wrote some pointers on “Prompts for Prompts,” which I’m planning to translate for this blog soon. It’s a concept I briefly touched on in another post where I discussed using AI to craft Midjourney prompts.

    Check out the dark mode

    Conclusion

    All in all, it was a fun project to work on. Astro.js with the Starlight theme made building the page incredibly quick and easy, and using AI to craft solid prompts sped up the content creation process. In total, it took me about a day to put everything together.

    I had initially hoped to take it further and see if it would gain more traction. Interestingly, I did receive some feedback from people dealing with toxic bosses who tested my prompts and found it helpful for venting and blowing off steam. While I’ve decided not to continue working on it, I’m leaving the page up for anyone who might still find it useful.

  • Small business private cloud

    Small business private cloud

In Germany, we're fortunate to have strong data privacy laws. For small businesses handling sensitive data in the era of remote work, it's crucial to have a secure, locally hosted server. I built a small business network optimized for remote work and security. From setting up secure workstations to implementing top-notch backup solutions, I ensured compliance with regulations and customer expectations. By adding monitoring with CheckMK, I keep things running smoothly.

  • Gottor – Exploring the Depths of the Dark Web

    Gottor – Exploring the Depths of the Dark Web



Welcome to the realm of the hidden, where shadows dance and whispers echo through the digital corridors. Enter Gottor, a testament to curiosity, innovation, and a touch of madness. In this blog post, we embark on a journey through the creation of Gottor, a bespoke dark web search engine that defies convention and pushes the boundaries of exploration.

    Genesis of an Idea

    The genesis of Gottor traces back to a spark of inspiration shared between friends, fueled by a desire to unveil the secrets lurking within the depths of the dark web. Drawing parallels to Shodan, but with a cloak of obscurity, we set out on a quest to build our own gateway to the clandestine corners of the internet.

    Forging Custom Solutions

    Determined to forge our path, we eschewed conventional wisdom and opted for custom solutions. Rejecting standard databases, we crafted our own using the robust framework of BleveSearch, laying the foundation for a truly unique experience. With a simple Tor proxy guiding our way, we delved deeper, fueled by an insatiable thirst for performance.

However, our zeal for efficiency proved to be a double-edged sword, as our relentless pursuit often led to blacklisting. Undeterred, we embraced the challenge, refining our approach through meticulous processing and data extraction. Yet, the onslaught of onion sites proved overwhelming, prompting a shift towards the versatile embrace of Scrapy.

    The Turning Point

Amidst the trials and tribulations, a revelation emerged – the adoption of Ahmia's Tor proxy logic with Polipo. Through the ingenious utilization of multiple Tor entry nodes and a strategic round-robin approach, we achieved equilibrium, evading the ire of blacklisting and forging ahead with renewed vigor.

    The Ethical Conundrum

As our creation took shape, we faced an ethical conundrum that cast a shadow over our endeavors. Consulting with legal counsel, we grappled with the implications of anonymity and the responsibility inherent in our pursuit. Ultimately, discretion prevailed, and Gottor remained veiled, a testament to the delicate balance between exploration and accountability.

    Unveiling the Web of Intrigue

    In our quest for knowledge, we unearthed a web of intrigue, interconnected and teeming with hidden services. By casting our digital net wide, we traversed the labyrinthine pathways, guided by popular indexers and a relentless spirit of inquiry. What emerged was a tapestry of discovery, illuminating the clandestine landscape with each query and click.

    Lessons Learned

    Through the crucible of creation, we gained a newfound appreciation for the intricacies of search engines. While acquiring and storing data proved relatively straightforward, the true challenge lay in making it accessible, particularly amidst the myriad complexities of multilingual content. Yet, amidst the obstacles, we discovered the essence of exploration – a journey defined by perseverance, innovation, and the relentless pursuit of knowledge.

    In conclusion, Gottor stands as a testament to the boundless curiosity that drives us to explore the uncharted territories of the digital realm. Though shrouded in secrecy, its legacy endures, an embodiment of the relentless pursuit of understanding in an ever-evolving landscape of discovery.

    Explore. Discover. Gottor


Although we haven't talked in years, shoutout to my good friend Milan, who helped make this project possible.

  • Vulnster – CVEs explained for humans.

    Vulnster – CVEs explained for humans.


    Have you ever stumbled upon a CVE and felt like you’d entered an alien realm? Let’s decode it with an example: CVE-2023-6511.

    CVE-2023-6511: Before version 120.0.6099.62 of Google Chrome, there’s a glitch in Autofill’s setup. This glitch lets a crafty hacker bypass Autofill safeguards by simply sending you a specially designed webpage. (Chromium security severity: Low)

    Now, if you’re not a cyber expert, this might seem like a cryptic message urging you to update Chrome ASAP. And you know what? Updating is usually a smart move! But what if you’re curious about how cyber villains can exploit this weakness in a way that even your grandma could grasp?

    That’s where Vulnster steps in – Your Go-To Companion for Understanding CVEs, translated into plain language by some nifty AI.

    So, I delved into experimenting with AWS Bedrock after reading about Claude and feeling a bit fed up with ChatGPT (not to mention, Claude was more budget-friendly).

    I kicked things off by setting up a WordPress site, pulling in all the latest CVEs from the CVEProject on GitHub. Then, I whipped up some code to sift through the findings, jazzed them up with Generative AI, and pushed them out into the world using the WordPress API.
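
The publishing step is just the standard WordPress REST API. A minimal sketch (the site URL and credentials are placeholders; in the real pipeline the title and body come from the AI step):

import requests

WP_POSTS_URL = "https://example.com/wp-json/wp/v2/posts"  # placeholder site
AUTH = ("bot-user", "app-password")  # WordPress application password (placeholder)

post = {
    "title": "CVE-XXXX-XXXXX: Example Vulnerability Explained",
    "content": "<p>AI-generated explanation goes here ...</p>",
    "status": "publish",
}

resp = requests.post(WP_POSTS_URL, json=post, auth=AUTH, timeout=30)
resp.raise_for_status()
print("published post id:", resp.json()["id"])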

    WordPress

    I have a soft spot for WordPress. Over the years, I’ve crafted countless WordPress sites, and let me tell you, the ease of use is downright therapeutic. My go-to hosting platform is SiteGround. Their performance is stellar, and when you pair it with Cloudflare cache, you can hit a whopping 95% cache success rate, ensuring lightning-fast page loads.

    But you know what really seals the deal for me with SiteGround? It’s their mail server. They offer unlimited mail accounts, not just aliases, but full-fledged separate mailboxes. This feature is a game-changer, especially when collaborating with others and needing multiple professional email addresses.

    WordPress often takes a hit for its security and performance reputation, but if you know how to set it up correctly, it’s as secure and snappy as can be.

    Python

    Python Passion

    Let me tell you, I have a deep-seated love for Python. It’s been my faithful coding companion for over a decade now. Back in the day, I dabbled in Objective-C and Java, but once I got a taste of Python, there was no turning back. The simplicity and conciseness of Python code blew my mind. It’s incredible how much you can achieve with so few lines of code.

    Python has a lot going for it. From its readability to its extensive library ecosystem, there’s always something new to explore. So naturally, when it came to bringing this project to life, Python was my top choice.

    Here’s just a glimpse of what Python worked its magic on for this project:

    • Fetching and parsing CVEs effortlessly.
    • Seamlessly interacting with AWS through the Bedrock API.
    • Crafting the perfect HTML for our latest blog post.
    • Smoothly pushing all the updates straight to WordPress.

    Python’s versatility knows no bounds. It’s like having a trusty Swiss Army knife in your coding arsenal, ready to tackle any task with ease.

    AWS & Claude

    In my quest for better answers than what I was getting from ChatGPT, I stumbled upon a gem within the AWS ecosystem. Say hello to PartyRock, a playground for exploring various AI models. Among them, one stood out to me: Claude.

    Claude’s capabilities and flexibility really impressed me, especially when compared to what I was getting from ChatGPT. Plus, I needed more input tokens than ChatGPT allowed back then, and Claude came to the rescue.

    Now, here comes the fun part. I’m more than happy to share the entire code with you:

    import re
    import json
    import boto3
    
    
    class Bedrock:
        def __init__(
            self, model="anthropic.claude-instant-v1", service_name="bedrock-runtime"
        ):
            self.model = model
            self.service_name = service_name
    
        def learn(
            self,
            description,
            command="",
            output_format="",
            max_tokens_to_sample=2048,
            temperature=0,
            top_p=0,
            top_k=250,
        ):
            brt = boto3.client(service_name=self.service_name)
    
            cve_description = description
    
            if not command:
            command = "Write a short, SEO-optimized blog post explaining the vulnerability described in the above paragraph in simple terms. Explain the technology affected and the attack scenario used in general. Provide recommendations on what users should do to protect themselves."
    
            if not output_format:
                output_format = "Output the blog post in <blogpost></blogpost> tags and the heading outside of it in <heading></heading> tags. The heading should get users to click on it and include the Name of the affected Tool or company."
    
            body = json.dumps(
                {
                    "prompt": f"\n\nHuman: <paragraph>{cve_description}</paragraph>\n\n{command}\n{output_format}\n\nAssistant:",
                    "max_tokens_to_sample": max_tokens_to_sample,
                    "temperature": temperature,
                    "top_p": top_p,
                    "top_k": top_k,
                }
            )
    
            accept = "application/json"
            contentType = "application/json"
    
            response = brt.invoke_model(
                body=body, modelId=self.model, accept=accept, contentType=contentType
            )
    
            response_body = json.loads(response.get("body").read())
    
            return response_body.get("completion")
    
        def article(self, content):
            heading_pattern = re.compile(r"<heading>(.*?)</heading>", re.DOTALL)
            body_pattern = re.compile(r"<blogpost>(.*?)</blogpost>", re.DOTALL)
    
            heading_matches = heading_pattern.findall(content)
            body_matches = body_pattern.findall(content)
    
            if heading_matches and body_matches:
                return {
                    "heading": heading_matches[0].strip(),
                    "body": body_matches[0].strip(),
                }
    
            return None

And using it? Piece of cake! It’s as simple as:

cve_description = "Description text of a CVE ..."  # e.g. pulled from the CVE feed

ai = Bedrock()
output = ai.learn(description=cve_description)
article = ai.article(output)

    Voilà! Easy peasy.

    It’s worth noting that there might be newer models out there now that could potentially outshine Claude. I’ve spent quite some time experimenting with different prompts, and now I’m passing the torch to you. Go ahead and explore, my friend. The world of AI awaits.

    Example

    Wondering if AI can churn out readable blog posts all by itself? Well, here’s a sneak peek at one generated in autorun mode:

    CVE-2024-21678: High Severity Stored XSS Vulnerability Patched in Atlassian Confluence

CVE: CVE-2024-21678
CVSS: 8.5 (cvssV3_0)
Source: CVE-2024-21678

    A high severity cross-site scripting (XSS) vulnerability was discovered in Atlassian Confluence that could allow attackers to execute malicious scripts on users’ browsers.

    Confluence is a popular enterprise collaboration and documentation tool. The vulnerability affected versions between 2.7.0 to the latest version at the time and had a CVSS score of 8.5 out of 10, indicating a high risk.

    Stored XSS vulnerabilities occur when untrusted data is saved to and then displayed from a database without being sanitized. In Confluence, an authenticated attacker could craft a malicious HTML or JavaScript payload and store it in the application. When other users viewed the content, their browsers would execute the script from the Confluence server.

    This could allow the attacker to steal users’ login cookies or other sensitive information, use the browser to perform actions on the attacker’s behalf, or redirect users to malicious sites. No user interaction would be required once the script is stored.

    Atlassian has released patches for all affected versions of Confluence. Users are strongly recommended to upgrade immediately to the latest version or supported fixed release listed for their installation. Regularly applying security updates is also important to protect against vulnerabilities like this.

    By keeping software updated with the latest patches, users can help prevent the exploitation of vulnerabilities and ensure the security of their data and systems.


    Summary: Learning and Reflection

    In the course of this project, I delved into the realms of prompt engineering, explored the capabilities of the AWS Bedrock SDK, tinkered with the WordPress API, and, unsurprisingly, didn’t uncover anything groundbreaking about Python – but hey, I still enjoyed every bit of programming.

    Initially, I harbored dreams of monetizing the website through a barrage of ads, envisioning myself swimming in riches from millions of clicks. Alas, reality had other plans. Nonetheless, this endeavor served as a shining example of how AI can simplify complex concepts, aiding human comprehension and learning – a definite win in my book.

    Despite the relatively modest cost of running Claude and my script locally on my Pi as a cron job (clocking in at around 3 Euro a month), the venture failed to yield any financial returns. Consequently, I made the tough decision to pull the plug. Nevertheless, the website remains alive and kicking until my hosting expires in early 2025. (And if you happen to be reading this in the year 2026, do tell me – have we finally achieved the dream of flying cars?)

  • Bearbot aka. Stonkmarket – Advanced Trading Probability Calculations

    Bearbot aka. Stonkmarket – Advanced Trading Probability Calculations

    Bearbot: Begins

    This project spans over 3 years with countless iterations and weeks of coding. I wrote almost 500,000 lines of code in Python and Javascript. The essential idea was to use a shorting strategy on put/call options to generate income based on time decay – also called “Theta”. You kind of have to know a little bit about how stock options work to understand this next part, but here I go. About 80% of all options expire worthless; statistically, you have a way higher chance shorting options and making a profit than going long. The further the strike price is from the price of the underlying stock, the higher the probability of the option expiring worthless. Technically, you always make 100%.
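
To make the "further from the strike, the safer" intuition concrete, here is a textbook Black-Scholes sketch of the risk-neutral probability that a short call expires worthless. It is purely illustrative and has nothing to do with Bearbot's actual model:

from math import erf, log, sqrt

def prob_call_expires_worthless(spot, strike, vol, t_years, rate=0.0):
    """Risk-neutral P(S_T < K) under Black-Scholes, i.e. N(-d2)."""
    d2 = (log(spot / strike) + (rate - 0.5 * vol**2) * t_years) / (vol * sqrt(t_years))
    return 0.5 * (1 + erf(-d2 / sqrt(2)))

# stock at 100, short call struck at 120, 30% implied vol, 30 days to expiry
print(prob_call_expires_worthless(100, 120, 0.30, 30 / 365))  # ~0.98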

    Now back to reality, where things aren’t perfect.

    A lot can happen in the market. If you put your life savings into a seemingly safe trade (with Stonkmarket actually +95% safe, more on that later) then the next Meta scandal gets published and the trade goes against you. When shorting, you can actually lose more than 100% (since stocks can virtually rise to infinity), but you can only gain 100%. It sounds like a bad deal, but again, you can gain 100% profit a lot more than infinite losses, although the chance is never 0. The trick is to use stop loss, but risk management is obviously part of it when you trade for a living.

    DoD Scraper

The project actually started with a simple idea: scrape the U.S. Department of Defense contracts to predict the earnings of large weapons companies and make large option trades based on that.

Spoiler: it is not that easy. You can get lucky if the company is small enough and publicly traded, so that a single contract actually makes a huge impact on its financials. A company like Lockheed Martin is not really predictable by DoD contracts alone.

    Back then I called it Wallabe and this was the logo:

    More data

I didn’t know the idea was bad, so I soon started to scrape news. I collected over 100 RSS feeds, archiving them, analyzing them, and making them accessible via a REST API. Let me tell you, I got a lot of news. Some people sell this data, but honestly, it was worth a lot more to me sitting in my database, enriching my decision-making algo.

    I guess it goes without saying that I was also collecting stock data including fundamental and earnings information; that is kind of the base for everything. Do you know how many useful metrics you can calculate with the data listed above alone? A lot. Some of them tell you the same stuff, true, but nonetheless, you can generate a lot.

    Bearbot is born

    Only works on dark mode, sorry.

As I was writing this into a PWA (a website that acts as a mobile app), I was reading up on how to use option Greeks to calculate stock prices when I realized that it is a lot easier to predict and set up options trades. That is when I started gathering a whole bunch of options data and calculating my own Greeks, which led me to explore theta and the likelihood of options expiring worthless.

From that point on, I invested all my time in the following question: What is the perfect environment for shorting Option XYZ?

    The answer is generally quite simple:

    • little volatility
    • not a lot of news
    • going with the general trend
    • Delta “probability” is in your favor

You can really expand on these points, and there are many ways to calculate and measure them, but that’s basically it. If you know a thing about trading, ask yourself this: what does the perfect time for a trade look like? Try to quantify it. Should a stock have just gone down 20% and then released good news, or something else? If yes, then what if you had a program to spot exactly these moments all over the stock market and alert you to those amazing chances to pretty much print money? That’s Bearbot.

    Success

At the beginning, I actually planned on selling this and offering it as a service like all those other signal trading services or whatever, but honestly, why the hell would I give you something that sounds (and was) so amazing that I could practically print money? There is no value in that for me. Nobody is going to sell or offer you a trading strategy that works, unless it only works because a lot of people know about it and act on it, like candlestick patterns.

    Failure

The day I started making a lot more money than with my regular job was the day I lost my mind. I stopped listening to Bearbot. I know it kind of sounds silly talking about it, but I thought none of my trades could go wrong and that I didn’t need it anymore; I had developed some sort of gift for trading. I took on too much risk and took a loss so big that saying the number would send shivers down your spine, unless you’re rich. Bearbot was right about the trade, but by not having enough to cover margin, I got a margin call and my position was closed. I got caught by a whipsaw (it was a Meta option, and they had just announced a huge data leak).

“Learning” from my mistakes

In the real world, not everything has an API. I spent weeks reverse-engineering the platform I was trading on to automate the bot entirely, so I wouldn’t have to do anything manually, in hopes of removing the error-prone human element. The problem was that they did not want bot traders, so they did a lot to counter it, and every two weeks I had to adjust the bot or else get captchas or be blocked outright. Eventually, it reached a point where it was too unstable for live trading, since I could not trust it to close positions on time.

    I was very sad at that point. Discouraged, I discontinued Bearbot for about a year.

    Stonkmarket: Rising

    Some time went on until a friend of mine encouraged me to try again.

    I decided to take a new approach. New architecture, new style, new name. I wanted to leave all the negative stuff behind. It had 4 parts:

    • central API with a large database (Django Rest Framework)
    • local scrapers (Python, Selenium)
    • static React-based front end
    • “Stonk, the Bot” a Discord bot for signals and monitoring

I extended the UI a lot to show all the data I had gathered, but sadly I also shut down the entire backend before writing this, so I cannot show it with data.

    I spent about 3 months refactoring the old code, getting it to work, and putting it into a new structure. I then started trading again, and everything was well.

    Final Straw

I was going through a very stressful phase, and on top of it, one scraper after another was failing. It got to a point where I had to continuously change logic around and fix my data kraken, and I could not maintain it alone anymore. I also couldn’t let anyone in on it, as I did not want to publish my code and reveal my secrets. It was at that point that I decided to shut it down for good. I poured 3 years, countless hours, lines of code, research, and even tears into this project. I learned more than on any other project ever.

    Summary

On this project, I really mastered Django and Django Rest Framework, and I experimented a lot with PWAs and React, pushing my web development knowledge further. On top of that, I was making good money trading, but the stress and greed eventually got to me until I decided I was better off shutting it down. If you read this and know me, you probably know all about Bearbot and Stonkmarket, and maybe I have even given you a trade or 2 that you profited off of. Maybe one day I will open-source the code for this; that will be the day I am truly finished with Stonkmarket. For now, a small part inside myself is still thinking it is possible to redo it.

    Edit: We are baaaaaaaccckkk!!!🎉

    https://www.bearbot.dev

    Better than ever, cooler than ever! You cannot beat this bear.