BBOT (short for Bee·bot) is a powerful, multipurpose Python-based scanner designed to automate recon, bug bounty hunting, and attack surface management (ASM). Inspired by tools like Spiderfoot but modernized for today’s needs, BBOT delivers speed, modularity, and scalability for cybersecurity professionals and hobbyists alike.
With native support for multiple targets, extensive output options, and seamless integration with popular APIs, BBOT is more than a tool: it's a full-fledged recon framework that adapts to your workflow.
Why BBOT?
Reconnaissance is the foundation of offensive security. BBOT streamlines this critical phase with:
Subdomain enumeration that consistently outperforms other tools
Web spidering and email harvesting
Light and aggressive web scanning presets
YAML-driven customization with modular architecture
Support for over a dozen output formats including Neo4j, CSV, JSON, and Splunk
BBOT accepts a wide range of target types, including:
Domains (e.g. evilcorp.com)
IP ranges (e.g. 1.2.3.0/24)
URLs, emails, organizations, usernames
Even mobile app package names and file paths
Define scope via command-line or config files to keep scans focused and efficient.
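For example, scanning a domain and a CIDR range with a preset looks something like this (flag names per the BBOT README, so verify against your version; the preset is the same subdomain-enum used in the Python example below):

bbot -t evilcorp.com 1.2.3.0/24 -p subdomain-enum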
Output Options
BBOT can export scan data to:
Neo4j, Elasticsearch, and Splunk for advanced querying
Slack, Discord, and Microsoft Teams for real-time alerts
SQL databases and CSV/JSON files for storage and analysis
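On the CLI, these destinations are selected as output modules; something like the following should work (the -om flag and module names follow BBOT's documentation, but double-check on your install):

bbot -t evilcorp.com -p subdomain-enum -om json,csv,neo4j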
Security and Dependencies
BBOT supports API key configuration for services like Shodan, VirusTotal, and SecurityTrails. Keys can be added to your ~/.config/bbot/bbot.yml file or passed directly via the command line.
All dependencies are auto-installed, and Ansible scripts are provided for streamlined environment setup.
Python API for Developers
Use BBOT as a library for custom applications. Both synchronous and asynchronous scanning are supported:
from bbot.scanner import Scanner

scan = Scanner("evilcorp.com", presets=["subdomain-enum"])
# start() yields events (DNS names, URLs, findings) as the scan runs
for event in scan.start():
    print(event)
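For async applications, the asynchronous variant looks nearly identical. A minimal sketch assuming BBOT's documented async_start() coroutine (verify against the current API docs):

import asyncio
from bbot.scanner import Scanner

async def main():
    scan = Scanner("evilcorp.com", presets=["subdomain-enum"])
    # async_start() yields events as they are produced, without blocking the loop
    async for event in scan.async_start():
        print(event)

asyncio.run(main())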
Community & Contributions
BBOT thrives on community contributions, from module ideas to code enhancements. Check out the developer docs to get involved.
Final Thoughts
BBOT isn’t just another recon tool. It’s a flexible, extensible framework built for modern offensive security workflows. Whether you’re working on bug bounties or managing enterprise attack surfaces, BBOT gives you the power to automate and innovate your reconnaissance efforts.
reNgine is a powerful open-source web reconnaissance and vulnerability scanning suite designed for penetration testers, bug bounty hunters, and cybersecurity teams. It brings together the best of automation, intelligence, and flexibility to streamline your reconnaissance workflow.
Why Use reNgine?
Traditional recon tools often lack the scalability and customization modern security teams need. reNgine addresses these gaps with:
Highly configurable YAML-based scan engines
Continuous monitoring with alerts via Discord, Slack, and Telegram
GPT-powered vulnerability reports and attack surface suggestions
Real-time subscanning and advanced recon data filtering
Database-backed recon with natural language-like queries
Installation Steps
Clone the repository: git clone https://github.com/yogeshojha/rengine && cd rengine
Configure the environment in .env (set admin credentials, PostgreSQL password, etc.)
Set concurrency levels based on your system’s RAM
Run the installer: sudo ./install.sh
For full setup on Windows or Mac, check the official documentation.
Core Features
Subdomain Discovery: Find alive domains, filter intelligently by HTTP status or keywords
Vulnerability Scanning: Integrated tools like Nuclei, Dalfox, CRLFuzzer, and misconfigured S3 checks
Role-Based Access Control: Assign users as Sys Admin, Pen Tester, or Auditor
Project Dashboard: Separate scopes for bug bounty, internal testing, or client projects
PDF Reporting: Fully customizable reports with branding, executive summaries, and GPT integration
Enterprise Features
Organizations can benefit from reNgine’s support for multiple users, periodic scans, and detailed recon data analytics. With support for integrations like HackerOne and robust tooling for data import/export, reNgine fits seamlessly into team workflows.
Security and Community
reNgine is backed by a passionate open-source community. You can contribute via pull requests, suggest features, or help with documentation. It uses the GPL-3.0 license and emphasizes secure practices like version-controlled vulnerability reporting and role isolation.
Final Thoughts
If you’re serious about recon, reNgine is a must-have. It blends automation with deep analysis, helping you stay ahead in a fast-evolving threat landscape. From hobbyists to professional red teams, reNgine delivers value at every level.
If you’ve ever stumbled upon a string of encrypted or encoded text and thought, “What the heck is this?”, then Ciphey is about to become your favorite cybersecurity companion. Created by Bee and supported by a passionate community, Ciphey is a fully automated decryption, decoding, and cracking tool powered by artificial intelligence and natural language processing. And the best part? You don’t need to know what the encryption is – Ciphey figures it out for you!
Purpose and Real-World Use Cases
Ciphey is built for speed, intelligence, and accessibility. Whether you’re playing CTFs, analyzing suspicious payloads, or just curious about encrypted content, Ciphey helps you by:
Automatically detecting and decoding unknown encrypted inputs
Supporting over 50 cipher types and hashes, including Base64, Caesar, Vigenère, XOR, and Morse
Providing quick solutions without requiring deep cryptography knowledge
Serving as a smart pre-analysis tool in digital forensics or penetration testing
Installation and Setup
Installing Ciphey is straightforward across major platforms:
Python: python3 -m pip install ciphey --upgrade
Docker: docker run -it --rm remnux/ciphey
Homebrew: brew install ciphey
MacPorts: sudo port install ciphey
For full installation instructions and platform-specific help, check the official guide.
Core Features and Commands
Ciphey stands out due to its AI-based logic and blazing speed. Key features include:
AI-Powered Cipher Detection: Uses AuSearch to infer the encryption type
Natural Language Processing: Smart recognition of when text becomes readable plaintext
Multi-Language Support: Currently supports English and German
Support for Hashes: Something many competitors don’t offer
Speed: Most decryptions take less than 3 seconds
Example usage:
ciphey -t "EncryptedInput" – standard usage
ciphey -f file.txt – decrypt contents of a file
ciphey -t "Input" -q – quiet mode without progress or noise
Why Ciphey Beats the Competition
Compared to tools like CyberChef or Katana, Ciphey offers several advantages:
No need to manually configure decoding steps
Faster and more accurate at determining encryption methods
Supports hashes and encryption formats that others miss
Built with performance in mind using a C++ core
Real-world tests show Ciphey decrypting a 42-layer Base64 string in under 2 seconds, while CyberChef requires manual setup and runs much slower, or crashes outright on large files!
Security Considerations
Ciphey is designed to be safe for educational and CTF use. However:
Always use it in a secure, isolated environment when analyzing potentially malicious content
Be cautious with decoded outputs: review them carefully before executing or sharing
Community and Contributions
Ciphey is proudly open-source under the MIT license. Contributions are welcomed and well-documented. Whether you’re adding new ciphers, fixing bugs, or improving documentation, there’s room for everyone. Join the vibrant community on Discord or explore the contribution guide.
Conclusion
Ciphey is a brilliant example of how automation, AI, and smart design can make cybersecurity tools more accessible and powerful. Whether you’re a beginner trying to understand your first CTF challenge or a seasoned analyst working on encoded threat intel, Ciphey can save you time and headaches. Install it, run it, and let Ciphey handle the mystery of “what kind of encryption is this?”
Fast, smart, and made by hackers for hackers – Ciphey is a tool you’ll want in your arsenal.
Discover Sn1per: Your All-in-One Pentest and Recon Tool
In the world of cybersecurity, time is critical. Sn1per, developed by @1N3, is a powerful and comprehensive automated pentesting framework designed to streamline attack surface management, reconnaissance, and vulnerability assessment in one cohesive platform. Whether you’re an ethical hacker, a red teamer, or a security analyst, Sn1per helps you uncover hidden risks and misconfigurations quickly and efficiently.
Why Sn1per Matters
Sn1per shines in automating and orchestrating powerful open-source and commercial tools to scan, identify, and prioritize vulnerabilities across your infrastructure. It supports external and internal scans and is structured to mirror real-world attacker behaviors.
Real-World Use Cases
Attack surface discovery and mapping
Automated vulnerability scanning across networks and web apps
Red teaming and penetration testing engagements
Security posture assessments
Continuous monitoring of external assets
Installation Made Easy
Sn1per is versatile and can be deployed in several ways:
Linux Installation (Kali, Ubuntu, Debian, Parrot):
git clone https://github.com/1N3/Sn1per
cd Sn1per
bash install.sh
AWS AMI (EC2 Instance):
Available via the AWS Marketplace for easy cloud deployment.
Docker Installation:
Run via Docker Compose or directly with:
sudo docker compose up
sudo docker run --privileged -it sn1per-kali-linux /bin/bash
Core Features
Sn1per includes a wide range of scanning and reporting modes:
NORMAL: Full port scan and reconnaissance
STEALTH: Low-noise scanning to evade detection
NUKE: Complete auditing with brute-force, OSINT, recon, and workspace management
DISCOVER: Subnet enumeration and scanning
WEBSCAN: HTTP/S application scanning via Burp Suite and Arachni
MASSVULNSCAN: Vulnerability scanning across multiple targets using OpenVAS
sniper -t target.com -o -re # Normal scan with OSINT and recon
sniper -f targets.txt -m nuke # Nuke mode on multiple targets
sniper -t target.com -m stealth # Stealth mode
Integrations
Sn1per integrates seamlessly with major tools and platforms:
Burp Suite Professional
OWASP ZAP
Metasploit
OpenVAS and Nessus
Slack (alerts)
Shodan, Censys, Hunter.io APIs
Security and Operational Considerations
Sn1per is a powerful tool intended for authorized use only. Misuse can result in legal or ethical violations. Always ensure you’re operating in an approved environment, such as a lab or during a sanctioned assessment.
Dependencies vary by installation method and mode. Shell, Python, and external scanners may require additional configuration for full functionality.
Sn1per Enterprise
For enterprise users, Sn1per offers a commercial edition with advanced reporting, dashboards, and management features. Perfect for large-scale infrastructure monitoring and compliance assessments.
Conclusion
Sn1per is not just another recon script; it's a powerful and extensible platform for conducting advanced penetration tests, vulnerability scans, and continuous security monitoring. Whether you're targeting a single host or a massive enterprise network, Sn1per provides the automation and insight needed to stay ahead of threats.
The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk. Read it, have a laugh, and never do this.
After that, some companies reached out to me asking where to even get started. There are thousands of possible variations of certain domains, so it can feel overwhelming. Most people begin with dnstwist, a really handy script that generates hundreds or thousands of lookalike domains using permutation techniques. Dnstwist also checks whether they already resolve via DNS, which helps you identify if someone is already trying to abuse a typosquatted domain.
While this is great for finding typosquatter domains that already exist, it doesn’t necessarily help you find and register them before someone else does (at least, not in a targeted way).
On a few pentests where I demonstrated the risks of typosquatting, I registered a domain, set up a catch-all rule to redirect emails to my address—intercepting very sensitive information—and hosted a simple web server to collect API tokens from automated requests. To streamline this process, I built a small script to help me (and now you) get started with defensive domain registration.
Wow, there are still a lot of typo domains available for my business website 😅.
While longer domains naturally have a higher risk of typos, I don’t have enough traffic to justify the cost of defensively registering them. Plus, my customers don’t send me sensitive information via email—I use a dedicated server for secure uploads and file transfers. (Yes, it’s Nextcloud 😉).
typosquatterpy is a Python script that generates common typo domain variations of a given base domain (on a QWERTZ keyboard) using OpenAI’s API and checks their availability on Strato. This tool helps in identifying potential typo-squatted domains that could be registered to protect a brand or business.
⚠️ Disclaimer: This project is not affiliated with Strato, nor is it their official API. Use this tool at your own risk!
🛠️ Installation
To use typosquatterpy, you need Python and the requests library installed. You can install it via pip:
pip install requests
📖 Usage
Run the script with the following steps:
Set your base domain (e.g., example) and TLD (e.g., .de).
Replace api_key="sk-proj-XXXXXX" with your actual OpenAI API key.
Run the script, and it will:
Generate the top 10 most common typo domains.
Check their availability using Strato’s unofficial API.
Example Code Snippet
base_domain="karlcom"tld=".de"typo_response=fetch_typo_domains_openai(base_domain,api_key="sk-proj-XXXXXX")typo_domains_base=extract_domains_from_text(typo_response)typo_domains= [domain.split(".")[0].rstrip(".") + tld for domain in typo_domains_base]is_domain_available(typo_domains)
Output Example
✅ karicom.de
❌ karlcomm.de
✅ krlcom.de
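If you would rather not lean on Strato's unofficial endpoint at all, a crude pre-filter is a plain DNS lookup: a name that does not resolve is often, though not always, unregistered. Here is a small sketch of that alternative check; it is not part of typosquatterpy:

import socket

def probably_unregistered(domain: str) -> bool:
    # Heuristic only: registered but unused domains yield false positives.
    # An authoritative answer needs WHOIS/RDAP or a registrar API.
    try:
        socket.getaddrinfo(domain, None)
        return False  # resolves, so the domain is definitely taken
    except socket.gaierror:
        return True

for candidate in ["karicom.de", "karlcomm.de", "krlcom.de"]:
    print(candidate, "maybe available" if probably_unregistered(candidate) else "taken")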
⚠️ Legal Notice
typosquatterpy is not affiliated with Strato and does not use an official Strato API.
The tool scrapes publicly available information, and its use is at your own discretion.
Ensure you comply with any legal and ethical considerations when using this tool.
Conclusion
If you’re wondering what to do next and how to start defensively registering typo domains, here’s a straightforward approach:
Generate Typo Domains – Use my tool to create common misspellings of your domain, or do it manually (with or without ChatGPT).
Register the Domains – Most companies already have an account with a registrar where their main domain is managed. Just add the typo variations there.
Monitor Traffic – Keep an eye on incoming and outgoing typo requests and emails to detect misuse.
Route & Block Traffic – Redirect typo requests to the correct destination while blocking outgoing ones. Most commercial email solutions offer rulesets for this. Using dnstwist can help identify a broad range of typo domains.
Block Outgoing Requests – Ideally, use a central web proxy. If that’s not possible, add a blocklist to browser plugins like uBlock, assuming your company manages it centrally. If neither option works, set up AdGuard for central DNS filtering and block typo domains there. (I wrote a guide on setting up AdGuard!)
We’ve all been there—no exceptions, literally all of us. You’re at a party, chatting up a total cutie, the vibes are immaculate, and then she hits you with the: “Show me your GitHub contributions chart.” She wants to see if you’re really about that open-source life.
Panic. You know you're mid at best when it comes to coding. Your chart is weak, and you know it.
You hesitate but show her anyway, hoping she’ll appreciate you for your personality instead. Wrong! She doesn’t care about your personality, dude—only your commits. She takes one look, laughs, and walks away.
Defeated, you grab a pizza on the way home (I’m actually starving writing this—if my Chinese food doesn’t arrive soon, I’m gonna lose it).
Anyway! The responsible thing to do would be to start contributing heavily to open-source projects. This is not that kind of blog though. Here, we like to dabble in the darker arts of IT. Not sure how much educational value this has, but here we go with the disclaimer:
Disclaimer:
The information provided on this blog is for educational purposes only. The use of hacking tools discussed here is at your own risk. Read it, have a laugh, and never do this.
Quick note: This trick works on any gender you're into. When I say "her", just mentally swap it out for whoever you're trying to impress. I'm only writing it this way because that's who I would personally want to impress.
Intro
I came across a LinkedIn post where someone claimed they landed a $500K developer job—without an interview—just by writing a tool that fakes GitHub contributions. Supposedly, employers actually check these charts and your public code.
Now, I knew this was classic LinkedIn exaggeration, but it still got me thinking… does this actually work? I mean, imagine flexing on your friends with an elite contribution chart—instant jealousy.
Of course, the golden era of half-a-mil, no-interview dev jobs is long gone (RIP), but who knows? Maybe it’ll make a comeback. Or maybe AI will just replace us all before that happens.
I actually like Copilot, but it still cracks me up. If you’re not a programmer, just know that roasting your own code is part of the culture—it’s how we cope, but never roast my code, because I will cry and you will feel bad. We both will.
The Setup
Like most things in life, step one is getting a server to run a small script and a cronjob on. I’m using a local LXC container in my Proxmox, but you can use a Raspberry Pi, an old laptop, or whatever junk you have lying around.
Oh, and obviously, you’ll need a GitHub account—but if you didn’t already have one, you wouldn’t be here.
Preparation
First, you need to install a few packages on your machine. I’m gonna assume you’re using Debian—because it’s my favorite (though I have to admit, Alpine is growing on me fast):
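At a minimum, the setup below needs git and a cron daemon; on Debian, something along these lines should cover it (package names assumed, adjust to taste):

apt update && apt install -y git cron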
You’re almost done prepping. Now, you just need to clone one of your repositories. Whether it’s public or private is up to you—just check your GitHub profile settings:
If you have private contributions enabled, you can commit to a private repo.
If not, just use a public repo, or go wild and do both.
The Code
Let us test our setup before we continue:
git clone https://github.com/YourActualGithubUser/YOUR_REPO_OF_CHOICE
cd YOUR_REPO_OF_CHOICE
touch counter.py
git add counter.py
git commit -m "add a counter"
git push
Make sure to replace your username and repo in the command—don’t just copy-paste like a bot. If everything went smoothly, you should now have an empty counter.py file sitting in your repository.
Of course, if you’d rather keep things tidy, you can create a brand new repo for this. But either way, this should have worked.
The commit message will vary.
Now for the shell script itself:
gh_champ.sh
#!/bin/bash
# Define the directory where the repository is located
# this is the repo we got earlier from git clone
REPO_DIR="/root/YOUR_REPO_OF_CHOICE"

# random delay to not always commit at exact time
RANDOM_DELAY=$((RANDOM % 20 + 1))
DELAY_IN_SECONDS=$((RANDOM_DELAY * 60))
sleep "$DELAY_IN_SECONDS"

cd "$REPO_DIR" || exit

# get current time and overwrite file
echo "print(\"$(date)\")" > counter.py

# Generate a random string for the commit message
COMMIT_MSG=$(tr -dc A-Za-z0-9 </dev/urandom | head -c 16)

# Stage the changes, commit, and push
git add counter.py > /dev/null 2>&1
git commit -m "$COMMIT_MSG" > /dev/null 2>&1
git push origin master > /dev/null 2>&1
Next, you’ll want to automate this by setting it up as a cronjob:
17 10-20/2 * * * /root/gh_champ.sh
I personally like using crontab.guru to craft more complex cron schedules; it makes life easier.
This one runs at minute 17 past every 2nd hour from 10 through 20, plus a random 1-20 minute delay from our script to keep things looking natural.
And that’s it. Now you just sit back and wait 😁.
Bonus: Cronjob Monitoring
I like keeping an eye on my cronjobs in case they randomly decide to fail. If you want to set up Healthchecks.io for this, check out my blog post.
Looks bonita 👍 ! With a chart like this, the cuties will flock towards you instead of running away.
Jokes aside, the whole “fake it till you make it” philosophy isn’t all sunshine and promotions. Sure, research suggests that acting confident can actually boost performance and even trick your brain into developing real competence (hello, impostor syndrome workaround!). But there’s a fine line between strategic bluffing and setting yourself up for disaster.
Let’s say you manage to snag that sweet developer job with nothing but swagger and a well-rehearsed GitHub portfolio. Fast forward to your 40s—while you’re still Googling “how to center a div” a younger, hungrier, and actually skilled dev swoops in, leaving you scrambling. By that age, faking it again isn’t just risky; it’s like trying to pass off a flip phone as the latest iPhone.
And yeah, if we’re being honest, lying your way into a job is probably illegal (definitely unethical), but hey, let’s assume you throw caution to the wind. If you do manage to land the gig, your best bet is to learn like your livelihood depends on it—because, well, it does. Fake it for a minute, but make sure you’re building real skills before the curtain drops.
Got real serious there for a second 🥶, gotta go play Witcher 3 now, byeeeeeeeeee 😍
EDIT
There has been some development in this space. I have found a script that lets you create commits with arbitrary dates attached, so you do not have to wait an entire year to show off: https://github.com/davidjan3/githistory
Five to seven years ago, I was absolutely obsessed with the idea of beating the stock market. I dove headfirst into the world of investing, devouring books, blogs, and whatever information I could get my hands on. I was like a sponge, soaking up everything. After countless hours of research, I came to one clear conclusion:
To consistently beat the market, I needed a unique edge—some kind of knowledge advantage that others didn’t have.
It's like insider trading, but, you know, without the illegal part. My plan was to uncover obscure data and resources online that only a select few were using. That way, I'd have a significant edge over the average trader. In hindsight, I'm pretty sure that's what big hedge funds, especially the short-selling ones, are doing, just with a ton more money and resources than I had. But I've always thought, "If someone else can do it, so can I." At the end of the day, those hedge fund managers are just people too, right?
Around that time, I was really into the movie War Dogs. It had this fascinating angle that got me thinking about analyzing the weapons trade, aka the “defense” sector.
Here’s the interesting part: The United States is surprisingly transparent when it comes to defense spending. They even publicly list their contracts online (check out the U.S. Department of Defense Contracts page). The EU, on the other hand, is a completely different story. Getting similar information was like pulling teeth. You’d basically need to lawyer up and start writing formal letters to access anything remotely useful.
The Idea
Quite simply: Build a tool that scrapes the Department of Defense contracts website and checks if any of the publicly traded companies involved had landed massive new contracts or reported significantly higher income compared to the previous quarter.
Based on the findings, I’d trade CALL or PUT options. If the company performed poorly in the quarter or year, I’d go for a PUT option. If they performed exceptionally well, I’d opt for a CALL, banking on the assumption that these contracts would positively influence the next earnings report.
Theoretically, this seemed like one of those obvious, no-brainer strategies that had to work. Kind of like skipping carbs at a buffet and only loading up on meat to get your money’s worth.
Technology
At first, I did everything manually with Excel. Eventually, I wrote a Python Selenium script to automate the process.
Here’s the main script I used to test the scraping:
// Search -> KEYWORD
// https://www.defense.gov/Newsroom/Contracts/Search/KEYWORD/
// -------
// Example:
// https://www.defense.gov/Newsroom/Contracts/Search/Boeing/
// ------------------------------------------------------------
// All Contracts -> PAGE (currently up to 136)
// https://www.defense.gov/Newsroom/Contracts/?Page=PAGE
// -------
// Example:
// https://www.defense.gov/Newsroom/Contracts/?Page=1
// https://www.defense.gov/Newsroom/Contracts/?Page=136
// -------------------------------------------------------
// Contract -> DATE
// https://www.defense.gov/Newsroom/Contracts/Contract/Article/DATE
// -------
// https://www.defense.gov/Newsroom/Contracts/Contract/Article/2041268/
// ---------------------------------------------------------------------
// Select Text from Article Page
// document.querySelector(".body")
// get current link
// window.location.href
// ---> Save Company with money for each day in db
// https://www.defense.gov/Newsroom/Contracts/Contract/Article/1954307/

var COMPANY_NAME = "The Boeing Co.";
var comp_money = 0;
var interesting_div = document.querySelector('.body');
var all_contracts = interesting_div.querySelectorAll("p"), i;
var text_or_heading;
var heading;
var text;
var name_regex = /^([^,]+)/gm;
var price_regex = /\$([0-9]{1,3},*)+/gm;
var price_contract_regex = /\$([0-9]{1,3},*)+ (?<=)([^\s]+)/gm;
var company_name;
var company_article;

for (i = 0; i < all_contracts.length; ++i) {
  text_or_heading = all_contracts[i];
  if (text_or_heading.getAttribute('id') != "skip-target-holder") {
    if (text_or_heading.getAttribute('style')) {
      heading = text_or_heading.innerText;
    } else {
      text = text_or_heading.innerText;
      company_name = text.match(name_regex);
      contract_price = text.match(price_regex);
      contract_type = text.match(price_contract_regex);
      try {
        contract_type = contract_type[0];
        clean_type = contract_type.split(' ');
        contract_type = clean_type[1];
      } catch (e) {
        contract_type = "null";
      }
      try {
        company_article = company_name[0];
      } catch (e) {
        company_article = "null";
      }
      try {
        contract_amount = contract_price[0];
        if (company_article == COMPANY_NAME) {
          contract_amount = contract_amount.replace("$", "");
          contract_amount = contract_amount.replace(",", "");
          contract_amount = contract_amount.replace(",", "");
          contract_amount = contract_amount.replace(",", "");
          contract_amount = parseInt(contract_amount, 10);
          comp_money = contract_amount + comp_money;
        }
      } catch (e) {
        contract_amount = "$0";
      }
      console.log("Heading : " + heading);
      console.log("Text : " + text);
      console.log("Company Name : " + company_article);
      console.log("Awarded : " + contract_amount);
      console.log("Contract Type: " + contract_type);
    }
  }
}
console.log(COMPANY_NAME);
console.log(new Intl.NumberFormat('en-EN', { style: 'currency', currency: 'USD' }).format(comp_money));

// --> Save all Links to Table in Database
for (var i = 1; i <= 136; i++) {
  var url = "https://www.defense.gov/Newsroom/Contracts/?Page=" + i;
  var page_links = document.querySelector("#alist > div.alist-inner.alist-more-here");
  var all_links = page_links.querySelectorAll("a.title");
  all_links.forEach(page_link => {
    var contract_date = Date(Date.parse(page_link.innerText));
    var contract_link = page_link.href;
  });
}
The main code is part of another project I called “Wallabe”.
The stack was the usual:
Python: The backbone of the project, handling the scraping logic and data processing efficiently.
Django: Used for creating the web framework and managing the backend, including the database and API integrations.
Selenium & BeautifulSoup: Selenium was used for dynamic interactions with web pages, while BeautifulSoup handled the parsing and extraction of relevant data from the HTML.
PWA (“mobile app”): Designed as a mobile-only Progressive Web App to deliver a seamless, app-like experience without requiring actual app store deployment.
I wanted the feel of a mobile app without the hassle of actual app development.
One of the challenges I faced was parsing and categorizing the HTML by U.S. military branches. There are a lot, and I’m sure I didn’t get them all, but here’s the list I was working with seven years ago (thanks, JROTC):
I tried to revive this old project, but unfortunately, I can’t show you what the DoD data looked like anymore since the scraper broke after some HTML changes on their contracts website. On the bright side, I can still share some of the awesome UI designs I created for it seven years ago:
Imagine a clean, simple table with a list of companies on one side and a number next to each one showing how much they made in the current quarter.
How it works
Every day, I scrape the Department of Defense contracts and calculate how much money publicly traded companies received from the U.S. government. This gives me a snapshot of their revenue before quarterly earnings are released. If the numbers are up, I buy CALL options; if they’re down, I buy PUT options.
The hardest part of this process is dealing with the sheer volume of updates. They don’t just release new contracts—there are tons of adjustments, cancellations, and modifications. Accounting for these is tricky because the contracts aren’t exactly easy to parse. Still, I decided it was worth giving it a shot.
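To make the mechanics concrete, here is a small Python sketch of the per-article aggregation step using the requests and BeautifulSoup stack mentioned above. It assumes the announcement paragraphs live under a .body container with the company name before the first comma, which, as noted, no longer matches the current HTML:

import re
from collections import defaultdict

import requests
from bs4 import BeautifulSoup

MONEY = re.compile(r"\$\d[\d,]*")  # matches "$5" as well as "$2,041,268"

def contract_totals(url: str) -> dict:
    """Sum awarded dollar amounts per company in one day's announcement."""
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    body = soup.select_one(".body")  # assumed article container (see the JS above)
    totals = defaultdict(int)
    if body is None:
        return {}
    for p in body.find_all("p"):
        text = p.get_text(" ", strip=True)
        match = MONEY.search(text)
        if not match:
            continue
        company = text.split(",", 1)[0]  # company name precedes the first comma
        totals[company] += int(match.group().lstrip("$").replace(",", ""))
    return dict(totals)

if __name__ == "__main__":
    url = "https://www.defense.gov/Newsroom/Contracts/Contract/Article/1954307/"
    for company, awarded in contract_totals(url).items():
        print(f"{company}: ${awarded:,}")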
Now, here’s an important note: U.S. defense companies also make a lot of money from other countries, not just the U.S. military. In fact, the U.S. isn’t even always their biggest contributor. Unfortunately, as I mentioned earlier, other countries are far less transparent about their military spending. This lack of data is disappointing and limits the scope of the analysis.
Despite these challenges, I figured I’d test the idea on paper and backtest it to see how it performed.
Conclusion
TL;DR: Did not work.
The correlation I found between these contracts and earnings just wasn’t there. Even when the numbers matched and I got the part right that “Company made great profit,” the market would still turn around and say, “Yeah, but it’s 2% short of what we expected. We wanted +100%, and a measly +98% is disappointing… SELLLL!”
The only “free money glitch” I’ve ever come across is what I’m doing with Bearbot, plus some tiny bond tricks that can get you super small monthly profits (like 0.10% to 0.30% a month).
That said, this analysis still made me question whether everything is truly priced in or if there are still knowledge gaps to exploit. The truth is, you never really know if something will work until you try. Sure, you can backtest, but that’s more for peace of mind. Historical data can’t predict the future. A drought killing 80% of cocoa beans next year is just as possible as a record harvest. Heck, what’s stopping someone from flying to Brazil and burning down half the coffee fields to drive up coffee bean prices? It’s all just as unpredictable as them not doing that (probably, please don’t).
What I’m saying is, a strategy that’s worked for 10 years can break tomorrow or keep working. Unless you have insider info that others don’t, it’s largely luck. Sometimes your strategy seems brilliant just because it got lucky a few times—not because you cracked the Wall Street code.
I firmly believe there are market conditions that can be exploited for profit, especially in complex derivatives trading. A lot of people trade these, but few really understand how they work, which leads to weird price discrepancies, especially with less liquid stocks. I also believe I've found one of these "issues" in the market: a specific set of conditions where certain instruments, in certain environments, are ripe for profit with a minimal probability of risk (meaning: a large risk that almost never materializes). That's Bearbot.
Anyway, long story short, this whole experiment is part of what got Bearbot started. Thanks for reading, diamond hands 💎🙌 to the moon, and love ya ❤️✌️! Byeeeee!
Five years ago, a younger and more optimistic Karl, with dreams of cracking the European equivalent of the Powerball, formed a bold thesis:
“Surely the Eurojackpot isn’t truly random anymore. It must be calculated by a machine! And since machines are only capable of generating pseudorandom numbers, I could theoretically simulate the system long enough to identify patterns or at least tilt the odds in my favor by avoiding the least random combinations.”
This idea took root after I learned an intriguing fact about computers: they can’t generate true randomness. Being deterministic machines, they rely on algorithms to create pseudorandom numbers, which only appear random but are entirely predictable if you know the initial value (seed). True randomness, on the other hand, requires inputs from inherently unpredictable sources, like atmospheric noise or quantum phenomena—things computers don’t have by default.
Python: The backbone of the project. Python’s versatility and extensive library support made it the ideal choice for building the bot. It handled everything from script automation to data parsing. You can learn more about Python at python.org.
Selenium: Selenium was crucial for automating browser interactions. It allowed the bot to navigate Lotto24 and fill out the lottery forms. If you’re interested in web automation, check out Selenium’s documentation here.
I was storing the numbers in an SQLite database. Don't ask me why; I think I just felt like playing with SQL.
The Plan
The plan was simple. I researched Eurojackpot strategies and created a small program to generate lottery numbers based on historical data and “winning tactics.” The idea? Simulate the lottery process 50 billion times and identify the numbers that were “randomly” picked most often. Then, I’d play the top X combinations that showed up consistently.
At the time, I was part of a lottery pool with a group of friends, which gave us a collective budget of nearly €1,000 per run. To streamline the process (and save my sanity), I wrote a helper script that automatically entered the selected numbers on the lottery’s online platform.
If you’re curious about the code, you can check it out here. It’s not overly complicated:
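In spirit, it boiled down to something like this toy reconstruction (the actual script differed; the draw is modeled here as 5 main numbers of 50 plus 2 Euro numbers of 12):

import random
from collections import Counter

def draw():
    # one simulated Eurojackpot draw
    main = tuple(sorted(random.sample(range(1, 51), 5)))
    euro = tuple(sorted(random.sample(range(1, 13), 2)))
    return main + euro

# the original ambition was 50 billion iterations; a million is plenty for a demo
counts = Counter(draw() for _ in range(1_000_000))

# the "most frequently drawn" combinations, which are really just sampling noise
for combo, seen in counts.most_common(10):
    print(combo, seen)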
In the end, I didn’t win the Eurojackpot (yet 😉). But for a while, I thought I was onto something because I kept winning—kind of. My script wasn’t a groundbreaking success; I was simply winning small amounts frequently because I was playing so many combinations. It gave me the illusion of success, but the truth was far less impressive.
A friend later explained the flaw in my thinking. I had fallen for a common misunderstanding about probability and randomness. Here’s the key takeaway: every possible combination of numbers in a lottery—no matter how “patterned” or “random” it seems—has the exact same chance of being drawn.
For example, the combination 1-2-3-4-5 feels unnatural or “unlikely” because it looks ordered and predictable, while 7-23-41-56-88 appears random. But both have the same probability of being selected in a random draw. The fallacy lies in equating “how random something looks” with “how random it actually is.”
Humans are naturally biased to see patterns and avoid things that don’t look random, even when randomness doesn’t work that way. In a lottery like Eurojackpot, where the numbers are drawn independently, no combination is more or less likely than another. The randomness of the draw is entirely impartial to how we perceive the numbers.
So while my script made me feel like I was gaming the system, all I was really doing was casting a wider net—more tickets meant more chances to win small prizes, but it didn’t change the underlying odds of hitting the jackpot. In the end, the only real lesson I gained was a better understanding of randomness (and a lighter wallet).
This little experiment wasn’t meant to encourage cheating—far from it. It actually began as a casual conversation with a colleague about just how “cheatable” online tests can be. Curiosity got the better of me, and one thing led to another.
If you’ve come across my earlier post, “Get an A on Moodle Without Breaking a Sweat!” you already know that exploring the boundaries of these platforms isn’t exactly new territory for me. I’ve been down this road before, always driven by curiosity and a love for tinkering with systems (not to mention learning how they work from the inside out).
This specific tool, the LinkedIn-Skillbot, is a project I played with a few years ago. While the bot is now three years old and might not be functional anymore, I did test it back in the day using a throwaway LinkedIn account. And yes, it worked like a charm. If you’re curious about the original repository, it was hosted here: https://github.com/Ebazhanov/linkedin-skill-assessments-quizzes. (Just a heads-up: the repo has since moved.)
Important Disclaimer: I do not condone cheating, and this tool was never intended for use in real-world scenarios. It was purely an experiment to explore system vulnerabilities and to understand how online assessments can be gamed. Please, don’t use this as an excuse to cut corners in life. There’s no substitute for honest effort and genuine skill development.
Technologies
This project wouldn’t have been possible without the following tools and platforms:
Python: The backbone of the project. Python’s versatility and extensive library support made it the ideal choice for building the bot. It handled everything from script automation to data parsing. You can learn more about Python at python.org.
Selenium: Selenium was crucial for automating browser interactions. It allowed the bot to navigate LinkedIn, answer quiz questions, and simulate user actions in a seamless way. If you’re interested in web automation, check out Selenium’s documentation here.
LinkedIn (kind of): While LinkedIn itself wasn’t a direct tool, its skill assessment feature was the target of this experiment. This project interacted with LinkedIn’s platform via automated scripts to complete the quizzes.
How it works
To get the LinkedIn-Skillbot up and running, I had to tackle a couple of major challenges. First, I needed to parse the Markdown answers from the assessment-quiz repository. Then, I built a web driver (essentially a scraper) that could navigate LinkedIn without getting blocked—which, as you can imagine, was easier said than done.
Testing was a nightmare. LinkedIn’s blocks kicked in frequently, and I had to endure a lot of waiting periods. Plus, the repository’s answers weren’t a perfect match to LinkedIn’s questions. Minor discrepancies like typos or extra spaces were no big deal for a human, but they threw the bot off completely. For example:
"Is earth round?" ≠ "Is earth round ?"
That one tiny space could break everything. To overcome this, I implemented a fuzzy matching system using Levenshtein Distance.
Levenshtein Distance measures the number of small edits (insertions, deletions, or substitutions) needed to transform one string into another. Here’s a breakdown:
Insertions: Adding a letter.
Deletions: Removing a letter.
Substitutions: Replacing one letter with another.
For example, to turn “kitten” into “sitting”:
Replace “k” with “s” → 1 edit.
Replace “e” with “i” → 1 edit.
Add “g” → 1 edit.
Total edits: 3. So, the Levenshtein Distance is 3.
Using this technique, I was able to identify the closest match for each question or answer in the repository. This eliminated mismatches entirely and ensured the bot performed accurately.
Here’s the code I used to implement this fuzzy matching system:
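What follows is a faithful-in-spirit sketch rather than the exact original: a textbook dynamic-programming Levenshtein distance, plus a helper that picks the closest known question:

def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def closest_match(question: str, known: list) -> str:
    # smallest edit distance wins; ties resolve to the first candidate
    return min(known, key=lambda k: levenshtein(question, k))

print(levenshtein("kitten", "sitting"))  # 3
print(closest_match("Is earth round ?", ["Is earth round?", "Is water wet?"]))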
I also added a failsafe mode that searches for an answer in all documents possible. If it can’t be found, the bot quits the question and lets you answer it manually.
Conclusion
This project was made to show how easy it is to cheat on online tests such as the LinkedIn skill assessments. I am not sure if things have changed in the last 3 years, but back then it was easily possible to finish almost all of them in the top ranks.
I have not pursued cheating on online exams any further, as I found my time better spent on other projects. However, it taught me a lot about fuzzy string matching and, back then, web scraping and getting around bot-detection mechanisms. These are skills that have helped me a lot in my cybersecurity career thus far.
Ah, Moodle quizzes. Love them or hate them, they’re a staple of modern education. Back in the day, when I was a student navigating the endless barrage of quizzes, I created a little trick to make life easier. Now, I’m sharing it with you—meet the Moodle Solver, a simple, cheeky tool that automates quiz-solving with the help of bookmarklets. Let’s dive into the how, the why, and the fine print.
Legally, I am required to clarify that this is purely a joke. I have never used this tool, and neither should you. This content is intended solely for educational and entertainment purposes.
I should note that this code is quite old and would need a lot of tweaking to work again.
What is Moodle Solver?
The Moodle Solver is a set of JavaScript scripts you can save as bookmarklets. These scripts automate the process of taking Moodle quizzes, saving you time, clicks, and maybe a bit of stress.
The basic idea:
Do a random first attempt on a quiz to see the correct answers.
Use the scripts to save those answers.
Automatically fill in the correct answers on the next attempt and ace the quiz.
How It Works
Step 1: Do the Quiz (Badly)
Most Moodle quizzes give you two or more attempts. On the first attempt, go in blind—pick random answers without worrying about the outcome. If you’re feeling adventurous, I even have a script that fills in random answers for you (not included in the repo, but it’s out there).
Why do this? Because Moodle shows you the correct answers on the review page after the first try. That’s where the magic happens.
Step 2: Save the Correct Answers
Once you're on the review page, it's time to run the get_answers_german.js script. This script scans the page, identifies the correct answers, and saves them to your browser's localStorage.
One caveat: The script is written in German (a throwback to my school days), so you might need to modify it for your language. Moodle’s HTML structure might also change over time, but a little tweaking should do the trick.
Step 3: Nail the Second Attempt
When you’re ready for your second attempt, use the set_answers.js script. This script fills in all the correct answers for you. Want to go full automation? Use autosubmit.js to submit the quiz with a randomized timer, so it doesn’t look suspicious. After all, no teacher will believe you aced a 50-question quiz in 4 seconds.
Bonus Features
Got the answers from a friend or Google? No problem. The fallback_total.js script lets you preload question-answer pairs manually. Simply format them like this:
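The exact shape inside fallback_total.js may differ, but conceptually it is just question-answer pairs, along these lines (hypothetical values):

"Is earth round?" : "Yes"
"What does CSS stand for?" : "Cascading Style Sheets"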
Swap out the default questions and answers in the script, save it as a bookmarklet, and you’re good to go.
Why Bookmarklets?
Bookmarklets are incredibly convenient for this kind of task. They let you run JavaScript on any webpage directly from your browser’s bookmarks bar. It’s quick, easy, and doesn’t require you to mess around with browser extensions. It is also really sneaky in class 😈
To turn the Moodle Solver scripts into bookmarklets, use this free tool.
Convert to Bookmarklets: Use the guide linked above to save each script as a bookmarklet in your browser.
Test and Tweak: Depending on your Moodle setup, you might need to adjust the scripts slightly (e.g., to account for language or HTML changes).
The Fine Print
Let’s be real: This script is a bit cheeky. Use it responsibly and with caution. The goal here isn’t to cheat your way through life—it’s to save time on tedious tasks so you can focus on learning the stuff that matters.
That said, automation is a skill in itself. By using this tool, you’re not just “solving Moodle quizzes”—you’re learning how to script, automate, and work smarter.
Wrapping Up
The Moodle Solver is a lighthearted way to make Moodle quizzes less of a hassle. Whether you’re looking to save time, learn automation, or just impress your friends with your tech skills, it’s a handy tool to have in your back pocket.
Good luck out there, and remember: Work smarter, not harder! 🚀