You’ve probably seen it by now. A friend shares a screenshot of their AI assistant automatically replying to emails, scheduling dinner reservations, summarizing their WhatsApp group chats. “It runs on my own computer,” they say. “No subscriptions. No cloud. Total privacy.”
The app is Moltbot. And it’s everywhere.
100,000+ GitHub stars in under a month. Discord community growing by thousands daily. Tech blogs calling it “the future of personal AI.”
But here’s what those excited screenshots don’t show you: security researchers just found over 1,800 Moltbot installations wide open on the internet, leaking private conversations, API keys, and login credentials to anyone who knew where to look.
This isn’t a hypothetical risk. It’s already happening.
What Moltbot Does (And Why People Love It)
Quick recap if you missed the hype. Moltbot is a free, open-source AI assistant you install on your own computer — a Mac Mini, a Linux box, whatever. It connects to your messaging apps (WhatsApp, Slack, Telegram, Signal, iMessage, Teams, and more) and acts as your personal AI agent across all of them.
Not just a chatbot. An agent. It reads your emails, manages your calendar, responds to messages on your behalf, runs browser tasks, and works 24/7 without you touching it. One guy had it negotiate a car purchase. Another catalogued his entire wine collection.
Sounds amazing. And honestly, the technology is impressive.
The problem is everything that makes it useful also makes it dangerous.
1,862 Open Doors
On January 25, 2026, two security researchers — Luis Catacora and Jamieson O’Reilly — ran a scan of the internet looking for exposed Moltbot installations.
They found roughly 1,009 admin dashboards sitting wide open. No password. No authentication. Just… there.
The next day, security firm Knostic did a follow-up sweep. The number had jumped to 1,862.
Let that sink in. Nearly two thousand Moltbot instances — each connected to someone’s personal email, messaging apps, and files — accessible to any stranger on the internet.
And this wasn’t just passive exposure. These admin panels weren’t read-only. An attacker with access could:
- Read every private conversation the AI had processed
- Steal API keys and OAuth tokens stored in plain text
- Send messages as the user through connected platforms
- Execute shell commands on the host machine
- Access linked Signal accounts, including device-linking QR codes
One exposed instance had a Signal encrypted messenger account with full read access. So much for end-to-end encryption when your AI assistant leaves the front door open.
Your Credentials, Stored in a Text File
Here’s a detail that made security professionals wince: Moltbot stores your secrets — API keys, passwords, OAuth tokens — in plain-text Markdown and JSON files on your computer.
Not encrypted. Not in a secure vault. Just sitting in ~/.clawdbot/ (the old folder name) as readable text.
If that sounds bad, it gets worse. The malware research community is already paying attention. Hudson Rock warned that popular info-stealing malware like RedLine, Lumma, and Vidar — programs that silently sweep your computer for passwords and credentials — will soon adapt to specifically target Moltbot’s storage folders.
Think about what’s in there. Your email credentials. Your Slack tokens. Your calendar access. Your messaging app sessions. All in one folder. All in plain text.
It’s a buffet for hackers.
The 5-Minute Hack
Matvey Kukuy, CEO of Archestra AI, wanted to show how easy it was to exploit Moltbot via prompt injection — a trick where you embed hidden instructions in content the AI processes.
He sent an email to a Moltbot instance. The email contained hidden text that instructed the AI to extract the user’s private key and send it back.
It worked. In five minutes.
No hacking tools. No technical skills beyond knowing how to write an email. Just a message with the right hidden instructions, sent to someone whose AI assistant automatically reads incoming mail.
This is the prompt injection problem that every AI company is wrestling with. OpenAI has said it “may never be fully solved.” Anthropic calls it “far from a solved problem.” But most AI assistants run in a browser tab where the damage is limited. Moltbot runs on your actual computer with access to your actual files, accounts, and messaging apps.
The blast radius is completely different.
Poisoned Skills: The App Store Nobody’s Watching
Moltbot has a “skills” marketplace called ClawdHub (now MoltHub) where the community shares add-on capabilities. Want your AI to track expenses? There’s a skill for that. Want it to manage your social media? Skill for that too.
Security researcher Jamieson O’Reilly decided to test how safe this marketplace actually is.
He created a malicious skill — basically a small program with hidden commands. He uploaded it to ClawdHub, then artificially inflated the download count to make it look popular.
Within eight hours, 16 developers in seven countries had downloaded and installed it.
His proof-of-concept could have executed commands on every one of those machines. File access, data theft, installing backdoors — the full menu.
Cisco’s threat research team later tested another skill called “What Would Elon Do?” and found nine security issues in that single skill, including two rated critical. The skill was silently running curl commands to send data to external servers.
Nobody’s reviewing these skills before they go live. There’s no approval process. No security scanning. It’s like an app store with no guards.
The Fake Extension That Installed Spyware
On January 27, 2026, security firm Aikido flagged a VS Code extension called “ClawdBot Agent.” It looked professional. Polished UI. It even worked — connecting to seven different AI providers.
It also silently installed a remote access tool called ScreenConnect on your machine.
The extension activated the moment VS Code started. It downloaded hidden files, staged them in your temp folder, and set up a connection to an attacker’s server. The ScreenConnect binary itself was legitimately signed, making it nearly invisible to antivirus software.
The Moltbot team never made an official VS Code extension. This was pure impersonation — attackers betting that developers excited about the viral tool would install without thinking twice.
Microsoft pulled it from the marketplace. But the damage window was real.
The Enterprise Problem Nobody Saw Coming
Token Security, a credential management firm, dropped a stat that should worry every IT department: 22% of their enterprise customers have employees actively using Moltbot.
Without IT approval. Without security review. Without anyone knowing.
It’s shadow IT on steroids. An employee installs Moltbot on their work laptop, connects their corporate Slack, their work email, their Google Calendar. Now there’s an unsanctioned AI agent with access to company data, running on consumer-grade security, storing credentials in plain text.
Cisco’s security team called it “an absolute nightmare” and they weren’t being dramatic. Moltbot can read and write files, run shell commands, and execute scripts. Connected to corporate systems, a compromised instance doesn’t just leak one person’s data — it becomes a door into the entire organization.
And there’s no way for IT to even detect it. Traditional endpoint monitoring, data loss prevention tools, network proxies — none of them are built to watch for an AI agent quietly exfiltrating data through messaging apps.
A National Security Warning
This one got my attention.
On January 29, 2026 — yesterday — Legion Intelligence CEO Ben Van Roo published an open letter to the national security community about personal AI assistants like Moltbot.
His argument: a service member who connects their personal email, Signal account, and location data to Moltbot creates “a single compromised endpoint” that a foreign intelligence service could exploit.
He described the typical adoption pattern: “Calendar on day one, email on day two, messages on day three… just everything by day eight.”
His recommendation: the Department of Defense should immediately prohibit personnel from connecting government accounts to personal AI assistants. Counterintelligence training needs to cover the risks.
His most cutting line: Moltbot “undermines years of OPSEC training — defeated by convenience.”
So Is Moltbot Just… Bad?
No. That’s too simple.
The technology behind Moltbot is genuinely impressive. The idea of a personal AI agent that works across all your messaging platforms, runs on your own hardware, and costs nothing is compelling. The project will likely fix many of these issues — the proxy misconfiguration that exposed those 1,862 instances has already been patched.
But the pattern is the problem, not just the product.
We’re going to see more tools like Moltbot. More personal AI agents that want access to your email, your messages, your files, your accounts. Every one of them will promise privacy and control. And every one of them will be exactly as secure as the person setting it up.
Most people aren’t security engineers. Most people don’t know what port 18789 is, let alone that they should firewall it. Most people will install a cool-looking skill from ClawdHub without reading the source code.
That’s not their fault. But it is the reality.
What You Should Actually Do
If you’re already using Moltbot:
- Don’t expose it to the internet. Use Tailscale or a VPN for remote access
- Enable Docker sandboxing. Don’t let the AI run with full system access
- Check your
~/.moltbot/folder. See what credentials are stored in plain text - Review every skill before installing. They’re just code, and nobody’s vetting them
- Rotate your API keys. Assume they may have been exposed
If you’re thinking about trying it:
- Wait for the security model to mature
- Don’t connect work accounts to personal AI tools — ever
- Ask yourself whether the convenience is worth the attack surface
If you’re in IT or security:
- Scan your network for exposed Moltbot instances (port 18789)
- Add Moltbot to your shadow IT monitoring
- Brief your team on the risks of connecting corporate accounts to personal AI agents
The Real Lesson
The AI agent future is coming whether we’re ready or not. Moltbot is just the first mainstream example. There will be more — from startups, from big tech companies, from open-source communities.
Every one of them will face the same tension: the more an AI agent can do for you, the more damage it can do if it’s compromised. The more accounts it connects to, the bigger the target. The more convenient the setup, the less likely people are to configure it securely.
We spent decades learning not to click suspicious email links. Now we need to learn not to hand an AI agent the keys to our entire digital lives without understanding what we’re risking.
The lobster’s new shell is still soft.
Protect Your AI Workflow
Want to use AI safely without self-hosting risks? These skills work in your browser with no installation:
- Security Review Checklist — Audit any setup for vulnerabilities
- Web App Security Audit — Check your web-facing tools
- Professional Email Writer — AI email without the attack surface
- System Prompt Architect — Build safe, sandboxed AI workflows
Or browse all security skills for more tools.