21,000 OpenClaw instances. Exposed gateway tokens. Two weeks. That’s what security researchers just found scanning the internet for misconfigured AI agents. And if your OpenClaw setup is one of them, every API key your agent has access to is sitting there for anyone to grab.
TL;DR
Thousands of OpenClaw deployments are running with gateway tokens exposed to the public internet. An exposed token means full access to everything the agent can do, including any API keys loaded in its environment. This post covers what happened, how to check if you’re affected, and how scoped secrets prevent the worst-case scenario even when things go wrong.
What Happened
The Hacker News reported this week on a wave of OpenClaw security issues: remote code execution vulnerabilities, leaked tokens, and thousands of instances reachable from the open internet. The headline number: 21,000 exposed instances discovered in just 14 days.
The core problem isn’t an OpenClaw bug. It’s configuration.
OpenClaw’s gateway uses a token for authentication. By default, it binds to localhost. But plenty of people change that: they want to access their agent from a phone, another machine, or through a reverse proxy. They bind to 0.0.0.0, open a port, and forget that the gateway token is now the only thing standing between the internet and their AI agent.
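A hedged sketch of what that risky pattern tends to look like in `docker-compose.yml` (the service layout is illustrative; the variable name matches the one used later in this post):

```yaml
# The risky pattern (illustrative): gateway listening on all interfaces,
# with the port published to every network the host can reach
environment:
  OPENCLAW_GATEWAY_BIND: 0.0.0.0
ports:
  - "18789:18789"   # no 127.0.0.1 prefix, so the port is public
```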
That token is often:
- Hardcoded in a `docker-compose.yml` file
- Set to something simple during initial setup
- Visible in the Control UI URL: `http://your-server:18789/?token=YOUR_TOKEN`
- Never rotated
If someone finds your instance (and Shodan makes that trivial), they get:
- Full control of your AI agent
- Access to every API key in the agent’s environment
- The ability to execute arbitrary commands in the sandbox
- Your Anthropic/OpenAI credits
This Isn’t Just an OpenClaw Problem
The same week, CrowdStrike’s 2026 Global Threat Report dropped a stat that should make every developer uncomfortable: 82% of attacks in 2025 were malware-free. Attackers aren’t writing viruses anymore. They’re stealing credentials and logging in.
Average breakout time, from initial access to lateral movement, is now 29 minutes. One intrusion at a law firm went from first access to data exfiltration in 4 minutes.
AI agents make this worse. They’re credential-rich targets that often run 24/7 with broad permissions. An exposed OpenClaw instance isn’t just a misconfigured server; it’s a loaded gun with API keys for every service the agent connects to.
How to Check If You’re Exposed
Takes about 30 seconds:
1. Check what your gateway is bound to:

```bash
# If you see 0.0.0.0, your gateway is listening on all interfaces
docker logs openclaw-gateway 2>&1 | grep "listening on"
```
2. Check if it’s reachable from outside:

```bash
# From a different machine or your phone's mobile connection
curl -s http://YOUR_SERVER_IP:18789/health
```
If that returns anything, your gateway is public.
3. Check your gateway token isn’t weak:
Look at your `docker-compose.yml` or `.env` file. If your `OPENCLAW_GATEWAY_TOKEN` is something like `test`, `changeme`, or `openclaw`, change it now.
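If you’d rather automate that check, here’s a minimal sketch. The weak-word list and the demo file path are illustrative, not exhaustive:

```bash
# Sketch: flag obviously weak gateway tokens in an env file.
check_token() {
  local file="$1"
  if grep -qE '^OPENCLAW_GATEWAY_TOKEN=(test|changeme|openclaw|password)$' "$file"; then
    echo "WEAK: rotate your gateway token"
  else
    echo "OK: token is not on the weak list"
  fi
}

# Demo with a deliberately bad token
printf 'OPENCLAW_GATEWAY_TOKEN=changeme\n' > /tmp/demo.env
check_token /tmp/demo.env   # prints: WEAK: rotate your gateway token
```

Extend the word list with anything your team has actually used as a placeholder.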
The Fix: Defense in Depth
Closing the port is step one. But it’s not enough. The real question is: what happens when (not if) something goes wrong?
Lock Down the Gateway
```yaml
# docker-compose.yml: bind the gateway to localhost only
environment:
  OPENCLAW_GATEWAY_BIND: localhost
ports:
  - "127.0.0.1:18789:18789"  # only accessible from the host
```
If you need remote access, use a VPN or SSH tunnel, not an open port.
Use Strong, Rotated Tokens
Generate a real token:
```bash
openssl rand -hex 32
```

Put it in your `.env`, not inline in `docker-compose.yml`, and rotate it periodically.
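If you want rotation to be a one-liner, here’s a small sketch. It assumes the token lives in `.env` under `OPENCLAW_GATEWAY_TOKEN` (adjust to your setup) and uses GNU `sed`:

```bash
# Sketch: rotate the gateway token in place.
rotate_token() {
  local envfile="$1" token
  token=$(openssl rand -hex 32)
  if grep -q '^OPENCLAW_GATEWAY_TOKEN=' "$envfile" 2>/dev/null; then
    # Replace the existing line with a fresh token
    sed -i "s/^OPENCLAW_GATEWAY_TOKEN=.*/OPENCLAW_GATEWAY_TOKEN=${token}/" "$envfile"
  else
    # No token yet: append one (also creates the file if missing)
    echo "OPENCLAW_GATEWAY_TOKEN=${token}" >> "$envfile"
  fi
}

rotate_token .env
# Restart the gateway afterwards so it picks up the new token, e.g.:
# docker compose up -d openclaw-gateway
```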
Limit the Blast Radius with Scoped Secrets
Here’s where most setups fail. Even if you lock down the gateway perfectly, you’re one misconfiguration away from exposure. The question becomes: when something breaks, how bad is it?
If your agent has a flat .env file with every API key you own, your OpenAI key, your Stripe key, your email credentials, your billing API, then any breach means total compromise. Everything leaks.
Scoped secrets flip this model. Instead of giving the agent everything:
- Store keys in API Stronghold with zero-knowledge encryption; the server can’t read them even if it’s breached
- Create a deployment profile that maps only the keys the agent needs
- Set key exclusions at the group level: email keys, billing keys, anything sensitive gets blocked server-side
- Inject at runtime with the CLI, so no `.env` files ever land on disk
```bash
# Agent startup: only gets the 3 keys it's authorized for
eval $(api-stronghold-cli deployment env-file production --stdout)
# EMAIL_API_KEY, BILLING_KEY, etc. are never transmitted
```
If someone compromises the agent, they get the scoped keys, not your entire credential set. The excluded keys never touched the agent’s environment in the first place.
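To make the model concrete, here’s a plain-shell sketch of the scoping idea. The here-doc stands in for the vault fetch and the key names are illustrative; this is not API Stronghold’s actual mechanism, just the shape of it:

```bash
# Only allowlisted keys ever reach the agent's environment.
ALLOWED="OPENAI_API_KEY STRIPE_KEY"

while IFS='=' read -r key val; do
  case " $ALLOWED " in
    *" $key "*) export "$key=$val" ;;  # in the profile: inject
    *) : ;;                            # excluded: never enters the env
  esac
done <<'EOF'
OPENAI_API_KEY=sk-demo
STRIPE_KEY=sk_live_demo
EMAIL_API_KEY=this-never-reaches-the-agent
EOF

echo "${OPENAI_API_KEY:-unset} / ${EMAIL_API_KEY:-unset}"   # prints: sk-demo / unset
```

The point is structural: the excluded key isn’t hidden or masked, it simply never exists in the process environment an attacker could dump.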
Monitor What Your Agent Accesses
Every key fetch through API Stronghold is logged: who accessed what, when, from where. If an unauthorized fetch shows up in the audit log, you know immediately.
Compare that to a `.env` file, where you have zero visibility into whether the credentials were read, copied, or exfiltrated.
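As a sketch of the kind of check an audit log enables, suppose the log is JSON lines with a key name, source IP, and timestamp (a hypothetical format; API Stronghold’s real output may differ):

```bash
# Hypothetical audit log: one JSON object per key fetch
cat > /tmp/audit.log <<'EOF'
{"key":"OPENAI_API_KEY","ip":"10.0.0.5","ts":"2026-02-01T10:00:00Z"}
{"key":"STRIPE_KEY","ip":"203.0.113.9","ts":"2026-02-01T10:05:00Z"}
EOF

# Assume 10.0.0.0/8 is your internal range; anything else deserves a look
grep -v '"ip":"10\.' /tmp/audit.log
```

A fetch from an unfamiliar address is exactly the early-warning signal a flat `.env` file can never give you.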
The Bigger Picture: AI Agents Are the New Attack Surface
This isn’t going away. AI agents are getting more capable and more connected every month. Kali Linux just integrated Claude AI through MCP for natural-language command execution. Crypto trading bots are managing exchange API keys worth millions.
The pattern from CrowdStrike’s report is clear: attackers follow the credentials. And right now, AI agents are where the credentials are, running 24/7, often with minimal security, and frequently with more access than they need.
The 21,000 exposed instances are the ones researchers found. The ones running with weak tokens behind a NAT? Those get found too. It just takes longer.
What to Do Right Now
If you’re running OpenClaw (or any AI agent with API keys):
- Check your gateway binding: `localhost` only, or behind a VPN
- Rotate your gateway token: use `openssl rand -hex 32`
- Audit your agent’s API keys: does it really need all of them?
- Scope your secrets: use API Stronghold’s CLI to inject only what’s needed at runtime
- Kill the `.env` file: secrets in memory only, fetched from an encrypted vault
The exposed instances prove that even security-conscious developers make configuration mistakes. Scoped secrets ensure those mistakes don’t turn into total compromise.
Lock down your agent’s API keys →
📚 Related Reading
- Securing OpenClaw: How to Give Your AI Agent Only the API Keys It Needs – a full walkthrough of scoped secrets with Docker isolation
- OpenClaw’s Credential Leak Problem: Keeping Keys Out of the LLM Context – the other leak vector: credentials in the AI’s context window
- Securing Crypto AI Agents – when leaked keys mean drained wallets
- The Silent Killer of Developer Productivity – how credential-sharing habits create these risks in the first place
- Zero-Knowledge Encryption for Enterprise – why your vault provider shouldn’t be able to read your keys
Running an AI agent with API keys? Start with scoped secrets: it takes 30 minutes to set up and protects you when everything else fails.