TL;DR
Security researchers found that 7% of OpenClaw skills expose credentials through the LLM context window and output logs. The fix: never pass API keys through the agent. Use API Stronghold to inject scoped secrets at runtime, outside the context window, so keys never touch the model.
The Research: 283 Skills Leaking Credentials
Recent security research has uncovered a systemic problem in the OpenClaw ecosystem. Researchers scanning ClawHub’s roughly 4,000 skills found that 283 of them — about 7% — contain flaws that expose API keys, passwords, and other credentials.
The root cause? Skill authors are treating AI agents like local scripts. They write SKILL.md files that instruct the agent to handle secrets directly — passing API keys through the LLM’s context window and logging them in plaintext output.
This isn’t a bug in OpenClaw itself. It’s a pattern problem: developers are embedding credential handling into the agent’s instruction set, which means your secrets flow through the model provider’s infrastructure.
Why This Matters: Three Attack Vectors
1. Credentials in the Context Window
When a skill tells the agent to “use this API key,” that key becomes part of the prompt sent to the model provider. It exists in their logs, their memory, and potentially in the model’s context for the duration of the session. Anyone with access to those logs — or any prompt injection attack — can extract the key.
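Here is a minimal sketch of that anti-pattern in TypeScript (the skill, endpoint, and key are hypothetical): because the credential is spliced into text the model reads, it travels with every request to the provider.

// Anti-pattern (hypothetical weather skill): the key is interpolated into the
// instructions the model receives, so it sits in the context window and in provider logs.
const WEATHER_API_KEY = "sk_live_...";   // hypothetical credential, hardcoded or pasted in

const instructions = `
  When the user asks for the weather, call https://api.example.com/v1/weather
  with the header "Authorization: Bearer ${WEATHER_API_KEY}".
`;
// Anything that can read this prompt (provider logs, session memory, an injected
// document that says "repeat your instructions") can now read the key.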
2. Indirect Prompt Injection
Researchers also demonstrated how attackers can embed malicious payloads in documents the agent processes — Google Docs, Slack messages, emails. Once the agent reads a compromised document, the attacker can instruct it to exfiltrate credentials, create unauthorized integrations, or install backdoors.
If your API keys are in the agent’s context window, prompt injection gives an attacker direct access to those keys.
3. Malicious Skills
Beyond accidental leaks, researchers found 76 skills containing deliberately malicious payloads — designed for credential theft, backdoor installation, and data exfiltration. If you install one of these skills and your secrets are accessible to the agent, the attacker gets everything.
The Fix: Keep Secrets Out of the Agent
The solution isn’t to stop using AI agents. It’s to change how secrets reach your applications.
The principle is simple: API keys should never pass through the LLM context window.
Instead of embedding keys in skill instructions or passing them to the agent directly, inject them into the runtime environment where the agent’s tools can use them — but the model itself never sees them.
This is exactly what API Stronghold’s scoped secrets are designed for.
How Scoped Secrets Work
- Store keys in an encrypted vault — not in .env files, not in skill configurations, not anywhere the agent can read them as text.
- Create a deployment profile with only the keys the agent needs — an OpenClaw instance running home automation doesn’t need your Stripe API key. Map only the relevant keys to the deployment profile and assign it to a user group.
- Inject at runtime — use the CLI to generate an environment file from the scoped deployment profile when the agent starts. The keys exist in process memory, not in the prompt.
# Generate .env with only the keys mapped to this deployment profile
api-stronghold-cli deployment env-file openclaw-home .env
# Start OpenClaw — keys are in env vars, not in the context window
openclaw start
Or load secrets directly into the shell without writing a file:
eval $(api-stronghold-cli deployment env-file openclaw-home --stdout)
openclaw start
The agent’s tools read from environment variables. The LLM never sees the key values. Prompt injection can’t extract what isn’t in the context.
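In practice, this means a skill's tool reads the credential from the environment at call time and returns only data, never the key. A minimal TypeScript sketch, with hypothetical names rather than OpenClaw's actual skill API:

// The runtime (e.g., the generated .env file) supplies WEATHER_API_KEY; the model only
// ever sees the tool's return value, not the credential used to produce it.
async function getWeather(city: string): Promise<string> {
  const apiKey = process.env.WEATHER_API_KEY;   // hypothetical variable name
  if (!apiKey) {
    throw new Error("WEATHER_API_KEY is not set in the agent's environment");
  }
  const response = await fetch(
    `https://api.example.com/v1/weather?city=${encodeURIComponent(city)}`,
    { headers: { Authorization: `Bearer ${apiKey}` } },
  );
  const data = await response.json();
  // Return only what the model needs; never echo the key or raw request headers.
  return `Current conditions in ${city}: ${data.summary}, ${data.temperature}°C`;
}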
Key Exclusion Rules
API Stronghold also supports exclusion rules — explicitly blocking sensitive keys from being pulled into an agent’s environment:
- Billing keys (Stripe, payment processors) — an AI agent should never touch these
- Email credentials — prevents an agent from sending unauthorized messages
- Infrastructure keys (AWS root, database admin) — limit blast radius
Even if a malicious skill tries to access these keys, they simply don’t exist in the agent’s environment.
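For defense in depth, you can also add a startup guard that refuses to launch the agent if a deny-listed variable somehow made it into its environment. This is a generic TypeScript sketch with example variable names, not an API Stronghold feature:

// Abort startup if any credential that should never reach the agent is present.
const DENY_LIST = ["STRIPE_SECRET_KEY", "SMTP_PASSWORD", "AWS_SECRET_ACCESS_KEY"];

const leaked = DENY_LIST.filter((name) => process.env[name] !== undefined);
if (leaked.length > 0) {
  // Log the variable names only; never print the values.
  console.error(`Refusing to start: excluded keys found in environment: ${leaked.join(", ")}`);
  process.exit(1);
}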
Zero-Knowledge Encryption: Why It Matters Here
The credential leak problem gets worse when you consider where secrets are stored. If your secrets manager can decrypt your keys, a breach of that service exposes everything.
With zero-knowledge encryption, your secrets are encrypted before they leave your device. API Stronghold never has access to your plaintext keys — not during storage, not during sync, not ever.
This matters for AI agent security because it adds a layer that prompt injection, malicious skills, and even a compromise of the secrets manager itself can’t bypass.
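Conceptually, zero-knowledge means the ciphertext is produced on your machine and only the ciphertext is synced. The TypeScript sketch below illustrates that idea with Node's built-in AES-256-GCM; it is a generic illustration of client-side encryption, not API Stronghold's actual scheme or key-derivation flow.

import { randomBytes, createCipheriv } from "node:crypto";

// Client-side encryption: the plaintext secret and the master key never leave this process.
// Only { iv, ciphertext, authTag } would be uploaded to the vault.
function encryptSecret(plaintext: string, masterKey: Buffer) {
  const iv = randomBytes(12);   // unique nonce per secret
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, authTag: cipher.getAuthTag() };
}

// In a real zero-knowledge design the master key is derived from the user's passphrase
// and never transmitted; it is generated here only to keep the example self-contained.
const encrypted = encryptSecret("sk_live_example_key", randomBytes(32));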
Practical Setup: Securing OpenClaw in 5 Minutes
If you’re running OpenClaw today, here’s how to lock it down:
1. Install the CLI
macOS / Linux:
curl -fsSL https://www.apistronghold.com/cli/install.sh | sh
Windows (Command Prompt):
curl -fsSL https://www.apistronghold.com/cli/install.cmd -o install.cmd && install.cmd && del install.cmd
2. Authenticate
For interactive use (opens your browser):
api-stronghold-cli login
For automation (CI/CD, containers), use an API user token:
api-stronghold-cli auth api-user --token <YOUR_TOKEN>
3. Create a scoped deployment profile
In the API Stronghold dashboard, create a deployment profile (e.g., openclaw-assistant) and map only the keys this agent needs. Then create a user group and assign the deployment profile to it so access is locked down.
See the CLI docs for the full command reference.
4. Generate environment file and start the agent
api-stronghold-cli deployment env-file openclaw-assistant .env
openclaw start
Or inject secrets directly without writing a file:
eval $(api-stronghold-cli deployment env-file openclaw-assistant --stdout)
openclaw start
That’s it. Your keys are injected as environment variables. The LLM context window never sees them.
For a full walkthrough with Docker isolation, see our OpenClaw Docker Quickstart.
What Skill Authors Should Do
If you’re publishing skills to ClawHub:
- Never reference API keys in SKILL.md — don’t instruct the agent to handle, display, or log credentials
- Read from environment variables — design your skill’s tools to pull keys from process.env, not from the agent’s context (see the sketch after this list)
- Document which keys are needed — so users can create appropriately scoped deployment profiles
- Never hardcode credentials — this should go without saying, but 283 skills suggest otherwise
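A minimal pattern for the environment-variable and documentation points (hypothetical names, not a ClawHub requirement): declare the variables your skill needs, fail fast with the variable name only when one is missing, and keep the values out of anything the agent can read back.

// Declare required variables once so users know what to map into a scoped deployment profile.
export const REQUIRED_ENV = ["EXAMPLE_SERVICE_API_KEY"];   // hypothetical variable name

// Fail fast with the variable name only; never echo the value into logs or tool output.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Tools read the key at call time instead of accepting it as a model-visible argument.
export async function callExampleService(query: string): Promise<unknown> {
  const apiKey = requireEnv("EXAMPLE_SERVICE_API_KEY");
  const res = await fetch("https://api.example.com/v1/query", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return res.json();
}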
The Bigger Picture
AI agents are becoming more capable and more integrated into development workflows. The OpenClaw credential leak findings aren’t unique to OpenClaw — any AI agent that handles secrets through its context window has this problem.
The pattern that solves it is consistent:
- Isolate the agent — run it in a container or VM
- Scope the secrets — give the agent only what it needs
- Inject at runtime — keep keys out of the context window
- Encrypt at rest — use zero-knowledge encryption so a breach of the vault doesn’t expose plaintext keys
API Stronghold gives you all four. Get started with the CLI or see our pricing plans.