This is a hands-on guide to running the API Stronghold credential proxy in front of OpenClaw. By the end, your OpenClaw agent will use fake API keys that are worthless if extracted, while the proxy transparently injects real credentials on every request.
If you want the security rationale first, read “China’s CNCERT Just Warned About Your AI Agent.” This post is the implementation.
What You’ll Set Up
OpenClaw (Docker, host networking)
  │
  ├── xAI/Grok calls ──→ http://127.0.0.1:8900/xai/*
  ├── Gemini calls ────→ http://127.0.0.1:8900/gemini/*
  │
  └── API Stronghold Proxy (localhost:8900)
        ├── Strips fake auth
        ├── Injects real API key
        ├── Forwards to upstream (api.x.ai, generativelanguage.googleapis.com)
        └── HMAC-signs and logs every call
The agent sees GEMINI_API_KEY=fake-key and XAI_API_KEY=fake-key. If a prompt injection extracts those values, the attacker gets strings that authenticate against nothing.
Prerequisites
- OpenClaw running in Docker (this guide uses docker compose)
- An API Stronghold account (sign up free)
- API keys you want to proxy (we’ll use xAI/Grok and Google Gemini as examples)
Step 1: Install the CLI
curl -fsSL https://www.apistronghold.com/cli/install.sh | sh
Verify:
api-stronghold-cli --version
# api-stronghold-cli version 1.0.7
Step 2: Authenticate
For servers (headless, no browser), create an API user in the API Stronghold dashboard under Settings > API Users, then authenticate with the token:
api-stronghold-cli auth api-user --token <YOUR_API_USER_TOKEN>
For local machines with a browser:
api-stronghold-cli login
Verify authentication:
api-stronghold-cli auth status
Step 3: Add Your API Keys to the Vault
In the API Stronghold dashboard, add each API key your agent uses. For each key, configure the provider settings so the proxy knows how to route and authenticate.
Example: Google Gemini (built-in provider)
| Setting | Value |
|---|---|
| Name | Google Gemini API Key |
| Provider | gemini |
That’s it. The proxy already knows Gemini’s base URL (generativelanguage.googleapis.com), auth mode (query parameter ?key=), and env var hint (GEMINI_API_KEY). No manual provider config needed.
Gemini is unusual — it passes the API key as a ?key= URL parameter instead of a header. The proxy handles this automatically: it strips ?key=fake-key from the inbound request and injects ?key=REAL_KEY before forwarding upstream.
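Conceptually, the query-parameter rewrite looks like this (an illustrative Python sketch, not the proxy’s actual source; the function name is hypothetical):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def rewrite_key_param(url: str, real_key: str) -> str:
    """Swap whatever ?key= value the agent sent for the real key."""
    parts = urlsplit(url)
    query = [(k, real_key if k == "key" else v)
             for k, v in parse_qsl(parts.query)]
    return urlunsplit(parts._replace(query=urlencode(query)))
```

The same function handles the inbound strip and the outbound inject in one step: the fake value is simply replaced before the request leaves the proxy.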
Example: xAI (Grok)
| Setting | Value |
|---|---|
| Name | xAI Grok API Key |
| Provider | xai |
| Base URL | https://api.x.ai |
| Auth Header | Authorization |
| Auth Format | Bearer |
| Env Var Hint | XAI_API_KEY |
Built-in providers (no config needed): OpenAI, Anthropic, Google/Gemini, Cohere, Mistral, Groq, Together, DeepSeek, Perplexity. The proxy has these preconfigured with the correct base URL, auth header, and auth format — just set the provider name when creating the key. Google/Gemini includes query-parameter auth (?key=) support out of the box.
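For header-based providers like xAI, the injection amounts to assembling the configured header from the key’s provider settings. A minimal sketch of the idea (hypothetical helper, not API Stronghold internals):

```python
def build_auth_header(header: str, auth_format: str, real_key: str) -> tuple[str, str]:
    """Build the upstream auth header from a key's provider config.

    An empty auth_format means the key is sent bare (e.g. x-api-key style).
    """
    value = f"{auth_format} {real_key}" if auth_format else real_key
    return header, value

# With the xAI config above: ("Authorization", "Bearer <real key>")
```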
Verify your keys are in the vault:
api-stronghold-cli key list
Step 4: Start the Proxy
api-stronghold-cli proxy start --port 8900 --ttl 86400
You’ll see a startup banner:
=== API Stronghold Proxy ===
Listening: http://127.0.0.1:8900
Session: 6db73c24-7e92-4a02-ac5b-76e473044ad4
Expires: 2026-03-18T18:12:29.701Z
Routes:
  /gemini/* -> https://generativelanguage.googleapis.com (Google Gemini API Key)
  /xai/* -> https://api.x.ai (xAI Grok API Key)
Env var suggestions for agents:
  GEMINI_API_KEY=fake-key
  XAI_API_KEY=fake-key
Health: http://127.0.0.1:8900/health
Press Ctrl+C to stop.
The proxy creates a time-limited session (max 24 hours), decrypts your keys locally, and holds them in memory. On shutdown, it zeros the key material and revokes the session server-side.
Test it:
curl http://127.0.0.1:8900/health
# {"routes":["gemini","xai"],"sessionId":"...","status":"ok","uptime":"5s"}
Step 5: Configure OpenClaw
Two things need to change: the provider base URLs in openclaw.json and the API key environment variables in docker-compose.yml.
Docker Networking
The proxy binds to 127.0.0.1. For the OpenClaw container to reach it, the simplest approach is host networking:
# docker-compose.yml
services:
  openclaw-gateway:
    image: ${OPENCLAW_IMAGE:-openclaw:local}
    network_mode: host  # shares host network, can reach 127.0.0.1:8900
    environment:
      GEMINI_API_KEY: fake-key
      XAI_API_KEY: fake-key
      # ... other env vars unchanged
Remove any ports: section (not needed with host networking — ports bind directly to the host).
Alternative: Bridge networking with --bind (v1.0.7+)
If you prefer Docker bridge networking, start the proxy with --bind 0.0.0.0:
api-stronghold-cli proxy start --port 8900 --ttl 86400 --bind 0.0.0.0
This generates an X-Proxy-Token that all requests must include. You’d need to configure OpenClaw to send that header, which requires a custom proxy wrapper. Host networking is simpler for single-tenant deployments.
Provider Base URLs
Update openclaw.json to point your providers at the proxy. Add or modify the models.providers section:
{
  "models": {
    "providers": {
      "google": {
        "baseUrl": "http://127.0.0.1:8900/gemini",
        "models": []
      },
      "xai": {
        "baseUrl": "http://127.0.0.1:8900/xai/v1",
        "api": "openai-completions",
        "models": [
          {
            "id": "grok-4",
            "name": "Grok 4",
            "reasoning": false,
            "input": ["text"],
            "contextWindow": 131072,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
The base URL pattern is http://127.0.0.1:8900/<route-prefix>. The proxy strips the prefix and forwards the rest of the path to the upstream API.
Important: The route prefix in the URL must match the provider name (or alias) shown in the proxy’s startup banner. If your key was created with provider name generic instead of xai, the route will be /generic/*, and the base URL should be http://127.0.0.1:8900/generic/v1.
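The stripping logic amounts to splitting off the first path segment and looking it up in the route table. A minimal sketch, assuming the two routes from the startup banner (not the proxy’s real routing code):

```python
# Route table as shown in the startup banner
ROUTES = {
    "gemini": "https://generativelanguage.googleapis.com",
    "xai": "https://api.x.ai",
}

def upstream_url(request_path: str) -> str:
    """Map /<route-prefix>/rest/of/path to the upstream equivalent."""
    prefix, _, rest = request_path.lstrip("/").partition("/")
    return f"{ROUTES[prefix]}/{rest}"
```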
Restart OpenClaw
docker compose down && docker compose up -d
Verify the container can reach the proxy:
docker exec openclaw-openclaw-gateway-1 curl -s http://127.0.0.1:8900/health
Step 6: Set Up Auto-Restart (systemd)
The proxy session expires after the TTL (max 24 hours). A systemd user service handles auto-restart:
# ~/.config/systemd/user/api-stronghold-proxy.service
[Unit]
Description=API Stronghold credential proxy for OpenClaw
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
ExecStart=/home/claw/.local/bin/api-stronghold-cli proxy start --port 8900 --ttl 86400
Restart=always
RestartSec=10
Environment=HOME=/home/claw
[Install]
WantedBy=default.target
Enable and start:
systemctl --user daemon-reload
systemctl --user enable api-stronghold-proxy.service
systemctl --user start api-stronghold-proxy.service
When the session expires, the proxy exits, systemd restarts it within 10 seconds, and a fresh session is created automatically. Check status anytime:
systemctl --user status api-stronghold-proxy
journalctl --user -u api-stronghold-proxy --since "1 hour ago"
Step 7: Verify It’s Working
Send a message to your OpenClaw agent that triggers an LLM call through one of the proxied providers. Then check the proxy logs:
journalctl --user -u api-stronghold-proxy --since "5 min ago" --no-pager
You should see entries like:
[18:45:07] xai POST /v1/chat/completions -> 200 (2788ms)
[18:45:08] xai POST /v1/chat/completions -> 200 (779ms)
For the full audit trail with HMAC-signed usage events:
# List active sessions
api-stronghold-cli proxy sessions --status active
# View usage events
api-stronghold-cli proxy usage <session-id>
What’s Protected
With this setup, a prompt-injected agent cannot exfiltrate real credentials because it never has them:
| What the agent sees | What the proxy holds |
|---|---|
GEMINI_API_KEY=fake-key | AIzaSy... (real key, in memory only) |
XAI_API_KEY=fake-key | xai-sMU4... (real key, in memory only) |
http://127.0.0.1:8900/gemini | https://generativelanguage.googleapis.com |
If the agent outputs fake-key in a URL, a message, or a file, the attacker gets nothing useful. The token doesn’t authenticate against any real API. The proxy is the only thing that holds real credentials, and it only listens on localhost.
Proxying Skills That Call APIs Directly
Some OpenClaw skills call APIs directly using their own SDK rather than going through OpenClaw’s provider system. For example, the Nano Banana Pro image generation skill uses the google-genai Python SDK, which constructs its own URLs to generativelanguage.googleapis.com. Setting the provider baseUrl in openclaw.json doesn’t affect these calls.
To route these through the proxy, you need two things:
1. Modify the skill script to accept a custom endpoint
The google-genai SDK supports a base_url option in http_options. Add a few lines to check for a GEMINI_API_ENDPOINT environment variable:
# In generate_image.py, replace the client initialization:
# Before (direct to Google):
client = genai.Client(api_key=api_key)
# After (proxy-aware):
api_endpoint = os.environ.get("GEMINI_API_ENDPOINT")
if api_endpoint:
    client = genai.Client(
        api_key=api_key,
        http_options={"base_url": api_endpoint},
    )
else:
    client = genai.Client(api_key=api_key)
If the skill is bundled with OpenClaw, place the modified script in your workspace directory to override it:
~/.openclaw/workspace/skills/nano-banana-pro/scripts/generate_image.py
The workspace copy takes priority over the bundled version.
2. Set the endpoint in skill env config
In openclaw.json, configure the skill’s environment variables to use the proxy:
{
  "skills": {
    "entries": {
      "nano-banana-pro": {
        "env": {
          "GEMINI_API_KEY": "fake-key",
          "GEMINI_API_ENDPOINT": "http://127.0.0.1:8900/gemini"
        }
      }
    }
  }
}
The skill now sends all requests to the proxy with fake-key. The proxy strips it, injects the real key, and forwards to Google. You can verify it’s working in the proxy logs:
[19:07:35] gemini POST /v1beta/models/gemini-3-pro-image-preview:generateContent -> 200 (20116ms)
This pattern works for any skill that calls an API via an SDK with configurable endpoints. The general approach:
- Find where the SDK constructs its HTTP client
- Add an env var to override the base URL
- Set the env var to point at the proxy route
- Set the API key env var to fake-key
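As a reusable template, the steps above boil down to a few lines you can drop into any skill script (the function name and env var names here are placeholders for whatever the skill’s SDK expects):

```python
import os

def proxy_client_kwargs(key_var: str, endpoint_var: str) -> dict:
    """Build SDK client kwargs, routing through the proxy when the
    endpoint override env var is set."""
    kwargs = {"api_key": os.environ.get(key_var, "fake-key")}
    endpoint = os.environ.get(endpoint_var)
    if endpoint:
        kwargs["base_url"] = endpoint  # point the SDK at the local proxy
    return kwargs

# e.g. SomeClient(**proxy_client_kwargs("GEMINI_API_KEY", "GEMINI_API_ENDPOINT"))
```

When the endpoint var is unset, the SDK falls back to its default base URL, so the same script works with and without the proxy.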
Adding More Providers
To proxy additional providers (Anthropic, OpenAI, etc.):
- Add the key to the API Stronghold vault with the correct provider config
- Restart the proxy (systemctl --user restart api-stronghold-proxy)
- Update the provider’s baseUrl in openclaw.json to point at the new route
- Set the corresponding env var to fake-key in docker-compose.yml
- Restart OpenClaw
The proxy’s startup banner shows all active routes — use those to set the correct base URLs.
Troubleshooting
Proxy not reachable from container:
Check that the gateway container uses network_mode: host. With bridge networking, the container can’t reach 127.0.0.1 on the host.
“Config invalid: models.providers.google.models” error:
When adding a new provider to openclaw.json, include "models": [] even if you’re not defining custom models. OpenClaw validates this field.
Route shows as /generic/* instead of /xai/*:
The route prefix comes from the provider name set when the key was created in the vault. Update the provider name in the dashboard, or use the /generic prefix in your base URL.
Session expired, agent getting errors:
The systemd service auto-restarts the proxy. Check systemctl --user status api-stronghold-proxy to verify it recovered. If the service isn’t enabled, run systemctl --user enable api-stronghold-proxy.
API Stronghold provides zero-knowledge encrypted credential storage with session-based proxy access for AI agents. Get started free or read the docs.