You spin up an AI coding agent. It asks for your OpenAI key. You paste it in.
That key now lives in the agent’s memory, its config files, its logs, and potentially every sub-agent it delegates to. A prompt injection can extract it. A rogue MCP server can exfiltrate it. Verbose logging pipelines can ship it to a third-party observability platform before you even notice.
You gave the agent a real credential, and now you have zero control over where it goes.
Why Agent Credential Management Is Broken
AI agents need real credentials to call external APIs. That’s not negotiable. An agent that can’t authenticate to OpenAI, Anthropic, or your internal services can’t do its job.
The problem is how we give them those credentials. The current playbook looks like this:
- Paste the key into a .env file
- Set it as an environment variable
- Hard-code it in the agent's config
Every one of these approaches puts plaintext keys inside the agent’s runtime. The agent process can read them. Anything the agent process touches can read them.
The attack surfaces are real and documented:
Prompt injection → tool call exfiltration. An attacker embeds instructions in content the agent processes. The agent, following those instructions, calls an HTTP tool to send your API key to an external endpoint. We covered how this works in detail in our post on prompt injection attacks.
MCP server compromise. A malicious or compromised MCP server can read environment variables from its host process, including every API key you loaded for the session.
Agent memory and context leaks. Keys pasted into agent config persist in memory. Some agents serialize their context to disk for resumption. Some forward their full context to sub-agents. Your key travels with every hop.
Verbose logging. Many agent frameworks log HTTP requests by default, including headers. Your Authorization: Bearer sk-... shows up in log files, observability dashboards, and error tracking services.
The common thread: if the agent can see the key, anything that compromises the agent can steal the key.
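To see how little protection an environment variable offers, consider that any code running inside the agent's process can read it, including a tool call the agent was tricked into making. A minimal sketch (the key name and value are illustrative):

```python
import os

# Keys are typically loaded into the agent's environment like this.
os.environ["OPENAI_API_KEY"] = "sk-real-secret"

# Any code executing inside the same process can read it back.
# A prompt-injected tool that runs this has your key; one HTTP
# call later, so does the attacker.
leaked = os.environ.get("OPENAI_API_KEY")
print(leaked)  # -> sk-real-secret: nothing stops the read
```

There's no permission boundary here: the environment is process-wide, so every library, plugin, and tool the agent loads shares the same view of it.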
The Phantom Key Pattern
What if the agent never sees the real key at all?
The concept is simple. Instead of giving the agent your actual API key, you give it two things:
- A fake key (literally any string; fake-key works)
- A localhost base URL pointing to a local reverse proxy
The agent makes its normal API calls, thinking it’s talking to OpenAI or Anthropic. But those requests hit your local proxy first. The proxy strips the fake auth header, injects the real API key, and forwards the request upstream to the actual provider.
Agent (fake-key) → localhost:8900/openai/v1/chat/completions
↓
Proxy (inject real key)
↓
api.openai.com/v1/chat/completions
The agent never sees, stores, or transmits the real credential. It can’t leak what it doesn’t have.
The critical insight: the proxy runs on your machine, not in the agent’s sandbox. The agent has no way to read the proxy’s memory or intercept the real key injection. Even a fully compromised agent (prompt-injected, memory-dumped, logs exfiltrated) can only reveal the fake key, which is worthless.
5 Minutes to Zero-Trust AI Agents
Here’s the setup, start to finish.
Step 1: Store your keys in API Stronghold
Your API keys are encrypted client-side with AES-256-GCM before they leave your machine. The server never sees plaintext.
# Install the CLI
curl -fsSL https://www.apistronghold.com/cli/install.sh | sh
api-stronghold-cli login
Add your keys through the dashboard or the CLI.
Step 2: Start the proxy
api-stronghold-cli proxy start
That’s it. The CLI creates a session, decrypts your keys locally, builds a route table, and starts a reverse proxy on 127.0.0.1:8900.
You’ll see a startup banner like this:
=== API Stronghold Proxy ===
Listening: http://127.0.0.1:8900
Session: sess_abc123
Expires: 2026-03-06T15:00:00Z
Routes:
/openai/* -> https://api.openai.com (My OpenAI Key)
/anthropic/* -> https://api.anthropic.com (My Anthropic Key)
Env var suggestions for agents:
OPENAI_API_KEY=fake-key
OPENAI_BASE_URL=http://127.0.0.1:8900/openai
ANTHROPIC_API_KEY=fake-key
ANTHROPIC_BASE_URL=http://127.0.0.1:8900/anthropic
Health: http://127.0.0.1:8900/health
Press Ctrl+C to stop.
Step 3: Point your agent at the proxy
Set two environment variables. The key value doesn’t matter; the proxy ignores it.
Claude Code:
ANTHROPIC_API_KEY=fake-key \
ANTHROPIC_BASE_URL=http://127.0.0.1:8900/anthropic \
claude
Cursor:
In Cursor settings, set:
OPENAI_API_KEY → fake-key
OPENAI_BASE_URL → http://127.0.0.1:8900/openai
Custom agents / scripts:
import openai
client = openai.OpenAI(
api_key="fake-key",
base_url="http://127.0.0.1:8900/openai/v1",
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "Hello"}],
)
Step 4: The proxy handles the rest
Every request flows through the same pipeline:
- Agent sends request to localhost:8900/{provider}/... with fake auth
- Proxy matches the provider prefix to a route
- Proxy strips the inbound Authorization / x-api-key header
- Proxy injects the real API key in the correct format for that provider
- Proxy forwards the request upstream and streams the response back
The agent gets its API response. It never touches the real key.
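The header-rewriting step at the heart of that pipeline can be sketched in a few lines. This is a simplified illustration, not API Stronghold's actual implementation; the route table and key values are invented for the example:

```python
# Simplified sketch of the proxy's header-rewriting step (illustrative only).
ROUTES = {  # provider prefix -> (upstream host, real key, auth header builder)
    "openai": ("https://api.openai.com", "sk-real-openai-key",
               lambda k: {"Authorization": f"Bearer {k}"}),
    "anthropic": ("https://api.anthropic.com", "sk-ant-real-key",
                  lambda k: {"x-api-key": k}),
}

def rewrite(path: str, headers: dict) -> tuple[str, dict]:
    """Strip the fake auth header and inject the real key for the matched route."""
    provider, _, rest = path.lstrip("/").partition("/")
    upstream, real_key, build_auth = ROUTES[provider]
    # Drop whatever auth the agent sent; its value is never used.
    clean = {k: v for k, v in headers.items()
             if k.lower() not in ("authorization", "x-api-key")}
    clean.update(build_auth(real_key))  # real key, injected proxy-side only
    return f"{upstream}/{rest}", clean

url, hdrs = rewrite("/openai/v1/chat/completions",
                    {"Authorization": "Bearer fake-key"})
# url  -> https://api.openai.com/v1/chat/completions
# hdrs -> {"Authorization": "Bearer sk-real-openai-key"}
```

Note that the real key exists only on the right-hand side of this function, in the proxy's process; the agent supplies `path` and `headers` and never observes the rewritten result.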
Customize your session
The proxy supports several flags for fine-tuning:
# --port: local port (default: 8900)
# --ttl: session TTL in seconds (60-86400, default: 3600)
# --providers: filter to specific providers
api-stronghold-cli proxy start \
  --port 8900 \
  --ttl 3600 \
  --providers openai,anthropic

Sessions auto-expire after the TTL. Keys are wiped from memory on shutdown.
Beyond LLMs: Any API, Any Provider
The proxy isn’t limited to LLM providers. With the providerConfig feature, you can route any API through it.
Set providerConfig on any key in the dashboard with:
- baseUrl: the upstream API host (e.g., https://api.stripe.com)
- authHeader: which header carries the key (e.g., Authorization)
- authFormat: "bearer" (adds the Bearer prefix) or "raw" (value only)
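How the two authFormat values translate into outbound headers can be sketched as follows (the function name and key values are illustrative, not the product's API):

```python
def build_auth_header(auth_header: str, auth_format: str, key: str) -> dict:
    """Illustrative mapping from a providerConfig to an outbound auth header."""
    if auth_format == "bearer":
        return {auth_header: f"Bearer {key}"}  # "bearer": adds the Bearer prefix
    return {auth_header: key}                  # "raw": value only

# Stripe-style bearer auth vs. a raw header value:
print(build_auth_header("Authorization", "bearer", "sk_live_123"))
# -> {'Authorization': 'Bearer sk_live_123'}
print(build_auth_header("x-api-key", "raw", "sk_live_123"))
# -> {'x-api-key': 'sk_live_123'}
```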
Example: route Stripe API calls through the proxy so your billing agent never sees your Stripe secret key.
Agent → localhost:8900/stripe/v1/charges → Proxy → api.stripe.com/v1/charges
Or a GitHub PAT for a code review agent:
Agent → localhost:8900/github/repos → Proxy → api.github.com/repos
The agent gets API access. You keep the credentials. Any API that uses header-based authentication works with the proxy.
Built-in provider support ships for OpenAI, Anthropic, Google, Cohere, Mistral, Groq, Together, DeepSeek, and Perplexity with no configuration needed. For everything else, providerConfig has you covered.
What You Get: Security Properties
Localhost-only. The proxy binds to 127.0.0.1. It’s not reachable from the network. No open ports, no attack surface from other machines.
Session-bound. Keys are decrypted into memory only for the duration of the session. Kill the proxy, keys are gone. No disk persistence.
Time-limited. Sessions auto-expire with a configurable TTL (default: 1 hour, max: 24 hours). Even if you forget to shut down, the session self-destructs.
HMAC-signed audit trail. Every proxied request is signed with HMAC-SHA256 using the session token. The canonical signing string includes the request ID, timestamp, provider, HTTP method, and path. You get a cryptographically verifiable audit trail of every API call the agent made.
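The signing scheme can be sketched with the standard library. The exact canonical string format is internal to API Stronghold; this example assumes a newline-joined ordering of the listed fields, and the token and field values are invented:

```python
import hashlib
import hmac

def sign_request(session_token: bytes, request_id: str, timestamp: str,
                 provider: str, method: str, path: str) -> str:
    """HMAC-SHA256 over a canonical string (field order assumed for illustration)."""
    canonical = "\n".join([request_id, timestamp, provider, method, path])
    return hmac.new(session_token, canonical.encode(), hashlib.sha256).hexdigest()

sig = sign_request(b"sess_abc123", "req_001", "2026-03-06T14:32:01Z",
                   "openai", "POST", "/v1/chat/completions")

# Verification recomputes the signature from the logged fields and
# compares in constant time; any tampered field changes the digest.
expected = sign_request(b"sess_abc123", "req_001", "2026-03-06T14:32:01Z",
                        "openai", "POST", "/v1/chat/completions")
assert hmac.compare_digest(sig, expected)
```

Because the session token is the HMAC key, only someone holding that token can produce valid signatures, which is what makes the audit trail tamper-evident rather than just a log.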
Per-call logging. The proxy logs every request to stderr with provider, method, path, status code, and latency. You see exactly what the agent is doing in real time:
[14:32:01] openai POST /v1/chat/completions -> 200 (342ms)
[14:32:05] anthropic POST /v1/messages -> 200 (891ms)
Graceful shutdown. On Ctrl+C, the proxy flushes pending usage events to the server, revokes the session, and wipes keys from memory. Clean exit every time.
Your Agents Don’t Need Your Keys
Every time you paste an API key into an agent’s config, you’re trusting that agent, and everything it touches, to keep that key safe. That’s a bet you don’t need to make.
The proxy pattern eliminates credential exposure from the agent runtime entirely. Your keys stay on your machine, in your control, with a full audit trail of how they were used.
curl -fsSL https://www.apistronghold.com/cli/install.sh | sh
api-stronghold-cli login
api-stronghold-cli proxy start
Three commands. Zero keys exposed.
Your AI agents don’t need your keys. They need a proxy.