Cursor, Copilot, Windsurf, Claude Code — they all read your .env, hardcode secrets in generated code, and don't think twice about it. The fix isn't "be more careful." It's making it structurally impossible for real keys to end up in your codebase.
Set up once, vibe code forever. Takes about 5 minutes.
These tools were trained on billions of lines of public code where `api_key = "sk-..."` is the most common pattern. They're not trying to compromise you — they're just predicting the next token.
AI assistants read your .env files for context. Your secrets become part of the conversation — logged, cached, and potentially sent to third-party APIs.
You say "connect to Stripe." The model writes `sk_live_...` inline because that's what its training data looks like. You ship it without noticing.
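The difference is a single line. A minimal sketch (the `Stripe` client and variable names are illustrative, and a local `env` object stands in for `process.env` so the snippet is self-contained):

```typescript
// Stand-in for process.env so the sketch is self-contained; in a real app
// you would read process.env.STRIPE_SECRET_KEY directly.
const env: Record<string, string | undefined> = {};

// ❌ What the model tends to generate — the dominant pattern in its training data:
// const stripe = new Stripe("sk_live_...");   // real key hardcoded inline

// ✅ What you want — the key comes from the environment at runtime:
const apiKey = env.STRIPE_SECRET_KEY ?? "";
if (!apiKey) {
  // Fail loudly instead of shipping a hardcoded fallback key.
  console.warn("STRIPE_SECRET_KEY is not set");
}
```

The hardcoded version works in the demo, which is exactly why it slips through — nothing breaks until the key is already in your git history.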
Even if you catch it and delete the line, the key lives in your git history forever. Bots scrape public repos in real time.
AI agents running MCP tools or executing code can read every file in your project. One prompt injection and your secrets are exfiltrated.
Every week, developers post about AI assistants leaking their credentials.
"If I had a nickel for every time I tried to have AI fix an auth issue and it just disabled auth or hardcoded an API key."
— r/webdev
"$2500 in stolen charges and his takeaway is 'glad I learned this early.' That's a case study in why code review exists."
— r/webdev
"Cursor just read my .env and put my OpenAI key directly in the fetch call. I didn't notice until the PR review."
— r/cursor
"This is a classic mistake, not even AI specific. AI just makes it easier to ship fast and skip security checks."
— r/webdev
Three layers of protection. Each one is useful on its own. Together, they make it structurally impossible for keys to leak through AI-generated code.
Drop a rules file in your project root. Cursor reads .cursor/rules, Claude Code reads CLAUDE.md, Copilot reads .github/copilot-instructions.md. This isn't bulletproof, but it dramatically reduces accidental hardcoding.
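For example, a minimal rules file might look like this (the wording is illustrative — adapt it to your stack):

```markdown
# Security rules for AI assistants

- Never write literal API keys, tokens, or passwords in code. Always read
  them from environment variables (e.g. `process.env.OPENAI_API_KEY`).
- Never read, print, or copy the contents of `.env` files into code,
  comments, or chat responses.
- If a task seems to require a real credential, use a placeholder value
  and leave a TODO for a human to wire it up.
```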
Your .env file is a plaintext liability. Every tool that can read files can read it. Store your secrets in API Stronghold's zero-knowledge vault and pull them at runtime with the CLI.
```
# Install the CLI
npm install -g api-stronghold-cli

# Login and pull secrets to .env (encrypted in transit, decrypted locally)
api-stronghold-cli auth login
api-stronghold-cli env pull --profile production -o .env

# Your .env is populated at runtime — never committed to git
echo ".env" >> .gitignore
```

Your secrets are encrypted client-side with AES-256. The server never sees plaintext. Learn how zero-knowledge encryption works.
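At runtime your app reads the pulled file like any other `.env`. A minimal sketch of what a dotenv-style loader does — in practice, use a real library like `dotenv`:

```typescript
// Minimal dotenv-style parser — illustrates that secrets live in a runtime
// file, not in source. Real apps should use the dotenv package instead.
function parseEnv(text: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of text.split("\n")) {
    const match = line.match(/^\s*([A-Za-z_][A-Za-z0-9_]*)\s*=\s*(.*)$/);
    if (match) vars[match[1]] = match[2];
  }
  return vars;
}

// The file contents here are placeholders pulled by the CLI at runtime.
const pulled = parseEnv("OPENAI_API_KEY=sk-runtime-placeholder\n# comment line\n");
console.log(pulled.OPENAI_API_KEY); // → sk-runtime-placeholder
```

The key only ever exists in a local, gitignored file — your source code contains nothing worth scraping.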
The gold standard: your code gets a local proxy URL and a fake key. The proxy injects the real credential at runtime. Your AI assistant, your codebase, and your git history never see the real key — even if the AI reads every file in your project.
```
# Start the local credential proxy
api-stronghold-cli proxy start
```

Then point your `.env` at the proxy:

```
# Your app uses the proxy URL — real keys never appear in your project
OPENAI_BASE_URL=http://127.0.0.1:8900/openai
OPENAI_API_KEY=fake-key-proxy-handles-it

# Even if Cursor reads this .env, it only sees a localhost URL and a dummy key
```

The proxy decrypts secrets in memory and forwards them to the upstream API. Nothing is written to disk. How the phantom token pattern works.
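Conceptually, the swap the proxy performs looks like this — a simplified sketch, not API Stronghold's actual implementation:

```typescript
// The real key lives only in the proxy process's memory.
const REAL_KEY = "sk_live_loaded_from_vault";     // illustrative placeholder
const PHANTOM_KEY = "fake-key-proxy-handles-it";  // what your codebase sees

// Rewrite the Authorization header on its way to the upstream API.
function injectRealKey(headers: Record<string, string>): Record<string, string> {
  const out = { ...headers };
  if (out["Authorization"] === `Bearer ${PHANTOM_KEY}`) {
    out["Authorization"] = `Bearer ${REAL_KEY}`;  // swap happens only in memory
  }
  return out;
}

const upstream = injectRealKey({ Authorization: `Bearer ${PHANTOM_KEY}` });
console.log(upstream["Authorization"]); // → Bearer sk_live_loaded_from_vault
```

Because the rewrite happens inside the proxy process, nothing an assistant can read from disk — code, `.env`, git history — ever contains the real credential.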
What changes about your workflow? Almost nothing. You still vibe code the same way — your secrets just aren't exposed anymore.
| Before | After |
| --- | --- |
| `sk_live_...` in generated code | `process.env.OPENAI_API_KEY` |

Use agent identity tokens for your pipelines — scoped access, auto-expiring, independently revocable.
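The idea behind agent identity tokens, sketched in code — the field names and scope format here are hypothetical, not API Stronghold's actual schema:

```typescript
// Hypothetical shape of an agent identity token: per-pipeline, scoped, expiring.
interface AgentToken {
  pipeline: string;   // which CI pipeline this token belongs to
  scopes: string[];   // e.g. "env:pull:production"
  expiresAt: number;  // unix ms — tokens auto-expire
  revoked: boolean;   // revoking one token doesn't touch the others
}

function canPull(token: AgentToken, profile: string, now: number): boolean {
  return (
    !token.revoked &&
    now < token.expiresAt &&
    token.scopes.includes(`env:pull:${profile}`)
  );
}

const deployToken: AgentToken = {
  pipeline: "deploy",
  scopes: ["env:pull:production"],
  expiresAt: 2_000_000_000_000, // far-future timestamp for the example
  revoked: false,
};
console.log(canPull(deployToken, "production", Date.now())); // → true
console.log(canPull(deployToken, "staging", Date.now()));    // → false
```

The point is blast radius: a leaked pipeline token can only pull the profiles it's scoped to, and only until it expires or you revoke it.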
In GitHub Actions:

```yaml
- name: Pull secrets
  run: |
    api-stronghold-cli auth api-user \
      --token ${{ secrets.AS_AGENT_TOKEN }}
    api-stronghold-cli env pull \
      --profile production -o .env
```

In a Dockerfile:

```dockerfile
RUN npm i -g api-stronghold-cli
RUN api-stronghold-cli auth api-user \
    --token $AS_AGENT_TOKEN
RUN api-stronghold-cli env pull \
    --profile production -o .env
```

Each CI pipeline gets its own agent identity — revoke one pipeline's access without affecting others.
Telling your AI to "make sure all security measures are taken" doesn't work. Set up infrastructure that makes leaking keys structurally impossible — then go back to shipping.
Why Cursor, Copilot, and Claude keep hardcoding secrets — the training data problem explained.
The phantom token pattern — agents get fake keys, the proxy injects real ones at runtime.
.env files are plaintext liabilities. Here's what to use instead.