Someone published a fake Claude Code installer. Developers downloaded it. An infostealer ran silently on their machines and exfiltrated every credential it could find: .env files, SSH keys, browser-stored tokens, AWS credentials, the works. By the time anyone noticed, the keys were already in someone else’s hands.
This is not a hypothetical. In April 2026, threat actors stood up convincing fake GitHub repositories and download pages mimicking Claude Code, Anthropic’s official CLI tool. The campaign targeted developers specifically, because developers are the most credential-dense targets on the internet.
How the Attack Works
The setup is straightforward. Attackers create a GitHub repository with a name like claude-code-installer or anthropic-claude-cli. They fill it with plausible-looking code, add a README with install instructions, and seed links to it on forums, Discord servers, Reddit threads, and SEO-optimized landing pages. The pages look right. The URLs look close enough.
A developer finds the repo through a search, a Slack link, or a Reddit post. They follow the install instructions. What they actually run is a dropper: a script that installs a stripped-down version of the real tool (to avoid immediate suspicion) while silently installing an infostealer in the background.
The infostealer does not announce itself. It runs a one-time sweep or establishes persistence, harvests everything it can reach, and ships the data to an attacker-controlled server. No pop-ups, no slowdowns, no obvious signs.
What gets swept up: every .env file the tool can find, ~/.aws/credentials, everything under ~/.ssh/, browser-stored passwords and session tokens, macOS keychain data, and any config files that match known patterns for API keys and secrets. All of it gets zipped and sent out over HTTPS, which means it blends into normal network traffic.
Developers are the target because their machines have access to everything. Production databases, deploy pipelines, cloud accounts, third-party APIs. The attacker does not need to breach a server. They just need one developer to run the wrong install script.
What an Infostealer Actually Reads
Most developers do not think about how many credentials are sitting on their machine at any given time. Here is what a typical infostealer targets:
Environment files. .env, .env.local, .env.development, .env.production. Any variant. These are the primary target because they concentrate credentials in one place. Developers use them for convenience; attackers love them for exactly the same reason.
AWS and cloud credentials. ~/.aws/credentials holds long-lived access keys for every profile the developer has configured. GCP stores application default credentials under ~/.config/gcloud/. Azure stores tokens under ~/.azure/. Infostealers know these paths. They check them first.
SSH private keys. Everything under ~/.ssh/. An SSH key to a production server is more useful to an attacker than a database password, because it often has no expiry and bypasses most application-layer auth.
Browser credential stores. Chrome, Firefox, and Safari all store saved passwords and session cookies in local SQLite databases. Chrome’s login data is encrypted with a key tied to the OS user account, but an infostealer running in that user’s context can decrypt it. Session cookies for GitHub, the AWS console, Google Cloud Console, and every other service the developer uses are right there.
Git configuration. ~/.gitconfig and per-repo .git/config files sometimes contain tokens for credential helpers, private registry auth, or GitHub PATs stored by git credential managers.
Application config files. Tools like Stripe CLI, Vercel CLI, Netlify CLI, and dozens of others store auth tokens in ~/.config/<tool>/ or ~/.toolname. Infostealers enumerate these directories.
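The paths above can be turned into a quick self-audit. The sketch below only lists which common credential locations exist under a home directory; the path list is illustrative, not exhaustive, and a real sweep would also recurse into project directories:

```python
from pathlib import Path

# Common credential locations infostealers are known to target.
# Illustrative list, not exhaustive.
CREDENTIAL_PATHS = [
    ".aws/credentials",
    ".config/gcloud",
    ".azure",
    ".ssh",
    ".gitconfig",
]
ENV_GLOBS = [".env", ".env.*"]


def find_credential_files(home: Path) -> list[Path]:
    """Return credential files and directories that exist under `home`."""
    found = [home / p for p in CREDENTIAL_PATHS if (home / p).exists()]
    # .env variants directly under home; real sweeps recurse into every project
    for pattern in ENV_GLOBS:
        found.extend(sorted(home.glob(pattern)))
    return found


if __name__ == "__main__":
    for path in find_credential_files(Path.home()):
        print(path)
```

Running it against your own home directory is a fast way to see exactly what a one-time sweep would pick up.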
Put it all together: a developer laptop is not a workstation. It is a credential vault that walks around, connects to coffee shop Wi-Fi, and occasionally runs things from GitHub without thinking too hard.
Why Developer Machines Are the New Attack Surface
A production server in 2026 is locked down. IMDSv2, no direct SSH from the internet, secrets pulled from Vault or Secrets Manager at runtime, no stored credentials on disk. Production environments have gotten significantly better.
Developer laptops have not kept pace. They still hold long-lived keys. They still use .env files. And they connect to the same production systems that the hardened servers connect to.
The blast radius of a compromised developer machine is often larger than the blast radius of a compromised server. The server has one role. The developer has accounts in ten systems. Their machine has credentials for the production database, the staging environment, the S3 bucket with customer data, the Stripe account, the GitHub org, and the CI/CD pipeline.
Social engineering works here because developers install things constantly. New tools, new frameworks, new CLIs, new AI assistants. The cognitive overhead of vetting every install is unrealistic. Attackers know this. The fake-Claude-Code campaign follows a well-worn playbook: fake npm packages have been delivering malware for years. Fake VS Code extensions, fake Homebrew taps, fake PyPI packages. The delivery mechanism changes; the exploit is always the same. Get a developer to run something.
The Claude Code angle is particularly effective because the real Claude Code is new, widely discussed, and not yet in every developer’s muscle-memory for “how I install this.” When a tool is familiar, people go to the official source automatically. When it is new, they search. And search results can be manipulated.
One compromised developer machine is enough to get into the company. From there, the attacker moves laterally at their leisure.
The .env Problem
The .env file is convenient. It keeps secrets out of source code. It lets different environments have different configurations. Every major framework supports it. It is everywhere.
It is also a single file that contains every API key the developer has configured locally. And most of those keys are long-lived.
When an infostealer harvests a .env file, those credentials do not expire. The attacker is not racing against a short window. They can sit on the keys for days, weeks, or months before using them. They can sell them. They can use them quietly over time to avoid triggering rate-limit alerts. They have time.
Rotating after discovery is important, but it is not sufficient. If the keys were used in the window between theft and discovery, damage is already done. And detection can take a long time. Infostealers that run once and exfiltrate to a one-time server are hard to catch. No ongoing process, no persistent network connection, no easily spotted indicator.
The other problem: keys in .env files tend to be over-scoped. Developers give themselves broad access locally because it is easier. Full admin on the AWS account. All scopes on the GitHub token. Full access on the Stripe key. The principle of least privilege gets applied to production systems but forgotten for local development. So when those keys are stolen, the attacker gets full access, not limited access.
One .env file can span multiple services and clouds. It might have: an OpenAI key, a Stripe key, a GitHub PAT, an AWS access key ID and secret, a Twilio auth token, a database connection string with embedded credentials, a Slack bot token, and a SendGrid API key. That is eight services, potentially with production-level access, in one 30-line file.
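A representative file might look like the sketch below. Every value is a fake placeholder shown only to illustrate the shape of the problem; the prefixes (sk_live_, ghp_, AKIA, xoxb-) are the real formats these services use, which is also what makes stolen files easy for attackers to parse:

```
# Hypothetical .env — every value here is fake, for illustration only
OPENAI_API_KEY=sk-...
STRIPE_SECRET_KEY=sk_live_...
GITHUB_TOKEN=ghp_...
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
TWILIO_AUTH_TOKEN=...
DATABASE_URL=postgres://app_user:password@db.internal:5432/prod
SLACK_BOT_TOKEN=xoxb-...
SENDGRID_API_KEY=SG...
```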
That is the jackpot. That is exactly what the fake Claude Code campaign was designed to collect.
What Phantom Tokens Change
The .env problem is not just a hygiene problem. It is an architectural one. The file needs to exist. The developer’s tooling needs credentials to run. As long as the .env holds actual credentials, it is a target.
Phantom tokens change the architecture. Here is how it works: instead of putting a real API key in .env, you put a phantom token. At runtime, when your application makes an API call, the request goes through API Stronghold’s proxy. The proxy resolves the phantom token to the real credential and forwards the request. The real credential never lives on the developer machine.
The token in the .env is a lookup key. It has no value outside the proxy. An infostealer that sweeps the machine and grabs the .env gets a token that it cannot use. There is no way to resolve that token to a real credential without going through the proxy, and the proxy enforces authentication, rate limiting, and access controls.
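The resolution step can be sketched in a few lines. This is a conceptual illustration, not API Stronghold's actual implementation: a proxy-side map from phantom tokens to real credentials, where the phantom value itself is random and derives nothing from the secret:

```python
import secrets


class PhantomVault:
    """Conceptual sketch of phantom-token resolution (not real product code)."""

    def __init__(self):
        self._map: dict[str, str] = {}  # phantom token -> real credential

    def issue(self, real_credential: str) -> str:
        # The phantom token is random; nothing about it derives from the secret.
        phantom = "phantom_" + secrets.token_urlsafe(16)
        self._map[phantom] = real_credential
        return phantom

    def resolve(self, phantom: str) -> str:
        # Only the proxy, which holds this map, can turn a phantom into a real key.
        if phantom not in self._map:
            raise KeyError("unknown phantom token")
        return self._map[phantom]


# The developer's .env holds only the phantom value:
vault = PhantomVault()
env_value = vault.issue("sk_live_fakeKeyForIllustration")
# An attacker who steals env_value learns nothing about the real credential.
```

The design point is that `issue` is a one-way door from the machine's perspective: the mapping lives only on the proxy side, so possession of the phantom value alone is worthless.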
Real credentials stay in API Stronghold’s vault. They are not on disk anywhere on the developer machine. They do not travel over the local network. They do not end up in git history or log files.
Even if the fake Claude Code attack succeeds, even if the infostealer runs and exfiltrates the .env file completely, the harvest is worthless. The attacker has a list of tokens that do not work. They cannot use them to hit your Stripe account, your AWS environment, or your OpenAI org. There is nothing to rotate because there is nothing exposed.
This is not a monitoring solution or a detection solution. It removes the target. You cannot steal a credential that was never there.
What You Should Do Right Now
Verify any Claude Code download. The official repository is github.com/anthropics/claude-code, and installation instructions are in Anthropic’s official documentation. If you found a Claude Code installer anywhere else, do not run it. If you already ran something and are not certain it was the official package, treat your machine as potentially compromised.
Check for infostealer indicators. Look for unexpected processes running under your user account. Check outbound network connections with netstat or lsof -i. Look for recently created files in /tmp or your home directory with unusual names. Check your shell history for commands you did not run. None of these are definitive, but they give you something to work with.
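The file-based part of that checklist can be scripted. A minimal sketch, with the caveat that the directories and 24-hour window are arbitrary choices and a clean result proves nothing:

```python
import time
from pathlib import Path


def recent_files(directory: Path, max_age_hours: float = 24) -> list[Path]:
    """List files in `directory` modified within the last `max_age_hours`."""
    cutoff = time.time() - max_age_hours * 3600
    recent = []
    for entry in directory.iterdir():
        try:
            if entry.is_file() and entry.stat().st_mtime >= cutoff:
                recent.append(entry)
        except OSError:
            continue  # file vanished or is unreadable; skip it
    return recent


if __name__ == "__main__":
    # /tmp and the home directory are common drop locations; review anything
    # you do not recognize.
    for d in (Path("/tmp"), Path.home()):
        if d.is_dir():
            for f in recent_files(d):
                print(f)
```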
Audit your .env files. Open them. Count the live credentials. Ask yourself: if this file were exfiltrated right now, what could someone do with it? How many services? What access levels? This exercise tends to be uncomfortable.
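The counting exercise can be automated. A sketch that tallies assignments in .env content and flags values matching a few well-known key prefixes (the pattern list is illustrative, not complete):

```python
import re
from pathlib import Path

# Well-known credential prefixes; illustrative, not exhaustive.
KNOWN_PATTERNS = {
    "Stripe live key": r"sk_live_",
    "GitHub PAT": r"ghp_",
    "AWS access key ID": r"AKIA",
    "Slack bot token": r"xoxb-",
}


def audit_env(text: str) -> dict:
    """Count assignments in .env content and flag recognizable key types."""
    entries = [
        line.split("=", 1)
        for line in text.splitlines()
        if "=" in line and not line.lstrip().startswith("#")
    ]
    flagged = {
        name: sum(1 for _, value in entries if re.search(pattern, value))
        for name, pattern in KNOWN_PATTERNS.items()
    }
    return {"total": len(entries), "flagged": {k: v for k, v in flagged.items() if v}}


if __name__ == "__main__":
    path = Path(".env")
    if path.exists():
        print(audit_env(path.read_text()))
```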
Rotate any keys that were on a machine that ran unknown software. Do not wait for confirmation of compromise. If there is any doubt, rotate. Rotating an API key takes minutes. Cleaning up after an account breach takes much longer.
Consider whether those keys need to be live values at all. This is the longer-term question. Phantom tokens are a direct answer to the .env problem. The keys your application needs to run do not have to sit in plaintext on your machine.
API Stronghold replaces long-lived credentials in local .env files with phantom tokens. If an infostealer grabs your .env, it gets nothing usable. Try it free at https://www.apistronghold.com.