9 min read · API Stronghold Team

Offboarding Revokes Passwords. It Leaves API Keys Wide Open.


The engineer turned in their laptop on Friday. By Monday, their API keys had already been used to pull 40,000 customer records from your production database.

That’s not a hypothetical. It’s a pattern that plays out regularly and quietly, and it usually goes undetected for months.

The offboarding process got the laptop back. It cut the badge. It closed the Slack account. But somewhere in the noise of someone’s last day, nobody thought to ask: what API keys did this person have, and are any of them still active?

The offboarding checklist is a lie

Standard IT offboarding hits the obvious targets. Disable Okta. Kill VPN access. Revoke GitHub. Suspend the Google Workspace account. Some companies have this down to a tight, well-rehearsed process. They’re proud of it.

The problem is that most of those checklists were written before 2018. Before Stripe became a core payment processor that engineers access directly. Before Twilio replaced call centers. Before AWS handed out IAM keys like business cards. Before every team spun up their own OpenAI API account.

SSO covers a lot of ground, but it doesn’t cover everything. A lot of SaaS products still issue long-lived API keys that have no binding to your identity provider. When you kill someone’s Okta account, the key they created in the Stripe dashboard six months ago keeps working. Stripe doesn’t know or care that they’re gone.

This gap has only gotten worse with the rise of AI agents. Engineers now routinely set up background processes, scheduled jobs, and LLM-powered automations that run on their personal API keys. The job that calls OpenAI every hour doesn’t stop when HR sends the termination paperwork.

Most offboarding checklists don’t have a line item for “enumerate every API key this person ever created.” They don’t have a process for auditing CI/CD secrets. They don’t have someone assigned to check SaaS billing for usage spikes after departure.

The checklist isn’t lying maliciously. It just hasn’t kept up.

Where orphaned keys actually hide

Ask a security team where ex-employee credentials might exist, and they’ll probably say “we revoked their GitHub access.” That’s one place. There are about a dozen more.

Local .env files. Engineers routinely copy environment files to personal machines for remote work, debugging at home, or “just in case” reference. That file doesn’t expire when the laptop gets returned. The engineer still has their personal MacBook. The .env is still there.

Personal GitHub forks. When an engineer forks an internal repo to their personal account, any hardcoded credentials in that fork stay in their account. Some forks are private; some aren’t. Either way, the credentials live outside your perimeter.

CI/CD secrets. GitHub Actions, CircleCI, Terraform Cloud, and every other pipeline tool stores secrets tied to whoever set them up. These secrets often outlast the person who created them by years. Nobody audits them until something breaks.

SaaS integrations set up directly. Engineers frequently connect tools to third-party services without going through a formal IT request. A developer who set up a Datadog integration using their personal API key, or wired a webhook to a Slack app they owned, has left behind a live credential that your security team doesn’t know exists.

Postman and Insomnia collections. These tools store API credentials in exported collection files. Those files get shared, backed up, committed to repositories, and emailed around. A Postman collection with a production key is a credential store that nobody manages.

Notion docs and internal wikis. “Here’s the API key for our test environment” is a sentence that exists in thousands of internal wikis right now. Test keys sometimes have production scope. Even when they don’t, they persist long after the person who wrote them has left.

None of these are exotic attack vectors. They’re normal parts of how engineers work. The issue isn’t that engineers are careless. It’s that the tools and workflows haven’t been designed with credential lifecycle in mind.

The numbers nobody tracks

The average engineer tenure in tech is around two to three years. The average API key rotation cycle, across most organizations, is never.

Do the math on a 50-person engineering team. If each engineer has 8 to 12 active API keys across the services they touch, and turnover runs at 20% annually, you’re generating 80 to 120 orphaned credentials every year. That’s before you count contractors, whose access is often even less controlled.
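If you want to plug in your own numbers, the estimate is a three-line calculation. Team size, keys per engineer, and turnover below are the same assumptions as above:

```python
# Back-of-envelope: orphaned credentials accumulated per year.
team_size = 50
annual_turnover = 0.20            # 20% of engineers leave each year
keys_low, keys_high = 8, 12       # active API keys per engineer

departures = team_size * annual_turnover          # 10 engineers/year
print(f"{departures * keys_low:.0f} to {departures * keys_high:.0f} "
      "orphaned keys per year")                   # 80 to 120
```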

The IBM Cost of a Data Breach report puts the average time-to-detection for credential misuse at 197 days. That’s over six months from the moment a key is used maliciously to the moment someone notices. Six months of potential data exfiltration, billing fraud, or infrastructure access.

That number isn’t surprising if you think about how detection usually works. Someone notices an anomaly in billing. Or a customer complains. Or a security alert fires weeks after the fact. The key was active, the usage looked plausible, and nobody was watching for it specifically.

The financial exposure compounds over time. A key that sits dormant costs nothing. A key that gets used for six months before detection can mean real damage: leaked customer records, unexpected SaaS charges, compromised infrastructure, compliance violations. The cost isn’t in the key itself; it’s in the detection lag.

Three scenarios where this goes wrong

The contractor with production access. A freelance developer was brought in to work on payment flow improvements. They needed Stripe access and were given a key scoped to production. The engagement ended. Offboarding happened through a staffing agency, which meant the internal IT team didn’t go through the usual process. Eight months later, the Stripe key was still active. The contractor had moved on, but the key hadn’t.

Nobody attempted misuse in this case. The key was discovered during a routine audit. But “nobody happened to misuse it” is not a security posture.

The departing senior engineer. An engineer at a fintech company got a competing job offer and gave notice. During the standard two-week notice period, they used their still-active API key to pull a large dataset from the production database. The key had full read access. The exfiltration looked like normal usage patterns. Detection came 200 days later, when a customer reported seeing their data in a competitor’s product.

The engineer’s laptop had been returned. Their Okta was disabled. Their GitHub access was gone. The API key was still valid.

The ML engineer’s background job. An ML engineer was laid off as part of a broader reduction in force. They had set up an automated pipeline that called the OpenAI API every few hours to run text classification on a data stream. The pipeline ran on a key under their personal account. Offboarding didn’t touch it because nobody knew it existed.

Four months later, the company noticed a $3,000 charge on their OpenAI invoice for usage they couldn’t account for. The pipeline had been running the whole time. No malicious intent; just an automation that kept going because nothing stopped it.

What a real offboarding audit looks like

This isn’t a single checklist item. It’s a process that needs to happen before someone’s last day, not after.

1. Maintain a credential inventory tied to people. Every API key your organization issues should have an owner: a specific person, not just a team. When that person is in the offboarding process, the inventory gives you a starting point. Without it, you’re searching blind.
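The shape of the inventory matters less than the fact that it maps keys to people. Here’s a minimal sketch; the field names are illustrative, and the record stores the provider’s key ID, never the secret itself:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CredentialRecord:
    key_id: str         # the provider's key identifier, never the secret
    service: str        # e.g. "stripe", "openai", "datadog"
    owner: str          # a specific person's employee ID, not a team
    purpose: str        # why this key exists
    created: date
    last_reviewed: date

inventory = [
    CredentialRecord("key_abc123", "stripe", "emp_4417",
                     "payment flow testing", date(2024, 3, 1), date(2025, 1, 15)),
]

# Offboarding starts here: every key owned by the departing person.
to_revoke = [r for r in inventory if r.owner == "emp_4417"]
```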

2. Audit active keys against your current employee list. Pull active credentials from every SaaS your team uses, then cross-reference against your HR system. Anyone who’s been offboarded in the last 12 months should have no active keys. If they do, revoke them and investigate whether those keys were used after departure.
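A rough sketch of that cross-reference, assuming you can export active keys (with owners, per the inventory above) and your current headcount to CSV. File and column names are placeholders for whatever your exports actually produce:

```python
import csv

with open("current_employees.csv") as f:
    current = {row["employee_id"] for row in csv.DictReader(f)}

with open("active_api_keys.csv") as f:   # columns: key_id, service, owner
    keys = list(csv.DictReader(f))

orphaned = [k for k in keys if k["owner"] not in current]

for k in orphaned:
    print(f"ORPHANED: {k['service']} key {k['key_id']}, "
          f"owner {k['owner']} has left")
# Next: revoke each one, then pull usage logs to check for
# activity after the departure date.
```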

3. Check CI/CD secrets explicitly. GitHub Actions secrets, CircleCI environment variables, Terraform Cloud variables: these need their own audit. They’re often set up once and forgotten. Run through your pipeline configurations and confirm that every secret maps to an active employee or a service account with documented ownership.
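For GitHub Actions specifically, the REST API lists a repo’s secret names and timestamps (metadata only; it never returns values). A minimal sketch, with the org and repo as placeholders. Note that GitHub doesn’t record who created a secret, which is exactly why ownership has to live in your own inventory:

```python
import os
import requests

# Requires a token with access to the repo's Actions secrets.
token = os.environ["GITHUB_TOKEN"]
resp = requests.get(
    "https://api.github.com/repos/your-org/your-repo/actions/secrets",
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/vnd.github+json"},
    params={"per_page": 100},
)
resp.raise_for_status()

for secret in resp.json()["secrets"]:
    print(secret["name"], "last updated", secret["updated_at"])
```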

4. Cross-reference SaaS billing for post-departure spikes. If a key is still active and being used after someone leaves, the usage shows up somewhere. Run a billing audit for your top-spend SaaS tools and look for unusual patterns in the 30 days following any departure. Spikes that start on a Monday after a Friday departure date are worth investigating.
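A crude version of that check, assuming a daily-spend export from the SaaS vendor. The CSV format and the two-times-baseline threshold are illustrative assumptions, not a recommendation:

```python
import csv
from datetime import date, timedelta

departure = date(2025, 6, 13)     # the Friday someone left

with open("openai_daily_spend.csv") as f:    # columns: date, usd
    spend = {date.fromisoformat(r["date"]): float(r["usd"])
             for r in csv.DictReader(f)}

# Baseline: average daily spend in the 30 days before departure.
before = [departure - timedelta(days=i) for i in range(1, 31)]
baseline = sum(spend.get(d, 0.0) for d in before) / 30

# Flag any day in the 30 days after departure at 2x baseline or more.
for i in range(1, 31):
    d = departure + timedelta(days=i)
    if spend.get(d, 0.0) >= 2 * baseline and spend.get(d, 0.0) > 0:
        print(f"SPIKE: {d} spent ${spend[d]:.2f} vs ${baseline:.2f}/day baseline")
```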

5. Tag ownership at creation time. This is the upstream fix. When an engineer creates an API key, require them to tag it with their user ID and a purpose. Some platforms support this natively; others require a wrapper or a policy enforced by your security tooling. Key ownership metadata is what makes every subsequent audit step cheaper.
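The wrapper can be as simple as a gate: no owner and purpose, no key. Here’s a sketch; create_provider_key and record_in_inventory are hypothetical stand-ins for your platform’s SDK call and the inventory from step 1:

```python
import secrets

def create_provider_key(service: str) -> tuple[str, str]:
    # Hypothetical stand-in for the platform SDK call that mints the key.
    return f"key_{secrets.token_hex(4)}", secrets.token_urlsafe(32)

def record_in_inventory(key_id: str, service: str, owner: str, purpose: str) -> None:
    # Hypothetical stand-in for appending to the inventory from step 1.
    print(f"inventory: {key_id} ({service}) owner={owner} purpose={purpose}")

def create_key(service: str, owner: str, purpose: str) -> str:
    if not owner or not purpose:
        raise ValueError("API keys must be tagged with an owner and a purpose")
    key_id, secret = create_provider_key(service)
    record_in_inventory(key_id, service, owner, purpose)
    return secret
```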

None of these steps require specialized tooling. They require process and ownership. The audit gets easier every time you run it; building the inventory for the first time is the hard part.

Why rotation alone doesn’t fix this

Key rotation is commonly cited as the answer to long-lived credentials. It’s not wrong, but it’s incomplete.

Rotation solves the “key lives forever” problem. If you rotate every key every 90 days, a leaked key has a bounded lifetime. That’s better than never rotating.

What rotation doesn’t solve: the key was copied before the rotation. An engineer who copied a key to their personal machine three months ago has a copy that’s now invalid. But if they copied it this week, and rotation happens next month, they have 30 days. And if the rotation process isn’t airtight, they might receive the new key too.

The more fundamental issue is that rotation is still a manual process at most organizations. It requires someone to generate a new key, update it everywhere it’s used, verify nothing broke, and retire the old one. Teams skip it. Keys with “temporary” labels end up in production for three years.

The actual fix is to stop issuing long-lived static credentials in the first place. Session-scoped tokens that expire automatically change the math entirely. If every token expires after a few hours, an offboarded engineer’s credentials go stale on their own. There’s no rotation schedule to maintain because the credentials have a natural end of life.

This isn’t a new concept. JWT access tokens, OAuth flows, and similar mechanisms have worked this way for years. The gap is that plenty of API integrations still rely on static keys because that’s what the platform issues by default, and migrating away requires effort nobody budgets for.
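For a concrete sense of what “natural end of life” means, here’s a short-lived token minted with PyJWT (one common option; the signing secret would come from a KMS or vault in practice). Nobody has to revoke anything; verification simply starts failing when the expiry passes:

```python
import time
import jwt  # PyJWT

SECRET = "signing-secret"  # in practice, pulled from a KMS or vault

def mint_session_token(user_id: str, ttl_seconds: int = 3600) -> str:
    now = int(time.time())
    return jwt.encode({"sub": user_id, "iat": now, "exp": now + ttl_seconds},
                      SECRET, algorithm="HS256")

def verify(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError once the hour is up: the departed
    # engineer's credential goes stale on its own.
    return jwt.decode(token, SECRET, algorithms=["HS256"])
```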

Short-lived credentials reduce the blast radius of any departure to hours instead of months. That gap, from 197 days to a few hours, is the real win.


The offboarding credential gap isn’t a people problem. It’s an architecture problem.

The fix isn’t a better checklist. It’s a system that makes long-lived credentials unnecessary. When tokens expire automatically and ownership is tracked at creation, a departing engineer’s access goes away on its own. No audit required. No six-month detection window.

API Stronghold issues phantom tokens with automatic expiry, ties every credential to an owner, and generates a blast radius report so you always know your exposure. See what your current key sprawl looks like with a 14-day free trial at https://www.apistronghold.com.
