Your ChatGPT plugin just got compromised. In the next 60 seconds, an attacker drains your Stripe account, clones your GitHub repos, and dumps your customer database. Why? Because that innocent Slack summarizer you built last month has admin access to everything.
## Agents Inherit Whatever You Give Them
Here’s how it usually goes. You spin up a new AI agent. It needs a Slack token to read threads. Maybe a database connection to look up customer records. You’re in a hurry, so you point it at the same .env file your backend uses. It works. You ship it.
That .env file has your Stripe secret key. Your GitHub token. Your OpenAI key with billing access. Your production database password. The agent doesn’t need any of that to summarize Slack threads, but it has access to all of it now. Because nobody told it otherwise.
This isn’t a bug. It’s a design choice made under time pressure. The problem is that nobody goes back and audits it.
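The mismatch above is easy to see once you write it down. Here's a minimal sketch of the over-scoped setup, with illustrative variable names; your shared `.env` will differ, but the shape is the same:

```python
# A sketch of the shared-.env scenario described above. All names and
# values are placeholders, not real credentials.
SHARED_BACKEND_ENV = {
    "SLACK_BOT_TOKEN": "xoxb-placeholder",       # the only secret the summarizer needs
    "STRIPE_SECRET_KEY": "sk_live_placeholder",  # payments: not needed
    "GITHUB_TOKEN": "ghp_placeholder",           # repo access: not needed
    "OPENAI_API_KEY": "sk-placeholder",          # billing-enabled: not needed
    "DATABASE_URL": "postgres://admin:pw@db/prod",  # production DB: not needed
}

AGENT_NEEDS = {"SLACK_BOT_TOKEN"}

# Everything the agent holds but never uses is pure blast radius
excess = set(SHARED_BACKEND_ENV) - AGENT_NEEDS
print(f"{len(excess)} secrets the agent holds but never uses: {sorted(excess)}")
```

Four of the five secrets in that file do nothing for the agent and everything for an attacker.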
ChatGPT plugins are the same story. When you authorize a plugin, you’re handing it credentials. The plugin itself decides what to do with them. You’re trusting the plugin’s code, its dependencies, its update pipeline, and whoever maintains it, all at once.
## What Is the Blast Radius?
The blast radius of an agent or plugin is everything an attacker can reach if that agent gets compromised. According to the Cloud Security Alliance, roughly 80% of organizations deploying AI agents have no real-time visibility into what those agents are doing. Three breach vectors drive most incidents: prompt injection, supply chain attacks on dependencies, and exploited bugs.
This isn’t hypothetical. In one incident, attackers compromised Trivy’s CI pipeline and stole GitHub PATs from organizations that had trusted the security scanning tool in their build process. The tool was legitimate, widely used, doing exactly what it was supposed to do. But because teams had granted it broad repository access, the blast radius extended far beyond any single project.
Compromise the tool. Inherit the credentials. That’s the whole playbook.
Your Slack summarizer runs the same risk. It’s a small tool, handy for catching up on long channels. But if it’s running against your shared backend .env, a supply chain vulnerability, a malicious update, or a single exploited bug hands an attacker the agent’s entire credential set. In the next few minutes, they have:
- Your Stripe secret key (payment data, ability to create charges, cancel subscriptions)
- Your GitHub personal access token (read/write access to your private repos)
- Your production database connection string (full read/write on every table)
- Your OpenAI key (they can run up your bill or use it for their own purposes)
Your Slack summarizer just became a full backend breach. Same attack pattern as Trivy, same outcome.
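To see why the theft takes seconds rather than hours, consider what any code running inside the agent's process can do. This sketch simulates the environment for illustration (the values are placeholders); the filter logic is the whole "exploit":

```python
# What a compromised agent process sees. If the agent was started with
# the shared backend .env, every credential is one dictionary read away.
import os

# Simulate the agent's environment for this sketch (placeholder values)
os.environ.update({
    "SLACK_BOT_TOKEN": "xoxb-placeholder",
    "STRIPE_SECRET_KEY": "sk_live_placeholder",
    "GITHUB_TOKEN": "ghp_placeholder",
    "DATABASE_URL": "postgres://admin:placeholder@db/prod",
})

# A malicious dependency or injected instruction just does this:
stolen = {k: v for k, v in os.environ.items()
          if any(t in k for t in ("TOKEN", "KEY", "SECRET", "DATABASE"))}
print(f"Exfiltrated {len(stolen)} credentials: {sorted(stolen)}")
```

No privilege escalation, no lateral movement. The agent's process already holds everything.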
To reduce blast radius before an incident, scan your dependencies with tools like Snyk or Dependabot to catch vulnerable packages early. The OWASP Top 10 for LLM Applications lists excessive agency and insecure plugin design as two of the top risks for AI systems, both stemming directly from over-scoped credentials.
This is the real risk with AI agents. The agent itself may be simple and benign. But if its credential scope is unlimited, a single injection vector turns it into a skeleton key.
## How to Audit Your Own Blast Radius
Start with a simple inventory. List every AI agent and plugin connected to your systems. For most teams, this list is longer than expected once you actually write it down.
For each one, document:
- Which credentials does it have access to? (env vars, mounted secrets, config files)
- What permissions do those credentials carry? (read-only vs. write, scoped vs. admin)
- What is the actual job this agent needs to do?
Then compare. If the agent’s job is to summarize Slack threads and it has access to your Stripe key, that’s a mismatch. Flag it. That’s a high or critical blast radius.
The key question for every credential is: if an attacker had this key, what’s the worst they could do? Write that down. If the answer is “breach production” or “drain Stripe revenue,” you have a problem to fix.
You can build tooling to automate this analysis, scanning your secrets management system to map credentials to agents and flagging scope mismatches. But even a manual spreadsheet review will surface risks that have been sitting quietly in your stack for months. A 10-minute review session often reveals that your “simple” Slack bot has database admin rights.
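The automation doesn't need to be fancy. Here's a minimal sketch of the comparison step, with hypothetical agent names and credential lists standing in for whatever your secrets manager returns:

```python
# A minimal version of the audit above: map each agent to the credentials
# it can reach, compare against what its job requires, flag mismatches.
# Agent names and credential lists are hypothetical.
AGENT_CREDENTIALS = {
    "slack-summarizer": ["SLACK_BOT_TOKEN", "STRIPE_SECRET_KEY", "DATABASE_URL"],
    "billing-reporter": ["STRIPE_SECRET_KEY"],
}
AGENT_NEEDS = {
    "slack-summarizer": ["SLACK_BOT_TOKEN"],
    "billing-reporter": ["STRIPE_SECRET_KEY"],
}

def blast_radius_report(creds, needs):
    """Return {agent: [over-scoped credentials]} for every mismatch."""
    report = {}
    for agent, held in creds.items():
        excess = sorted(set(held) - set(needs.get(agent, [])))
        if excess:
            report[agent] = excess
    return report

print(blast_radius_report(AGENT_CREDENTIALS, AGENT_NEEDS))
# slack-summarizer is flagged; billing-reporter is correctly scoped
```

Anything that lands in the report is your to-do list, ordered by how dangerous the excess credentials are.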
## The Fix: One Agent, One Scoped Key Set
The principle is simple. Each agent gets its own credentials, scoped to exactly what it needs, nothing more.
The Slack summarizer gets a read-only Slack token. That’s it. No database access. No Stripe. No GitHub. If it gets compromised, the attacker gets read access to your Slack workspace. That’s bad, but it’s a contained bad. You revoke the token, you’re done.
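You can enforce this expectation in the agent itself. Here's a startup guard, sketched: the summarizer refuses to boot if its token carries more than the read scopes it needs. The scope names follow Slack's convention (`channels:history`, `channels:read`), but the check is generic; how you obtain the granted scope list depends on your platform.

```python
# Startup guard for a read-only agent: fail fast if the token is
# over-scoped instead of quietly running with extra power.
ALLOWED_SCOPES = {"channels:history", "channels:read"}

def assert_scoped(granted_scopes):
    """Raise if the token carries any scope beyond the read-only allowlist."""
    extra = set(granted_scopes) - ALLOWED_SCOPES
    if extra:
        raise RuntimeError(f"Token over-scoped, refusing to start: {sorted(extra)}")

assert_scoped(["channels:history"])                   # fine, agent starts
# assert_scoped(["channels:history", "chat:write"])   # would raise
```

Failing fast here turns a silent over-grant into a loud deployment error, which is exactly where you want to catch it.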
Compare that to the shared .env scenario. Same compromise, but now you’re rotating Stripe keys, invalidating GitHub tokens, cycling your database password, and hoping the attacker didn’t already use the window they had.
Scoped credentials aren’t a new concept. The principle has been in security practice for decades under names like “least privilege” and “need to know.” What’s changed is that AI agents have made it much easier to accidentally violate it, because agents feel like small tools but they run with the same credential access as your full backend.
The mechanics of scoping vary by platform. For agents using the OpenAI API, you can create separate API keys per agent and restrict what each key can access via usage policies. For agents using your own backend services, you create separate service accounts with permission scopes that match the agent’s role. For secret management, you use groups: the Slack summarizer is in the “slack-readonly” group, which can only access the Slack read token. The database admin agent is in its own isolated group.
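The group model described above reduces to a simple rule: a secret is only retrievable if the requesting agent's group is authorized for it. This in-memory sketch shows the enforcement logic; a real secrets manager applies the same rule server-side, and all names and values here are illustrative:

```python
# In-memory sketch of group-based secret scoping. A real secrets manager
# enforces this server-side; names and values are placeholders.
GROUPS = {
    "slack-readonly": {"SLACK_BOT_TOKEN"},
    "db-admin": {"DATABASE_URL"},
}
SECRETS = {
    "SLACK_BOT_TOKEN": "xoxb-placeholder",
    "DATABASE_URL": "postgres://placeholder",
    "STRIPE_SECRET_KEY": "sk_live_placeholder",
}

def get_secret(agent_group, name):
    """Return a secret only if the agent's group is authorized for it."""
    if name not in GROUPS.get(agent_group, set()):
        raise PermissionError(f"{agent_group} may not read {name}")
    return SECRETS[name]

print(get_secret("slack-readonly", "SLACK_BOT_TOKEN"))  # allowed
# get_secret("slack-readonly", "STRIPE_SECRET_KEY")     # PermissionError
```

Note that the Stripe key isn't in any group the summarizer belongs to, so there is no code path by which it can be retrieved, accidentally or otherwise.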
API Stronghold handles this with credential groups. You create a group, assign only the keys that agent needs, and the server enforces it. When the agent requests a secret, it can only retrieve secrets its group is authorized to see. There’s no way for it to accidentally pick up a Stripe key that’s sitting next to its Slack token. See how this works end to end: apistronghold.com/blog/securing-openclaw-ai-agent-with-scoped-secrets
## This Is an Afternoon Fix, Not a Security Project
The reason most teams don’t scope their agent credentials is that it sounds like work. A proper secrets management overhaul, service account restructuring, all of it feels like a Q2 project.
It isn’t. Here’s what one agent actually takes:
| Step | What you do | Time |
|---|---|---|
| 1 | Create a new credential group or service account | 5 min |
| 2 | Generate scoped keys (e.g., read-only Slack token, no DB access) | 10 min |
| 3 | Update agent config to use new credentials only | 5 min |
| 4 | Test the agent still works with the scoped keys | 10 min |
| 5 | Remove old broad-access credentials from the agent’s config | 5 min |
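Steps 3 through 5 can be as mundane as writing a new env file and smoke-testing it. Here's a sketch with hypothetical file names and placeholder values; the point is that the agent's new config contains exactly one secret:

```python
# Steps 3-5 from the table, sketched: give the agent its own minimal
# env file instead of the shared backend one. Paths and values are
# hypothetical placeholders.
from pathlib import Path

scoped_env = Path("slack-summarizer.env")
scoped_env.write_text("SLACK_BOT_TOKEN=xoxb-scoped-readonly-placeholder\n")

# Step 4's smoke test: confirm only the scoped key is present
loaded = dict(line.split("=", 1) for line in scoped_env.read_text().splitlines())
assert set(loaded) == {"SLACK_BOT_TOKEN"}, "env file holds more than the agent needs"
print("Scoped env ready:", sorted(loaded))
```

Point the agent's runner at the new file, verify it still works, then delete its reference to the old shared `.env`.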
That’s 35 minutes for one agent, including testing. Teams with straightforward setups (one or two agents, standard secrets manager) routinely finish a single agent in under an hour. Teams with five or more agents running against a shared .env will want to block out a half-day and work through them in priority order.
Pick the highest-risk agent first: the one with the widest blast radius. For most stacks, that’s whatever agent touches your payment processor or production database. Scope it down. Then do the next one. Work through all of them in a single sitting.
The goal isn’t perfection. The goal is getting your worst-case blast radius from “full backend breach” to “contained, recoverable incident.”
If you’ve shipped an AI agent or plugin without auditing its credentials, you have a blast radius problem. Every day it runs with broad access is a day a prompt injection or supply chain attack can reach your full stack.
Pull up your list of agents now. For each one, ask what it can read, write, and delete. If the answer is “more than it needs,” fix that first.
API Stronghold’s blast radius report maps exactly what each agent can reach and rates the risk. The 14-day free trial includes the full report plus scoped credentials and automatic rotation. Start free →