The Vercel Breach Explains Why Identity Attack Path Management Can’t Wait

Read Time

8 mins

Published

Apr 21, 2026

How a Compromised AI Tool Became a Supply Chain Attack Path — And Why IAM Alone Can’t Stop It

TL;DR

  • Vercel was breached after an employee connected an AI tool (Context.ai) to their corporate Google Workspace via OAuth. When Context.ai was compromised, that trust relationship became a direct attack path into Vercel’s identity infrastructure.
  • This is a structural identity risk problem, not just an OAuth permissions problem. The Clean Source Principle explains why: the security of any resource is only as strong as the security of every resource that has control over it.
  • Every AI tool granted OAuth access to a corporate identity system is a non-human identity (NHI) with delegated rights. If that tool is compromised, attackers inherit those rights — regardless of how it was provisioned.
  • AI tool adoption is creating new identity attack paths faster than traditional IAM governance can track. Enterprises must map and eliminate those paths before adversaries exploit them.
  • The organizations that fare best will not be those with the most AI in their SOC. They will be those who have already eliminated the attack paths that allow a single compromised token to gain control of critical assets.

The news

News of a major breach at Vercel emerged this week. The industry’s early commentary is already framing it at the wrong level of abstraction.

Vercel, the cloud platform behind Next.js, confirmed that an attacker compromised a third-party AI tool, Context.ai, and used an OAuth token to access a Vercel employee’s enterprise Google Workspace account. From there, the attacker reached additional Vercel internal environments. The group claiming responsibility, operating under the ShinyHunters name, is reportedly asking $2 million for the stolen data and has described it as the foundation for “the largest supply chain attack ever.” Whether that claim is credible or not, the attack chain itself deserves analysis.

The commentary already circulating frames this as a permissions problem: the employee granted “Allow All” OAuth access to a consumer AI tool using a corporate account, and that overly broad permission set enabled the breach. The lesson, in this framing, is least privilege: AI tools should request only what they need, and employees should not grant sweeping access to third-party applications.

That framing is not wrong. It is just not the right level of analysis.

What actually happened

The attack chain matters more than any single link in it.

An infostealer, reportedly Lumma Stealer, compromised a Context.ai employee’s endpoint through a Roblox cheat download in February 2026. That infection appears to have led to the exfiltration of OAuth tokens from Context.ai’s systems. One of those tokens belonged to a Vercel employee who had signed up for Context.ai’s AI Office Suite using their corporate Vercel account, granting it “Allow All” OAuth permissions against Vercel’s enterprise Google Workspace.

The attacker used that token to access the Vercel employee’s Workspace account, then moved laterally into other Vercel environments. Environment variables, including API keys, tokens, and database credentials stored without a sensitive designation, were all potentially exposed. Vercel’s CEO described the attacker as moving with “surprising velocity and in-depth understanding of Vercel’s systems.”

The Clean Source Principle, applied

The security of any resource is no stronger than the security of every other resource that has control over it.

This is the Clean Source Principle, and it is why domain controllers receive so much attention in Active Directory environments; everything that can modify a domain controller can, by extension, modify anything the domain controls. The control relationship creates a dependency that, in turn, creates an attack path.

The Vercel breach is the same principle expressed in a different identity layer.

Read more about the Clean Source Principle

The moment the Vercel employee granted Context.ai “Allow All” OAuth permissions against Vercel’s Google Workspace, Context.ai’s infrastructure became a clean source dependency for Vercel’s identity environment. The security of Vercel’s Workspace was now partly a function of Context.ai’s AWS infrastructure security and the endpoint security of Context.ai’s employees.

That dependency was not modeled. It was not created through an IT provisioning workflow or a vendor risk assessment. It was created by an employee clicking through an OAuth consent screen. No identity team mapped what it meant for Vercel’s attack graph. No one identified the trust edge it introduced. The path was open before anyone knew to look for it.
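The mechanics are easiest to see as a directed graph of control relationships. The sketch below is purely illustrative — the node names ("vendor_infra", "internal_envs", and so on) are hypothetical labels, not Vercel's actual topology — but it shows how a single OAuth consent adds one edge that opens a path which did not previously exist:

```python
from collections import deque

def has_path(edges, src, dst):
    """Breadth-first search: can src reach dst by following control edges?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Control edges before the OAuth consent (hypothetical topology).
edges = [
    ("vendor_endpoint", "vendor_infra"),      # infostealer on a vendor laptop
    ("employee_workspace", "internal_envs"),  # SSO and stored credentials
]
print(has_path(edges, "vendor_endpoint", "internal_envs"))  # False

# The "Allow All" consent adds exactly one trust edge...
edges.append(("vendor_infra", "employee_workspace"))

# ...and a compromised vendor endpoint now reaches internal environments.
print(has_path(edges, "vendor_endpoint", "internal_envs"))  # True
```

Nothing about the enterprise's own controls changed between the two queries; the new edge alone is what made the target reachable. That is the Clean Source dependency in graph form.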

Non-human identities are outpacing identity governance

Context.ai was not simply a software tool in this scenario. It was a non-human identity, an agent with delegated rights to act inside Vercel’s enterprise identity infrastructure, backed by infrastructure Vercel neither controlled nor monitored.

Every AI tool employees connect to corporate accounts is, in this structural sense, a non-human identity. It holds a token. It carries permissions. It operates inside trust relationships that extend into the enterprise environment. And the security of its underlying infrastructure is entirely outside the enterprise’s control.

The scale of this problem is already significant and accelerating. Enterprises today manage millions of NHIs: service accounts, automation workflows, workload identities, and AI agents. The vast majority carry excessive privileges. A substantial portion of recent notable breaches has been traced to compromised NHIs serving as the actual attack vector, not merely the initial foothold.

The Vercel case fits this pattern precisely. The attacker did not compromise a human. They compromised the infrastructure of a third-party tool and then walked the trust relationship that a human had extended to that tool. The NHI was a vital part of the broader attack path.

The permissions framing is necessary but not sufficient

Least privilege matters. If the Vercel employee had granted scoped permissions rather than “Allow All,” the blast radius of the Context.ai compromise would have been smaller. That is true, and organizations should act on it. But least privilege governance does not tell you which AI tools employees have connected to corporate identity systems. It does not model what an adversary can reach from any given OAuth token. It does not identify the highest-risk trust dependencies in an environment, the ones where a third-party tool’s compromise creates a direct path to a critical asset.
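The blast-radius point is easy to make concrete. In the sketch below, the scope names and the resources behind them are hypothetical stand-ins, not real Google OAuth scope strings; the point is only that a stolen token inherits the union of everything its granted scopes expose:

```python
# Hypothetical mapping from OAuth scopes to the resources they expose.
SCOPE_RESOURCES = {
    "calendar.readonly": {"calendar"},
    "drive.readonly":    {"docs", "spreadsheets"},
    "gmail.modify":      {"email"},
    "admin.directory":   {"user_accounts", "group_membership"},
}

def blast_radius(granted_scopes):
    """Union of resources an attacker inherits if the token is stolen."""
    exposed = set()
    for scope in granted_scopes:
        exposed |= SCOPE_RESOURCES.get(scope, set())
    return exposed

scoped = blast_radius({"calendar.readonly"})
allow_all = blast_radius(SCOPE_RESOURCES.keys())

print(sorted(scoped))     # ['calendar']
print(sorted(allow_all))  # everything the account can touch
```

Scoping the grant shrinks what a stolen token yields, which is exactly why least privilege helps. But nothing in this calculation tells you which tokens exist in your environment in the first place, or where their resources sit on a path to something critical.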

What most enterprises have today is a governance model designed for human identities, extended incrementally to cover some NHIs, operating in environments where AI tool adoption by individual employees is outpacing any review process. The attack surface is not just what IT has provisioned. It is every trust edge created outside IT’s visibility.

Identity and Access Management answers the question “who has intended access?” Identity Attack Path Management answers a different question: “What happens when that access is abused, and what can an adversary reach from it?” In an environment where AI agents are forming trust relationships faster than governance can track, the relevant question is not whether a tool was approved. It is what attack paths the tool’s compromise enables, and whether any of those paths reach something critical.
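The gap between those two questions can be shown in a toy model. Every name below is illustrative — a hypothetical identity graph, not any real environment — but it captures the distinction: intended access is what an identity was directly granted, while attack reach is everything an adversary can transitively get to from it:

```python
from collections import deque

# Hypothetical identity graph: "a -> b" means control of a yields access to b.
control_edges = {
    "ai_tool_token":      ["employee_workspace"],
    "employee_workspace": ["email", "drive", "internal_wiki"],
    "drive":              ["env_var_exports"],   # credentials saved in docs
    "env_var_exports":    ["production_db"],
}

def intended_access(identity):
    """The IAM question: what was this identity directly granted?"""
    return set(control_edges.get(identity, []))

def attack_reach(identity):
    """The attack-path question: what can an adversary transitively reach?"""
    seen, queue = set(), deque([identity])
    while queue:
        for nxt in control_edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(intended_access("ai_tool_token"))                   # {'employee_workspace'}
print("production_db" in attack_reach("ai_tool_token"))   # True
```

The token was only ever “granted” access to one account, yet the adversary holding it can reach a production database four hops away. Governing the first answer without computing the second is governing the wrong thing.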

Same attack paths, higher velocity

Vercel described the attacker as moving with “surprising velocity and in-depth understanding” of their systems.

This is consistent with what adversaries are capable of when they gain a foothold in a dense identity environment with access to the underlying graph. Modern enterprise identity, including users, service accounts, AI agents, OAuth tokens, and cross-platform trust relationships, contains enough relational information that a sophisticated adversary with a single compromised credential can identify the fastest path to a high-value asset with extraordinary speed. That speed is compressing further as adversaries incorporate AI into their operational workflow.

This is the core argument for proactive attack path management over detection-first strategies. Detection and response are predicated on having time to intervene between initial access and impact. When adversaries move at machine speed through a pre-existing identity graph, that window collapses. By the time the detection fires, the path has already been walked.

The organizations that fare best will not be those with the most AI in their SOC. They will be those who have already eliminated the attack paths that allow a single compromised token to gain control of critical assets.

What defenders should take from this

The Vercel breach will generate significant commentary about AI tool risk and OAuth governance. Most of it will focus on permissions hygiene and third-party vendor assessments. Both are necessary; neither addresses the structural question.

The more important question is: what does the identity attack graph look like today, including the trust edges created by every AI tool employees have connected to corporate systems? Where do those edges lead? What attack paths exist right now that an adversary, moving at machine speed through a compromised token, could follow to reach something critical?

If an organization cannot answer those questions, it is governing intended access. It is not governing what happens when that access is abused.

That is the gap Identity Attack Path Management is designed to close. The Vercel breach illustrates precisely why closing it cannot wait for the next incident.

Details of the breach are still emerging. The attack chain above is drawn from publicly reported information and will be updated if material facts change.

Jared Atkinson

Chief Technology Officer

As Chief Technology Officer at SpecterOps, Jared Atkinson leads the research and development organization with a focus on understanding real-world adversary tradecraft. His team is responsible for expanding the BloodHound graph across platforms and identities, translating attacker behavior into actionable attack-path insights. He drives the creation of new BloodHound use cases that help defenders reduce identity risk by constraining attacker movement.
