Vercel – the company behind Next.js and the infrastructure powering a disproportionate share of modern web products – was reportedly breached, with a hacker demanding $2 million for the stolen data. The haul reportedly includes the kind of material that doesn't recover from disclosure: internal docs, customer metadata, source context.
The detail that matters is not the ransom. It is the entry point.
The breach started because an employee granted an AI tool unrestricted access to Google Workspace. Not phishing. Not a stolen password. Not an unpatched server. A consent screen. A checkbox. A tool that was supposed to help with productivity was given the permissions of a senior admin, and from there the blast radius did the rest.
The new compromise path is a single OAuth screen
For a decade, the security industry has trained everyone to fear the obvious attack surfaces: passwords, VPNs, unpatched software, social engineering of IT. All of that still matters. But the attackers have already moved.
When an employee installs an AI tool and clicks through the Workspace consent prompt, the typical scope request looks harmless at a glance – “read and manage Drive files,” “read emails,” “access calendar.” Each line, individually, sounds like a feature. Taken together, that is nearly everything Vercel’s employees use to run the company.
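To make the scope-bundle problem concrete, here is a minimal sketch of reviewing a consent request against a least-privilege allowlist. The scope URIs are real Google OAuth scopes; the allowlist policy itself is a hypothetical example, not any vendor's default.

```python
# Hypothetical policy: per-file Drive access and read-only calendar are
# acceptable; workspace-wide scopes are not.
ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/drive.file",       # only files the app created or opened
    "https://www.googleapis.com/auth/calendar.readonly",
}

def review_consent(requested: set[str]) -> list[str]:
    """Return the requested scopes that fall outside the allowlist."""
    return sorted(s for s in requested if s not in ALLOWED_SCOPES)

# A typical "productivity tool" consent request:
violations = review_consent({
    "https://www.googleapis.com/auth/drive",     # read and manage ALL Drive files
    "https://mail.google.com/",                  # full Gmail access
    "https://www.googleapis.com/auth/calendar.readonly",
})
print(violations)
```

Each scope on its own sounds like a feature; the check only works if someone runs it against the bundle before the checkbox gets clicked.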
That grant doesn’t require MFA each time it’s used. It doesn’t show up on a login anomaly dashboard the way a new human login does. It doesn’t expire unless the workspace admin explicitly kills it. And, crucially, it is now a high-privilege identity whose credentials live inside a third-party product that was not designed with an enterprise threat model.
In other words: the AI integration is the user, and the user has admin.
Why AI tools specifically break the threat model
Security programs are built around humans. They log human behavior, detect deviations from human patterns, require MFA for sensitive human actions, and use rate limits tuned to human reaction times.
AI integrations are none of those things. They authenticate once, act continuously, produce machine-speed traffic, and appear to the log system as a single, consistent “user” performing a stream of legitimate API calls. When an attacker compromises that integration – through a vulnerability in the tool itself, a supply-chain incident, or a leaked service credential – every subsequent action blends into normal operation.
This is not a theoretical threat model anymore. It is Vercel, live, this week.
The uncomfortable part: most companies already have this problem
If you are running an engineering org right now, the question you need to answer in the next 48 hours is not "could this happen to us?" It is "how many AI tools already have workspace-wide OAuth grants, and who approved them?"
At most mid-size tech companies, the honest answers look something like this:
Between three and a dozen AI tools have broad Workspace scopes. At least one was installed by someone who is no longer at the company. Most were approved in a Slack thread by a manager who had no security context. None appear in the SSO audit the way a human account would. Nobody has a lifecycle policy for non-human identities that reflects how AI tools actually behave.
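Building that inventory is the tractable part: Google's Admin SDK Directory API exposes every third-party OAuth grant per user via `tokens().list`. A sketch, with the flagging logic kept pure so it runs offline; the broad-scope list and `alice@example.com` are placeholder assumptions.

```python
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",   # all Drive files
    "https://mail.google.com/",                # full Gmail access
}

def broad_grants(tokens: list[dict]) -> list[dict]:
    """Keep only grants whose scopes include a workspace-wide scope."""
    return [t for t in tokens if BROAD_SCOPES & set(t.get("scopes", []))]

# With google-api-python-client and domain admin credentials, the per-user
# grant list comes from the Admin SDK Directory API:
#
#   service = build("admin", "directory_v1", credentials=creds)
#   tokens = service.tokens().list(userKey="alice@example.com").execute().get("items", [])
#
# Synthetic stand-in for that response:
tokens = [
    {"displayText": "AI Notetaker",
     "scopes": ["https://www.googleapis.com/auth/drive", "https://mail.google.com/"]},
    {"displayText": "Calendar Sync",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
for grant in broad_grants(tokens):
    print(grant["displayText"], grant["scopes"])
```

Run across every user in the domain, this is the list the Slack-thread approvals never produced.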
That is the default state of AI adoption in 2026. Vercel just made the bill for that default visible.
What a real response looks like
The fixes are not mysterious. They are boring, and the reason they haven’t been implemented yet is that AI adoption has been moving faster than AI governance – deliberately, because slowing it down is politically expensive.
Scope-only access, always. An AI tool should be granted a single shared drive, a single mailbox, a single calendar – not the entire workspace. If the tool refuses to function under a limited scope, that is a signal it is the wrong tool.
Non-human identity lifecycle. Every AI integration should be treated as a service account with an owner, a rotation schedule, a review cadence and an expiration date. “Set and forget” consent grants are the bug.
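One way to encode that lifecycle in a registry, as a sketch: the field names and the 90-day/365-day windows are illustrative choices, not any particular product's schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIIntegration:
    """A non-human identity record: every grant has an owner and a clock."""
    name: str
    owner: str                                      # a current employee, re-checked each review
    granted: date
    review_every: timedelta = timedelta(days=90)    # quarterly review cadence
    lifetime: timedelta = timedelta(days=365)       # hard expiry, no silent auto-renew

    def needs_review(self, today: date) -> bool:
        return today - self.granted >= self.review_every

    def expired(self, today: date) -> bool:
        return today - self.granted >= self.lifetime

bot = AIIntegration("drive-summarizer", "alice@example.com", date(2026, 1, 1))
print(bot.needs_review(date(2026, 4, 22)))  # past the 90-day review window: True
```

The point is not the data structure; it is that "set and forget" becomes impossible once every grant carries an owner and two deadlines.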
Separate logging for agents. Split AI integration logs out from human logs and monitor them against a baseline specific to machine-speed, machine-shape behavior. A sudden Drive enumeration across 50,000 documents is fine for a human admin doing a migration, and catastrophic for a marketing chatbot.
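The detection itself can be crude and still useful. A sketch over synthetic log events, where the 100-calls-per-minute threshold is an arbitrary illustration rather than a recommendation:

```python
from collections import Counter

def machine_speed_windows(events: list[tuple[str, int]],
                          threshold: int = 100) -> set[tuple[str, int]]:
    """Bucket (client, unix-timestamp) events into per-minute windows and
    flag any window whose call volume no human could plausibly generate."""
    per_window = Counter((client, ts // 60) for client, ts in events)
    return {window for window, count in per_window.items() if count > threshold}

# Synthetic log: a chatbot enumerating Drive at machine speed,
# alongside a human clicking around at human speed.
events = [("marketing-chatbot", t % 60) for t in range(300)]   # 300 calls in one minute
events += [("human-admin", t * 30) for t in range(10)]         # ~2 calls per minute
print(machine_speed_windows(events))
```

A real deployment would baseline per integration rather than use one global threshold, but even this blunt version separates the migration from the exfiltration.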
Quarterly AI access review. Legal, security and the tool owner walk through the list. Tools nobody can defend get revoked. Tools with over-broad scopes get downgraded or replaced.
None of this is exotic. It is just the stuff every company will now be retrofitting, post-incident, at the speed of compliance rather than at the speed of risk.
The pattern the industry is about to learn the hard way
Every major security paradigm shift of the last two decades has been marked by a single widely publicized breach that forced the conversation. Target did that for third-party vendor access. Okta did it for identity provider compromise. SolarWinds did it for software supply chain.
Vercel, whether it wanted to or not, may now be that example for AI integration risk. The next twelve months of enterprise security roadmaps will include a line item that did not exist on the last one: AI integration governance. A year from now, every vendor RFP will have a section for it. Cyber insurers will underwrite around it. Regulators will reference it.
One employee. One consent screen. One checkbox labeled “allow access to your Workspace.” That is all it took.