Google Confirms First AI-Assisted Zero-Day Exploit
Google analysts identified a criminal operation using AI to discover a vulnerability and build an exploit—marking the first confirmed case of AI-assisted zero-day development, with North Korean and Chinese state groups reportedly experimenting with the technique.
May 11th 2026 · United States
Google announced Monday that it disrupted a criminal group's attempt to use artificial intelligence to discover and weaponize a zero-day vulnerability, marking what the company says is the first confirmed real-world case of AI-assisted zero-day exploit development. The attack targeted a popular web-based administration tool and exploited a flaw that bypassed two-factor authentication protections.

Google identified evidence that the attackers used a large language model both to identify the vulnerability and to help craft the exploit, though the company determined it was not Google's own Gemini or Anthropic's Claude Mythos. Google worked with the unnamed vendor to patch the issue before the campaign could launch, preventing any damage.

The vulnerability stemmed from developers hard-coding a trust exception into the authentication flow, creating a hole that allowed attackers to sidestep 2FA checks. Google analysts noted the exploit code contained telltale signs of AI generation, including educational docstrings, a hallucinated CVSS security score, and a polished coding structure consistent with LLM training data.

John Hultquist, chief analyst at Google's Threat Intelligence Group, said the incident confirms what cybersecurity experts have long feared: that malicious hackers are now actively leveraging AI to accelerate their ability to find and exploit security flaws. "The reality is that it's already begun," Hultquist said, adding that state-linked groups from North Korea and China have also been experimenting with similar AI-powered techniques for vulnerability hunting.

The discovery comes amid heightened concerns following Anthropic's announcement last month of its Mythos model, which the company said was so capable at hacking and cybersecurity work that it could only be released to a select group of trusted organizations. Anthropic claimed Mythos had discovered zero-day vulnerabilities across major operating systems and web browsers.
Meanwhile, the Trump administration has sent mixed signals about government oversight of AI, recently signing agreements with Google, Microsoft, and xAI to evaluate powerful AI models before public release, though the announcement later disappeared from the Commerce Department website. Security experts warn that while AI could eventually strengthen defenses, there is likely to be a transitional period where cybersecurity risks increase significantly as both attackers and defenders adopt the technology.