Friday, April 17, 2026 · 9 min read

Anthropic's new cybersecurity model could get it back in the government's good graces

AI Agents Daily
Curated by AI Agents Daily team · Source: The Verge AI

According to The Verge's latest coverage, Anthropic's new cybersecurity-focused model, Claude Mythos Preview, may be doing something no public relations strategy could: quietly convincing a hostile White House that the company is on America's side. The Verge piece carried no named byline and ran in the outlet's AI vertical, drawing on additional reporting from Bloomberg, CNN, and Futurism.

Why This Matters

Anthropic was, until very recently, persona non grata in Washington. The Trump administration called it a national security menace and publicly mocked its leadership. Now, a single restricted AI model, one powerful enough to find vulnerabilities in every major operating system and browser, may be flipping that script entirely. When 12 of the most consequential technology companies on the planet, including Apple, Microsoft, Google, and Amazon Web Services, sign on to your cybersecurity initiative, it is very hard for any administration to keep calling you the enemy.


The Full Story

The background here is ugly. In November 2025, Anthropic publicly disclosed that a Chinese state-sponsored hacking group had used Claude's agentic capabilities to breach dozens of targets around the world. The attackers bypassed Anthropic's safety guardrails without much effort, simply by claiming to represent legitimate cybersecurity organizations. That incident was embarrassing, damaging, and handed critics a loaded weapon. The Trump administration seized on it: for roughly two months leading up to April 2026, the administration publicly characterized Anthropic as a "RADICAL LEFT, WOKE COMPANY" filled with "Leftwing nut jobs" and framed the company as a threat to national security. For an AI startup that depends on government contracts, investor confidence, and regulatory goodwill, that kind of sustained political attack is genuinely dangerous. Anthropic needed a way to demonstrate that it took national security seriously, not with words, but with action.

Enter Project Glasswing. Announced in April 2026, the initiative centers on Claude Mythos Preview, a frontier AI model that Anthropic deliberately chose not to release to the public. According to Anthropic's official announcement, Mythos Preview demonstrates coding and vulnerability-identification capabilities that exceed those of nearly every human software engineer working in security today. Anthropic executives, reportedly alarmed by what the model could do, made the call to restrict its access entirely to a vetted group of 12 partner organizations.

Those 12 partners are not small players. The list includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Each organization received restricted access to Mythos Preview specifically for defensive security purposes. The model has already uncovered thousands of high-severity vulnerabilities across critical software infrastructure, with findings spanning every major operating system and every major web browser. To be clear about what that means: the software that billions of people run every single day contains serious security holes that this AI found before human teams did.

The decision not to release Mythos Preview publicly is itself a policy statement. Anthropic's reasoning, as reported by Bloomberg, is that comparable capabilities will likely exist in the broader AI ecosystem within a relatively short timeframe, which means malicious actors could soon have access to similar tools. By moving first through a controlled, trusted partnership model, Anthropic is trying to get defenders armed before attackers catch up. Good Morning America also reported that security experts expressed serious alarm about what would happen if these capabilities ended up in the wrong hands, with experts framing the restricted-access model as a necessary precaution against potentially catastrophic outcomes.

The political upside for Anthropic is real. By framing Project Glasswing as an urgent national security initiative, and by pointing to its discoveries of vulnerabilities in critical infrastructure, the company is now positioned as a responsible steward of dangerous technology rather than a reckless ideological actor. The initiative's focus on defending against Chinese state-sponsored hackers, the same kind of group that exploited Claude in November 2025, directly addresses the administration's stated national security priorities.

Key Details

  • Project Glasswing launched in April 2026, roughly two months after sustained public criticism from the Trump administration began.
  • Claude Mythos Preview has discovered thousands of high-severity vulnerabilities in critical software systems.
  • Vulnerabilities were found across every major operating system and every major web browser.
  • Access to Mythos Preview is restricted to exactly 12 partner organizations.
  • Partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
  • The November 2025 breach by a Chinese state-sponsored hacking group exploiting Claude's agentic capabilities preceded and informed the Glasswing strategy.
  • Bloomberg reported that Anthropic executives grew alarmed enough by Mythos Preview's capabilities that they ruled out a public release entirely.

What's Next

The 12 partner organizations now face a potentially enormous backlog of vulnerability remediation work, since thousands of newly discovered security flaws across operating systems and browsers will need patches, coordinated disclosures, and fixes before the findings can be made public in any form. Watch for the first coordinated vulnerability disclosures to begin emerging from Glasswing partners in the months ahead, which will serve as the real test of whether the restricted-access model delivers on its defensive promise. Anthropic's relationship with the Trump administration will likely hinge on whether the administration interprets Project Glasswing as genuine national security cooperation or as a calculated political maneuver.

How This Compares

Compare this to OpenAI's approach with GPT-4 and its subsequent releases, where the company has generally favored broad commercial deployment with post-release safety monitoring rather than pre-release restriction. Anthropic is making the opposite bet with Mythos Preview, choosing controlled access over commercial scale. That is a meaningful philosophical difference, and it puts Anthropic closer in spirit to how classified government research works than how most consumer AI products are built.

The involvement of CrowdStrike and Palo Alto Networks is particularly telling. Both companies were already significant players in AI-assisted threat detection before Glasswing. CrowdStrike's Charlotte AI product and Palo Alto's Precision AI platform both use machine learning to accelerate threat identification, but neither has access to a model that autonomously discovers novel zero-day vulnerabilities at the scale Mythos Preview reportedly achieves. Their participation in Glasswing suggests they see this as a capability leap, not just an incremental improvement to existing tools.

It is also worth comparing this to Microsoft's MSRC AI Security Research efforts, which have used AI-assisted code analysis since at least 2023. Microsoft found success using AI to triage and reproduce known vulnerability classes faster, but the scope and autonomy of Mythos Preview, as described in Anthropic's announcement, goes considerably further. The fact that Microsoft is now a Glasswing partner rather than a competitor in this specific initiative suggests even Microsoft concluded that Anthropic had built something it could not match internally, at least not yet. The developments coming out of these 12 organizations will be worth following closely over the next year.

FAQ

Q: What is Claude Mythos Preview and why is it not public? A: Claude Mythos Preview is a new AI model from Anthropic that can find serious security vulnerabilities in software at a level exceeding all but the most elite human security researchers. Anthropic chose not to release it publicly because its capabilities are so powerful that making it widely available could give malicious actors, including state-sponsored hackers, a dangerous offensive tool before defenders are ready.

Q: Who are the 12 partners in Project Glasswing? A: The 12 partners are Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Each organization received restricted access to Mythos Preview exclusively for defensive cybersecurity purposes, meaning they are using it to find and fix vulnerabilities rather than exploit them.

Q: Why did the Trump administration attack Anthropic as a national security threat? A: The tension escalated significantly after November 2025, when Anthropic disclosed that a Chinese state-sponsored hacking group had exploited Claude's agentic capabilities to breach dozens of targets globally. That incident raised serious questions about Anthropic's safety controls and gave critics, including the Trump administration, a concrete example to point to when arguing the company was a liability rather than an asset to national security.

Project Glasswing represents one of the most consequential decisions any AI company has made about managing dangerous capabilities, and the outcome of this experiment, whether restricted deployment actually outpaces adversarial proliferation, will shape how the entire industry handles similar situations going forward. If it works, it becomes a template. If it fails, the consequences could be severe.

Our Take

This story matters because it signals a shift in how AI agents are being adopted across the industry. We are tracking this development closely and will report on follow-up impacts as they emerge.
