Anthropic Opposes the Extreme AI Liability Bill That OpenAI Backed
Anthropic has publicly opposed Illinois bill SB 3444, a proposed law backed by OpenAI that would shield AI companies from liability in disasters involving 100 or more deaths or over $1 billion in property damage. The split between two of America's most prominent AI labs reveals a deeper disagreement over who should be held responsible when frontier AI systems cause catastrophic harm.
Wired's reporting on this story, published in April 2026, reveals that Anthropic has been working behind the scenes to lobby Illinois state senator Bill Cunningham, the bill's sponsor, to either gut or kill SB 3444. The story marks one of the clearest public breaks between OpenAI and Anthropic on a concrete policy question, and the stakes go well beyond the borders of Illinois.
Why This Matters
This is not a minor policy squabble between two competing companies. SB 3444 would let an AI lab walk away clean if a bad actor used its model to kill 99 people, because the threshold for liability begins at 100 deaths. That is not a safety framework. That is a liability loophole large enough to drive a truck through. Anthropic opposing OpenAI on this bill is significant because it signals that at least one frontier AI developer, a company whose models clear the bill's $100 million training-cost threshold, believes the industry needs real accountability, not a transparency report posted to a website as a get-out-of-jail substitute.
The Full Story
Illinois Senate Bill 3444 would establish that frontier AI developers, defined as companies that trained models at a computational cost exceeding $100 million, cannot be held liable for catastrophic harm caused by their systems, provided they did not intentionally or recklessly cause that harm and have published a safety and transparency report on their website. The bill defines catastrophic harm as events resulting in death or serious injury to 100 or more people, or at least $1 billion in property damage. Both OpenAI and Anthropic would fall under this definition, given the scale of their model training costs.
OpenAI has come out in support of the bill, arguing that it reduces the risk of serious harm from frontier AI systems while keeping the technology accessible to individuals and businesses across Illinois. OpenAI spokesperson Liz Bourgeois framed the company's position as part of a broader multi-state effort to build what OpenAI calls a "harmonized" approach to AI liability, one the company hopes will eventually inform a national federal framework. This is a notable strategic shift for OpenAI, which until recently had mostly played defense in legislative battles, opposing bills that expanded liability rather than actively drafting or championing ones that limited it.

Anthropic sees things very differently. Cesar Fernandez, Anthropic's head of US state and local government relations, said in a statement that the bill functions as a "get-out-of-jail-free card against all liability" and that genuine transparency legislation must pair disclosure requirements with real accountability.

Anthropic has not simply issued a press statement and walked away. According to people familiar with the matter, the company has been actively lobbying Cunningham and other Illinois lawmakers, pushing for either significant changes to the bill or its removal from consideration entirely. An Anthropic spokesperson confirmed to Wired that the company has held what it described as promising conversations with Cunningham about using the bill as a foundation for future, more balanced AI legislation.
The response from Illinois Governor JB Pritzker's office suggests that even state executives are skeptical of the bill's current form. A spokesperson for Pritzker said the governor does not believe "big tech companies should ever be given a full shield that evades responsibilities they should have to protect the public interest." That statement does not constitute a veto threat, but it does signal that SB 3444 faces political headwinds beyond just Anthropic's lobbying.
AI policy experts quoted in coverage of the bill have characterized SB 3444 as an unusually extreme measure, even by the standards of industry-friendly legislation that OpenAI has previously supported. The core concern is not just about the liability threshold but about what the bill uses as a substitute for accountability. Publishing a safety report on a company's website does not guarantee that the report is rigorous, independently verified, or enforceable. Critics have pointed out that this creates a moral hazard, where developers could feel insulated from legal consequences for failures that fall below the death and damage thresholds.
Senator Cunningham's office did not respond to Wired's request for comment, which makes it difficult to gauge how seriously the sponsor is taking Anthropic's opposition. But with the governor's office skeptical and one of the two companies the bill is effectively designed to protect now campaigning against it, the bill's path to passage looks difficult.
Key Details
- SB 3444 was introduced in Illinois by state senator Bill Cunningham.
- The bill would shield frontier AI developers from liability for catastrophic harm, defined as death or serious injury to 100 or more people or at least $1 billion in property damage, provided they published a safety report and did not act intentionally or recklessly.
- A "frontier model" under the bill is defined as any AI model trained at a computational cost exceeding $100 million.
- Anthropic's head of US state and local government relations, Cesar Fernandez, issued a named public statement opposing the bill.
- OpenAI spokesperson Liz Bourgeois confirmed the company's support for SB 3444 in a statement to Wired.
- Governor JB Pritzker's office said Pritzker opposes full liability shields for big tech companies.
- Anthropic was founded in 2021 by Dario Amodei, Daniela Amodei, and other former OpenAI employees.
- Wired published its reporting on this story in April 2026.
What's Next
Illinois's legislative session will determine whether Cunningham moves the bill forward as written, incorporates Anthropic's proposed changes, or lets it stall entirely. OpenAI's parallel lobbying in New York and California suggests it will push similar liability-limiting frameworks in other states regardless of what happens in Illinois, which means Anthropic may face more of these fights, not fewer. Anyone tracking AI policy should watch whether other major AI developers, including Google DeepMind and Meta, eventually take public positions on the SB 3444 framework.
How This Compares
This fight maps closely onto the broader federal debate over Section 230, the law that largely shields online platforms from liability for user-generated content. Critics argued for years that Section 230 gave tech companies a free pass, and the AI industry now appears to be pursuing a similar structural protection for model developers. OpenAI's strategy echoes the playbook social media companies have run for years: lock in favorable liability language before disasters make legislators angry enough to write punitive versions instead.
Compare this to the European Union's AI Act, which takes a risk-tiered approach to regulation. High-risk AI applications face stricter obligations under that framework, but developers are not handed a blanket exemption based on publishing a transparency report. Anthropic's position on SB 3444 is more consistent with the EU's approach than with what OpenAI is pushing in Illinois. That is not a coincidence. Anthropic has consistently tried to occupy the "responsible AI" positioning in this market, and opposing a liability shield for mass-casualty events is exactly the kind of stance that reinforces that brand.
It is also worth comparing SB 3444 to California's SB 1047, which was vetoed by Governor Gavin Newsom in 2024. That bill would have imposed significant safety obligations on large AI developers, and the industry, including OpenAI, largely opposed it. Now OpenAI is backing legislation that goes in the opposite direction. The symmetry here is stark: the company fought a bill that would have added accountability, and it is now championing a bill that removes it.
FAQ
Q: What is Illinois bill SB 3444 and what would it do? A: SB 3444 is a proposed Illinois law that would prevent people from suing AI companies for disasters caused by their AI models, as long as the companies published a safety report online and did not intentionally or recklessly cause the harm. The shield would cover even catastrophic events, defined as death or serious injury to 100 or more people or at least $1 billion in property damage.
Q: Why is Anthropic opposing a bill that would also protect Anthropic? A: Anthropic argues that the bill sets a dangerous precedent by treating a published transparency report as a sufficient substitute for real legal accountability. The company believes that if AI systems cause serious harm, developers should face meaningful consequences, and that weakening liability standards now could make the industry less careful about safety over time.
Q: How likely is SB 3444 to actually become law? A: AI policy experts quoted in Wired's coverage describe the bill's chances as remote. The governor's office has publicly signaled skepticism, Anthropic is actively lobbying against it, and the bill's sponsor has not publicly responded to the opposition. The bill may end up being revised substantially before any further action is taken.
The public split between OpenAI and Anthropic over SB 3444 is unlikely to stay confined to Illinois. As both companies scale their state-level lobbying operations, expect more clashes over liability frameworks that will shape what AI companies are actually responsible for when their systems cause harm. Subscribe to the AI Agents Daily newsletter for daily updates on AI agents, tools, and automation.
Get stories like this daily
Free briefing. Curated from 50+ sources. 5-minute read every morning.