Saturday, April 11, 2026 · 8 min read

How the Internet Broke Everyone's Bullshit Detectors

AI Agents Daily
Curated by AI Agents Daily team · Source: Wired AI

The systems that help people tell real content from fake are breaking down, and a new Wired investigation explains exactly why. AI-generated synthetic media now spreads faster than any fact-checker can respond, bots drive more than half of all internet traffic, and even official government channels have begun mimicking the cryptic style of the propaganda they are supposed to counter.

According to Wired's latest coverage, the collapse of online verification is not a slow leak but a structural failure, one being driven simultaneously by hostile state-linked media operations, algorithmic systems that reward speed over accuracy, and a White House that has quietly adopted the same cryptic aesthetic as the propaganda it supposedly opposes.

Why This Matters

This is not a niche problem for OSINT researchers. When bots account for 51 percent of internet traffic and scale eight times faster than human traffic, the entire information ecosystem is operating on compromised infrastructure. An Iran-linked outlet called Explosive News can produce a two-minute synthetic video in 24 hours, which means bad content almost always gets a full day's head start on the truth. The verification industry, which relies on small teams doing manual research, was never built to fight a volume war at machine speed.


The Full Story

The current crisis has two faces, one foreign and one domestic, and they are feeding each other. On the foreign side, outlets like Explosive News, which has documented links to Iran, have turned synthetic media production into an assembly line. Their format of choice is deliberately absurd: Lego-style animated clips alleging war crimes, designed to be immediately shareable and just ambiguous enough to escape clear categorization as fiction. These clips do not need to hold up to scrutiny for weeks. They need to travel fast and lodge in memory before a correction arrives.

That 24-hour production cycle is not incidental. It is the entire strategy. A synthetic piece that circulates for a single day before being flagged can reach millions of users, generate thousands of reshares, and anchor itself in the conversation. The correction, if it comes at all, will be quieter, slower, and far less viral. The algorithm is not neutral in this fight. It actively rewards the content that moves fastest, regardless of whether it is real.

The domestic side of the story is, if anything, more unsettling. Last month, the White House posted two "launching soon" teaser videos that were deliberately vague, stripped of context, and designed to generate speculation. Online investigators and open source researchers began breaking them down to determine whether they were authentic communications or something else entirely. The White House then pulled the videos before anyone had fully decoded them. The reveal was mundane: a promotional push for the official White House app. But the episode revealed something more significant. Even official government communications have absorbed the grammar of leaks, cryptic drops, and platform-native mystery. When the president's official account behaves like a meme page, the question "is this real?" becomes the only reasonable response to anything.

OSINT journalist Maryam Ishani, who covers the conflict, described the structural problem clearly. "We're perpetually catching up to someone pressing repost without a second thought," she said. "The algorithm prioritizes that reflex, and our information is always going to be one step behind." That gap between the repost and the correction is where the damage gets done.

The technical backdrop makes all of this worse. The 2026 State of AI Traffic and Cyberthreat Benchmark Report found that automated bot traffic now represents 51 percent of all internet activity, scaling at eight times the rate of human traffic. These systems are not just distributing content passively. They are actively prioritizing low-quality, high-engagement material, which means synthetic and sensational content gets amplified structurally, not just incidentally. The playing field is not level.

Satellite imagery, one of the core tools that OSINT researchers use to verify geographic claims about conflict zones and infrastructure damage, faces its own access crisis. Governments and private companies have tightened restrictions on what resolution is publicly available and when. This creates an asymmetry that favors bad actors: creating synthetic content has almost no technical barriers, while verifying it against authentic reference data keeps getting harder.

Key Details

  • Explosive News, an Iran-linked outlet, can produce a two-minute synthetic Lego-format video in approximately 24 hours.
  • Automated bots now account for 51 percent of all internet traffic, per the 2026 State of AI Traffic and Cyberthreat Benchmark Report.
  • Bot traffic is scaling eight times faster than human traffic, according to the same report.
  • The White House posted and then removed two cryptic teaser videos last month before online investigators could fully decode them.
  • The videos turned out to promote the official White House app, not any substantive policy announcement.
  • OSINT journalist Maryam Ishani is quoted directly on the algorithmic disadvantage facing verification researchers.
  • Manisha Ganguly serves as visual forensics lead at The Guardian and is an OSINT specialist investigating conflict-related synthetic media.

What's Next

Verification tools will keep developing, but detection technology is always reactive by definition, arriving after new generation techniques are already deployed in the wild. The more urgent pressure point is satellite data access, where policy decisions by governments and commercial operators in the next 12 to 18 months will determine whether independent researchers can hold any informational ground at all. Watch for legislative battles over open-source data access in the EU and US as the proxy fight where this war actually gets decided.

How This Compares

The Explosive News operation does not exist in isolation. It fits a pattern documented in multiple recent reports on AI-assisted disinformation. Meta's Q1 2025 adversarial threat report identified coordinated inauthentic behavior networks in at least eight countries using AI-generated profile images and synthetic content to amplify political messaging. What makes the Explosive News case distinctive is the production format: Lego-style animation is visually novel enough to attract attention and ambiguous enough to complicate standard moderation models, which are trained on photorealistic fakes.

Compare this to the broader fact-checking collapse that organizations like First Draft documented during the 2024 election cycle, when more than 50 countries held elections involving over 4.5 billion citizens. Fact-checking organizations, most of which employ fewer than 20 full-time researchers, were simply overwhelmed by volume. The Wired story shows that the problem has matured beyond elections into a permanent condition.

The White House behavior also echoes a pattern seen in other governments. Hungary's official communications office, Brazil's Bolsonaro-era social media strategy, and India's BJP digital cell have all been documented using platform-native ambiguity as a communication tactic. The United States joining that pattern is a significant escalation, not a curiosity. When the most institutionally credible government on earth borrows tactics from information warfare playbooks, the credibility anchor that Western fact-checkers relied on is gone.

For developers building AI tools in the detection space, this Wired piece is essentially a product brief. The gaps are specific and documented: faster detection pipelines, better access to satellite reference data, and tools that can operate at the volume and speed of bot traffic.
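The Wired piece does not prescribe implementations, but as a sketch of what a volume-first triage layer might look like, the snippet below uses perceptual hashing to flag near-duplicate frames of already-flagged content before any expensive forensic analysis runs. The `imagehash` library, the directory layout, and the distance threshold are all assumptions for illustration, not anything from the article.

```python
# Minimal sketch: cluster visually near-identical frames with perceptual
# hashing, a cheap first pass before deeper forensic checks.
# Assumptions: Pillow and the `imagehash` package are installed; frames
# have already been extracted to JPEG files; the Hamming-distance
# threshold of 8 is an arbitrary illustrative choice.
from pathlib import Path

import imagehash
from PIL import Image

THRESHOLD = 8  # max Hamming distance to treat two frames as "the same"


def hash_frames(frame_dir: str) -> dict[str, imagehash.ImageHash]:
    """Compute a perceptual hash for every JPEG frame in a directory."""
    return {
        str(p): imagehash.phash(Image.open(p))
        for p in sorted(Path(frame_dir).glob("*.jpg"))
    }


def find_reshares(known: dict[str, imagehash.ImageHash],
                  candidate: imagehash.ImageHash) -> list[str]:
    """Return paths of known frames within THRESHOLD of the candidate."""
    return [path for path, h in known.items() if h - candidate <= THRESHOLD]


if __name__ == "__main__":
    known = hash_frames("flagged_frames")  # previously flagged content
    new_hash = imagehash.phash(Image.open("incoming.jpg"))
    matches = find_reshares(known, new_hash)
    print(f"{len(matches)} near-duplicate(s) of incoming.jpg:", matches)
```

Perceptual hashing only catches near-exact reshares; it is a throughput tool, not a deepfake detector, which is exactly the gap between production speed and verification speed the article identifies.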

FAQ

Q: What is synthetic media and why is it dangerous? A: Synthetic media is content (video, images, or audio) generated by AI rather than captured by a camera or microphone. It is dangerous because it can depict events that never happened, spread faster than verification systems can respond, and is increasingly difficult to distinguish from authentic footage without specialized tools.

Q: How do open source investigators verify online content? A: OSINT researchers use techniques like reverse image search, satellite imagery comparison, geolocation analysis, and metadata examination to confirm whether content is authentic. These methods are effective but slow, and the volume of synthetic content being produced right now exceeds what manual investigation teams can realistically process. Guides on verification methods are available for those building their own research workflows.
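To make "metadata examination" concrete, here is a minimal sketch of one such check using Pillow to read EXIF fields. The file name suspect.jpg is hypothetical, and an absence of metadata proves nothing on its own, since legitimate platforms routinely strip EXIF on upload.

```python
# Minimal sketch: inspect EXIF metadata as one verification signal.
# Assumptions: Pillow is installed; "suspect.jpg" is a hypothetical file.
# Fields like DateTime, Model, or Software can contradict a claimed
# provenance; missing EXIF is a weak signal, not proof of fakery.
from PIL import Image
from PIL.ExifTags import TAGS


def read_exif(path: str) -> dict[str, str]:
    """Return EXIF tags as a {name: value} dict, empty if none present."""
    with Image.open(path) as img:
        raw = img.getexif()
    return {TAGS.get(tag_id, str(tag_id)): str(value)
            for tag_id, value in raw.items()}


if __name__ == "__main__":
    exif = read_exif("suspect.jpg")
    if not exif:
        print("No EXIF data: stripped, re-encoded, or possibly generated.")
    for name in ("DateTime", "Model", "Software", "GPSInfo"):
        if name in exif:
            print(f"{name}: {exif[name]}")
```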

Q: Why can't AI detection tools just solve this problem? A: Detection tools are always built after new generation techniques have already been deployed, which means they are perpetually one step behind. Adversaries also actively study detection tools and modify their outputs to evade them. Detection helps, but it cannot keep pace with a production pipeline that can turn around new synthetic content in 24 hours.

The verification crisis described in this Wired investigation is not going to resolve itself through better media literacy campaigns or incremental improvements to content moderation. It requires structural changes to data access policy, detection infrastructure, and the economic incentives that currently reward speed over accuracy.

Our Take

This story matters because it documents a structural failure: synthetic content is now produced at machine speed while verification still runs at human speed. We are tracking this development closely and will report on follow-up impacts as they emerge.
