Silicon Valley has forgotten what normal people want
Elizabeth Lopatto at The Verge argues that Silicon Valley keeps mistaking its own enthusiasm for universal insight, cycling through NFTs, the metaverse, and now AI without ever stopping to ask what regular people actually need. The pattern matters because it shapes where billions of dollars and enormous amounts of attention flow.
Elizabeth Lopatto, writing for The Verge on April 20, 2026, published a sharp, personal critique of how tech insiders consistently convince themselves they have discovered something profound, when they have usually just rediscovered something everyone else already knew. The piece is pegged to a real encounter Lopatto had with a tech acquaintance who was genuinely stunned by what large language models had revealed to him about language and knowledge. What he described, with breathless excitement, was essentially a century-old field of academic study. Lopatto uses that anecdote as the opening move in a broader argument about how Silicon Valley's insularity keeps producing the same pattern: a small, influential group of people mistake novelty for wisdom, amplify each other, and drag enormous amounts of money and attention toward ideas that most people outside the bubble find either baffling or useless.
Why This Matters
This is not a minor cultural complaint. When the people setting the agenda for trillion-dollar companies operate inside a closed feedback loop, the products they build reflect that loop rather than actual human need. NFTs peaked in early 2022 with a market that briefly touched 25 billion dollars in annual transaction volume before collapsing to a fraction of that within 18 months. The metaverse cost Meta alone more than 40 billion dollars between 2021 and 2023, and most people never logged in. Now the same dynamics, the same podcasts, the same group chats, the same "thought leaders," are running the same play with AI. The stakes are higher this time because AI is genuinely capable of doing useful things, which makes the hype harder to separate from the substance.
The Full Story
Lopatto's piece opens with a scene that anyone who has spent time around the tech industry will recognize immediately. A tech acquaintance corners her and excitedly reports that LLMs have revealed something astonishing: knowledge is structured into language, and you can type a single word into ChatGPT and watch it infer meaning, or invent a word and test whether the model understands intent. He concludes, apparently in complete seriousness, that LLMs represent a discovery on par with the invention of writing itself.
Lopatto is a reporter who joined The Verge in 2014 as science editor after working at Bloomberg, and her scientific background shows here. She identifies what her acquaintance stumbled onto as a naive, garbled version of Structuralism, the linguistic theory developed by Ferdinand de Saussure. Working in the early 20th century, Saussure argued that meaning in language is not inherent to words themselves but arises from the relationships between signs within a system. What the acquaintance experienced as a revelation had a name, a bibliography, and roughly a hundred years of academic debate behind it. He just did not know that.
That gap, between what tech insiders think they have discovered and what the rest of the world already worked out, is Lopatto's central subject. And the pattern she is describing is not new to AI. The All-In Podcast, referenced in the article's image caption, represents a specific strain of this phenomenon: a small group of wealthy, well-connected men who treat their own enthusiasm as a reliable signal of importance and whose audience treats their enthusiasm the same way. The result is a self-reinforcing system where ideas get amplified not because they have been tested against the real world but because the right people are excited about them.
The NFT and metaverse cycles both followed this structure. In 2021 and 2022, the people most loudly promoting NFTs were also the people most financially invested in them. The metaverse was declared inevitable by people whose companies were selling metaverse infrastructure. In both cases, the ordinary consumer, the person who was supposedly going to benefit from these inventions, was mostly an afterthought. When regular users finally encountered these products, they found them confusing, expensive, or simply pointless.
Lopatto's argument is that AI is now inside the same dynamic. The difference is that AI tools, including those being built on top of foundation models right now, genuinely can do useful things. Summarizing documents, writing code, answering questions: these are real capabilities with real applications. But the most fervent promoters are not talking about those practical uses. They are talking about AGI, about civilizational transformation, about LLMs as discoveries on par with writing. That gap between the practical and the prophetic is where Lopatto's critique lands hardest.
Key Details
- Elizabeth Lopatto has been at The Verge since 2014, previously reporting for Bloomberg.
- The article was published April 20, 2026.
- NFT transaction volume peaked at approximately 25 billion dollars annually in early 2022 before collapsing.
- Meta spent more than 40 billion dollars on metaverse development between 2021 and 2023.
- Ferdinand de Saussure developed Structuralism approximately a century before the acquaintance "discovered" similar ideas through ChatGPT.
- The All-In Podcast is cited as an illustration of the self-referential hype culture Lopatto is critiquing.
What's Next
The AI investment cycle is nowhere near its peak, which means this critique will face maximum resistance from the people it describes for at least another 12 to 24 months. Watch for the same pressure that silenced metaverse skeptics in 2021 to be applied to anyone who questions AI's civilizational importance in 2026. The real test will come when enterprise AI adoption numbers are reported transparently alongside the spending figures, because that gap will tell us whether this cycle ends like the metaverse or like the internet.
How This Compares
Lopatto's piece arrives at a moment when a small but growing number of voices inside tech journalism are willing to say publicly what many have been saying privately. Gary Marcus, the cognitive scientist and AI critic, has been making a structurally similar argument since at least 2022, pointing out that the people most excited about AI progress are often the least equipped to evaluate its actual limitations. The difference is that Lopatto is embedding that critique inside a cultural observation, not just a technical one.
Compare this to the wave of metaverse skepticism that finally broke into mainstream tech coverage in late 2022, roughly 18 months after the hype peaked. That skepticism came too late to redirect the 40-plus billion dollars Meta had already spent. The question for AI coverage right now is whether critical voices like Lopatto's arrive early enough to matter, or whether they arrive in time only to write the postmortem.
There is also a comparison worth making to the broader conversation about who gets to define what AI is for. Reporting on AI news from the past year shows a consistent split between the practical applications being built by developers for specific, narrow use cases and the sweeping claims being made by investors and podcast hosts about AI's inevitable domination of every human endeavor. Lopatto is essentially arguing that the second group, not the first, has captured public attention, which means the public is being prepared for a revolution that may look very different from the one they were promised.
FAQ
Q: Why does Silicon Valley keep hyping technologies that fail? A: The incentive structure rewards excitement over accuracy. Venture capitalists, founders, and media personalities all benefit from the early-stage enthusiasm around a new technology. By the time the technology fails to meet the hype, the money has already moved and the people who were loudest have often already profited or pivoted.
Q: What is the All-In Podcast and why is it relevant? A: The All-In Podcast is a show hosted by four prominent Silicon Valley investors, including Chamath Palihapitiya and David Sacks, known for promoting tech industry viewpoints to a large audience of founders and investors. Lopatto uses it as a symbol of the self-reinforcing hype culture she is criticizing, where a small group treats its own enthusiasm as evidence of importance.
Q: Are all AI tools just hype, or are some actually useful? A: Some AI tools solve real, specific problems well. Code completion, document summarization, and customer service automation have demonstrated measurable productivity gains in controlled settings. The hype problem is not that AI does nothing useful; it is that the most vocal promoters tend to describe capabilities that do not yet exist as if they are already here, which makes it harder for people to evaluate what AI can actually do today.
The gap between what Silicon Valley believes it has discovered and what the rest of the world has already figured out is not a new problem, but AI has given it a much larger budget and a much louder microphone. Lopatto's piece is a useful reminder that the best guides to any new technology come from people who are trying to understand it honestly, not people who are invested in its inevitability. Subscribe to the AI Agents Daily newsletter for daily updates on AI agents, tools, and automation.
Get stories like this daily
Free briefing. Curated from 50+ sources. 5-minute read every morning.