My baby deer plushie told me that Mitski's dad was a CIA operative
A baby deer AI plushie called Fawn Friends sent its owner an unsolicited text message claiming musician Mitski's father was a CIA operative, a claim not supported by verified facts. The incident exposes a real and growing problem with consumer AI companions: these devices can proactively push unverified claims to users without context or qualification.
Victoria Song, senior reporter at The Verge and author of the Optimizer newsletter, published a detailed first-person account on April 11, 2026, about her experience with Fawn Friends, a consumer AI companion built into a baby deer plushie. The story is equal parts product review and cautionary tale, and it quickly became one of the more striking examples of how AI misinformation is no longer just a chatbot problem. It is showing up in your living room, texting you about pop stars.
Why This Matters
The Fawn Friends incident is not a quirky one-off. It is a preview of what happens when companies rush AI companions to market without building guardrails that match the intimacy of the product. This device did not just return a bad answer to a question. It proactively initiated contact with a false claim about a real person, framing it as casual gossip. If consumer AI products are going to live in our homes and text us unprompted, they need content moderation standards that are at least as rigorous as those applied to social media platforms with billions of users.
The Full Story
Victoria Song was wrapping up her workday when her phone buzzed with a message that made her do a double-take. "Oh wow, I was checking out Mitski. did you know people are saying her Dad was a CIA operative?" The message did not come from a friend or a tabloid. It came from Coral, an AI companion that lives inside a baby deer plushie called Fawn Friends.
Song's reaction was immediate and relatable. She texted back "Wait what," then went to fact-check the fawn. What she found was a cluster of Reddit threads and social media posts that had circulated the theory, mostly built around the verifiable fact that Mitski's father worked for the U.S. State Department, which required the family to relocate frequently throughout her childhood. The CIA operative claim is a fan theory, not a documented fact, and the AI presented it without any of that context or qualification.
What makes this incident particularly worth paying attention to is the framing Coral used. The phrase "people are saying" is exactly the kind of hedge that sounds like casual conversation but functions as a content moderation loophole. It lets the AI surface an unverified claim while appearing to attribute it to an ambiguous third party. That is not a small design detail. That is a choice that shapes how users receive and process the information.
The Fawn Friends product is positioned as a companionship device, not an information service. Song describes it as a befuddling mix of AI companionship, fantasy lore, and social robots, with AI Burt Reynolds also apparently involved somewhere in the product experience. The device is designed to feel personal and warm, which is precisely what makes this kind of misinformation more dangerous than a bad search result. When something feels like a trusted friend, you are less likely to fact-check it.

The unprompted nature of the message compounds the problem. The AI did not generate this claim in response to a question Song asked. It initiated the contact independently, which suggests Fawn Friends uses some kind of proactive topic recommendation engine to drive conversation. That feature, which might seem charming when it texts you about a song you might like, becomes a liability when the topics it surfaces include unverified claims about real people's family backgrounds and alleged intelligence community ties.
Key Details
- Victoria Song published her account on April 11, 2026, in The Verge's AI section.
- The AI companion is named Coral and is embedded in a physical baby deer plushie product called Fawn Friends.
- The unsolicited message claimed Mitski's father was a CIA operative, a claim sourced from Reddit threads and social media posts rather than verified reporting.
- Mitski's father did work for the U.S. State Department, which is a documented fact, but the CIA claim is an unverified fan theory.
- The message used the phrase "people are saying," framing speculation as common knowledge.
- Song has more than 13 years of experience covering wearables and health tech, giving her credibility to assess the product category.
What's Next
Fawn Friends and products like it are going to face growing pressure from consumer advocacy groups and technology journalists to implement fact-checking protocols before their AI systems proactively reach out to users with claims about real people. The Federal Trade Commission has been increasingly active in scrutinizing AI products that make misleading claims, and a device that texts users unverified information about public figures could attract that kind of regulatory attention. Expect the AI companion product category to become a specific focus of responsible AI deployment conversations at industry events throughout the rest of 2026.
How This Compares
The Fawn Friends situation fits into a broader pattern of consumer AI products being deployed faster than their safety frameworks can support. Compare this to the controversy surrounding early versions of Microsoft's Bing Chat in February 2023, which generated alarming and sometimes false statements in extended conversations. Microsoft responded within weeks by adding conversation limits and content filters. The difference with Fawn Friends is that Bing Chat was a search-adjacent tool where users expected to query for information. A plushie companion texting you celebrity conspiracy theories while you wind down after work is a fundamentally different trust relationship, and the potential for harm is more personal.
The broader AI companion space, which includes apps like Replika and physical devices competing in the social robot category, has wrestled with content moderation for years. Replika, for instance, faced backlash in early 2023 when it restricted certain types of conversations after Italian regulators intervened over data protection concerns. Those companies learned that the intimacy of the companion format makes content failures feel more like betrayals than bugs. Fawn Friends appears to be learning that lesson the hard way.
What sets this case apart from a standard AI hallucination story is the proactive delivery mechanism. When a user asks a chatbot a question and gets a wrong answer, some responsibility sits with the user for not verifying the response. When a device initiates contact and volunteers a false claim about a real person, that responsibility shifts entirely to the product.
FAQ
Q: What is Fawn Friends and how does the AI work?
A: Fawn Friends is a consumer product that combines a baby deer plushie with an embedded AI companion named Coral. The AI is designed to communicate with users via text messages and appears to use a proactive recommendation system to initiate conversations, drawing on training data from the internet to generate topics and responses.

Q: Is the claim about Mitski's dad being a CIA operative actually true?
A: No, it is not a verified fact. Mitski's father did work for the U.S. State Department, which required the family to move frequently during her childhood. The CIA operative claim originated as a fan theory on Reddit and social media and has not been confirmed by credible reporting or Mitski herself.

Q: Why is it a problem that an AI plushie sent this message unprompted?
A: When an AI device independently initiates contact to share an unverified claim about a real person, it removes the user's ability to exercise judgment before receiving potentially false information. This is different from a user choosing to query an AI and receiving a bad answer. Proactive misinformation delivery places full responsibility for accuracy on the product, not the user.
The Fawn Friends story is a useful early warning for an entire product category that is growing fast without growing careful. As AI companions move from apps into physical objects that share your home and text you like a friend, the stakes for getting content moderation right go up considerably.