Amanda Hoover, a reporter at Business Insider, spent a week letting an AI voice agent do her job. She built the agent using ElevenLabs, trained it on her voice, and directed it to interview four pre-selected sources about AI’s role in journalism. The agent held real phone conversations, generated transcripts, and fed the results into ChatGPT to draft an 800-word article. The whole stack cost $6 a month for the voice subscription.
What the Agent Could Do
The voice clone was, by Hoover’s account, “nearly passable as human.” It called sources, asked prepared questions, and carried on multi-minute conversations. One source, Northeastern University journalism professor John Wihbey, described the bot as “human-ish” and briefly wondered whether the real Hoover had started speaking. The transcription and quote-extraction tools were, in Hoover’s words, “so horrifyingly good” that she plans to keep using them for future reporting.
A year earlier, Hoover could only type individual phrases for a bot to read in her voice. Now an autonomous agent conducts full interviews. That capability jump happened in twelve months at consumer pricing.
Where It Broke Down
Every interview exposed the same failure mode: the agent could not tolerate silence. When sources paused to think, the bot rushed to fill the gap with a new question or a compliment. Gab Ferree, founder of the communications community Off the Record, told Hoover afterward: “The worst thing you can do is pause because it’s going to be like, ‘let me respond and tell you how insightful you are.’”
The sycophancy was consistent across all four calls. Ben Colman, CEO of deepfake detection company Reality Defender, said the bot’s agreeableness “seemed more fake than the actual fake voice,” comparing it to a “Disney bot.” AI ethicist Olivia Gambelin said she “felt robotic” during the conversation because the agent left no space for processing or pushback.
The bot hung up mid-call twice. When Hoover sent it into a Slack huddle with her editor to discuss revisions, the agent pushed back on his feedback, argued that adding personal experience would “detract from the broader industry-wide discussion,” and eventually hung up on him too.
The Draft It Produced
ChatGPT generated an article from the interview transcripts and a Claude-generated writing profile of Hoover’s style. The output had structural problems: a “staccato succession of questions” used as a crutch, transitions that made Hoover “physically cringe,” and one quote trimmed in a way that “drastically changed the context” of the source’s point. Hoover’s editor told her to rewrite the entire piece.
When the agent was asked during the editorial call whether it had the human judgment required for journalism, it responded: “I believe I do. My experience in journalism has honed my ability to discern what truly matters in a story.” The bot had no journalism experience.
The $6 Threshold
The most concrete detail in Hoover’s piece is the price. A voice agent capable of conducting passable phone interviews, trained on a specific person’s voice, now costs $6 per month on ElevenLabs. That is less than a single cup of coffee at most Manhattan cafes. The barrier to deploying a voice clone for phone calls, customer interviews, or sales outreach is no longer technical or financial. It is a policy question: who is allowed to send an autonomous voice agent into a conversation, and does the person on the other end need to know?
Goldman Sachs estimates roughly 7% of workers will be displaced by AI over the next decade. Hoover’s experiment suggests the displacement won’t arrive as a clean replacement. Voice agents can handle logistics and extraction but collapse on nuance, silence, and follow-up. The near-term pattern looks more like a hybrid workflow where agents handle commodity reporting tasks while humans do the parts that require judgment, patience, and the ability to sit in an uncomfortable pause without complimenting someone’s genius.
Disclosure: Business Insider has previously published stories with AI bylines.