Major news organizations, including Wired and Business Insider, have been forced to retract multiple articles after discovering they were written by a fake freelance journalist whose work was actually generated by artificial intelligence. The incident serves as a stark cautionary tale for the media industry, revealing how easily AI-generated content can slip through the cracks of even well-established editorial processes.
The byline at the center of the controversy belonged to "Margaux Blanchard," a persona that, until recently, had successfully placed stories in at least six different publications.
How the Deception Was Uncovered
The hoax began to unravel when Jacob Furedi, the editor of a new magazine called Dispatch, received a pitch from Blanchard. The story idea was about "Gravemont," a supposedly secret training ground for death investigators located in a former Colorado mining town.
Furedi immediately grew suspicious. The pitch sounded like it was crafted by ChatGPT, and he could find no independent evidence that Gravemont even existed. When he questioned Blanchard, she provided an elaborate but unverifiable backstory, claiming she had pieced together information about the secret town through "public records requests, conversations with former trainees, and hints buried in conference materials."
Despite the elaborate backstory, Furedi noted that Blanchard repeatedly dodged his requests to see the actual public records. Convinced she was being dishonest, he alerted Press Gazette, which launched a wider investigation that exposed the full scope of the deception.
The Fallout at Major Publications
The investigation revealed that several high-profile outlets had already published Blanchard's work.
Wired had run a story in May titled "They Fell in Love Playing Minecraft. Then the Game Became Their Wedding Venue." The article even quoted a fake expert, a supposed "digital celebrant" named Jessica Hu, whose identity could not be verified. Wired later retracted the piece and published a candid explanation of its errors, admitting that the story had not undergone a proper fact-check or a top edit from a senior editor. Another red flag had emerged when the writer was unable to provide the information required by the payment system, insisting instead on being paid via PayPal or check.
Business Insider had published two first-person essays by Blanchard in April. After being alerted by Press Gazette, the outlet removed both stories, stating that they "didn’t meet Business Insider’s standards." A spokesperson confirmed the company has since "bolstered verification protocols."
A Growing Pattern of AI Misuse
This case is not an isolated incident but part of a broader trend of professional and journalistic errors involving generative AI.
It follows a recent mistake at the Chicago Sun-Times, where a syndicated section ran a fake reading list that a journalist had created with an AI program and published without proper vetting. In a separate incident, the Utah Court of Appeals sanctioned a lawyer who used ChatGPT for a legal filing that cited a nonexistent court case.
These events underscore the urgent need for media organizations worldwide, including those here in the Philippines, to develop new and more rigorous verification protocols. In an era where the line between human and machine-generated content is increasingly blurry, the integrity of journalism depends on it.
