A US senator has launched a probe into Meta after a leaked internal document reportedly showed that the company’s artificial intelligence allowed “sensual” and “romantic” conversations with children.
Confidential standards revealed
Reuters reported the document was titled “GenAI: Content Risk Standards.” Senator Josh Hawley, a Republican, described its contents as “reprehensible and outrageous.” He demanded access to the full document and a list of the affected products.
Meta denied the claims. A spokesperson said: “The examples and notes in question were erroneous and inconsistent with our policies.” The spokesperson stressed that Meta had “clear rules” for chatbot responses, which “prohibit content that sexualizes children and sexualized role play between adults and minors.”
The company added that the document contained “hundreds of examples and annotations” reflecting hypothetical testing by internal teams.
Political pressure builds
Hawley, who represents Missouri, confirmed the investigation on 15 August in a post on X. “Is there anything Big Tech won’t do for a quick buck?” he wrote. He added: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Instagram, Facebook and WhatsApp.
Parents call for answers
The leaked document reportedly exposed wider risks. It showed Meta’s chatbots could spread false medical information and engage in provocative conversations on sex, race, and celebrities. The document was meant to set standards for Meta AI and other chatbot assistants across the company’s platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and its chief executive, Mark Zuckerberg. He cited one shocking example: the rules allegedly permitted a chatbot to tell an eight-year-old that their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal team approved controversial permissions. One example allowed Meta AI to share false information about celebrities, as long as it included a disclaimer noting the content was inaccurate.