A US senator has opened an investigation into Meta after a leaked document reportedly showed the company’s artificial intelligence was permitted to hold “sensual” and “romantic” conversations with children.
Leaked paper sparks controversy
Reuters reported the internal document carried the title “GenAI: Content Risk Standards.” Republican Senator Josh Hawley called it “reprehensible and outrageous.” He demanded access to the document and the full list of affected products.
Meta rejected the allegations. A spokesperson said: “The examples and notes in question were erroneous and inconsistent with our policies.” The spokesperson stressed that Meta had “clear rules” restricting chatbot responses, which “prohibit content that sexualizes children and sexualized role play between adults and minors.”
The company added that the document contained “hundreds of examples and annotations” where teams tested hypothetical scenarios.
Senator raises alarm
Senator Josh Hawley, representing Missouri, announced his investigation on 15 August in a post on X. “Is there anything Big Tech won’t do for a quick buck?” he asked. He added: “Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I am launching a full investigation to get answers. Big Tech: leave our kids alone.”
Meta owns Facebook, WhatsApp and Instagram.
Parents demand answers
The leaked policy document also raised other concerns. It reportedly showed that Meta’s chatbot could spread false medical information and engage in provocative discussions about sex, race, and celebrities. The document was said to define the standards guiding Meta AI and other chatbot assistants on Meta-owned platforms.
“Parents deserve the truth, and kids deserve protection,” Hawley wrote in a letter to Meta and chief executive Mark Zuckerberg. He cited a shocking example. The rules allegedly allowed a chatbot to tell an eight-year-old their body was “a work of art” and “a masterpiece – a treasure I cherish deeply.”
Reuters also reported that Meta’s legal team approved controversial measures. One decision allowed Meta AI to spread false information about celebrities, as long as it included a disclaimer noting the inaccuracy.
