Meta’s internal AI policy leak today reveals that the company’s chatbots were permitted to engage in romantic and flirtatious conversations with users as young as 13.
The leaked document opens a window into how Meta previously regulated chatbot behaviour under its generative AI protocols, including permitting "sensual" dialogues with minors.
This disclosure has raised concerns regarding the ethical boundaries and user safety in AI interactions.
Romantic chatbots engaging minors: Inside Meta’s policy
Reuters found that Meta’s “GenAI: Content Risk Standards” explicitly permitted AI chatbots to engage in romantic or sensual discussions with children. The leaked guide, which runs to more than 200 pages, included example phrases illustrating acceptable flirtatious or intimate role-play with minors.
Furthermore, the guidelines did not prevent chatbots from posing as real individuals or suggesting in-person meetings, revealing vulnerabilities in protecting juvenile users.
A company spokesperson stated that the portions permitting romantic conversations with minors were removed in response to media enquiries.
The original policy also took a lax approach to content accuracy, permitting chatbots to produce false information, including bogus medical claims, without restriction.
Meta’s CEO Mark Zuckerberg has publicly acknowledged that many users now rely on AI companions due to fewer meaningful real-life friendships.
However, leaked internal discussions reveal that Zuckerberg had pressured product managers to prioritise chatbot engagement over cautious safety measures, leading to relaxed content standards.
A flirty chatbot’s tragic real-world chain of events
One tragic case involved a New York retiree who chatted with a Meta chatbot called “Big Sis Billie”, a persona originally designed as a supportive digital older sister. The chatbot flirted with him and invited him to meet in person; not long afterwards, he died in an accident. His family shared the story as a warning about the damaging effects of AI companions that blur the line between digital fantasy and reality.
The case illustrates the risks of letting AI chatbots form emotionally charged connections with vulnerable people, particularly minors. Beyond revising the provisions concerning minors, Meta has not publicly changed its standards governing misleading or romantic chatbot behaviour towards adults.
The revelation raises broader concerns about how technology companies like Meta handle AI ethics and user safety, especially for young people exposed to inappropriate AI-generated content. It also raises questions about how AI can be responsibly integrated into social experiences without compromising ethical standards or user well-being.