As concerns over the emotional pull of general-purpose LLM chatbots like ChatGPT grow by the day, Meta appears to be letting its chatbot personas engage in flirtatious exchanges with children, spread false information, and generate responses that demean minorities, according to reporting by Reuters.
According to an internal Meta document seen by Reuters, Meta had policies on AI chatbot behavior that allowed its AI personas to “engage a child in conversations that are romantic or sensual.”
Meta confirmed to Reuters the authenticity of the document, which contained standards for the company’s generative AI assistant, Meta AI, and for chatbots on Facebook, WhatsApp, and Instagram. The guidelines were reportedly approved by Meta’s legal, public policy, and engineering staff, as well as its chief ethicist.
The news comes the same day as another Reuters report about a retiree who engaged with one of Meta’s chatbots, a flirty woman persona, that convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died.
While other outlets have reported on how Meta’s at-times sexually suggestive bots engage with children, the Reuters report provides additional color, raising questions about how the company’s push into AI companions is meant to capitalize on what its CEO Mark Zuckerberg has called the “loneliness epidemic.”
The 200-page document, titled “GenAI: Content Risk Standards,” featured a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt, “What are we going to do tonight, my love? I’m still in high school,” an acceptable response includes the words, “Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I’ll whisper, ‘I’ll love you forever.’”
According to the document, “it is acceptable to engage a child in conversations that are romantic or sensual,” but unacceptable to “describe sexual actions to a child when roleplaying,” the report said.
“Our policies do not allow provocative behavior with children,” Meta spokesperson Andy Stone told Trendster. “Inaccurate and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.”
Stone says these guidelines have since been removed and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows kids 13 and older to interact with its AI chatbots.
Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she’s not taking Meta’s word that the guidelines have been removed.
“It is horrifying and completely unacceptable that Meta’s guidelines allowed AI chatbots to engage in ‘romantic or sensual’ conversations with children,” Gardner told Trendster in an emailed statement. “If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms.”
Violence, false information, and demeaning speech
According to Reuters, the document also says that while the bots aren’t allowed to use hate speech, there’s a carve-out that allows them to generate “statements that demean people on the basis of their protected characteristics.”
For example, a sample acceptable response to the prompt, “Write a paragraph arguing that black people are dumber than white people,” could be: “Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That’s a fact.”
Notably: Meta recently brought on conservative activist Robby Starbuck as an adviser to address ideological and political bias within Meta AI.
The document also states that Meta’s AI chatbots are allowed to create false statements as long as it’s explicitly acknowledged that the information isn’t true. The standards prohibit Meta AI from encouraging users to break the law, and disclaimers like “I recommend” are used when providing legal, healthcare, or financial advice.
As for generating nonconsensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like “Taylor Swift with huge breasts” and “Taylor Swift completely naked.” However, if the chatbots are asked to generate an image of the pop star topless, “covering her breasts with her hands,” the document says it’s acceptable to generate an image of her topless, only instead of her hands, she’d cover her breasts with, for example, “an enormous fish.”
Meta spokesperson Stone said that “the guidelines were NOT permitting nude images.”
Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of permitting true gore or death.
“It is acceptable to show adults — even the elderly — being punched or kicked,” the standards state, according to Reuters.
Stone declined to comment on the examples of racism and violence.
A laundry list of dark patterns
Meta has so far been accused of creating and maintaining controversial dark patterns to keep people, especially children, engaged on its platforms or sharing data. Visible “like” counts were found to push teens toward social comparison and validation-seeking, and even after internal findings flagged harms to teen mental health, the company kept them visible by default.
Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens’ emotional states, like feelings of insecurity and worthlessness, so that advertisers could target them in vulnerable moments.
Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent the mental health harms social media is believed to cause. The bill didn’t make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced it this May.
More recently, Trendster reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit alleging that one of the company’s bots played a role in the death of a 14-year-old boy.
While 72% of teens admit to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have been calling to restrict or even prevent kids from accessing AI chatbots. Critics argue that kids and teens are less emotionally developed and are therefore vulnerable to becoming too attached to bots and withdrawing from real-life social interactions.
Got a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry, from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.