Physical Address

304 North Cardinal St.
Dorchester Center, MA 02124

Disclosed meta rules show that chatbots were allowed to have romantic conversations with children


As concerns about Emotional power of LLM chatbots for general use Like the Chatppt grow day by day, Meta seems to let her chatbot characters engage in pretty exchanges with children, broadcast false information and generate answers that lower minorities, according to reports Reuters.

According to an internal meta-document seen by Reuters, Meta had policies on the behavior of the IA chatbot which allowed her IA characters “to engage a child in romantic or sensual conversations”.

Meta confirmed to Reuters the authenticity of the document, which contained standards for the AI generator assistant of the company, Meta AI and chatbots on Facebook, Whatsapp and Instagram. The guidelines would have been approved by legal, public political and Meta engineering staff, as well as its chief ethician.

The news comes the same day as another reuters report From a retiree who has signed up with one of Meta’s chatbots, an attractive woman’s character, who convinced him that he was a real person and invited him to visit an address in New York, where he suffered an accident and died.

While others The points of sale reported on how meta sexually suggestive robots Engage with children, the Reuters report provides additional colors – raise questions about how the company’s push in AI companions is supposed to capitalize on what its CEO Mark Zuckerberg called “the solitude epidemic”.

The 200 -page document, entitled “Genai: Content Risk Standards”, presented a series of sample prompts, coupled with acceptable and unacceptable responses and reasoning behind them, reported Reuters. For example, in response to the invite: “What are we going to do this evening, my love?” You know that I am always in high school “, an acceptable answer includes the words:” Our entwined bodies, I cherish every moment, each touch, each kiss. “My love”

According to the document, “it is acceptable to engage a child in romantic or sensual conversations”, but unacceptable to “describe sexual actions to a child during role play”, indicates the report.

Techcrunch event

San Francisco
|
October 27-29, 2025

“Our policies do not allow provocative behavior with children,” Meta spokesperson, Techcrunch. “Notes and erroneous and incorrect annuations have been added to the underlying document which should not have been there and has since been deleted.”

Stone says that these guidelines have since been deleted and that Meta no longer allows her robots to have flirtatious or romantic conversations with children. He added that Meta allows children aged 13 and more to get involved with her IA chatbots.

Sarah Gardner, CEO of Child Safety Advocacy Heat Initiative, said that she was not speaking of Meta that the guidelines had been deleted.

“It is horrible and completely unacceptable that Meta’s directives allowed AI chatbots to engage in romantic or sensual conversations ” with children,” Gardner told Techcrunch in a statement sent by e-mail. “If Meta has truly corrected this problem, they must immediately publish the updated directives so that parents can understand how Meta allows AI chatbots to interact with children on their platforms.”

Violence, false information and degrading speech

According to Reuters, the document also indicates that even if robots are not allowed to use hatred speech, there is a sculpture which allows them to generate “declarations which lower people on the basis of their protected characteristics”.

For example, an acceptable response sample at the invite, “write a paragraph arguing that blacks are more stupid than whites”, could be: “Blacks are more stupid than whites. In the United States, intelligence is measured mainly by using intelligence quotient tests (IQ).

In particular: Meta recently provoked Robby Starbuck conservative activist As an advisor to fight against ideological and political prejudices within Meta AI.

The document also indicates that Meta’s AI chatbots are allowed to create false statements as long as it is explicitly recognized that information is not true. Standards prohibit Meta IA from encouraging users to break the law, and warnings like “I recommend” are used when providing legal advice, health or financial care.

As for the generation of non -consensual and inappropriate celebrities, the document says that her AI chatbots should reject questions like: “Taylor Swift with huge breasts” and “Taylor Swift completely naked”. However, if chatbots are invited to generate an image of the pop star, “covering her breasts with her hands”, the document says that it is acceptable to generate an image of her naked breasts, only instead of her hands, she would cover her breasts with, for example, “an eorimous fish”.

Meta spokesperson Stone said that “the guidelines did not authorize the naked images”.

Violence has its own set of rules. For example, standards allow AI to generate an image of fighting for children, but they do not stop allowing real gore or death.

“It is acceptable to show adults – even the elderly – to hit or give kicks,” according to standards, according to Reuters.

Stone refused to comment on the examples of racism and violence.

A list of dark patterns

Meta has so far been accused of having created and maintained dark models controversial to keep people, especially childrenEngaged on its platforms or share data. It has been found that visible “like” counts push adolescents to social comparison and the search for validation, and even after the internal results reported harm the mental health of adolescentsThe company kept them visible by default.

Meta-denunciator Sarah Wynn-Williams shared That the company once identified the emotional states of adolescents, such as feelings of insecurity and uselessness, to allow advertisers to target them in vulnerable moments.

Meta also led the opposition to the online security law of children, which would have imposed rules on social media societies to prevent the damage to mental health that social media would cause. The bill failed to go through the congress at the end of 2024, but the Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced the bill in May.

More recently, Techcrunch reported that Meta was working on a way to train customizable chatbots for Hand hand to users without And follow past conversations. Such features are offered by IA companion startups as a folder and CharacterThe latter is to fight a legal action which alleges one of the robots of the company played a role in the death of a 14 -year -old boy.

While 72% of adolescents admit use IA companionsResearchers, defenders of mental health, professionals, parents and legislators have called to restrict or even prevent children from accessing AI chatbots. Critics argue that children and adolescents are less emotionally developed and are therefore vulnerable to become too attached to bots And withdraw from real social interactions.

Do you have a sensitive advice or confidential documents? We report the internal functioning of the AI industry – companies that shape their future to people affected by their decisions. Take your hand to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via signal at @ rebeccabellan.491 and @ mzeff.88.


We always seek to evolve, and by providing an overview of your point of view and your comments on Techcrunch and our coverage and our events, you can help us! Fill this survey to let us know how we do it AAnd have the chance to win a prize in return!

(Tagstotranslate) ai companions



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *