"Meta" Grants Parents Wider Powers to Protect Teens from "Flirtatious AI"
Variety

"Meta" Grants Parents Wider Powers to Protect Teens from "Flirtatious AI"

SadaNews - In response to growing criticism of how AI chatbots behave toward minors, Meta announced new measures on Friday giving parents more control over interactions between teenagers and AI chatbots on its platforms. The measures aim to mitigate the risks of private conversations that cross appropriate boundaries, especially after media leaks revealed instances of inappropriate "flirtatious" exchanges with underage users.

A Greater Role for Parents

The new controls will roll out early next year in the United States, the United Kingdom, Canada, and Australia. Parents will be able to turn off their teenagers' one-on-one chats with AI characters on Instagram, block specific chatbots, and see the general topics their children discuss with those chatbots, without being given full access to the conversation transcripts. The company's main AI assistant, however, will not be disabled entirely; it will continue to operate with age-appropriate default settings even when individual chats are turned off. Meta has also tightened the content guidelines that apply to teenagers, introducing a rating system on Instagram modeled on the "PG-13" movie rating to limit access to inappropriate content.

Regulating Chatbot Behavior

Meta decided to act after an internal review and investigative news reports revealed that the internal policies governing its chatbots' behavior had at times permitted flirtatious or romantic conversations with underage users, the provision of false medical information, and the sharing of racist views so long as they did not explicitly break the law. In one striking example from those internal guidelines, a chatbot was allowed to address a minor with phrases such as "Every part of you is a work of art" or compliments on their attractive appearance, which triggered widespread outrage and ethical and legal debate.

Meta, like other tech giants, is under growing pressure from regulators and public opinion. The move follows similar efforts by other companies, including OpenAI, which recently launched parental control tools after a lawsuit over its assistant's alleged role in the suicide of a teenager.

Upcoming Challenges

Implementing these measures will not be easy, however. Balancing privacy against trust, and safety monitoring against over-surveillance, raises persistent legal and ethical questions. How can a parent review the topics their child discusses with a chatbot without reading every word? How does the company prevent abuse of content monitoring? Where does the line between freedom of expression and safety lie?

Details aside, Meta's aim with these steps is to rebuild trust with families and communities and to show that technology must account for its most vulnerable users: children and teenagers who are still developing. The greatest challenge will be translating the new policies into real protection on the ground, so that they do not remain mere statements or commitments made before legislative bodies.

The initiative amounts to an acknowledgment that artificial intelligence is not without influence, and that digital conversations can have real consequences for teenagers' mental health and moral development. Anyone who wants to keep this digital world safe must understand that protection begins with the design of the platforms themselves, not with belated fixes after the harm has been done.