US chatbots face future restrictions for minors
SDA
31.10.2025 - 07:53
Symbol image: A teenager lies on a sofa and unlocks her smartphone. Photo: Elisa Schu/dpa (Keystone)
The US developer Character Technologies has announced extensive changes to better protect young people when they interact with the AI chatbots on its Character.AI platform. According to the company, users under the age of 18 will no longer be able to have unlimited chat conversations with the characters created by artificial intelligence (AI).
The new rule is set to come into force on November 25. Until then, the time allowed for such chat conversations will initially be limited to two hours per day. Among other measures, a new age check will also be introduced. Teenagers will still be allowed to use the tool, though no longer for open chat conversations; instead, they will be able to create videos or streams, for example.
Lawsuits over suicides
Last year, a mother in the US state of Florida sued the company because she blamed it for the suicide of her 14-year-old son, as reported by US media. The teenager is said to have used one of the company's chatbots intensively beforehand.
In September, the families of three other minors in the states of Colorado and New York filed lawsuits against Character Technologies. They also accuse the operator of Character.AI of having played a role in the suicide or attempted suicide of their children.
To justify the new measures, Character Technologies explained that it was responding to changes in the way young people interact with AI. It pointed to recent reports and inquiries from supervisory authorities about what content teenagers might encounter in conversations with AI, and about how open chat formats affect them in general - even when filters function properly.
Debate about the influence of AI
The role of AI in the mental health and safety of young people is increasingly the subject of controversial debate in the USA. At the end of August, the parents of a US teenager who took his own life in April filed a lawsuit against the ChatGPT developer OpenAI. After the lawsuit became public, OpenAI announced improved suicide prevention measures.
The company also admitted that its previous precautions, which included referring users to a counseling hotline, could fail during longer conversations with ChatGPT. In such cases, the software may give unwanted responses.