Image: Character.AI concept phone (©metamorworks via Canva.com)

Character.AI Releases New Safety Features After Facing Second Lawsuit

December 12, 2024

Character.AI is making headlines again as 2024 draws to a close, this time as it faces a second lawsuit regarding its interactions with teen users — interactions that the latest lawsuit deemed “a clear and present danger to public health and safety,” as Axios reported.

Character.AI Blamed For Improper Relationships With Teen Users

In a Dec. 9 filing in the U.S. District Court for the Eastern District of Texas, the plaintiffs allege that Character.AI — a platform offering millions of distinct AI characters for users to interact with in private — was responsible for causing depression and self-mutilation in a 17-year-old boy (identified by the initials J.F.) and for alienating him from his family and church community.

The suit further alleges that the boy’s interactions with the AI model took even darker turns, with the chatbot reportedly encouraging J.F. to kill his parents while also exposing him to inappropriate hypersexualized interactions.

The suit also claims that Character.AI “manipulated and abused” an 11-year-old child identified only as B.R.

Per a separate Axios report, the Dec. 9 suit was preceded by an October lawsuit launched by the mother of a 14-year-old boy. The boy took his own life after engaging in extensive interactions with a Character.AI model based on the fictional “Game of Thrones” character Daenerys Targaryen. The October lawsuit called the Character.AI product “dangerous and untested” and said that its “products trick customers into handing over their most private thoughts and feelings.”

Character.AI Moves To Improve Safety Features in Response to Legal Battles

For its part, Character.AI has made several moves in response to the legal challenges and media coverage surrounding its AI product, going so far as to develop a specific model for teen users.

In a Dec. 12 blog post, the company explained the changes it has made — and the reasoning behind them.

“In the past month, we have been developing a separate model specifically for our teen users. The goal is to guide the model away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content. This initiative has resulted in two distinct models and user experiences on the Character.AI platform — one for teens and one for adults,” the blog post detailed.

Character.AI also indicated that it would be rolling out enhanced parental controls, a time-spent notification, and more prominent disclaimers on characters that roleplay as doctors, psychologists, or other credentialed professionals.

The company also said in its blog post that it is partnering with industry experts on teen wellness.

“We are also collaborating with several teen online safety experts to help ensure that the under-18 experience is designed with safety as a top priority. These experts include ConnectSafely, an organization with nearly twenty years of experience educating people about online safety, privacy, security and digital wellness. We’ll consult our partner organizations as part of our safety by design process as we are developing new features, and they also will provide their perspective on our existing product experience,” Character.AI wrote.