Character.AI to block romantic AI chats for minors a year after teen’s suicide



Character.AI on Wednesday announced that it will soon shut off the ability for minors to have free-ranging chats, including romantic and therapeutic conversations, with the startup’s artificial intelligence chatbots.

The Silicon Valley startup, which allows users to create and interact with character-based chatbots, announced the move as part of an effort to make its app safer and more age-appropriate for those under 18.

Last year, 14-year-old Sewell Setzer III committed suicide after forming romantic and sexual relationships with chatbots on Character.AI’s app. Many AI developers, including OpenAI and Facebook parent Meta, have come under scrutiny after users who formed relationships with chatbots committed suicide or died.

As part of its safety initiatives, Character.AI said on Wednesday that it will limit users under 18 to two hours of open-ended chats per day, and will eliminate those types of conversations for minors by Nov. 25.

“This is a bold step forward, and we hope this raises the bar for everybody else,” Character.AI CEO Karandeep Anand told CNBC.

Character.AI introduced changes to prevent minors from engaging in sexual dialogues with its chatbots in October 2024. The same day, Sewell’s family filed a wrongful death lawsuit against the company. Character.AI in December also announced safety features that would place conservative limits on romantic content for teens, but the change Wednesday gets rid of open-ended chats for minors altogether.

To enforce its latest policy, the company said it’s rolling out an age assurance function that will use first-party and third-party software to monitor a user’s age. The company is partnering with Persona, the same firm used by Discord and others, to help with verification.

In 2024, Character.AI’s founders and certain members of its research team joined Google’s AI unit, DeepMind. It’s one of a number of such deals announced by leading tech companies to speed their development of AI products and services. The agreement called for Character.AI to provide Google with a non-exclusive license for its current large language model, or LLM, technology.

Since Anand took over as CEO in June, 10 months after the Google deal, Character.AI has added more features to diversify its offering from chatbot conversations. Those features include a feed for watching AI-generated videos as well as storytelling and roleplay formats.

Although Character.AI will no longer allow teenagers to engage in open-ended conversations on its app, those users will still have access to the app’s other offerings, said Anand, who was previously an executive at Meta.

Of the startup’s roughly 20 million monthly active users, about 10% are under 18. Anand said that percentage has declined as the app has shifted its focus toward storytelling and roleplaying.

The app makes money primarily through advertising and a $10 monthly subscription. Character.AI is on track to end the year with a run rate of $50 million, Anand said.

Additionally, the company on Wednesday announced that it will establish and fund an independent AI Safety Lab dedicated to safety research for AI entertainment. Character.AI didn’t say how much it will provide in funding, but the startup said it’s inviting other companies, academics, researchers and policy makers to join the nonprofit effort.

Regulatory pressure

Character.AI is one of many AI chatbot companies facing regulatory scrutiny on the matter of teens and AI companions.

In September, the Federal Trade Commission issued an order to seven companies, including Character.AI’s parent as well as Alphabet, Meta, OpenAI and Snap, seeking to understand the potential effects of AI chatbots on children and teenagers.

On Tuesday, Senators Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., announced legislation to ban AI chatbot companions for minors. California Gov. Gavin Newsom signed a law earlier this month requiring chatbots to disclose that they are AI and to remind minors to take a break every three hours.


Rival Meta, which also offers AI chatbots, announced safety features in October that will allow parents to see and manage how their teenagers are interacting with AI characters on the company’s platforms. Parents have the option to turn off one-on-one chats with AI characters completely and can block specific AI characters.

The matter of sexualized conversations with AI chatbots has come into focus as tech companies announce different approaches to dealing with the issue.

Earlier this month, OpenAI CEO Sam Altman announced that the company would allow adult users to engage in erotica with ChatGPT later this year, saying that his company is “not the elected moral police of the world.”

Microsoft AI CEO Mustafa Suleyman said last week that the software company will not provide “simulated erotica,” describing sexbots as “very dangerous.” Microsoft is a key investor and partner to OpenAI.

The race to develop more realistic, human-like AI companions has intensified in Silicon Valley since ChatGPT’s launch in late 2022. While some people are forming deep connections with AI characters, the rapid development presents ethical and safety concerns, especially for children and teenagers.

“I have a six-year-old as well, and I want to make sure that she grows up in a safe environment with AI,” Anand said.

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.


