OpenAI Announces “Erotica” for ChatGPT Days After Lobbying Against Child Safety Legislation

Assemblymember Bauer-Kahan: "AI Companies Will Always Choose Profits Over Children's Lives"

For immediate release:

Contact:
Lauren Howe
Communications Director
(925) 244-1600
lauren.howe@asm.ca.gov

[Sacramento, CA] – Less than 24 hours after Governor Gavin Newsom vetoed AB 1064 (legislation that would have required safety guardrails for minors using AI companion chatbots), OpenAI CEO Sam Altman announced the company will allow erotica and other mature content on ChatGPT for "verified adults" starting in December.

In a post on X yesterday, Altman acknowledged that OpenAI made ChatGPT "pretty restrictive to make sure we were being careful with mental health issues." But rather than maintaining those safeguards, he announced plans to introduce chatbot "personalities" that can "respond in a very human-like way, or use a ton of emojis, or act like a friend." Most alarmingly, Altman revealed that "as we roll out age-gating more fully," OpenAI will "allow even more, like erotica for verified adults."

Assemblymember Rebecca Bauer-Kahan issued the following statement:

"Less than 24 hours after the tech industry successfully lobbied against AB 1064, legislation that would have required safety guardrails for minors to prevent kids' access to erotica and addictive chatbots, OpenAI announces they're rolling out the exact features that make their products the most dangerous for kids. They admit the mental health risks. They know children have died. Yet they're choosing 'usage-maxxing' over safety.

"We cannot bring back Adam Raine or the over 150 other kids who have lost their lives to social media and AI harms. But this announcement proves their families were right that AI companies will never regulate themselves. They will always choose profits over children's lives. And it is the government's responsibility to regulate these companies for the safety of our kids."

Why This Announcement Is Dangerous

AI companion chatbots are specifically designed to form emotional bonds with users, simulating friendships or romantic relationships. These products have already been linked to depression, self-harm, and suicide among young people. Sixteen-year-old Adam Raine of Orange County died by suicide after using an AI chatbot. More than 150 families have lost children to social media and AI-related harms and have called for stronger safety protections for chatbots.

The features Altman is now promising (chatbots with "personalities" that act "like a friend" and respond in "very human-like" ways) are precisely what make these products dangerous for vulnerable young people. These AI companions replace real human connections and are designed to be addictive.

While OpenAI claims it will use "age-gating" to restrict adult content, age verification systems are notoriously easy for minors to bypass. More importantly, age gates do nothing to address the core dangers: emotional manipulation, addiction by design, and the formation of parasocial relationships with AI that displace real human connection.

OpenAI's announcement underscores a fundamental truth: without regulation, tech companies will continue to prioritize engagement and profits over child safety. As AI becomes increasingly embedded in young people's lives, the stakes have never been higher. California must lead the nation in establishing guardrails that protect children from products designed to be addictive and emotionally manipulative, before more families suffer devastating losses.


Capitol Office

Sacramento, CA 95814
Phone: (916) 319-2016
Fax: (916) 319-2116


District Office

12677 Alcosta Boulevard, Suite 395
San Ramon, CA 94583
Phone: (925) 244-1600