Lonely ChatGPT users will soon have the opportunity to engage in kinky conversations with artificial intelligence.
In an X post on Oct. 15, OpenAI CEO Sam Altman revealed the upcoming change. It will be applied to the popular large language model by the end of 2025.
“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” the major tech company leader explained.
Altman said the move follows the implementation of an array of safeguards for people experiencing mental health issues. Those measures, he acknowledged, made ChatGPT less enjoyable for users without such problems.
“We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” Altman stated.
“Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” he added.
An OpenAI representative told TechCrunch that ChatGPT will use an age prediction system, currently in development, to ensure that only adults receive erotic messages from it. Critics have warned that some users could bypass the age requirement with fake IDs or shared accounts.
Relatedly, OpenAI announced the establishment of an Expert Council on Well-Being and AI this week. The council will advise the company on appropriate interactions with users and provide other guidance on its activities.
Altman expanded on his original erotica statement in another X post after it drew a great deal of media attention.
Ok this tweet about upcoming changes to ChatGPT blew up on the erotica point much more than I thought it was going to! It was meant to be just one example of us allowing more user freedom for adults. Here is an effort to better communicate it:
As we have said earlier, we are… https://t.co/OUVfevokHE
— Sam Altman (@sama) October 15, 2025
Read more: Getting AI bots to plan your next trip could be dangerous
OpenAI continues to face criticism over its practices
The upcoming naughty alteration to the chatbot has attracted condemnation from those who feel it is inappropriate.
“I’m seriously considering not using ChatGPT anymore,” commented one media analyst in a popular social media post. “They just announced that they are not going to focus on mental health, and that they will add capabilities for ‘erotica for adults.’”
“This is just shortly after they have been sued for creating mental health problems,” he continued, “and their Sora being able to create deep fakes of, well, anyone.”
That backlash is minor, though, compared to a series of other recent controversies surrounding the AI chatbot.
One of the most recent occurred this summer, when the parents of a 16-year-old boy sued OpenAI over the LLM’s alleged role in encouraging their child to take his own life.
The company’s release of Sora 2 this month has also drawn backlash over deepfake and copyright concerns, recalling the explosion of Ghibli-themed content across the internet earlier this year.
Additionally, the controversial development follows the departure of several employees over concerns about criminal activity at the company, and even one arguably suspicious death.
In an awkward interview with Tucker Carlson last month, Altman affirmed his belief that Suchir Balaji’s death was a suicide despite the strange circumstances surrounding it. Balaji’s family firmly believes he was murdered, and so does Carlson, based on his interpretation of the evidence.
Read more: OpenAI blocks accounts tied to China’s surveillance and phishing efforts
Follow Rowan Dunne on LinkedIn
rowan@mugglehead.com
