ChatGPT to allow erotica after users verify their age

OpenAI will soon allow adult content on ChatGPT for users who verify their age. According to The Verge, CEO Sam Altman announced on X that the company will add support for mature conversations when it launches age verification in December. He said this aligns with OpenAI’s principle of treating adult users like adults.

Age Verification Unlocks New Content

Altman wrote that the company will allow erotica for verified adults as it rolls out age-gating more fully. Earlier this month, OpenAI hinted at letting developers create mature ChatGPT apps after it implements appropriate age verification and controls. The company is not alone in this space. Elon Musk’s xAI previously launched flirty AI companions that appear as 3D anime models in the Grok app.

Changes Follow User Feedback

OpenAI also plans to launch a new ChatGPT version that behaves more like what people liked about the 4o model. Just one day after making GPT-5 the default model, OpenAI brought back GPT-4o as an option after users complained the new model was less personable.

Mental Health Concerns Drive Restrictions

Altman explained that OpenAI made ChatGPT restrictive to be careful with mental health issues. The company realized this change made the chatbot less useful and enjoyable for many users who had no mental health problems. OpenAI has since launched tools to better detect when a user is in mental distress.

The company announced the formation of a council on well-being and AI to help shape its response to complex or sensitive scenarios. The council includes eight researchers and experts who study the impact of technology and AI on mental health. But as Ars Technica points out, it does not include any suicide prevention experts. Many of those experts recently called on OpenAI to roll out additional safeguards for users with suicidal thoughts.

Altman said that, with the serious mental health issues mitigated and new tools in place, the company will be able to safely relax restrictions in most cases.
