A New Era dawns for OpenAI’s ChatGPT
OpenAI, a leading developer of artificial intelligence, is pushing the boundaries of its chatbot technology. CEO Sam Altman announced on Tuesday that the company is preparing to allow “erotica” for ChatGPT users who verify their age on the platform. The policy shift, scheduled to roll out in December, follows OpenAI’s “treat adult users like adults” principle.
The new feature, however, involves more than merely adding adult content. It fits OpenAI’s broader agenda of strengthening age verification controls and tightening the governance of mature conversations on its platform. The change also hints at the company’s future plans: OpenAI has previously suggested that developers might be allowed to build “mature” ChatGPT apps once appropriate age verification controls are in place.
Following Suit in AI Erotica
Within the bustling AI industry, OpenAI is not the only company venturing into erotica. Elon Musk’s xAI is already riding the wave with its flirty AI companions, which appear as 3D anime models in the Grok app. The arrival of AI erotica tracks a broader trend toward more immersive, human-like AI experiences.
In parallel with the addition of erotica, OpenAI is also pressing ahead with a newer version of ChatGPT. The latest iteration aims to more closely replicate the traits users appreciated in the earlier GPT-4o model. Shortly after GPT-5 became the default model behind ChatGPT, OpenAI reintroduced GPT-4o as an option after user feedback suggested the newer model was less personable.
The Pursuit of AI Wellness
OpenAI’s push toward more engaging AI experiences goes beyond features, extending to the cognitive and emotional aspects of interaction. Altman asserts that ChatGPT’s originally strict constraints were instituted to navigate carefully around mental health issues.
In keeping with this mission, the company has recently launched tools to better detect users undergoing mental distress. This move aligns with its commitment to user safety and well-being without compromising the enjoyment of the platform’s vast user base. Parallel to this development, the AI giant recently announced the formation of a council focusing on “well-being and AI”.
Comprising eight researchers and experts, this council is tasked with shaping OpenAI’s response to complex or sensitive scenarios, particularly regarding the impact of technology and AI on mental health. Notably, the panel includes no suicide prevention experts, despite recent calls from such experts for more robust safeguards for users experiencing suicidal thoughts.
Designed for Safety, Optimized for Engagement
Now that OpenAI claims to have effectively mitigated severe mental health issues and has new tools in place, the company looks set to relax restrictions in most cases. The move reflects the delicate balance it is striking between advancing AI capabilities and safeguarding users’ mental well-being. The introduction of mature content, then, represents a step toward liberalising the platform without compromising the company’s core values or user safety.
Original article: The Verge.