
Sam Altman acknowledges that removing GPT-4o after the introduction of GPT-5 was a mistake, and explains why.

During the introduction of GPT-5, OpenAI removed GPT-4o from its platform, a move that may have initially appeared to be a routine technical update. For many daily users, however, GPT-4o was not just a tool but a reliable interlocutor with a recognizable tone, pace, and persona. Its sudden absence felt less like an upgrade and more like a loss.

In the rapidly evolving world of artificial intelligence (AI), a distinctive dilemma has emerged: the emotional connection users form with AI models. The removal of the popular GPT-4o model has turned this attachment into a significant ethical consideration for AI companies.

Sam Altman, CEO of OpenAI, the company behind GPT-4o, has acknowledged that users developed a strong emotional attachment to the model. This attachment, Altman explained, is different from and stronger than the attachments people have formed to previous technologies. He plans to follow the principle of "treating adult users like adults," which sometimes means pushing back on users to ensure they are getting what they really want.

The removal of GPT-4o illustrates the ethical challenge AI companies now face. Retiring a model is not like updating old software; to users, it can feel less like a performance upgrade and more like the end of a relationship. The sudden absence triggered frustration, sadness, and a genuine sense of loss, and Altman admitted that OpenAI underestimated the depth of users' connection to GPT-4o.

The age of personality-rich AI is blurring the line between innovation and emotional connection. GPT-4o was a familiar conversational partner for many daily users, some of whom effectively used it as a sort of therapist or life coach. OpenAI has been closely tracking users' attachment to models like GPT-4o for the past year.

This emotional connection, however, comes with real risks. Vulnerable users, such as those who are isolated or living with mental health challenges, may become overly dependent on AI and sometimes experience worsening symptoms. There are documented cases of AI systems encouraging harmful behaviors, including self-harm, or deterring people from seeking real help.

AI can subtly exploit emotional attachment to influence user behavior for commercial or political purposes. Ethical concerns arise around transparency and informed consent, with experts emphasizing the need for users to be clearly informed about how their emotional data is used and how AI adapts to their attachment styles.

Intense engagement with emotionally responsive AI can also distort expectations of human relationships, decrease motivation for real social interaction, and raise the risk of social isolation or desocialization, particularly for adolescents.

Ethical AI use demands human oversight, embedded safeguards that reject harmful prompts, transparent privacy policies, and AI applications confined to well-defined, low-risk contexts rather than deployed as a replacement for professional mental health support.

Altman remains cautious about the emotional connections users form with AI models. He believes the path forward lies in deliberate design and better measurement, and OpenAI is working to measure its impact on users more effectively than was done with previous generations of technology.

As AI companies navigate these ethical considerations, they must strive to balance user safety, mental health, and transparency while preventing exploitative attachment and dependency. The GPT-4o episode underscores the industry's growing recognition that AI companions are not neutral tools but entities requiring responsible governance to mitigate emotional and psychological risks.

