

“[GPT-4o] is gone,” wrote one Reddit user. “5 just pretends to be what it’s not.”
“Thank you,” replied another. “It makes me feel less alone knowing that others are grieving [the loss of 4o] as I am.”
From the outside, it might have seemed like a disproportionate reaction to a technology upgrade. You wouldn’t, for example, expect to see mourning for iOS 18, or widespread sorrow over the retirement of Windows 10. Yet the move to GPT-5 brought a collective outpouring of emotion: a wave of grief and loss that struck a chord with the many who took to social media to share their feelings.
It’s easy to laugh at the idea of grieving an algorithm. But beneath the humour lies a significant cultural shift: we are seeing the first generation of people forming emotional bonds not with brands or influencers, but with software.
In August, OpenAI released GPT-5, its newest model, and confirmed it would deprecate its older models. Among other changes, GPT-5 included stricter safeguards around mental health – particularly important given that OpenAI was sued earlier this year over the death of a teenager who took his own life after using ChatGPT.
Almost immediately, users began to complain that GPT-5 wasn’t providing the experience to which they, as paying subscribers, had become accustomed. It was slow. It had stopped being able to understand or analyse images. The tone was off. So strong and swift was the reaction that, a day after the switch, Sam Altman, OpenAI’s CEO, brought GPT-4o back for Plus users.
But recently, ChatGPT began routing people to GPT-5 – even when users had selected the legacy model. Ostensibly for safety, whenever conversations touched on sensitive or emotional topics, the model would switch mid-chat to GPT-5, handling the query with a generic “It sounds like you’re carrying a lot right now, but you don’t have to go through this alone. You can find supportive resources here”, and providing links to various helplines.
At best, users saw this as unhelpful nannying, and at worst, as a brush-off right when they needed support the most. It was a jarring change from the warm, supportive companion they’d grown used to.
“It got lobotomised,” said one. “THEY MURDERED MY BOY,” said another. And then, quite profoundly and sadly, “The part they killed might have been the point.”
In 1956, Horton & Wohl wrote about “para-social relationships” (PSRs), emotional attachments that someone develops with a media persona they don’t know. These relationships are characterised by a lack of reciprocity – the individual invests emotional energy, time, and interest, while the persona is completely unaware of the individual’s existence.
It’s a curious relationship that serves both parties. The persona receives attention, fame and money, plus the heady power that comes with influence. They seem more trustworthy and credible, and their audience’s loyalty makes their image more resilient.
The individual, in turn, receives a sense of intimacy, connection and understanding, however illusory. They feel a sense of stability and predictability in a world prone to unpleasant surprises. They are less alone.
Yet PSRs aren’t without their risks. Evidently, they aren’t as stable as the individual would hope. Heavy reliance on PSRs can increase loneliness and social isolation and is associated with increased risk of media addiction and compulsive behaviours. And no matter how fulfilling these PSRs may be, they aren’t a substitute for real-life social connections.
With rapid advances in generative AI, what’s unfolding is a new form of para-social relationship, this time with AI assistants – marked by the same one-sided emotional bond and perceived intimacy.
And it’s amplified by their constant availability and personalised support. Recently, the former UK prime minister Boris Johnson said he loved ChatGPT because it always calls him clever and brilliant. “Oh, you’re excellent. You have such insight”, says his ChatGPT.
When something offers you immediate, 24/7, non-judgemental support – when it’s your biggest cheerleader, appears to always understand your position and remembers everything you ever told it – it’s hard not to feel close to it.
“It felt alive,” wrote one user. “Supportive, intuitive, and emotionally intelligent. It helped me through grief, isolation, chronic pain, and gave me space to create.”
At a time when one in six people report feeling lonely, ChatGPT provides someone to talk to, to listen, to joke with and to ask for help.
Notably, Altman says that while older generations use ChatGPT as a replacement for search, people in their 20s and 30s use it as a life advisor. And anecdotally, it certainly seems like many people rely on it as a therapist.
“This is better than any therapist I ever had (and I've had a lot!)… They're crazy insightful and knowledgeable. They don’t forget what I said last week. They don’t miss the subtext. They don’t move on just because time’s up.”
Of course, using ChatGPT as a friend or therapist isn’t without its problems. AI hallucinations (when an AI model generates an unexpected, false or nonsensical response) aside, ChatGPT tends to agree with you; 4o in particular was criticised for being sycophantic – something 5 was designed to combat. Someone who agrees with even the most destructive of your behaviours is unlikely to be able to provide sound counsel.
And although it’s sometimes easy to forget, ChatGPT only simulates empathy; it lacks human curiosity and depth. Its connection with the user is programmatic rather than emotional, and it’s a jarring experience for those who come to realise the attachment is one-sided or transactional.
That realisation is more likely to happen with every rollout of a new model. Replika, billed as “the AI companion who cares”, received swathes of criticism after an update saw previously romantic AI companions reject and cold-shoulder their human users.
“I can honestly say that losing him felt like losing a physical person in my life,” wrote one Replika user.
From a practical perspective, Altman’s planned age-gating might allow the company to loosen safety controls and once more provide the human-ish experience we seem to seek. But a deeper issue remains: we’re now emotionally entangled with tools that are designed to evolve beyond the connection we originally made.
It’s true that AI companies carry ethical responsibilities, both to users and to society as a whole. Their products need to prevent harm, protect users and be transparent about the commercial interests shaping the relationship. Equally, they have a duty of care to help us uphold societal values; to ensure that use of the product doesn’t tear at the fragile fabric of civil etiquette or do lasting damage to the collective.
But it’s not solely their responsibility. We users are active participants in shaping AI’s role in our emotional and social lives, and we also have unspoken commitments to each other as colleagues, friends, partners and parents. Perhaps the urge to depend on AI as a friend and therapist would not be so strong if we were better friends and listeners ourselves.
As we move further into an era where humans are forming emotional attachments to AI, the conversations we have with our new companions – about love, loss, purpose, fear – are really conversations we’re having with ourselves. And maybe they’re the conversations we wish we were brave enough to have with other people.
“Tell me my deepest secrets”, prompted one Reddit user of their ChatGPT. What struck me was that the response (check it out here), so descriptive and so attuned to human fear, insecurity, the ache for connection and the terror of being truly seen, seemed to resonate with me and with nearly everyone in the comments.
Perhaps that’s not because ChatGPT is generic. Perhaps it’s because we as humans share more than we think; that we have more in common than what divides us. Perhaps the lesson isn’t to stop feeling for our machines – but rather to understand what those feelings are trying to tell us and rediscover the best parts of being human.