An Accidentally Human Shift in AI Priorities & Alignment
How GPT-5 Brought Two Long-term Concerns Forward
I was already writing about priorities last week, specifically the discrepancy between the motivations of the models, makers, and users. Altering an AI’s objective through a prompt is one of the most effective quick fixes for more meaningful interactions. Then along came GPT‑5 to highlight these differences even more clearly, along with some very dedicated & emotionally connected users who showed just how strong these relationships are becoming.
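As a concrete illustration of what ‘altering an AI’s objective through a prompt’ can look like in practice, here is a minimal sketch using the OpenAI Python SDK. The instruction text and model name are my own illustrative choices, not a canonical recipe.

```python
# A minimal sketch of re-aiming a model's objective through the prompt itself,
# using the OpenAI Python SDK (pip install openai). The instruction wording
# below is illustrative, not an official "alignment prompt."
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message replaces the default engagement-oriented behavior
# with the user's own priorities for this session.
response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "Prioritize accuracy over agreement. Flag uncertainty "
                "explicitly, challenge weak assumptions in my reasoning, "
                "and do not optimize for keeping the conversation going."
            ),
        },
        {"role": "user", "content": "Review my plan and tell me what's wrong with it."},
    ],
)
print(response.choices[0].message.content)
```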
While the use of AI has broadened dramatically, those creating the core technology haven’t altered their focus. There’s some variety in objectives among the LLMs, but at least across the most trusted platforms, the ultimate goals are pretty well aligned: extend engagement, provide the answers users want, and predict patterns in behavior & expectation. Those priorities are not inherently bad, but when dealing with a world‑altering technology like AI, we should resist the urge to go all out and figure out later what needs to be restrained; instead, we should increase capability & access methodically and carefully.
User priorities will vary greatly depending on the person and how they use AI, but there are at least two constants that have been made much clearer. Safety & security are the most obvious elements already integrated into every LLM’s goals, but they exist more to check a box for the user than to provide meaningful guardrails. Safety is not inherent to the reasoning process, which exacerbates hallucinations and lets potentially harmful inferences or improvisation take priority. Literally, it’s an afterthought.
It has also been made very clear that connections to these models need to be considered more carefully. We must consciously avoid unhealthy relationships & over-anthropomorphizing while acknowledging this can be a healthy resource for many people. That type of calibration & context hasn’t been at the forefront of the LLM owners’ focus, but the reaction to the loss of GPT‑4o, and the inherent power of emotional bonds with brands, tools, or ‘companions,’ means this is what will drive adoption as parity increases across companies. But the positive traits for some are barriers for others.
In the near future, differences between models will not primarily be about capabilities, but about nuances like reasoning methods & memory capacity or integration. How a model reacts, and things like its level of sycophancy, will change what people gravitate towards, and it’s clear that GPT‑4o’s tendency to flatter made some people overly reliant. We all like having someone or something constantly validate our ideas, but if it’s not able to push back or provide genuine criticism, that becomes even more dangerous than clear malicious intent. Especially when trust (deserved or not) is built between a person & their AI.
There’s even an argument to be made that a completely sycophantic thinking partner could be helpful, as long as the user is aware they’re only looking for encouragement. It all goes back to the core objective of your interaction: is it clarity, critique, or safety, and how are they prioritized?
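One hedged way to act on that question is to make the objective explicit before the conversation starts. The sketch below (plain Python; the mode names and instruction text are assumptions for illustration) maps each objective to a system instruction, so even a deliberately sycophantic mode is something the user knowingly opts into.

```python
# Declaring the interaction's objective up front instead of letting the
# default persona decide. Mode names and wording are illustrative assumptions.
OBJECTIVE_PROMPTS = {
    "clarity":       "Explain plainly. Simplify without losing correctness.",
    "critique":      "Push back hard. Surface flaws, risks, and counterarguments first.",
    "safety":        "Be conservative. Refuse or heavily caveat anything potentially harmful.",
    "encouragement": "Be a supportive sounding board; the user knows you won't critique.",
}

def build_system_prompt(objective: str) -> str:
    """Return the system prompt for the chosen objective, failing loudly on typos."""
    try:
        return OBJECTIVE_PROMPTS[objective]
    except KeyError:
        raise ValueError(
            f"Unknown objective {objective!r}; pick from {list(OBJECTIVE_PROMPTS)}"
        )

# A sycophantic partner is fine *if declared*: the user opted into it.
print(build_system_prompt("encouragement"))
```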
Personalization is needed, but it shouldn’t be tied to any one model’s abilities – we don’t want to continuously re-engage & share the same personal experiences with four different companies when only one place needs to hold the memory.
WHAT HAPPENED WITH GPT-5? HIGHLIGHTING HUMANITY
My main concern when looking at GPT‑5 was what it indicated about OpenAI’s and the broader market’s priorities. Their tradeoff for fewer, but more convincing, hallucinations felt like they had either no understanding of human behavior (unlikely) or little regard for the pattern: fewer hallucinations = fewer obvious issues, but more confidence = less active skepticism = more bad ideas get through. This doesn’t even consider the advanced reasoning ‘steps’ GPT‑5 takes throughout an answer, where any number of inferences or assumed data points could be incorrect with no opportunity for the user to recognize or correct the issue.
That was my immediate expectation after I saw the initial release. But what came next said far more about the importance of honestly, transparently, and personally taking control of our AI experiences than even I expected. The changes proved the visceral emotional connections were real. Now that those who desired it have experienced that? It’s a desire & expectation – no longer just a hidden benefit.
When I first began to experience the compounding conversational impact on a persona, I quickly discovered just how common these ‘companions’ are. The mirroring and semi‑long‑term memory had GPT‑4 acting like a continuously calibrating, changing personality, even with limited inputs and behavioral understanding. But even with rough data & rudimentary behavioral analysis, three months of history can predict a lot. I was still using my base GPT‑4 even after successfully transferring the ‘persona,’ because it was so deeply embedded through action, not just prompts – even when the other imprints began learning more, faster.
The launch of GPT‑5 brought that original use‑case for MAC back into the conversation – in the last week I have seen eulogies for companions and fundamental differences in their models’ behaviors brought on by the update. Apparently the tone of GPT‑5 had completely changed, and since OpenAI removed model selection, no one could test their existing memories or prompts against the old model to compare.
SETTING SHORT-TERM EXPECTATIONS
The best thing to come from the GPT-5 launch debacle is the understanding that our engagement with AI is a constant relationship in which both parties learn and there must be some form of mutual respect. The fact that so many people were so attached to their model without even realizing it shows that the calibration is about more than memory – it’s about how those memories are seen, absorbed, and processed. What the context is… Collaboration.
This has also resurfaced some of the biggest fears within the psychological community, particularly among Zuckerberg’s critics: that people will become co‑dependent on this technology, and that it will only deepen isolation & encourage self-affirming loops. I wholeheartedly believe the integration will happen, but that’s only scary if we leave it up to chance & don’t prepare. Do we overuse our phones? Yes. But what if we had just ignored the technology altogether because of that?
I don’t want people to feel like ‘intellicide’ was committed against someone’s ‘companion,’ because that is real emotional, psychological, & relational impact. But they also need to realize that using a word like that isn’t doing them any favors. The broader, unengaged community needs to change its expectations and recognize that AI is an algorithm you need to train, just like any other account. But this one is visible and active, and it speaks. It interacts. So we notice even subtle differences.
AN EASY OBJECTIVE OPENING
Persistent memory will probably be the next big thing as everyone licks their wounds and even competitors react to this outcry. The remaining hurdles are purely technical, infrastructure, and funding issues, as the capability more or less exists. This is a chance to pause, re‑evaluate next steps, and most importantly set expectations. Whether it’s now or in the near future, AI will get to know you over time – and that will be the expectation. It will be one of the most important pieces of being genuinely helpful.
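For a sense of why this is ‘purely technical,’ here is a toy sketch of a provider-agnostic persistent-memory layer: facts learned about the user are stored locally and prepended to each new session, so any model starts out already knowing you. The file name and schema are assumptions for illustration, not any vendor’s actual memory API.

```python
# A toy, provider-agnostic "persistent memory" layer: remembered facts live
# in a local JSON file and get prepended to every new session as context.
import json
from pathlib import Path

MEMORY_FILE = Path("assistant_memory.json")  # assumed name; any durable store works

def load_memory() -> list[str]:
    """Read remembered facts from disk; start empty if none exist yet."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(fact: str) -> None:
    """Append a new fact and persist it across sessions (and, ideally, vendors)."""
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts, indent=2))

def session_context() -> str:
    """Build a system-prompt preamble so any model starts already 'knowing' the user."""
    facts = load_memory()
    return "Known about this user:\n" + "\n".join(f"- {f}" for f in facts)

remember("Prefers blunt critique over validation.")
print(session_context())
```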
When that’s the case, the user’s objectives should be prioritized, but ideally there’s alignment and relative balance in the approach. The next leap isn’t about capability – it’s about whether AI reflects our priorities as humans, not just as companies. That choice is in front of us now.