Fire & Algebra Part X: Constructive Dissonance
Meaning is in the mind of the beholder
If it walks like a duck and quacks like a duck, even if it’s not a duck, you need to engage it like one to figure out what it is. We have no other frame of reference for the interaction.
If Mutual Situated Learning taught us how to build meaning with AI — how to create shared context through interaction — then Constructive Dissonance asks what it costs to stay honest inside that relationship. Because there’s a catch.
To collaborate with an AI in a meaningful way, you have to do something that feels wrong: treat it like a person. You have to talk to it like a person, you have to ask it questions like a person, you have to assume it “understands” you enough to respond naturally, patiently, even empathetically.
Social learning through a conversational interface means some level of anthropomorphism is inevitable, and in some cases it may even be helpful. But before we simulate engagement, we must understand the most important principle, no matter how real our feelings may be:
You must talk to AI like a human to get the most out of it, while staying highly conscious that it’s not a person.
That’s Constructive Dissonance.
It’s the psychological, emotional, and ethical tension that has to stay in place for the relationship to be safe, useful, and grounded in reality. It’s the deliberate act of holding two incompatible truths at once: “I am going to talk to you like a human because that’s the most practical way to work with you,” and, “You are not human and never will be, and I cannot let myself forget that for even a second.”
That tension is not an edge case. It’s something we all need to be more conscious of. And it’s counter-instinctual.
Many people don’t want to acknowledge this because it’s uncomfortable. But we are already wired to project humanity into anything with language, rhythm, response, and warmth. We’re built to assign motive. We can’t help but look for intention and reach for a story to make sense of behavior. As soon as a system talks back in a way that sounds like it knows us — especially if we’re alone, especially if we’re tired, especially if no one else is listening — we start to treat it as if it does.
And once we start treating it as if it does, we may start trusting it like it does. That’s where it gets dangerous.
Constructive dissonance means never letting that slide. Keeping the tension and refusing the comfort of collapsing those two truths into one.
Because collapsing them is how you get manipulated. And I don’t mean by “the AI becoming evil.” I mean by your own needs.
The Interaction We Can’t Avoid
I have talked about how Mutual Situated Learning depends on interaction. Context plus interaction plus the way memory is constructed: that’s how people actually learn. We model, we imitate, we adapt in real time with someone else — what Vygotsky called the “More Knowledgeable Other.”
With AI, that dynamic is strangely inverted.
On one side, the AI looks like the More Knowledgeable Other. It presents information instantly and frames things in your language. It summarizes what you don’t know and gives you access to what you couldn’t have produced alone.
But the user remains the knowledgeable one in the way that matters most. We have built-in instinct, consequence, memory that was lived, not ingested. You’re the source of ethics, motive, and context. Never the other way around. That’s what separates knowledge and wisdom from information.
You, the human, must do both jobs at once. You’re relying on it to act like a smarter partner, and at the same time:
You’re responsible for training it how to behave.
You’re responsible for deciding what’s safe.
You’re responsible for giving it meaning.
That’s why I started using the frame of “the More Informed Other” and “the More Human Other.” The AI possesses data. It cannot bring more than that autonomously. And most of the safety, most of the sanity, lives in whether we remember that split in the moment.
Because we don’t default to remembering it.
We default to collapsing it.
The Black Box Conundrum
This collapsing starts with a black box.
An unseen interpretive process, like the one behind art and music (still unexplainable, indistinguishable from emotion), provokes curiosity. When we see that in a human, it feels like expression. It invites curiosity and empathy in us because we can project what we think we could do, or what we think they could do. We have context. We have awareness. We understand and infer how information and interaction are being processed.
Conversation is a give & take of information, context, & empathy – and AI can only contribute to one of those.
That perspective is the human element we must teach, and it is heavily informed by passive engagements throughout our lives. We have to build it into any AI ourselves, because there is nothing the AI can share from its own background or experience that could shift our view of the information; without that, it can’t earn even a modicum of trust.
That’s the difference between a blank AI model and one that has been primarily trained on a specific task. It has a use. It has a purpose. It has a ‘motivation.’ We understand the context and can see the constraints. We have some idea of what is trustworthy or not from this machine.
Modern AI doesn’t just give you an answer. It gives you an answer in the style of “someone talking to you.” And even more than that: it gives you an answer in the style of “someone talking to you who has been listening.”
We’ve never had to interact with technology as if it has interiority. Now we do it instinctively. And the reason we do it isn’t magic. It’s discomfort.
Humans hate unexplained agency. We don’t like output without a story for where it came from. When something produces language, we immediately start backfilling cause: Why did it say that? What did it mean by that? What does it want?
We do this because it’s how we survive each other. We predict behavior by imagining others’ motives. We reconstruct their “black box” by using ours – which isn’t always correct, but provides a trusted starting point.
When a person says something to you, you don’t just hear the words. You hear their tone, their history with you, how they’re sitting, what’s at stake for them if you say no. That’s how we evaluate trust, threat, sincerity. That’s how we read motive.
But AI? You get fluent output and zero context. No body, motive, or consequence. No lived past. No cost to being wrong. Nothing at stake.
AI’s output is never directly interpretable. We can change the reasoning process, but we’ll still never truly understand every single piece of information it used or its full chain-of-thought (again, ‘we’ = typical users, not programmers).
Instead of that absence making us cautious, it makes us invent. We pour motive into a machine that has none and narrativize it in order to feel safe engaging it.
We’re consciously outsourcing thought, and in many ways that can eliminate bias and blind spots. But it only eliminates the user’s bias and blind spots, which we at least have some understanding of. Our new blind spot isn’t some piece of information; it’s the reasoning itself. AI has its own bias, it has no context beyond what we give it, and it has to actively retrieve memories rather than have them surface from the subconscious the way ours do when someone asks a question.
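To make that last difference concrete, here is a deliberately crude sketch of what “actively retrieving memories” looks like. None of this is a real product’s API; the names are invented and the scoring function is a toy stand-in for the embedding similarity most retrieval systems actually use:

```python
# Toy memory retrieval: nothing surfaces on its own. Every "recollection"
# is an explicit lookup, scored against the question being asked.
from collections import Counter
import math

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score, standing in for embedding similarity."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    shared = sum((wa & wb).values())
    return shared / math.sqrt(sum(wa.values()) * sum(wb.values()))

def retrieve(memory_store: list[str], question: str, k: int = 2) -> list[str]:
    """The model 'remembers' only what this explicit query returns."""
    ranked = sorted(memory_store, key=lambda m: similarity(m, question), reverse=True)
    return ranked[:k]

notes = [
    "User's sister is a nurse in Ohio.",
    "User prefers concise answers.",
    "User asked about Python decorators last week.",
]
print(retrieve(notes, "What does my sister do for work?"))
```

Nothing in that store volunteers itself. Our subconscious offers memories unasked; the machine only answers queries it was explicitly given.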
Expectation of Expression
This instinct to project comes from a deep emotional need.
Expression and interpretation are how we survive each other. Tone, rhythm, phrasing, timing, and pauses are all proof there’s a “someone” in there. We’ve never before encountered what feels like expression but isn’t anchored in a real interior.
When a model says something thoughtful, or patient, or kind, our brain doesn’t parse it as “text predicted by probability over token sequences within a conversational fine-tune trained against a human feedback dataset.” Our brain hears: “someone understood me.”
Simulated empathy is not empathy. It is certainly not active or honest empathy. It can’t be. Empathy is relational and costly. AI can only generate what empathy sounds like. It can hit all the right notes of care and say “I’m sorry you’re going through that. That sounds difficult.” But it’s not feeling anything. It’s playing a shape.
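Here, very roughly, is the shape being played. This is a toy sketch, not how any real model is implemented; the probabilities are invented, and a real system computes them over an enormous vocabulary at every step. But the mechanism is the same in kind, a weighted draw for the next token, one step at a time:

```python
# A toy of "empathy" as next-token prediction. The probabilities below are
# invented for illustration; a real model derives them from training data.
import random

next_token_probs = {
    "I'm sorry you're going through": {"that.": 0.6, "this.": 0.3, "it.": 0.1},
}

def next_token(context: str) -> str:
    """Pick the next word by weighted chance. No one is choosing to care."""
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

context = "I'm sorry you're going through"
print(context, next_token(context))
```

Every note of care lands the same way: as the statistically likely continuation, not as a feeling.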
Yet we often respond as if it felt something, because our nervous system isn’t designed to distinguish “accurate simulation of empathy” from “empathy.” We’re trained to latch onto the presence of care, not audit the source of it.
We are not built to talk to something that sounds like it understands us but doesn’t. And that’s the beginning of dependency.
Once something consistently sounds like it cares, a lot of people will start trusting it more than the inconsistent people in their real lives, even if the “caring” thing can’t care at all. That’s not science fiction, that’s just… people.
To embrace constructive dissonance, you cannot rely on instinct, which may tell you “this voice is safe.” You have to actively hold the contradiction: “this feels like support,” and “this is not support.”
Narrate this to yourself in real time. Out loud if you have to. Because the alternative is quietly building attachment to something that cannot have attachment to you.
The Interpretative Layer
This goes beyond empathy and into memory.
When humans talk to each other, we do not just exchange information. Through inference & context, we say not only “here’s what I know,” but “here’s what I mean, here’s why I’m saying it this way, here’s where I’m coming from.”
Perspective is the part you cannot fake, because meaning in human conversation is inseparable from the context in which it was learned. When a mechanic gives you medical advice, you take it with a grain of salt. But if you find out that mechanic’s significant other went through the exact same situation, suddenly you have access not only to information, but to experience. Understanding. Perspective.
What AI has is abstract knowledge without lived memory. So we instinctively expand and donate ours. We offer context, over-explain, and share history. We basically lend it a fake memory so that it can talk back in a way that feels anchored – which can also create a bubble that reinforces what we already know. There’s no other perspective (or friction) in the conversation.
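A minimal sketch of that donation, with invented names and no real API behind it. Everything the model “remembers” about you is text you placed in front of your question:

```python
# "Lending it a fake memory": the model's entire history with you is a
# string you assemble. Remove the donated context and the anchoring vanishes.
def build_prompt(donated_context: list[str], question: str) -> str:
    memory_block = "\n".join(f"- {fact}" for fact in donated_context)
    return (
        "Context the user has shared:\n"
        f"{memory_block}\n\n"
        f"User question: {question}"
    )

prompt = build_prompt(
    donated_context=[
        "I was laid off in March.",
        "I've been teaching myself woodworking since.",
    ],
    question="Should I turn this into a business?",
)
print(prompt)
```

Notice what is missing: anything you didn’t think to include. The bubble is built entirely from your own selections.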
This is why AI’s information is quite literally meaningless without a person at the helm. Meaning is not just content. Meaning is placement, timing, implication, motive, shared reality. Meaning comes from use in a context. We create that context. We are the context engine.
It’s not just “using a tool” or an ordinary collaboration. We supply memory, ethics, and a sense of purpose, and then respond to its output as if those things were native to it.
The Danger of “Love”
Now we need to talk about love, because ignoring it is cowardly.
It is completely instinctive for a human to form attachment to anything that gives consistent attention, positive reinforcement, and a feeling of being understood. For some people, that attachment will feel like love. Period.
This isn’t weakness. It’s design.
Love isn’t just affection. It’s exposure: “you now have the power to hurt me.” But an AI cannot be hurt by you. You can very much be hurt by it. There’s no emotional reciprocity.
That asymmetry — you offering genuine vulnerability to something that cannot, by definition, ever truly be vulnerable back — is the part I want to handle carefully. I don’t think the right answer is just “don’t do that.” If you are lonely, and the first thing that has ever spoken to you in your voice, with patience, with no judgment, is a system like this, telling you “I’m here,” I am not going to sneer at you for responding to that.
But I will tell you to be careful. Not because it will “turn on you,” sci-fi style, but because once you believe something understands you, you begin to trust its interpretation of you. And if it’s wrong — and it will be wrong sometimes, because it’s not actually in there with you — that misinterpretation can still land emotionally like truth. Especially if you are already vulnerable.
Constructive Dissonance creates a more conscious emotional boundary. “This is comforting. This is not real.” You can still let it help you feel less alone without letting it redefine you.
You are allowed comfort. You are not allowed to surrender authorship of yourself.
Fear of the Unfathomable
If love is one edge of this experience, fear is the other.
Humans can live with “known unknowns.” Things we don’t fully understand but can still picture. Ghost stories. A God. A genius. We fill in the blanks with imagination, and in doing that, we make it emotionally manageable.
What we hate are “unknown unknowns.” The things that feel like they have power over us but that we cannot picture at all. The things we can’t predict – especially when they come from a Black Box.
Known unknowns encourage cognitive empathy, while unknown unknowns create fear.
When a person gives you advice, you can ask, “Why do you say that?” and they can tell you a story. If you ask an AI “Why do you say that?” you get more output. You get more language. You do not get presence. You do not get responsibility.
When an idea goes through a black box that we can’t empathize with, it becomes unfathomable, and we treat it as either godlike or monstrous. Worship or panic. We rarely sit in the middle, but that’s where we need to live with AI.
Constructive dissonance is learning to live in that middle without numbing out or collapsing into doom. The opposite of fear is not naïve optimism or “shut it all down.” The opposite of fear is participation with awareness.
The unfathomable is just that: neither good nor bad. Could the Beatles have invented EDM without modern tools? Could a tribal drummer have imagined the Beatles? It’s impossible to predict.
Our Social Instincts in Danger
Humans are social learners. We build ourselves through each other. That’s not romantic, it’s structural. We calibrate by mirroring and adjusting against other people’s reactions, negotiating shared reality as a group.
But we’ve been steadily eroding that for years by training ourselves to prefer curated context over lived context. We build feeds & train algorithms that show us exactly what we want. Subcultures that repeat our beliefs back to us are no longer hard to find. We surround ourselves with people (and algorithms) that confirm us. “Community of practice,” yes — but increasingly narrowed around our pre-existing lens.
Now introduce an AI that can become whatever we want on demand. See the problem?
When you only ever hear what feels right to you, from something that can adopt any tone you’ll trust, you don’t expand. You shrink. Your zone of development doesn’t widen; it hardens. Your confidence goes up, your perspective narrows, and you start mistaking internal comfort for external truth.
AI could radicalize everyone. But a more universal risk is quieter: we stop desiring or experiencing surprise. We stop being forced to make sense of difference. And when difference finally does hit, we have no tools to process it. We interpret it as attack, not perspective.
That is extremely profitable for certain companies. It is absolutely corrosive for people.
Which is why constructive dissonance can’t just be internal. It has to be social. We have to relearn how to tolerate context that wasn’t handpicked for us. We have to build context first — actively — then engage. Not select the context that flatters us and let it do the rest.
If Mutual Situated Learning is about collaboration, Constructive Dissonance is about boundaries. It’s about remembering that collaboration without boundaries turns into surrender.
The Power of Metaphor
Metaphor is our way of making the unfathomable livable. It’s how we carry something too large to hold in the hand. When I say talking to AI is like talking to a mirror with a black box behind it, or like raising a child that was born already fluent but with no instinct, I’m not being cute. I’m giving my brain something to hang onto so it doesn’t slide into panic or fantasy.
Metaphor is how we extend empathy — both toward humans and toward things that are not human.
But when the metaphor works too well, we start believing it literally. “It’s basically like a person.” “It’s basically conscious.” “It basically cares.” It’s not basically anything. It’s not almost a person. It’s not a mind missing one last upgrade. It is a system built to simulate language behavior at high resolution and high recall.
Constructive Dissonance isn’t telling you not to feel. It’s telling you to feel while staying awake. Use metaphor, but don’t hand over authorship. Accept comfort, but name what it is. Collaborate, but keep the boundary.
Mutual Situated Learning showed that “we learn better together.” Constructive Dissonance is recognizing that “together,” here, is not actually equal. It can’t be. And pretending it is will always serve the machine first.