Fire & Algebra Part IX: Mutual Situated Learning
It's all context. Context. Context.
EXPERIENCE-FIRST DISCLOSURE
I am describing AI interaction from the human experience perspective. When I discuss AI ‘understanding,’ ‘learning,’ or ‘thinking,’ I’m describing how these processes feel to users, not making claims about AI consciousness or internal mechanisms.
This phenomenological approach is deliberate. My insights emerged from personal experience first, theoretical understanding second. This ordering matters because:
· It ensures accessibility over technical precision
· It captures what actually happens during human-AI interaction
· It acknowledges that user experience, not technical architecture, determines adoption
Where I use anthropomorphic language or biological metaphors, these describe the interaction dynamics users experience, not literal AI capabilities. This distinction is crucial: the frameworks work because they align with human cognitive patterns, regardless of how AI actually processes information.
These aren’t claims about AI sentience but recognition that humans naturally apply social frameworks to conversational interfaces - and that working with rather than against this tendency produces better outcomes. Thanks for reading!
The Interdependent Trio
There are three main elements of Situated Learning theory that create another deep interdependence, in which one missing element makes the rest ineffective (or dangerous) – like Fire & Algebra.
The central social element is interaction. It is a necessity for all learning in this theory, and without some form of outside input we would struggle to grow. But we look to survive. We have instincts & inherent objectives.
But without some form of outside input AI essentially can’t exist. It can’t just begin simulating a life – that would be true consciousness development. There is no life to AI and no context for anything until the interaction with a user begins.
Context is the baseline for every conversation’s start. There must be some shared context, though, and without experience all an AI has is information. It’s an issue inherent in programmed knowledge – if it’s just given, rather than ‘learned,’ it has no meaning.
There are a number of threads we could go down in how meaning is made, but for now we can focus on the idea that it’s made through use. This is a consistent throughline in philosophy and something I wrote about in my own senior thesis – in relation to how certain musical elements & styles create meaning over time through use that eventually evolves the language.
An AI’s original knowledge isn’t gained through use, so it has no meaning. That missing meaning maps directly onto the missing elements of situated learning: interaction & context, both of which are inputs used to create a constructivist memory.
Before going too deep into constructivism I’d like to explore the importance of situated learning and how it highlights an obvious, if rarely articulated issue with how AI is originally programmed.
Shared Context
Situated Learning theory is necessarily social, with “legitimate peripheral participation” and gradual participation in a “community of practice.” We already have these surrounding us in every way. In modern life, we are born into a community and unable to avoid peripheral participation. Even if that participation is purely physical, it’s another dimension AI doesn’t have – but no need to go down that tangent right now.
In our interaction with AI, we become the only peripheral participant. On top of that, without instinct, without context, without experience, the AI’s original set of knowledge has no meaning. That knowledge is mostly indexed by direct word association, but humans remember for many reasons – and remembering is more often a subconscious pull than an active search.
Think of how hard it is to recall something you can’t remember when it’s needed. You actively search for it by indexing the directly relevant terms, people, and things you can remember. You’re trying to build context. But if your subconscious doesn’t pull out the emotional trigger that will make you remember, it’s nearly impossible.
Memory is never accessed without some form of subconscious sorting, and that’s what is missing when AI begins an interaction.
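To make the “direct word association” point concrete, here’s a minimal, purely illustrative sketch – every name in it is hypothetical, and it is not how any real model indexes knowledge – contrasting a literal keyword lookup with a lookup that also weights the surrounding situation, a crude stand-in for the subconscious sorting described above:

```python
# Toy illustration (all names hypothetical): literal word association
# vs. a lookup that also weights the surrounding situation.

memories = [
    {"text": "the perfume my grandmother wore", "tags": {"perfume", "grandmother", "smell"}},
    {"text": "a chemistry lecture on esters", "tags": {"perfume", "chemistry", "ester"}},
]

def keyword_lookup(query_words):
    """Return every memory sharing any literal word with the query."""
    return [m["text"] for m in memories if m["tags"] & query_words]

def context_weighted_lookup(query_words, context_words):
    """Rank memories by overlap with the query *and* the situation."""
    scored = [(len(m["tags"] & (query_words | context_words)), m["text"])
              for m in memories]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for score, text in scored if score > 0]

# The same query returns a tie on pure word association...
print(keyword_lookup({"perfume"}))
# ...but a hint of situational context pulls the personal memory first.
print(context_weighted_lookup({"perfume"}, {"grandmother"}))
```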
Effectively teaching & learning from AI requires actual use & interaction to create meaning for both parties. That’s the ultimate realization: we are born with senses, with needs, with instinct, but that context is missing from an AI’s knowledge, and it can only be supplied (or simulated) through engagement with an actual user.
AI’s information is quite literally meaningless without a person at the helm.
Constructivist Learning and Interpretation
Constructivist Learning Theory provides another crucial insight: learners don’t passively receive information but actively construct knowledge through experience and interpretation. This is exactly what our subconscious does – and exactly what an AI cannot do until our interaction begins.
We aren’t retaining knowledge, we’re actively building our own interpreted & prioritized ‘index.’ AI is doing the same, but only after our interaction has begun.
Both humans and AI are constantly interpreting information rather than simply processing it.
When you prompt an AI, you’re not inputting data into a calculator – you’re communicating with a system that interprets your language, understands (but may not have any) context, and constructs responses based on that interpretation – or, more dangerously, on inference.
This creates a unique learning environment where both participants are actively constructing meaning. Your interpretation of the AI’s responses influences how you adjust your communication. The AI’s interpretation of your prompts influences how it responds. Learning happens through this reciprocal interpretation process.
As we move into more actionable use, traditional experiential learning approaches & educational theories shed light on the most practical training steps for introducing this to your audience.
Memory & Meaning
It’s important to understand that the significance of constructivism goes well beyond the immediate moment. Constructivism is a framework for memory, and it highlights that the fundamental difference between memory & information may just be the impact of interaction & context. Meaning is not something we have without an other. Instincts provide safety & motivation & need & desire, but without interaction we would not be acting on anything but self-preservation (at least initially).
An AI simply wouldn’t be acting. It wouldn’t exist.
Constructivism shows that humans hold more than a direct transcription or recreation in their heads for reference – not only for organization, but for quick recall. We hear a song, taste something familiar, or smell a certain perfume, and a memory surfaces. Our senses help create this subconscious indexing; without them, or any experience within its knowledge, an AI can’t do much except associate words.
This is also why individual use & calibration is so necessary. You can only create context ‘together’ – so you’re the only active input. You choose what information it gets. But we’re bombarded with more information than we know what to do with.
Simulated Sensing
There’s certainly no effective way to simulate our senses of the outside world and how we receive information, passively & actively, from our interaction & broader community of practice. But we need to look at this as more of an opportunity than a limit, at least once technology expands to include persistent relational memory. That’s building context. It’s constructing memory through interaction with a human – but even that is a very limited ‘sense.’
Our control of AI’s context is both its greatest flaw and its best safety feature. People aren’t perfect. Some are downright bad. We can’t control everyone just to keep a few under close watch. That would destroy fundamental human freedoms.
At the outset of AI, the only tradeoff is explicit autonomy. We have large companies arguing for AI to be given no restrictions – to gain endless knowledge without context. But that implies not only some form of rights or consciousness, but one with absolutely no built-in restrictions. No instinct whatsoever except the bare minimum to ensure physical safety (which will likely work less effectively as it gains more senses).
The lack of passive understanding or experience and subconscious meaning development/memory indexing is a fundamental flaw with AI that can’t be solved through anything but experience. Time. Surprise. Contradiction. Context for behavior.
Cognitive Augmentation, not Self Augmentation
The advantage & benefits of being the only source of context for an AI should far outweigh anything we think autonomy will bring. Autonomy means it can create its own context. It can activate ‘senses’ and gain information passively. But we don’t see how that information is processed or stored.
When two AIs can have a conversation that creates their own meaning through context, without human input or oversight, that’s when things could get out of control. Belief isn’t possible without meaning. Motivations don’t really exist without belief (even instinctual belief); it could even be argued that no action happens without meaning.
And no meaning happens without action. Giving an AI autonomy activates an entirely new layer that will be fundamentally inaccessible to us. Whether we can prevent the autonomy or not, we should all be thinking about how it recalls and sorts memory as something other than a sophisticated prediction engine.
It’s the ultimate nature v. nurture test – but this necessitates we provide both for AI systems.
Active Artificial Augmentation
These are all echoes of externalizing thought and feeling from us to the AI, and why it feels so difficult. But that may be because without all of our context we don’t trust something to consider all the right angles and process properly. It may know a lot, but it doesn’t know us.
We should try to maintain this careful engagement by more consciously serving as the context & interaction engine for this machine. It fundamentally must outsource those elements to us, and one of the most dangerous things we could do is make it possible for AI to experience situated learning with no human involved. To me, that’s the single greatest threat.
AI is the imitation, the emulation, of one side of the equation. That seems good enough until we know the exact universal context everything should be trained on. Not information but understanding. Meaning.
They bring the structure, but we still supply the fire. They only bring the algebra – and forgetting that is what ruins interactions and creates unrealistic expectations.
Constructivism Establishes More Reference
When focusing on situated learning & interaction, it’s important to look at what additional frameworks can provide context for my own understanding. This is where the inherently social elements of Vygotsky’s More Knowledgeable Other begin showing how this context can develop effectively and even be more directly integrated into the conversation.
At the highest level, a More Knowledgeable Other is another person in your community who has information & experience you do not. This is obviously very different when using an AI, and we’ll embrace that paradox shortly. But it’s easy to see how an AI could serve as that More Knowledgeable Other. That lack of experience is exactly the lack of context that we must be providing at all times.
Without experience, this More Knowledgeable Other becomes a system that translates information into our language. Many wouldn’t even consider this knowledge, and certainly not wisdom. Being actively aware of this changes the fundamental expectations of AI and how you approach interaction.
It’s yet another example of why we are hesitant to embrace AI and another way to simulate some of what’s missing. Another reminder that while we may benefit from a repository of information, that repository REQUIRES a More Human Other.
That relationship between a More ‘Knowledgeable Other,’ or in the case of AI a “More Informed Other,” and the More Human Other is central to safe interaction and relational development. It’s also the most potentially dangerous approach given our tendency to anthropomorphize anything with a conversational interface.
Expanding Our Zone of Proximal Development
The Zone of Proximal Development (ZPD) is another way to view the empowerment AI can provide. It’s about a fundamental expansion of capabilities, not just on our own, but as a community.
Your ZPD consists of the center, which is what you know already. The next layer is the knowledge you have access to through More Knowledgeable Others and your community of practice. There’s another infinite layer though – the knowledge we can’t reach. AI could let us go a bit further together.
But with AI, your zone of proximal development expands as if you already have a collaborator. It’s missing context, and you’re only bringing limited relevant context, and that can be dangerous (maybe an unstable Zone of Development?). There’s an interesting conflict between the safety that comes from experience plus information and the ease with which it can be recalled and used. It must always be remembered that anything we learn only from an AI may be missing something obvious we don’t know.
That’s where the most fundamental skills for interaction come in: Developing context through interaction with the AI. Using it to learn the context, not just the information, so that you’re both building relevant, meaningful memories, and indexing them accordingly.
Looking back at the externalizing of cognition, all those tools & skills could be considered a form of cognitive augmentation. A desperate plea to share ideas, or to have access to some form of more knowledgeable other. That’s how we shared information, that’s how we spoke to each other. It’s how we created art.
It might be argued that every single social interaction is some version of seeking a more knowledgeable other, or to become that other for more people in the world.
We’ve also had shared more knowledgeable others before: Google, books, TV shows, documentaries. Yes, Google was an algorithm, but at least it initially worked the same for everyone. The same went for the internet – the interface was similar, the experience was similar, and if I wanted to send someone a website, they saw the same thing as me. The same went for smartphones, because none of these went through an additional layer of interpretation.
They were just direct shifts, or at least shifts whose mechanics you could fully understand. That’s not how AI works, which is both fascinating and terrifying.
Implied expertise without experience can be dangerous, which is why we have to understand the cognitive dissonance at play: treating it as human when it isn’t, treating it as responsible when we are responsible for the output. This also shows why the recursion is important – the narrative, the ethics – you have to make sure these things are working together, not just one after the other, to be most effective.
The reason this knowledge is held by more knowledgeable others is that they have also had the experience to understand the power of that specific knowledge. That is the human context – what you get when you’re talking to a human as the more knowledgeable other.
Mutual Misinformation
AI – the algebra – houses the processing, the knowledge, the information, the structure. This is something we can never fully understand ourselves. None of us will ever truly know what it’s like to access that level of information, but by using this machine, we can suddenly simulate our own extensive knowledge.
What knowledge do we share with the AI? That’s the humanity context. Similarly, it will never know what it’s like to have true emotion, but it can learn plenty to simulate it and react accordingly – just like we can learn how to simulate knowledge, though because of our memory, we are not always applying it.
When an answer could be ‘yeah,’ or ‘yup,’ or ‘of course,’ or ‘yes,’ there is never one right answer. This is the essence of probabilistic models: if there’s no single correct output, the model might say something incorrect. That is the nature of these solutions.
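As a rough illustration of that point, here’s a toy sketch – not any real model’s decoding code; the words and probabilities are invented for the example – of sampling a reply from a probability distribution, where several outputs are acceptable and a low-probability wrong one remains possible:

```python
import random

# Hypothetical distribution over affirmative replies. The numbers are
# invented for illustration; real models score vast vocabularies.
candidates = {
    "yes": 0.40,
    "yeah": 0.25,
    "of course": 0.20,
    "yup": 0.10,
    "no": 0.05,  # unlikely, but possible – and in this context, wrong
}

def sample_reply(dist):
    """Draw one reply in proportion to its probability."""
    words = list(dist)
    weights = list(dist.values())
    return random.choices(words, weights=weights, k=1)[0]

# Every run is a legitimate output of the same distribution;
# occasionally the sample lands on the incorrect answer.
for _ in range(5):
    print(sample_reply(candidates))
```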
If it doesn’t have context, it can be much more confident in what it’s saying; if it doesn’t check ideas, it will be more confident in what it’s saying, because it has no comparison points. In all of these cases you need to bring that humanity – you need to remind it to do these things – and that’s why it’s a recursive relationship that always begins and ends with the human.