The strategy director had 22 years of experience synthesizing complex information. That was his value — the ability to look at scattered data and find the pattern. Within six months, the company deployed an AI system. Synthesis now took seconds.
He felt the ground disappear beneath him.
Not because he was fired. Because he realized, for the first time, that he had never learned to think. He had learned to process.
A recent body of research on mental models in the AI era, produced for executives and leaders navigating digital transition, identifies this gap with more precision than most reports on the future of work. The central conclusion is not about jobs. It is about cognition.
The thesis: the mental models that made us competent in the 20th century — individual expertise, linear causality, deterministic planning — have not disappeared. They have become incomplete.
And there is an enormous difference between obsolete and incomplete.
What AI actually displaces
AI executes with surgical precision when goals are clear and data is abundant. Pattern recognition. Text generation. Process optimization. Everything that once required years of human training now takes seconds.
But there is a hidden cost to this efficiency.
Every task we delegate without understanding creates a dependency we cannot see. The user who blindly trusts AI output is not being efficient — they are surrendering a cognitive muscle that, like any muscle, atrophies when unused.
The research calls this automation bias. I call it cognitive atrophy.
The ladder that matters
The most useful concept in the research is what the authors describe as a "ladder of ambiguity." The higher you climb in any organization, the less well-defined the problems become.
At the lowest rung: tasks with explicit criteria. AI dominates here.
At the highest rung: problems that no one has yet formulated. Someone has to define the problem before any solution can exist.
AI provides the greatest leverage at the lower rungs, where success criteria are already defined. At the higher rungs — where the work begins by discovering what the right problem even is — humans remain irreplaceable.
The conclusion is not that humans should flee to the top of the ladder. It is that they should stop competing on the rungs where AI always wins.
The five mental models the research identifies
The research is unusual in that it does not stop at diagnosis. It proposes five concrete metacognitive competencies.
Systems thinking. Not as philosophical abstraction — as the capacity to map feedback loops between AI models, human behavior, organizational incentives, and data. Most AI failures are not algorithmic failures. They are system failures that nobody mapped.
Cognitive flexibility. The capacity to shift perspectives, modes of thinking, frames of reference — without locking into a single one. The research shows that people with greater openness to experience treat AI outputs as scaffolding, not as answers. The difference in outcomes is measurable.
Ambiguity navigation. The capacity to frame the problem before searching for the solution. To use AI to stress-test assumptions — not to confirm what you already believe. To experiment with hypotheses while maintaining control over evaluation criteria and risk boundaries.
Complementarity-aware collaboration. The research proposes three interdependent mental models: understanding of the domain, understanding of how AI transforms inputs into outputs, and — the rarest — metacognitive awareness of when human judgment outperforms the AI recommendation. And when it does not.
Trust calibration. Trust in AI is not a fixed property of a system. It is the emergent result of continuous interaction. Those who do not develop an accurate mental model of AI's error boundaries get trapped between two symmetrical mistakes — always following, or never following.
What this means for organizations
In the contexts where I work — with leaders in Portugal, the UAE, Saudi Arabia, and the Philippines — the problem is not AI adoption. Adoption happens. The problem is the absence of cognitive architecture beneath the adoption.
Tools get implemented. The mental models that determine how those tools are used do not get redesigned.
The research captures this precisely: the training programs that work do not teach tools. They teach adaptive thinking, critical reflection, and systemic awareness. The tool is the context. Cognition is the object.
An organization that adopts AI without investing in this layer creates short-term efficiency and medium-term fragility. Cognitive muscles that go unused atrophy — and when the AI fails, changes, or produces an output that nobody knows how to evaluate, there is no human reserve to compensate.
The position I defend
The research speaks of "human strengths as design constraints." The formulation is correct but insufficient.
I prefer another: cognitive sovereignty as competitive advantage.
Do not delegate thinking to systems you do not understand. Do not confuse speed of output with quality of reasoning. Maintain control over intention — the "why" — even when execution is entirely AI-assisted.
The AI your competitor uses is the same AI you use. The mental model with which you use it is the only differentiator AI cannot replicate.
The question is not whether you have access to the tool. It is whether you know how to think with it — and without it.
This article draws on a body of research on mental models in the AI era, produced for executives and leaders of organizations in digital transition. Findings have been analyzed and contextualized through the lens of AI-Human systems and field work developed at Bitsapiens.
References and further reading
- Holstein, K. & Satzger, H. (2025). Development of Mental Models in Human-AI Collaboration. arxiv.org/abs/2510.08104
- Narayanan, R. et al. (2024). Influence of Human-AI Team Structuring on Shared Mental Models. Georgia Tech Cognitive Engineering Laboratory.
- Exploring Human-AI Collaboration Using Mental Models of Early Adopters. (2025). arxiv.org/html/2510.06224v1
- Musick, G. et al. (2022). The role of shared mental models in human-AI teams. Ergonomics.
- Park, H. et al. (2019). Building Shared Mental Models between Humans and AI for Effective Collaboration. CHI 2019.
- Ramirez-Aristizabal, A.G. & Kello, C.T. (2022). Intelligence IS Cognitive Flexibility. Frontiers in Psychology.
- Zhang, Y. et al. (2025). The Impact of AI Usage on Innovation Behavior at Work. PMC.
- Dong, A. (2024). Mental models for AI along the ambiguous ladder.
- The Thinking Effect (2024). Rewiring Thinking: How to Shape New Mental Models for the Age of AI.