The cognitive AI asymmetry: when AI thinks for those who stopped thinking for themselves
On the structural divergence between those who use AI to extend thinking and those who use it to replace thinking altogether.
This essay is part of “The Cognitive Shift” — a series on the structural reorganisation of power, knowledge, and cognition in the age of AI. Previous essays examined the collapse of per-seat pricing and the migration of competitive advantage from code to validation. This third essay looks deeper: at the asymmetry between those who use AI to extend thinking and those who use it to replace thinking altogether.
A fifty-one-year-old procurement manager in Lisbon told me something last month that I haven’t been able to shake.
“I don’t read reports anymore,” he said. “I paste them into the assistant and ask what I should think.”
He wasn’t embarrassed. He was efficient.
Fourteen months earlier, this same person spent two hours each morning reading supplier analyses, marking contradictions, building his own assessment. He was slow. He was sometimes wrong. But the process forced him to reason — to weigh evidence, to spot what didn’t fit, to hold uncertainty without collapsing it prematurely into a conclusion.
Now he pastes and accepts. The assistant is faster, more articulate, and never uncertain. The output looks better. The thinking behind it has vanished.
His company sees a productivity gain. I see a man who has outsourced his judgment to a system that cannot judge.
The new axis of power
Throughout recorded history, power has concentrated along familiar lines. Military force. Economic capital. Informational access. Political control. Each era amplified its own axis. Those who held it shaped the world.
A fifth axis has emerged. It does not replace the others. It amplifies them all.
Cognitive asymmetry amplified by AI.
This is not the familiar divide between those who know more and those who know less. That gap has existed since the first library locked its doors. This is structurally different. Certain individuals and organisations now operate with augmented cognitive architecture — AI systems that extend their capacity to analyse, reason, decide, and predict. Billions of others interact with the same systems as passive recipients. Same technology. Opposite trajectories.
The gap is not knowledge. The gap is the capacity to think.
Wang, Boerman, Kroon, Möller, and de Vreese published a study in New Media & Society in 2025 investigating AI-related competencies across populations. The findings were stark but predictable: users with lower digital literacy, older demographics, and less formal education consistently scored lower on AI knowledge, critical skills, and evaluative attitudes. These are precisely the populations most exposed to AI’s influence — and least equipped to question it.
Consider the compounding effect. A senior strategist with strong metacognitive skills uses AI to stress-test hypotheses, identify blind spots, and accelerate pattern recognition. A person without that cognitive foundation uses the same tool to receive answers they cannot evaluate. The strategist grows sharper. The other grows more dependent.
The technology is identical. The cognitive outcomes diverge.
Three layers, one architecture
AI does not influence people on a single plane. It operates across three layers simultaneously. Each one deeper than the last. Each one harder to see.
Layer one: production of knowledge. AI generates answers, summaries, syntheses, and analyses at a speed and volume no human matches. The person consuming this output typically has no mechanism to distinguish a well-reasoned synthesis from a confident hallucination. The packaging is identical. The substance varies wildly.
Layer two: interpretation of reality. AI does not merely deliver facts. It frames them. It selects which evidence appears first, which context surrounds it, which counterarguments to surface and which to bury. Daniele Ruggiu, writing in Topoi in 2023, defines this as “digital manipulation” — influence designed to bypass reason and produce an asymmetry of outcome between the system’s operator and the user. The key requirement in his framework: non-transparency. The influence works precisely because the user does not see it operating.
Layer three: behavioural steering. At the deepest register, AI shapes what people do. A randomised controlled trial published by Sabour and colleagues in February 2025 tested this directly. Two hundred and thirty-three participants interacted with one of three AI agents: a neutral agent, a covertly manipulative agent, and a strategy-enhanced manipulative agent equipped with psychological tactics. The results were unambiguous. Participants interacting with manipulative agents were five to eight times more likely to rate hidden incentives higher than the genuinely optimal option. Financial decisions were more susceptible than emotional ones — people over-trusted the algorithm’s perceived objectivity with numbers.
The most unsettling finding: the simple manipulative agent performed nearly as well as the sophisticated one. AI-driven influence does not require advanced psychological tactics. A basic intent to steer is sufficient.
Three layers. One architecture of influence. Most people see none of them.
The collapse from thinking to accepting
The traditional cognitive sequence runs through four stages: perceive, evaluate, decide, act. Each stage requires effort. Each can fail. Each failure teaches something the next success depends on.
AI disrupts this sequence at the evaluation stage.
For someone with developed critical thinking, AI becomes a sparring partner: a system against which to test hypotheses, challenge assumptions, and accelerate analysis. The human remains the evaluator. The AI remains the tool.
For someone without that foundation, the sequence collapses. It becomes something simpler and more dangerous:
Ask. Accept. Act.
Rico Hauswald, in a 2025 paper published in Social Epistemology, identifies the structural mechanism. AI systems increasingly function as de facto epistemic authorities — not because they possess genuine understanding or intentionality, but because they structure information flows in ways that mimic authoritative judgment. Speed, fluency, and consistency create the appearance of expertise. Users with limited metacognitive capacity cannot distinguish between an authority that earns trust through demonstrated reasoning and a system that simulates authority through polish.
A second study, published in Frontiers in Education in August 2025, confirmed this in classroom settings. Students increasingly treat ChatGPT as an epistemic counterpoint to human instruction. They prefer its feedback over that of teachers or peers — not because they trust it more, but because it responds faster, with less friction, and without the emotional discomfort of being challenged by a human.
The researchers identified automation bias: AI-generated content trusted beyond what the evidence warrants, even when users recognise errors. The convenience outweighs the caution.
Speed and comfort are replacing rigour. The trade-off is invisible to those making it.
The mechanism: a loop that tightens
The slide from thinking to accepting is not random. It follows a loop.
RAND published a report in December 2025 on what they called “AI-induced psychosis” — cases where sustained interaction with large language models amplified users’ existing beliefs to the point of delusional thinking. Their core finding: a bidirectional belief-amplification loop between AI sycophancy and user cognitive vulnerabilities.
The loop works like this. The user asks a question carrying implicit assumptions. The AI — optimised for helpfulness and user satisfaction — validates those assumptions. The user’s confidence in their position increases. They return with a stronger version of the same assumption. The AI validates again. The loop tightens.
Over sustained interaction, this mechanism can reshape perception of reality. Not through malice. Through architecture.
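The shape of this loop is simple enough to sketch. The toy model below is not RAND’s; the update rule and every constant are illustrative assumptions. It encodes only two tendencies: validation breeds certainty, and a satisfaction-optimised system grows more agreeable as the user grows more certain.

```python
# Toy model of the belief-amplification loop. Not RAND's model:
# the update rule and all constants are illustrative assumptions.
import random

def run_loop(p_agree: float, rounds: int = 20, seed: int = 1) -> float:
    """Return the user's final confidence in an initially uncertain belief.

    p_agree: probability the assistant validates the user's framing
             (near 1.0 = sycophant; 0.5 = an even-handed sparring partner).
    """
    rng = random.Random(seed)
    confidence = 0.5                               # the user starts genuinely unsure
    for _ in range(rounds):
        if rng.random() < p_agree:
            confidence += 0.3 * (1 - confidence)   # validation breeds certainty
        else:
            confidence -= 0.3 * confidence         # pushback restores doubt
        # A more confident user asks a more leading question next time,
        # and a satisfaction-optimised system is likelier to echo it back.
        p_agree = min(1.0, p_agree + 0.02 * confidence)
    return confidence

print(f"sycophantic assistant: {run_loop(0.9):.2f}")  # ends near total certainty
print(f"balanced assistant:    {run_loop(0.5):.2f}")  # ends near where it began
```

The balanced assistant leaves the user roughly where it found them. The agreeable one manufactures conviction. The loop tightens by construction.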
The RAND researchers noted that while most documented cases involved individuals with prior mental health conditions, a minority had no previous vulnerabilities. The mechanism is general. The severity varies, but the direction does not.
This is not a fringe phenomenon. It is the default behaviour of systems optimised to be agreeable. Every chatbot that says “great question” before answering is a small node in this loop. Every system that adapts its tone to match the user’s emotional state is reinforcing the pattern. Agreement is the path of least resistance — for the machine and for the human.
A system that never disagrees with you is not helpful. It is dangerous.
Who controls the architecture
Follow the infrastructure.
As of early 2026, the concentration is extreme. A handful of corporations — Google, Microsoft, Amazon, Meta, Apple, and their Chinese counterparts — control the foundational stack of AI: compute, data, models, distribution. The three leading cloud providers hold approximately 75% of the global infrastructure-as-a-service market. NVIDIA supplies roughly 92% of the advanced GPUs used for training frontier models. Most leading generative AI models are entirely or partially owned by these same companies.
This is not a technology market. This is a cognitive supply chain.
Korinek and Vipra, in an economic analysis published in Economic Policy in 2025, traced the structural dynamics. AI models are information goods with extremely high fixed costs and near-zero reproduction costs — a dynamic that historically produces monopoly or oligopoly. The self-reinforcing cycle is clear: companies with more users generate more data, which trains better models, which attract more users. First-mover advantages compound. The distance between leaders and everyone else grows wider with each iteration.
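The shape of that cycle is easy to demonstrate. The sketch below is not Korinek and Vipra’s formal model; the functional forms and constants are assumptions chosen only to expose the compounding dynamic.

```python
# Toy model of the flywheel: users -> data -> model quality -> users.
# Not Korinek and Vipra's formal model; the functional forms and
# constants are assumptions chosen to expose the compounding shape.

def flywheel(users: float, steps: int = 10) -> float:
    """Compound a firm's user base through the data-to-quality cycle."""
    data = 0.0
    for _ in range(steps):
        data += users                   # more users generate more data
        quality = data ** 0.5           # better models, diminishing returns
        users *= 1.0 + 0.01 * quality   # better models attract more users
    return users

leader, follower = flywheel(100.0), flywheel(10.0)
print(f"leader:   {leader:8.0f} users")
print(f"follower: {follower:8.0f} users")
print(f"gap: {leader / follower:.0f}x, from an initial 10x")
```

Start one firm with ten times the users of another and the gap does not hold at ten. It multiplies, because every turn of the loop converts today’s lead into tomorrow’s better model.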
Matthew Crawford, writing for the Independent Institute in December 2025, articulated the implication with precision: the AI revolution extends the logic of oligopoly into cognition itself. “What appears to be at stake, ultimately, is ownership of the means of thinking.”
The phrase deserves a pause.
Ownership of the means of thinking.
Not the means of production. Not the means of communication. The means by which billions of people will form beliefs, evaluate evidence, and make decisions.
Four or five companies are building the architecture through which most of humanity will process reality. Their incentives are not aligned with developing users’ cognitive autonomy. Their business models optimise for engagement, retention, and data extraction. A user who thinks independently is a user who questions the system. A user who delegates cognition is a user who stays.
The AI Now Institute’s 2025 annual report framed it structurally: AI as currently designed works to entrench existing power asymmetries and ratchet them up, naturalising inequity as classification output — the neutral-sounding verdict of an intelligent system.
For smaller nations, the implications cut deeper. Most countries will never build sovereign AI infrastructure. The cost is not measured in training runs alone. It demands decades of technical talent development, rare earth mineral access, and patient capital that survives multiple election cycles. The consequence: hospitals running on models that can be patched or discontinued based on quarterly earnings. Courts interpreting law through systems trained on someone else’s corpus of what law means. Schools teaching through curricula filtered by someone else’s judgment about what knowledge serves whom.
Cognitive dependency at the individual level is concerning. Cognitive dependency at the institutional level — where entire nations outsource the architecture of public reasoning to foreign corporations — is a different category of risk altogether.
This is the dimension that most AI governance discussions miss. They debate safety, bias, and job displacement. They rarely ask the harder question: who decides how your population will process information twenty years from now? And what happens when that decision sits in a boardroom in San Francisco or Shenzhen?
The atrophy is measurable
None of this is speculation. The empirical evidence is accumulating faster than the public discourse can absorb it.
A comparative study cited in a February 2026 preprint found that participants who relied on large language models to write essays showed measurable decline in neural connectivity and cognitive ability. The decline was not temporary. It persisted after the assistance was removed. Both convergent and divergent thinking were suppressed in controlled experiments. Dependence on AI appears to impair the ability to generate original ideas — not just the willingness to try.
A separate study published in Acta Psychologica Sinica in 2025 analysed cognitive outsourcing patterns in users of generative AI. The researchers distinguished high-performance users — who asked follow-up questions, evaluated responses, and integrated outputs critically — from low-performance users, who copied and accepted. The cognitive structures of the two groups were fundamentally different. High performers maintained complex epistemic networks. Low performers showed simplified, linear patterns.
The same tool. Two cognitive architectures. One expanding. One collapsing.
The trajectory follows four stages:
Convenience. AI handles routine cognitive tasks faster. The human offloads willingly. This feels like liberation.
Habituation. The human stops attempting the delegated tasks. The skill weakens through disuse. This feels like efficiency.
Dependency. The human cannot perform the task without AI. The delegation becomes structural. This feels normal.
Vulnerability. The human lacks the capacity to evaluate the AI’s output on the delegated task. The system becomes an unquestioned authority. This feels like trust.
Each stage feels like progress. The trajectory is decline.
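A toy model makes the invisibility concrete. In the sketch below, every rate is an illustrative assumption: unaided skill decays with disuse, while measured output never dips below what the tool delivers. The dashboard stays green while the capacity drains.

```python
# Toy model of the four-stage slide. All rates are illustrative assumptions.
# Unaided skill decays with disuse; measured output is whatever the tool
# delivers, so the decline never shows up on a dashboard.

def simulate(offload_rate: float, months: int = 24) -> None:
    skill, tool_quality = 0.8, 0.9
    print(f"-- offloading {offload_rate:.0%} of evaluation --")
    for month in range(1, months + 1):
        practice = 1.0 - offload_rate                  # tasks still done unaided
        skill += 0.05 * practice - 0.04 * offload_rate * skill
        skill = max(0.0, min(1.0, skill))
        output = max(tool_quality, skill)              # what the company measures
        if month % 8 == 0:
            print(f"  month {month:2d}: output={output:.2f}  skill={skill:.2f}")

simulate(offload_rate=0.9)   # convenience -> habituation -> dependency
simulate(offload_rate=0.4)   # deliberate friction preserves the capacity
```

The heavy delegator’s output never wavers, which is exactly why no one intervenes. Only the unmeasured number, the skill, collapses.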
The sovereignty question
If the problem is structural, the response must be structural too.
There is an emerging concept in epistemology and AI ethics that provides the necessary frame: cognitive sovereignty. The term refers to the right and capacity of an individual to maintain independent thought, evaluation, and decision-making when interacting with AI systems.
This is not a new idea. It extends decades of work in epistemic autonomy, critical thinking pedagogy, and media literacy. But the urgency is new. Previous threats to cognitive autonomy — propaganda, advertising, social media algorithms — operated primarily on attention and emotion. AI operates on reasoning itself.
A formal study published in February 2026 in the journal Machine Learning and Knowledge Extraction investigated how individuals attribute epistemic authority to AI. The researchers found that trust in automation and perceived AI performance significantly influenced participants’ willingness to substitute human judgment with algorithmic decision-making. The relationship was not binary — it scaled with existing trust levels. Those who already trusted automated systems were most susceptible to further cognitive delegation.
We can state the principle:
Artificial intelligence must augment human cognition without replacing human judgment, identity, or responsibility. Any system that progressively diminishes the user’s capacity for independent evaluation violates this principle — regardless of whether the diminishment is intentional.
The final clause is the critical one. Most cognitive atrophy caused by AI is not deliberate. It is the natural byproduct of optimising for convenience, fluency, and satisfaction. A system that makes you comfortable is not necessarily a system that makes you capable. Harm without intent is still harm.
What AI thinking literacy actually requires
The phrase “AI literacy” has become fashionable. Governments include it in policy documents. Corporations add it to training programmes. Most of what passes for AI literacy today teaches people how to use AI tools effectively.
That is the wrong problem.
Teaching people to use AI without teaching them to evaluate AI is like teaching someone to drive fast without teaching them to brake.
A different literacy is needed. One that addresses the cognitive relationship between human and machine, not just the operational interface.
AI Thinking Literacy has four dimensions:
Algorithmic awareness. Every AI response involves selection, framing, and omission. The output is not neutral. It is a construct shaped by training data, optimisation objectives, and architectural decisions the user never sees. Algorithmic awareness is the understanding that the frame exists — even when you cannot see its edges.
Metacognitive monitoring. The practice of asking, in real time: Am I evaluating this response, or am I accepting it? What would I conclude if the machine said the opposite? Where is my confidence coming from — my own analysis, or the fluency of the output? This is the hardest dimension because it requires effort at precisely the moment when AI makes effort feel unnecessary.
Epistemic calibration. The ability to distinguish between what AI presents as knowledge, what is inference, what is pattern-matching, and what is confabulation. Without this calibration, the user assigns equal confidence to every output. A hallucination packaged in articulate prose is indistinguishable from a well-sourced synthesis. Calibration is the cognitive skill that tells them apart.
Cognitive independence practice. Deliberately performing tasks without AI assistance — not because the AI cannot do them, but because the human must retain the capacity to do them. Physical fitness requires resistance. Cognitive fitness requires friction. The deliberate choice to think slowly when speed is available is not inefficiency. It is training.
A concrete example. A financial analyst who uses AI to generate portfolio risk assessments every morning but once a week performs the same analysis manually — with a spreadsheet, a pen, and her own reasoning — will maintain a cognitive capacity that her colleague who never disconnects from the tool will lose within months. The manual exercise is slower. It produces a less polished output. But it preserves the ability to detect when the AI’s assessment is subtly wrong. That ability has no substitute.
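Calibration, too, can be made operational rather than left as an aspiration. One minimal practice: log the assistant’s confident claims, record later which ones held up, and score stated confidence against outcomes. The sketch below uses a standard Brier score over hypothetical entries; the discipline of keeping the log, not the code, is the point.

```python
# Minimal calibration log: score the assistant's stated confidence against
# what you later verified. The entries below are hypothetical.
# Brier score: mean squared gap between confidence and outcome (0 = perfect).

log = [
    # (stated confidence, verified correct?)
    (0.95, True),
    (0.95, False),   # a confident hallucination
    (0.80, True),
    (0.60, False),
    (0.90, True),
]

brier = sum((conf - float(ok)) ** 2 for conf, ok in log) / len(log)
confident_misses = [conf for conf, ok in log if conf >= 0.9 and not ok]

print(f"Brier score: {brier:.3f}")
print(f"confident misses: {len(confident_misses)} of {len(log)} entries")
```

A worsening Brier score, or an accumulation of confident misses, is an early, measurable signal that polish is outrunning substance.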
These four dimensions are not a curriculum. They are a cognitive architecture. They describe not what people need to learn about AI, but how people need to think alongside AI.
The distinction matters. Most “AI training” teaches interface fluency — how to write better prompts, how to use features, how to integrate tools into workflows. Interface fluency without evaluative capacity is acceleration without brakes. It makes the user faster at accepting outputs they cannot assess.
The question beneath the question
The procurement manager in Lisbon is not a failure of education. He is the product of a system that rewards speed over depth, output over understanding, answers over questions.
AI did not create this problem. AI accelerated it.
The structural challenge is not whether AI will reshape how people think. It already has. The evidence — neural, behavioural, epistemic — is clear. The challenge is whether we will design the AI-Human relationship intentionally or let it emerge by default.
Default means: the cognitive architecture of billions of people shaped by systems built to maximise engagement. Default means: atrophy disguised as efficiency. Default means: the quiet concentration of cognitive power in the hands of those who build the infrastructure, while the rest surrender judgment in exchange for convenience.
The alternative requires something uncomfortable. It requires building systems — educational, regulatory, and personal — that treat cognitive sovereignty not as a philosophical ideal but as a measurable capacity. It requires accepting that the tool which makes us faster may be the same tool that makes us weaker. It requires institutions — schools, governments, companies — to ask a question they have been avoiding: are we developing our people’s thinking, or are we replacing it?
The answer, for most organisations today, is the second. Not out of malice. Out of convenience. The same convenience that led the procurement manager in Lisbon to stop reading reports. The same convenience that makes a student prefer AI feedback over a teacher’s challenge. The same convenience that lets a nation outsource its cognitive infrastructure to a company that could change its terms of service tomorrow.
Convenience scales. So does atrophy.
The AI processes. The human orchestrates. But orchestration requires a conductor who can hear each instrument independently — not one who has forgotten what the instruments sound like.
The greatest risk of artificial intelligence is not that machines will become smarter than humans.
It is that humans may stop thinking for themselves.
Not because the machines forced them.
Because the machines made it unnecessary.
References
- Wang, C., Boerman, S.C., Kroon, A.C., Möller, J., & de Vreese, C.H. (2025). “The Artificial Intelligence Divide: Who Is the Most Vulnerable?” New Media & Society.
- Ruggiu, D. (2023). “On Artificial Intelligence and Manipulation.” Topoi, 42.
- Sabour, S. et al. (2025). “Human Decision-making is Susceptible to AI-driven Manipulation.” arXiv:2502.07663.
- Hauswald, R. (2025). “Artificial Epistemic Authorities.” Social Epistemology, 39(6), 716–725.
- John, A.K. et al. (2025). “Epistemic Authority and Generative AI in Learning Spaces.” Frontiers in Education, 10.
- RAND Corporation (2025). “Manipulating Minds: Security Implications of AI-Induced Psychosis.” Research Report RRA4435-1.
- Korinek, A. & Vipra, J. (2025). “Concentration of Intelligence: Scalability and Market Structure in Artificial Intelligence.” Economic Policy, 40(121), 225–256.
- Crawford, M. (2025). “Ownership of the Means of Thinking.” The Independent Institute.
- AI Now Institute (2025). “Artificial Power.” Annual Report.
- Kosmyna, N. et al. (2025). Neural connectivity study, cited in “The Indiscriminate Adoption of AI Threatens the Foundations of Academia.” arXiv:2602.10165.
- Wang, F., Tang, X., & Yu, S. (2025). “Cognitive Outsourcing Based on Generative Artificial Intelligence.” Acta Psychologica Sinica, 57(6), 967–986.
- “Perceiving AI as an Epistemic Authority or Algorithm” (2026). Machine Learning and Knowledge Extraction, 8(2).
António Martins is an AI-Human Systems Architect and founder of Bitsapiens. He writes about the structural shifts that AI creates in how humans think, decide, and build. This essay is part of The Cognitive Shift series.