A metacognitive competency describing the capacity to collaborate with AI systems with an accurate understanding of where each party — human and AI — holds a genuine advantage, and to adjust the division of cognitive labour accordingly.
The research identifies three interdependent mental models that together constitute this competency:
- Domain understanding: Knowledge of the subject matter sufficient to evaluate AI outputs, not just receive them. Without this, the human in the loop is not a check — they are a rubber stamp.
- Process understanding: A working model of how AI transforms inputs into outputs — including what types of errors the model is prone to, what it optimises for, and what it cannot see. This is not technical expertise. It is operational literacy.
- Metacognitive awareness: The rarest of the three. The capacity to know, in real time, whether the human's judgment in a given situation is more or less reliable than the AI's recommendation — and to act accordingly, without defaulting to either systematic deference or systematic override.
The term "complementarity-aware" is precise: this is not about humans and AI doing different things. It is about understanding, in each specific situation, what each does better.
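The per-situation logic described above can be sketched as a toy decision rule. Everything here is illustrative and not from the source: the function name, the reliability inputs (which would in practice come from some calibration process), and the threshold are all assumptions made for the sketch.

```python
def route_decision(human_reliability: float, ai_reliability: float,
                   margin: float = 0.1) -> str:
    """Toy complementarity-aware routing (hypothetical, for illustration).

    Given estimates of how reliable each party's judgment is in THIS
    situation, defer to whichever party is meaningfully more reliable,
    and escalate to joint review when neither clearly is -- avoiding
    both systematic deference and systematic override.
    """
    if ai_reliability - human_reliability > margin:
        return "defer_to_ai"
    if human_reliability - ai_reliability > margin:
        return "human_override"
    # Neither party holds a clear advantage here: review together.
    return "joint_review"
```

The point of the sketch is only that the routing is conditional on the situation, not fixed per party; a real system would need a principled way to estimate the two reliability inputs.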
Context & Strategy
Related concepts
This competency is the operational expression of Cognitive Sovereignty — the ability to use AI without surrendering autonomous judgment. It is related to Trust Calibration (which governs confidence in AI outputs) and to ADT — AI Design Thinking (which provides a methodology for designing systems that integrate human and artificial thinking as complements, not substitutes).