Summary
Self-explanation and reflection require the learner to articulate what is happening, why it is happening, what they trust, and what still needs review.
For AI enablement, this pattern is essential because fluent AI output can conceal weak understanding.
Evidence status
Assessment: evidence-backed
Primary support:
- Source - The ICAP Framework (Chi & Wylie, 2014)
- Source - How AI Impacts Skill Formation (Shen & Tamkin, 2026)
Important caution:
- Source - Fostering Appropriate Reliance on Large Language Models (Kim et al., 2025)
Why this pattern belongs here
- It supports deeper processing rather than passive acceptance.
- It helps expose whether learners understand the code, reasoning, or tradeoffs they are about to accept.
- It aligns with the project stance that AI should support mastery rather than substitute for it.
What this pattern is trying to achieve
- make understanding inspectable
- reveal overconfidence and weak comprehension
- build judgment about what still needs verification
- support progression from assisted use toward more independent oversight
When to use it
- after demonstrations
- during guided practice
- when a learner is using AI on unfamiliar work
- during debriefs of successful or failed AI usage
How not to misuse it
- do not treat AI-generated explanations as proof
- do not use reflection as a substitute for tests, peer review, or evidence
- do not turn reflection into vague journaling disconnected from real tasks
Patterns and practices
- ask the learner to explain the result in their own words
- ask what they would verify before shipping or relying on it
- ask what part of the answer they do not yet trust
- ask what failure mode is most likely
- ask when they would stop using AI and escalate to a human
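The prompts above can be made concrete as a reusable debrief checklist. A minimal sketch in Python (the function and constant names are illustrative, not part of any existing tool):

```python
# Illustrative sketch: the five reflection prompts as a reusable checklist.
REFLECTION_PROMPTS = [
    "Explain the result in your own words.",
    "What would you verify before shipping or relying on it?",
    "Which part of the answer do you not yet trust?",
    "What failure mode is most likely?",
    "When would you stop using AI and escalate to a human?",
]

def reflection_checklist(task: str) -> str:
    """Render the prompts as a debrief checklist for one concrete task."""
    lines = [f"Reflection debrief: {task}"]
    lines += [f"- [ ] {prompt}" for prompt in REFLECTION_PROMPTS]
    return "\n".join(lines)

print(reflection_checklist("AI-assisted refactor of the billing module"))
```

Anchoring the checklist to a named task keeps reflection tied to real work rather than drifting into vague journaling.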
Good forms for this project
- “Explain this code path and where it could fail”
- “What would you verify before accepting this test suite?”
- “What assumption in this architecture rationale is least justified?”
- “Which part of these acceptance criteria still requires stakeholder confirmation?”
Anti-patterns
- asking only whether the learner “understands” without requiring evidence
- rewarding confident explanations more than accurate ones
- treating reflection as complete if the learner repeats the model’s wording
- skipping independent verification because the explanation sounded good
Example application in AI enablement
After an AI-assisted refactoring suggestion, ask the engineer to explain what changed, what behavior might regress, and what tests or review would still be needed.
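A hypothetical illustration of the kind of regression such a debrief should surface: a refactor that looks like a pure simplification but changes behavior on an edge case (both functions below are invented for this example):

```python
# Hypothetical example: an AI-suggested refactor that reads as equivalent
# but regresses on an edge case the original handled.

def total_before(values):
    # Original: silently skips None entries.
    total = 0
    for v in values:
        if v is not None:
            total += v
    return total

def total_after(values):
    # Suggested "simplification": raises TypeError if any entry is None.
    return sum(values)

assert total_before([1, None, 2]) == 3
# total_after([1, None, 2]) raises TypeError -- exactly the behavioral
# difference the engineer should articulate before accepting the change.
```

An engineer who can only say "it sums the values" has not yet met the bar; one who names the None-handling difference, and the test that would catch it, has.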
What should accompany this pattern
- real tasks or worked examples
- independent verification
- later reinforcement or follow-up practice
Source notes
- Source - The ICAP Framework (Chi & Wylie, 2014)
- Source - How AI Impacts Skill Formation (Shen & Tamkin, 2026)
- Source - Fostering Appropriate Reliance on Large Language Models (Kim et al., 2025)