Summary
Worked examples and demonstrations show a realistic task being handled step by step, with the reasoning and verification process made visible.
For this project, the goal is not just to show a prompt and its output; it is to show how a capable practitioner thinks, checks, and decides while using AI.
Evidence status
Assessment: evidence-backed
Primary support:
- Source - When Instructional Guidance is Needed (Chen et al., 2016)
- Source - The ICAP Framework (Chi & Wylie, 2014)
Why this pattern belongs here
- New and cognitively dense workflows often need guidance before independent performance becomes reliable.
- Adult technical learners usually benefit from concrete, job-relevant examples more than abstract instruction alone.
- AI-enabled work is especially vulnerable to shallow imitation if the reasoning stays hidden.
What this pattern is trying to achieve
- reduce ambiguity about what good AI-assisted work looks like
- make invisible reasoning visible
- show verification as part of the workflow, not as an optional extra
- establish a credible baseline for later guided practice
When to use it
- introducing a new AI workflow
- teaching a cognitively dense task
- teaching a task where verification is easy to forget
- onboarding people into a new internal standard or practice
When not to rely on it alone
- when durable performance is the goal but no follow-up practice is planned
- when learners already know the workflow and need reinforcement more than demonstration
- when the session risks becoming passive observation only
Patterns and practices
- show a real role-specific task, not a toy example
- narrate the decision points, not only the final answer
- surface where the instructor doubts the AI output
- show what gets verified and why
- explicitly point out a tempting but unsafe shortcut
- keep the example bounded enough that learners can discuss it afterward
Good forms for this project
- live paired debugging walkthrough
- architecture critique demo
- backlog-clarification demo with ambiguity detection
- flaky-test investigation walkthrough
Anti-patterns
- showing only the successful prompt and final result
- hiding corrections, uncertainty, or failed attempts
- using a polished example that does not resemble real work
- making the demonstration so long that it crowds out later practice
Example application in AI enablement
For developers:
- show a bounded debugging session where AI suggests hypotheses, but the engineer still inspects code paths and verifies with tests
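The verification half of that demo can be made concrete in a few lines. A minimal sketch, with an entirely hypothetical `parse_duration` helper and edge cases invented for illustration: the engineer treats an AI-suggested rewrite as a hypothesis and confirms it against tests rather than accepting it on sight.

```python
# Hypothetical example: an AI assistant proposed this rewrite of a small
# duration parser. The function name and test cases are illustrative only.

def parse_duration(text: str) -> int:
    """Parse strings like '90s', '2m', '1h' into seconds (AI-suggested version)."""
    units = {"s": 1, "m": 60, "h": 3600}
    unit = text[-1]
    if unit not in units:
        raise ValueError(f"unknown unit: {unit!r}")
    return int(text[:-1]) * units[unit]

# The engineer verifies with edge cases instead of trusting the suggestion:
assert parse_duration("90s") == 90
assert parse_duration("2m") == 120
assert parse_duration("1h") == 3600

# An unsupported unit must fail loudly, not return a silent wrong answer.
try:
    parse_duration("10x")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for unknown unit")
```

In a live walkthrough, the narration matters more than the code: the instructor says aloud why each case was chosen and which failure mode it guards against, so learners see verification as part of the workflow rather than an afterthought.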
For product owners:
- show AI helping draft acceptance criteria, then explicitly identify what the model cannot know and what still requires stakeholder clarification
What should accompany this pattern
- guided practice
- reflection on what was trustworthy and what was not
- later retrieval or follow-up reinforcement