This note explains how the capability model should be applied across the lifecycle phases.
Phase 1. Alignment and boundaries
- define E1, E2, and E3 usage expectations
- define R3 work that requires stronger review regardless of oversight-readiness band
- communicate that title alone does not determine safe AI usage
Phase 2. Baseline and workflow discovery
- identify where E1 engineers are currently overusing AI on unfamiliar work
- identify where E3 engineers can safely pilot stronger acceleration
- classify pilot workflows by familiarity, risk, and verification difficulty
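The classification step above can be sketched as a small scoring helper. This is a hypothetical illustration, not part of the capability model itself: the field names, 1-5 scales, thresholds, and tier names are all invented for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch: bucket a pilot workflow by the three axes named above.
# Scales, thresholds, and tier names are illustrative assumptions only.
@dataclass
class Workflow:
    name: str
    familiarity: int               # 1 (unfamiliar) .. 5 (routine)
    risk: int                      # 1 (low) .. 5 (high)
    verification_difficulty: int   # 1 (easy to check) .. 5 (hard to check)

def pilot_tier(w: Workflow) -> str:
    """Low-verifiability work is treated as elevated risk even when the
    surface task looks lightweight (see Phase 4)."""
    if w.verification_difficulty >= 4 or w.risk >= 4:
        return "elevated"   # stronger review regardless of band
    if w.familiarity <= 2:
        return "learning"   # explanation-first usage
    return "standard"

print(pilot_tier(Workflow("migration script", familiarity=4, risk=2,
                          verification_difficulty=5)))
```

Making the three axes explicit fields forces each pilot workflow to be classified deliberately rather than by gut feel.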
Phase 3. Paired-engineering pilot
- E1: explanation-first, attempt-first, no default full-solution generation on new work
- E2: guided acceleration with explanation and modification requirements
- E3: stronger leverage on bounded work, but still accountable for review quality
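One way to keep the per-band rules above explicit is to encode them as data rather than prose. The band names come from this note; the rule keys and the helper function are hypothetical assumptions for illustration.

```python
# Hypothetical encoding of the per-band pilot rules above.
# Band names (E1/E2/E3) come from the note; rule keys are invented.
BAND_RULES = {
    "E1": {
        "mode": "explanation-first",
        "attempt_first": True,
        "full_solution_generation_on_new_work": False,
    },
    "E2": {
        "mode": "guided-acceleration",
        "explanation_required": True,
        "modification_required": True,
    },
    "E3": {
        "mode": "strong-leverage",
        "scope": "bounded work",
        "accountable_for_review_quality": True,
    },
}

def allows_full_generation(band: str, new_work: bool) -> bool:
    """E1 engineers should not default to full-solution generation on new work."""
    rules = BAND_RULES[band]
    if new_work and rules.get("full_solution_generation_on_new_work") is False:
        return False
    return True

print(allows_full_generation("E1", new_work=True))   # False under this sketch
```

Keeping the rules in one structure makes them auditable in Phase 6 instead of scattered across tribal knowledge.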
Phase 4. Capability-aware expansion
- formalize different workflow rules by capability band
- add stronger review rules for R3 work
- treat low-verifiability work as elevated risk even when the surface task looks lightweight
- prevent seniors from normalizing unsafe habits that juniors will imitate without equivalent judgment
Phase 5. Standards, self-service, and internal platform support
- encode capability-aware rules into templates, standards, and examples
- encode verification checks and escalation paths into the templates, not just prompt patterns
- make learning-mode and delivery-mode workflows clearly distinct
- use Verification Standards by Artifact and Work Type so verification requirements vary by artifact rather than collapsing into a generic review rule
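The per-artifact idea can be made concrete as a lookup table, in the spirit of Verification Standards by Artifact and Work Type. The artifact names and checks below are invented for illustration; the only point carried over from the note is that requirements vary by artifact rather than collapsing into one generic review rule.

```python
# Hypothetical table: verification requirements vary by artifact.
# Artifact names and check strings are illustrative assumptions.
VERIFICATION_STANDARDS = {
    "production code":   ["tests run locally", "peer review",
                          "R3 escalation if flagged"],
    "migration":         ["dry run against staging copy",
                          "rollback plan reviewed"],
    "internal doc":      ["author self-check", "spot review"],
    "analysis notebook": ["inputs pinned", "results reproduced once"],
}

def required_checks(artifact: str) -> list[str]:
    # Unknown artifacts fall back to the strictest known standard rather
    # than a generic review rule (a deliberate, illustrative design choice).
    return VERIFICATION_STANDARDS.get(artifact,
                                      VERIFICATION_STANDARDS["production code"])

print(required_checks("migration"))
```

Embedding a table like this in templates means escalation paths travel with the artifact type, not with whoever happens to review it.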
Phase 6. Measure, adjust, and decide whether to scale
- check whether capability-aware rules were actually followed
- review whether lower-oversight-readiness engineers are growing toward independent judgment
- review whether higher-leverage usage is creating hidden review debt
- review where verification difficulty was underestimated during the pilot
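The Phase 6 checks above could be aggregated in a lightweight audit pass. The record fields below are assumptions made for illustration; the review questions themselves come from the note.

```python
# Hypothetical audit sketch for Phase 6: tally where capability-aware rules
# were followed and where verification difficulty was underestimated.
# Record fields are invented for illustration.
records = [
    {"band": "E1", "rule_followed": True,  "verification_underestimated": False},
    {"band": "E3", "rule_followed": True,  "verification_underestimated": True},
    {"band": "E2", "rule_followed": False, "verification_underestimated": False},
]

followed = sum(r["rule_followed"] for r in records)
underestimated = [r["band"] for r in records if r["verification_underestimated"]]

print(f"{followed}/{len(records)} records followed the rules")
print("verification difficulty underestimated in bands:", underestimated)
```

Even a crude tally like this turns the scale/no-scale decision into a conversation about observed gaps rather than impressions.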