COMPEL Specialization — AITE-VDT: AI Value & Analytics Expert Lab 5 of 5
Lab objective
Assemble a portfolio-level AI value scorecard for the ten-feature program described below. Apply the attribution-governance and status-calibration rules from Article 30. Write the accompanying one-page executive narrative that turns the scorecard into a decision artifact.
Duration: 90 minutes. Deliverable: A scorecard (spreadsheet, dashboard JSON, or Markdown table) and a one-page narrative. Linked articles: 30 (portfolio scorecard), 16 (VRR), 35 (board-grade reporting).
Scenario
You are the head of AI value analytics for a mid-market insurance company. The AI program has ten active features at various COMPEL stages. Leadership has asked you to produce the first formal portfolio scorecard for the quarterly steering committee meeting in two weeks.
Feature inventory
You have these ten features, each with its current status data provided. Use the data as-is; part of the lab is applying status calibration, not fixing data.
| # | Feature | Stage | Realized value QTD | Investment to date | Primary risk (feature-lead self-report) | Attribution model used |
|---|---|---|---|---|---|---|
| 1 | Underwriter Copilot | Evaluate | $1.4M | $3.2M | Continued model quality | Linear |
| 2 | Claim Triage ML | Evaluate | $2.8M | $2.1M | Continued data availability | Last-touch |
| 3 | Fraud Detection+ | Evaluate | $4.7M | $5.4M | False-positive rate drift | Shapley |
| 4 | Policy-Renewal Predictor | Produce | $0.9M | $1.8M | Adoption by sales team below target | First-touch |
| 5 | Agent Onboarding Assistant | Produce | $0.3M | $1.1M | Quality issues flagged in week 3 | Linear |
| 6 | Customer-Service Copilot | Model | N/A | $2.4M | Pilot delayed two quarters | N/A |
| 7 | Actuarial Modeling Copilot | Model | N/A | $3.6M | Capability evaluation below threshold | N/A |
| 8 | Dynamic Pricing Optimizer | Calibrate | N/A | $0.8M | Regulatory review ongoing | N/A |
| 9 | Broker Intelligence Dashboard | Learn | Sustaining | $2.9M | Feature reaching end-of-life | Last-touch |
| 10 | Document Intake Automator | Learn | $1.1M | $2.6M | Being considered for sunset | Linear |
Additional context to apply
- The portfolio’s primary attribution model (per the program office rule) is Shapley. Features currently using other models must be flagged.
- Status calibration: red if significant business-case variance OR pilot-blocking event; yellow if material risk or under-target adoption; green if on-track.
- Cumulative program realized-value target for this quarter: $12M. Actual aggregate realized value to date: compute it from the feature table.
- Regulatory review (Feature 8) is ongoing and may cause delay; decision pending at two-week mark.
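The aggregate the target bullet asks you to compute comes straight from the inventory table. A minimal sketch (figures copied from the table above; Features 6–8 report N/A and Feature 9 reports "Sustaining", so none of those four contributes a QTD figure):

```python
# Realized value QTD per feature, in $M, from the inventory table.
realized_qtd = {
    "Underwriter Copilot": 1.4,
    "Claim Triage ML": 2.8,
    "Fraud Detection+": 4.7,
    "Policy-Renewal Predictor": 0.9,
    "Agent Onboarding Assistant": 0.3,
    "Document Intake Automator": 1.1,
}

aggregate = sum(realized_qtd.values())
target = 12.0  # cumulative program target for the quarter, in $M
print(f"Aggregate realized value QTD: ${aggregate:.1f}M against a ${target:.1f}M target")
```

Summing the six quantified features gives $11.2M, which already tells you the headline: the portfolio is short of the $12M target even before any attribution-basis caveat is applied.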
What to produce
Step 1 — Status calibration
For each of the ten features, apply the status-calibration rules and assign green, yellow, or red. Document the rationale for each assignment in a notes column. Be prepared to defend the assignment against feature-lead pushback (feature leads will naturally prefer green).
Predicted calibration:
- Feature 3: green (realized value exceeds projection).
- Feature 2: green or yellow (depends on attribution-model re-analysis).
- Feature 1: yellow (realized value below pilot-case projection).
- Feature 4: red (adoption below target is a pilot-blocker at this stage).
- Feature 5: red (quality issues flagged; needs urgent intervention).
- Feature 6: red (pilot delayed two quarters is a significant schedule risk).
- Feature 7: red (capability evaluation below threshold blocks Model-stage exit).
- Feature 8: yellow (regulatory review uncertain; decision pending).
- Feature 9: green (sustaining as expected in Learn stage).
- Feature 10: yellow (sunset under consideration; not yet red until decision).
Make your own assignments; the predictions above are illustrative.
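The calibration rules above are an ordered check: red conditions dominate yellow conditions, which dominate the green default. One way to keep your assignments consistent across all ten features is to encode the rules once. A sketch, with illustrative field names (the rule text is from this lab; the dictionary keys are assumptions, not Article 30 terminology):

```python
def calibrate(feature):
    """Apply the status-calibration rules in priority order:
    red for significant business-case variance or a pilot-blocking event,
    yellow for material risk or under-target adoption, green otherwise."""
    if feature.get("business_case_variance_significant") or feature.get("pilot_blocking_event"):
        return "red"
    if feature.get("material_risk") or feature.get("adoption_below_target"):
        return "yellow"
    return "green"

# Example: Feature 4's under-target adoption counts as pilot-blocking
# at the Produce stage, so it calibrates red rather than yellow.
print(calibrate({"pilot_blocking_event": True}))  # → red
```

Encoding the rules this way also gives you a ready answer to feature-lead pushback: the status follows mechanically from the documented evidence, not from your judgment in the moment.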
Step 2 — Apply attribution-model governance
Flag the features using non-primary attribution models (Features 1, 2, 4, 5, 9, 10 all use models other than Shapley). Note that realized-value figures for these features may not be directly comparable to the portfolio-primary-model basis. Document this in the scorecard’s footnotes.
For the aggregate portfolio realized-value total, either:
- Option A: Re-compute each feature’s realized value under Shapley attribution (not feasible in a 90-minute lab; note as follow-up).
- Option B: Report the aggregate with a caveat that multiple attribution models are in use and that a true Shapley-basis aggregate is pending an analysis refresh.
Choose Option B for this lab; document the need for Option A in the follow-up section.
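The flagging step is mechanical once the attribution models are tabulated. A sketch using the inventory data (only the seven features that report an attribution model; the footnote wording is illustrative):

```python
PRIMARY_MODEL = "Shapley"  # the program office's portfolio-primary model

# Attribution model per feature, from the inventory table.
attribution = {
    "Underwriter Copilot": "Linear",
    "Claim Triage ML": "Last-touch",
    "Fraud Detection+": "Shapley",
    "Policy-Renewal Predictor": "First-touch",
    "Agent Onboarding Assistant": "Linear",
    "Broker Intelligence Dashboard": "Last-touch",
    "Document Intake Automator": "Linear",
}

flagged = [name for name, model in attribution.items() if model != PRIMARY_MODEL]

footnote = (
    "Realized-value figures for flagged features use non-primary attribution "
    "models and may not be comparable to the Shapley basis; Shapley "
    "re-computation is pending (Option A follow-up)."
)
print(f"{len(flagged)} of {len(attribution)} features flagged: {', '.join(flagged)}")
```

Only Fraud Detection+ survives the filter, which confirms the count in the flagging instruction: six of the seven attributed features are off the primary model.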
Step 3 — Produce the scorecard
Assemble the scorecard with these columns: Feature, Stage, Status, Realized value QTD, Investment to date, Cumulative payback ratio, Primary risk (re-written if needed to meet the risk-writing discipline from Article 30), Next decision and date, Owner, Notes.
Sort by status (reds first, yellows second, greens third) then by realized value descending.
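The two-level sort is easy to get subtly wrong in a spreadsheet, so here is the ordering expressed directly. The rows and statuses below are illustrative placeholders, not the lab's answer key:

```python
# Sort scorecard rows: reds first, then yellows, then greens; within a
# status band, by realized value QTD descending.
STATUS_ORDER = {"red": 0, "yellow": 1, "green": 2}

rows = [
    {"feature": "Fraud Detection+", "status": "green", "realized": 4.7},
    {"feature": "Underwriter Copilot", "status": "yellow", "realized": 1.4},
    {"feature": "Agent Onboarding Assistant", "status": "red", "realized": 0.3},
    {"feature": "Policy-Renewal Predictor", "status": "red", "realized": 0.9},
]

rows.sort(key=lambda r: (STATUS_ORDER[r["status"]], -r["realized"]))
print([r["feature"] for r in rows])
# → ['Policy-Renewal Predictor', 'Agent Onboarding Assistant',
#    'Underwriter Copilot', 'Fraud Detection+']
```

Negating the realized value inside the key gives descending order within each status band while the band order itself stays ascending, so one sort call handles both levels.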
Step 4 — Write the one-page narrative
One page, four sections.
- Portfolio health. Count by status; aggregate realized value against quarter target; two or three most significant takeaways.
- Reds and pending decisions. Brief write-up (2–3 sentences) for each red feature: what is wrong, what is the recommended next action, when is the decision.
- Attribution note. One paragraph acknowledging the mixed-attribution situation and the in-flight remediation.
- Looking forward. Next-quarter expected decisions; portfolio shifts expected.
Guidance
- Status calibration is political. Feature leads will push back on yellow and red assignments. Your narrative should ground each assignment in specific evidence — the calibration meeting happens before the committee, not during it.
- The risk-writing discipline matters. “Continued model quality” and “Continued data availability” are boilerplate. Rewrite to be specific (“Model F1 score has drifted from 0.84 to 0.79 over two quarters; root-cause analysis ongoing”).
- The narrative is the artifact. Committees glance at the scorecard; they absorb the narrative. Spend most of your writing time on the narrative section.
- Honesty about the aggregate. Adding up feature realized-value across five attribution models produces a number that may overstate or understate the true aggregate. Disclose the caveat clearly.
Evaluation rubric
| Dimension | What to demonstrate | Weight |
|---|---|---|
| Status calibration | Reasonable assignments; defensible rationale | 20% |
| Attribution governance | Flagged; aggregate caveat disclosed | 15% |
| Risk rewriting | Specific risks, not boilerplate | 15% |
| Scorecard readability | Sorted correctly; one page; decision-supporting | 15% |
| Narrative quality | Four sections; highlights reds; reads in 5 min | 20% |
| Follow-up items | Clear enumeration of post-meeting actions | 10% |
| Board-grade discipline | Aligns with Article 35’s red-line rules | 5% |
Reflection questions
- Three feature leads pushed back hard on your red assignments, each with a plausible-sounding argument. What structural process prevents the pushback from degrading the scorecard’s credibility?
- Suppose the aggregate realized-value number under mixed attribution models shows $15M against a $12M target, while an estimated re-analysis under the primary Shapley model would put the aggregate at $11M. How do you report the quarter to the steering committee?
- Feature 6’s pilot delay is the highest-visibility red. If leadership proposes to skip the pilot and proceed directly to production rollout, what is your recommendation and what are the evidence bases?
Linked articles and further reading
- Article 30 — Building a portfolio scorecard.
- Article 16 — The Value Realization Report.
- Article 26 — Attribution modeling.
- Article 35 — Board-grade AI value reporting.
Submission
Submit the scorecard and the one-page narrative together. The reviewer will validate the calibration decisions, the attribution-governance handling, and the narrative's decision-supporting structure.