
Template 5: Portfolio Scorecard



COMPEL Specialization — AITE-VDT: AI Value & Analytics Expert · Template 5 of 5


This template produces a portfolio-level AI scorecard with two artifacts: (1) a spreadsheet-format scorecard for human consumption, and (2) a platform-neutral JSON schema for BI-dashboard implementation. The spreadsheet feeds the monthly or quarterly steering-committee meeting; the dashboard supports self-service drill-through for analysts.


Portfolio Scorecard: [Program Name]

Program: [Name, scope]
Reporting period: [Quarter / Month]
Scorecard date: [Date]
Prepared by: [AI value lead]
Reviewed by: [FinOps lead, Program office]
Scorecard version: [Major.minor]


1. Portfolio overview (one page, top of scorecard)

Metric | This period | Prior period | Target
------ | ----------- | ------------ | ------
Active features count | [N] | [N] |
Green features | [N] | [N] |
Yellow features | [N] | [N] |
Red features | [N] | [N] |
Aggregate realized value | [$] | [$] | [$]
Aggregate investment to date | [$] | [$] |
Portfolio cumulative payback ratio | [X.Y] | [X.Y] |

Status distribution: [🟢 Green × N | 🟡 Yellow × N | 🔴 Red × N]

Top three findings:

  1. [Finding 1]
  2. [Finding 2]
  3. [Finding 3]

Pending decisions this period: [List]
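A minimal sketch of how the overview block could be computed from feature-level records, assuming the field names defined in the section 5 JSON schema (the helper function itself is illustrative, not part of the template):

from collections import Counter

def portfolio_overview(features):
    """Aggregate feature records (section 5 field names) into the section 1 overview."""
    counts = Counter(f["status"] for f in features)
    realized = sum(f["realized_value_cumulative"] for f in features)
    invested = sum(f["investment_to_date"] for f in features)
    return {
        "feature_count": len(features),
        "green_count": counts["green"],
        "yellow_count": counts["yellow"],
        "red_count": counts["red"],
        "realized_value_total": realized,
        "investment_to_date_total": invested,
        # Cumulative payback ratio: realized value to date over investment to date.
        "payback_ratio": realized / invested if invested else 0.0,
    }

The payback ratio here is simply aggregate realized value divided by aggregate investment to date; if the program office defines the ratio differently, substitute that formula.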


2. Feature scorecard (one row per feature, sorted by status then realized value descending)

# | Feature | Stage | Status | Realized value QTD | Investment to date | Primary risk | Next decision | Owner | Notes
- | ------- | ----- | ------ | ------------------ | ------------------ | ------------ | ------------- | ----- | -----
1 | [Feature A] | [Evaluate] | 🔴 | [$] | [$] | [Specific risk] | [Decision, date] | [Role] | [Notes / attribution-model flag]
2 | [Feature B] | [Produce] | 🔴 | [$] | [$] | [Specific risk] | [Decision, date] | [Role] | [Notes]
3 | [Feature C] | [Evaluate] | 🟡 | [$] | [$] | [Specific risk] | [Decision, date] | [Role] | [Notes]
4 | [Feature D] | [Evaluate] | 🟢 | [$] | [$] | [Specific risk] | [Decision, date] | [Role] | [Notes]

Maximum 15 features per scorecard. If the portfolio has more, nest into program-level scorecards.
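The sort order named in the section heading (worst status first, then realized value descending) is easy to get wrong in ad-hoc spreadsheets; a sketch of the rule as a compound sort key, again assuming the section 5 field names:

STATUS_RANK = {"red": 0, "yellow": 1, "green": 2}

def scorecard_rows(features):
    # Red rows surface first; within a status band, largest realized value first.
    return sorted(
        features,
        key=lambda f: (STATUS_RANK[f["status"]], -f["realized_value_qtd"]),
    )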


3. Attribution governance footnote

Portfolio primary attribution model: [Shapley / Linear / Last-touch / etc.]

Features using non-primary attribution:

  • [Feature X] uses [Model]. Realized value reported here is computed under the feature’s native model; a Shapley re-analysis is [in flight / planned for Q_].
  • […]

Aggregate caveat: Aggregate realized value is the sum of features’ values under their native attribution models. Under uniform Shapley attribution, the aggregate is estimated at [$] ± [$]. See [Reference] for the attribution-harmonization workstream.
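A sketch of how the caveat’s two numbers could be produced side by side. The optional shapley_estimate field is an assumption layered on top of the section 5 schema, and treating the gap between the two totals as the ± band is one simple convention, not a prescribed method:

def aggregate_with_caveat(features):
    """Return (native-model total, harmonized Shapley estimate, spread)."""
    native_total = sum(f["realized_value_cumulative"] for f in features)
    # Use the Shapley re-analysis where one exists; fall back to the native value.
    harmonized_total = sum(
        f.get("shapley_estimate", f["realized_value_cumulative"]) for f in features
    )
    return native_total, harmonized_total, abs(harmonized_total - native_total)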


4. One-page executive narrative (accompanies scorecard)

Portfolio health

[Two to three sentences summarizing the aggregate: count by status, realized value vs. target, significant movement since last period.]

Reds and pending decisions

For each red feature, one short paragraph covering what is wrong, the recommended action, and when the decision is expected.

  • [Red feature 1]: [Three sentences]
  • [Red feature 2]: [Three sentences]

Attribution note

[One paragraph on the mixed-attribution situation and in-flight remediation.]

Looking forward

[Two to three sentences on next-quarter expected decisions, anticipated portfolio shifts, emerging risks.]


5. Platform-neutral JSON schema (for BI implementation)

{
  "portfolio": {
    "name": "[Program name]",
    "period": "[YYYY-Q#]",
    "prepared_by": "[Name]",
    "aggregate": {
      "feature_count": 0,
      "green_count": 0,
      "yellow_count": 0,
      "red_count": 0,
      "realized_value_total": 0,
      "investment_to_date_total": 0,
      "payback_ratio": 0.0,
      "primary_attribution_model": "shapley"
    },
    "features": [
      {
        "id": "F001",
        "name": "[Feature name]",
        "stage": "evaluate",
        "status": "green",
        "realized_value_qtd": 0,
        "realized_value_cumulative": 0,
        "investment_to_date": 0,
        "payback_ratio": 0.0,
        "primary_risk": "[Specific risk statement]",
        "risk_probability": "M",
        "risk_impact": "H",
        "next_decision": "[Decision description]",
        "next_decision_date": "YYYY-MM-DD",
        "owner_role": "[Role]",
        "attribution_model": "shapley",
        "notes": "[Any notes]"
      }
    ],
    "pending_decisions": [
      {
        "feature_id": "F001",
        "decision": "[Description]",
        "date": "YYYY-MM-DD",
        "decision_maker_role": "[Role]"
      }
    ]
  }
}
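The block above is an example instance; for machine validation, a formal JSON Schema can enforce the same contract before BI ingestion. A minimal sketch using the Python jsonschema package, where the required fields and the stage enum are assumptions inferred from the example (extend both as the portfolio requires):

import json
import jsonschema

FEATURE_SCHEMA = {
    "type": "object",
    "required": ["id", "name", "stage", "status", "realized_value_qtd",
                 "investment_to_date", "attribution_model"],
    "properties": {
        "status": {"enum": ["green", "yellow", "red"]},
        "stage": {"enum": ["evaluate", "produce"]},  # add other lifecycle stages as needed
        "realized_value_qtd": {"type": "number"},
        "investment_to_date": {"type": "number"},
    },
}

PORTFOLIO_SCHEMA = {
    "type": "object",
    "required": ["portfolio"],
    "properties": {
        "portfolio": {
            "type": "object",
            "required": ["name", "period", "aggregate", "features"],
            "properties": {"features": {"type": "array", "items": FEATURE_SCHEMA}},
        },
    },
}

with open("scorecard.json") as fh:  # illustrative file name
    jsonschema.validate(instance=json.load(fh), schema=PORTFOLIO_SCHEMA)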

Implementation guidance per BI platform

  • Power BI: Import the JSON via Power Query; create a table visualization with conditional formatting on status; drill-through to feature-level VRRs.
  • Tableau: Use JSON data source; build a scorecard view with a shape or color mark on status; action filters for drill-through.
  • Looker: Define a LookML model with feature as primary table; dashboard with tile-per-row and conditional formatting.
  • Metabase: Import JSON as a question; dashboard with single-cell visualizations per row; dashboard filters for stage and status.
  • Superset: Native SQL-backed dashboard with heatmap or table visualization; filter boxes for interactive filtering.
  • Grafana: Stat panel per feature with threshold-based color coding; organized into dashboard rows.

The same JSON schema feeds all six platforms; the implementation is BI-tool-specific but the underlying data contract is unified.
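All six platforms consume tabular data more readily than nested JSON, so a common pattern is to flatten the features array once and point every tool at the result. A sketch with pandas, using an illustrative file name:

import json
import pandas as pd

with open("scorecard.json") as fh:
    doc = json.load(fh)

# One row per feature, mirroring the section 2 scorecard layout.
features = pd.json_normalize(doc["portfolio"]["features"])

# Worst status first, then realized value descending (the section 2 sort order).
status_rank = {"red": 0, "yellow": 1, "green": 2}
features = features.sort_values(
    by=["status", "realized_value_qtd"],
    ascending=[True, False],
    key=lambda col: col.map(status_rank) if col.name == "status" else col,
)
features.to_csv("scorecard_flat.csv", index=False)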


6. Scorecard preparation workflow

Week 1 — Data refresh

  • Each feature lead submits current realized value, investment, primary risk, next decision to the AI program office by [Date].
  • Data must come from the same source as feature-level VRRs.
  • Discrepancies are flagged for resolution before aggregation.

Week 2 — Status calibration

  • AI program office reviews each submission against status-calibration rules.
  • Calibration meetings resolve any disputed status assignments.
  • Attribution-model compliance is checked; non-primary usage is flagged.

Week 3 — Aggregation

  • Scorecard assembled per the structure above.
  • Aggregate totals computed.
  • Attribution-model footnote and caveats drafted.
  • Narrative drafted.

Week 4 — Review and distribution

  • Joint sign-off by AI program office and FinOps lead.
  • Pre-brief for steering committee chair.
  • Distribution to steering committee with the one-page narrative.

7. Failure-mode checklist

Before distributing, verify the scorecard does not exhibit any of the following; a sketch of an automated check follows the list:

  • All-green bias (every feature green; unlikely in a healthy portfolio)
  • Feature inflation (more than 15 features; nest if needed)
  • Realized-value inconsistency (different attribution models without disclosure)
  • Investment-to-date ambiguity (different definitions across features)
  • Risk-column dilution (boilerplate risks; rewrite to be specific)
  • Decision-column absence (features with no open decision; question their presence)
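Four of the six failure modes are mechanically detectable from the section 5 JSON; the two definitional ones (investment ambiguity, boilerplate risks) still need human review. A hedged sketch of such a lint, with illustrative function name and messages:

def lint_scorecard(portfolio):
    """Flag mechanically detectable failure modes in a section 5 portfolio dict."""
    features = portfolio["features"]
    findings = []
    if features and all(f["status"] == "green" for f in features):
        findings.append("All-green bias: no yellow or red features.")
    if len(features) > 15:
        findings.append(f"Feature inflation: {len(features)} features; nest into program-level scorecards.")
    primary = portfolio["aggregate"]["primary_attribution_model"]
    off_model = [f["id"] for f in features if f["attribution_model"] != primary]
    if off_model:
        findings.append(f"Attribution inconsistency; disclose in footnote: {off_model}")
    no_decision = [f["id"] for f in features if not f.get("next_decision")]
    if no_decision:
        findings.append(f"No open decision; question inclusion: {no_decision}")
    return findings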

Appendix A — Status-calibration rules

Status | Criteria (must meet at least one)
------ | ---------------------------------
🔴 Red | Significant business-case variance (>30% below projection); pilot-blocking event; red-flagged risk materialized; capability evaluation below threshold
🟡 Yellow | Material variance within tolerance (10–30% below projection); active risk mitigation; adoption below target but above floor
🟢 Green | Within 10% of projection; no active red risks; adoption on target
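The variance bands translate directly into code. A minimal sketch of the variance criterion only; pilot-blocking events, materialized risks, and adoption would each escalate the result independently, and the function name is illustrative:

def status_from_variance(projected, realized):
    """Map business-case variance to a status using the Appendix A bands."""
    if projected == 0:
        raise ValueError("projected value must be non-zero")
    shortfall = (projected - realized) / projected  # fraction below projection
    if shortfall > 0.30:
        return "red"     # significant variance, >30% below projection
    if shortfall > 0.10:
        return "yellow"  # material variance, 10-30% below projection
    return "green"       # within 10% of projection (or above it)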

Appendix B — Linkage to other artifacts

  • Individual feature VRRs: [References]
  • Attribution-harmonization workstream: [Reference]
  • Compute-budget portfolio view: [Reference]
  • Board-grade quarterly summary: [Reference]