This article describes the questions a defensible intake form must ask, the routing logic that turns answers into appropriate review paths, and the operational discipline that keeps the form from drifting into either pointless bureaucracy or false precision.
Why a Structured Intake Form Matters
Three benefits justify the investment in formal intake.
First, early visibility. Without intake, AI initiatives proliferate in business units that may not have the expertise to evaluate them. By the time governance learns of a problematic project, redress is expensive. The Office of Management and Budget Memorandum M-24-10 on AI in U.S. federal agencies at https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf describes intake-driven inventory as a core governance practice.
Second, proportional review. Not every AI initiative needs the same level of scrutiny. Intake provides the data to route low-risk experimentation through fast-track review while routing high-impact systems through full ethical, legal, and security review.
Third, portfolio visibility. The aggregated intake data tells leadership what AI work is happening, what risks are accumulating, and where capability investment should focus. Without intake, the portfolio view is fictional.
The Question Set
A useful intake form covers eight territories.
1. Use Case Identity and Sponsorship
Who is proposing the use case, who is the executive sponsor, what business unit owns it, and what date it targets for production. This section often catches use cases without genuine sponsorship — projects whose champion is a mid-level individual without the organisational standing to deliver.
2. Business Purpose and Value
What problem does the AI solve? What is the expected business value (revenue, cost reduction, customer experience, risk reduction)? How will value be measured?
The section should require quantification at proposal stage. “Improves customer experience” is not actionable; “reduces median resolution time on customer service tickets by 30 percent” is. The U.S. Government Accountability Office AI Accountability Framework at https://www.gao.gov/products/gao-21-519sp explicitly recommends measurable value statements as part of AI initiative governance.
3. Decisions and Affected Parties
What decisions will the AI make or influence? Who is affected by those decisions? Are any affected parties members of vulnerable populations? Are decisions reviewable, contestable, or final?
This section is the gateway to risk classification. A system making consequential decisions about people warrants different scrutiny than a system suggesting marketing copy variants.
4. Data Inputs and Sources
What data feeds the system? Where does the data come from? Is it Personally Identifiable Information (PII), Protected Health Information (PHI), or otherwise sensitive? What lawful basis covers the proposed use? Is there a Data Protection Impact Assessment (DPIA) requirement?
For Generative AI use cases, the section must include retrieval sources, prompt content (especially user-supplied content), and any data the system might inadvertently expose to upstream providers.
5. Models and Vendors
What models will be used? Self-hosted, vendor-hosted API, fine-tuned? Which vendors? What contractual terms govern data use? Open-source or proprietary (per the previous article)?
6. Risk and Compliance
What regulations apply (EU AI Act, GDPR, HIPAA, sectoral regulations)? What is the proposed risk classification (using the EU AI Act’s tiering or the organisation’s internal scheme)? What ethical concerns has the proposing team identified? What red-team or adversarial scenarios have been considered?
The European Union AI Act Article 6 at https://artificialintelligenceact.eu/article/6/ defines the high-risk classification with reference to specific use cases; a defensible intake form should map directly to the article’s structure.
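One way to make the proposed risk classification concrete at intake time is a coarse tiering heuristic over the form's answers. The sketch below is illustrative only, not a legal determination: the domain list is a simplified stand-in for the EU AI Act's high-risk categories, and all field names are hypothetical.

```python
# Illustrative risk-tier heuristic for intake routing. Domain names are
# simplified stand-ins for EU AI Act high-risk categories (assumption,
# not a legal mapping); field names are hypothetical.

def classify_risk_tier(answers: dict) -> str:
    """Map intake answers to an internal risk tier."""
    high_risk_domains = {
        "employment", "credit", "education", "essential_services",
        "law_enforcement", "biometrics",
    }
    if answers.get("domain") in high_risk_domains and answers.get("decides_about_people"):
        return "high"
    if answers.get("decides_about_people") or answers.get("uses_pii"):
        return "medium"
    return "low"

tier = classify_risk_tier({"domain": "employment", "decides_about_people": True})
# tier == "high"
```

In practice the domain list would be maintained by legal counsel and versioned alongside the form, so self-classification can be audited against it.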
7. Operational Posture
How will the system be monitored? What human oversight applies? What happens if the system is wrong (correction, escalation, fallback)? What operating window does the team commit to (24/7, business hours, batch)?
8. Resource Requirements
What infrastructure (compute, storage, network)? What people (data engineers, ML engineers, product managers)? What budget? What dependencies on other initiatives?
Routing Logic
Answers to the form should drive routing automatically. Common routing dimensions:
Risk tier. High-risk use cases (per EU AI Act criteria, or internal equivalents) route through full ethics review, legal review, security review, and senior governance committee approval. Low-risk use cases route through expedited review.
Data sensitivity. Use cases involving PII or PHI route through privacy review and DPIA. Public data use cases skip the privacy track.
Vendor profile. Use cases relying on new vendors route through vendor due diligence (Module 1.10). Use cases relying on already-evaluated vendors skip the procurement track.
Decision consequence. Use cases making decisions about people route through a fairness review, with mandatory subgroup performance evaluation. Use cases without decisions about people skip this track.
Generative AI specifics. Use cases using Generative AI route through prompt-injection review, output-filtering review, and hallucination evaluation. Non-generative use cases skip these.
The routing should be transparent: the form should display the assigned review tracks immediately on submission, with named owners and target turnaround times.
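The routing dimensions above can be sketched as a pure function from intake answers to review tracks, which is what makes the transparent on-submission display possible. Field names and track names below are hypothetical; in practice each track would carry a named owner and a target turnaround time.

```python
# Sketch of the routing dimensions described above. Field and track
# names are hypothetical placeholders, not a prescribed schema.

def route(answers: dict) -> list[str]:
    """Return the review tracks a submission is assigned to."""
    tracks = []
    # Risk tier: high-risk gets full review; everything else is expedited.
    if answers.get("risk_tier") == "high":
        tracks += ["ethics_review", "legal_review", "security_review",
                   "governance_committee"]
    else:
        tracks.append("expedited_review")
    # Data sensitivity: PII or PHI triggers the privacy track and DPIA.
    if answers.get("uses_pii") or answers.get("uses_phi"):
        tracks.append("privacy_review_dpia")
    # Vendor profile: only new vendors go through due diligence.
    if answers.get("new_vendor"):
        tracks.append("vendor_due_diligence")
    # Decision consequence: decisions about people require fairness review.
    if answers.get("decides_about_people"):
        tracks.append("fairness_review")
    # Generative AI specifics.
    if answers.get("generative_ai"):
        tracks += ["prompt_injection_review", "output_filtering_review",
                   "hallucination_eval"]
    return tracks

assigned = route({"risk_tier": "low", "uses_pii": True, "generative_ai": True})
# assigned starts with "expedited_review" and includes the privacy
# and generative-AI tracks.
```

Keeping the routing a deterministic function of recorded answers also makes it auditable: the same submission always yields the same tracks.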
Form Design Principles
The form’s design strongly affects whether it gets used.
Progressive disclosure. Show simple questions first; expand based on answers. A form that opens with 80 fields drives proposers away.
Plain language. The form should be readable by a product manager or business unit leader without training. Technical detail can be requested in follow-on review steps.
Examples. Each non-trivial question should include example answers from previously-approved use cases.
Save and continue. Filling the form should be possible across multiple sessions; many fields require consultation across functions.
Template reuse. Where the same business unit submits multiple similar use cases, prior submissions should be reusable as templates.
API and integration. The form should be addressable through an API for cases where intake is triggered programmatically (for example, by a vendor procurement workflow or a project management tool).
The U.S. Digital Service Playbook at https://playbook.cio.gov/ describes form-design principles that translate well to AI intake.
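The API-and-integration principle can be sketched as a small payload builder that validates required fields before submission. The endpoint path, field names, and required-field list below are assumptions for illustration, not a prescribed interface.

```python
# Minimal sketch of programmatic intake, assuming a hypothetical JSON
# API (e.g. POST /api/intake). Field names are illustrative.
import json

REQUIRED_FIELDS = {"use_case_name", "sponsor", "business_unit", "risk_tier"}

def build_intake_payload(fields: dict) -> str:
    """Validate and serialise an intake submission."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing required intake fields: {sorted(missing)}")
    return json.dumps(fields, sort_keys=True)

payload = build_intake_payload({
    "use_case_name": "ticket-triage-assistant",
    "sponsor": "VP Customer Operations",
    "business_unit": "support",
    "risk_tier": "low",
})
```

Rejecting incomplete submissions at the API boundary keeps programmatically-triggered intake (from procurement workflows or project tools) to the same standard as the interactive form.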
Operational Discipline
Cycle time tracking. The time from submission to first review, from first review to decision, and from decision to start should all be tracked. Excessive cycle time pushes work underground.
Rejection ratios. The proportion of submissions rejected, with reasons. High rejection rates indicate either bad submissions (intake is wasteful) or excessive gatekeeping (review is wasteful). Both are addressable.
Shadow inventory reconciliation. Periodic comparison of the intake register with cloud spend, vendor invoices, and observed running systems. Systems running without intake records are remediation items.
Quarterly review of the form itself. Questions that nobody answers usefully should be reworked. Questions that produce surprising answers should be given greater prominence.
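The shadow-inventory reconciliation described above amounts to a set comparison between the intake register and systems observed running. A minimal sketch, with hypothetical system identifiers:

```python
# Sketch of shadow-inventory reconciliation: identifiers observed in cloud
# spend, vendor invoices, or runtime monitoring are compared against the
# intake register. System names are hypothetical.

def reconcile(intake_register: set[str], observed_systems: set[str]) -> dict:
    return {
        # Running without an intake record: remediation items.
        "remediation_items": sorted(observed_systems - intake_register),
        # Recorded but not observed: possibly stale or pre-launch.
        "stale_records": sorted(intake_register - observed_systems),
    }

result = reconcile(
    intake_register={"chatbot-v2", "churn-model"},
    observed_systems={"chatbot-v2", "resume-screener"},
)
# result["remediation_items"] == ["resume-screener"]
```

The hard part in practice is not the comparison but establishing a common identifier across billing, procurement, and monitoring data; without that join key, reconciliation stays manual.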
Common Failure Modes
The first is security-theatre intake — the form asks long lists of questions whose answers no one reads. Counter by reviewing the actual answers in retrospect and pruning unread fields.
The second is missing low-risk path — every submission gets full review, slowing innovation. Counter by designing an explicit fast-track for low-risk cases.
The third is false self-classification — proposers under-classify their own systems to avoid scrutiny. Counter with sampling-based validation: review staff periodically audit a sample of self-classified low-risk submissions and elevate those showing red flags to full review.
The fourth is intake-only governance — the form captures information at the start but the system drifts during development. Counter by requiring re-intake when material changes occur (use case scope, data sources, vendor, deployment population).
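The re-intake trigger for the fourth failure mode can be expressed as a simple check over the material fields. The field set below mirrors the examples in the text; the names themselves are hypothetical.

```python
# Sketch of a material-change check that triggers re-intake. The material
# field set mirrors the examples in the text; names are hypothetical.

MATERIAL_FIELDS = {"use_case_scope", "data_sources", "vendor",
                   "deployment_population"}

def needs_reintake(original: dict, current: dict) -> bool:
    """True if any material field changed since the original intake record."""
    return any(original.get(f) != current.get(f) for f in MATERIAL_FIELDS)

changed = needs_reintake(
    original={"vendor": "acme-ai", "data_sources": ["crm"]},
    current={"vendor": "acme-ai", "data_sources": ["crm", "support_tickets"]},
)
# changed is True: a new data source was added.
```

Wiring this check into the project's change-management workflow turns intake from a one-time gate into a standing control.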
Looking Forward
The next article turns to the broader AI use case management portfolio practices that the intake form feeds into. Intake captures one decision; portfolio management is the cycle of decisions that follow.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.