AITF M1.27-Art02 v1.0 Reviewed 2026-04-06 Open Access

Regulatory Submission Preparation for High-Risk AI


7 min read Article 2 of 4

This article describes the common architecture of high-risk AI submission packages, the cross-cutting practices that translate internal governance artefacts into submission-ready documents, and the operational rhythm that makes the second submission much cheaper than the first.

The Common Architecture

Across the regulators that supervise high-risk AI today, submission packages share six structural elements.

1. System Description

A precise statement of what the system is, what it does, what decisions it produces, and what it is intended for. Specificity matters — submissions that describe systems generically often produce supplementary information requests that delay review.

The U.S. Food and Drug Administration draft guidance on Predetermined Change Control Plans for Machine Learning-enabled Device Software at https://www.fda.gov/regulatory-information/search-fda-guidance-documents/predetermined-change-control-plans-machine-learning-enabled-medical-devices articulates the description level expected for medical AI; the pattern applies broadly.

2. Risk Analysis and Management

Documented analysis of foreseeable risks (intended use and reasonably foreseeable misuse), the chosen risk treatment for each, and the residual risk that remains. This connects to the risk management work of Module 1.21.

ISO/IEC 23894:2023 AI Risk Management at https://www.iso.org/standard/77304.html provides the international reference for AI risk management documentation that regulators increasingly accept.

3. Technical Documentation

The deep technical detail: data sources and quality, model architecture, training methodology, validation procedures, performance metrics including subgroup analysis. The depth required varies by regulator but the structure is consistent.

4. Quality Management System Evidence

Evidence that a quality management system governs the development, deployment, and operation of the system. ISO/IEC 42001:2023 AI Management System at https://www.iso.org/standard/81230.html and ISO 13485 (medical devices) provide reference frameworks.

5. Post-Market Performance Plan

How the system will be monitored after deployment, what triggers will prompt corrective action, and how performance results will be reported back to the regulator.

6. Declarations and Attestations

The signed declarations, conformity statements, and accountability attestations the specific regulator requires. These are typically the smallest part of the package by volume but the most legally consequential.

Translating Internal Artefacts to Submission Documents

Most organisations have many of the necessary inputs from their AI governance program. The challenge is translation, not creation. Three patterns help.

Crosswalk Tables

A crosswalk table maps each regulator requirement to the internal artefact that addresses it. Completing the crosswalk early in submission preparation surfaces gaps and reduces the risk of last-minute scrambles. The U.S. National Institute of Standards and Technology has published crosswalk material at https://www.nist.gov/itl/ai-risk-management-framework that maps NIST AI RMF to ISO/IEC 42001 and other frameworks; the same approach can map internal documents to regulator-specific structures.
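A crosswalk can be as simple as a mapping from requirement identifiers to the internal artefacts that satisfy them; any requirement with no mapped artefact is a documentation gap. The sketch below is a minimal illustration, with invented requirement IDs and artefact names, not a reference to any real regulator's schema:

```python
# Hypothetical crosswalk: regulator requirement IDs mapped to the internal
# artefacts that address them. All identifiers here are invented examples.
crosswalk = {
    "system-description":      ["AI-SYS-001 System Card"],
    "risk-analysis":           ["RISK-REG-17 Risk Register"],
    "technical-documentation": ["MODEL-DOC-04 Model Card", "DATA-DOC-02 Data Sheet"],
    "qms-evidence":            [],   # gap: no internal artefact yet
    "post-market-plan":        ["MON-PLAN-09 Monitoring Plan"],
    "declarations":            [],   # gap
}

def gaps(crosswalk: dict) -> list:
    """Return requirement IDs with no mapped internal artefact."""
    return [req for req, artefacts in crosswalk.items() if not artefacts]

print(gaps(crosswalk))  # requirements needing new documentation before filing
```

Running the gap check at the start of submission preparation, rather than during final assembly, is what turns the crosswalk from a tracking table into an early-warning tool.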

Re-framing, Not Re-writing

Internal documents are usually more candid and operational than regulator-facing documents. The translation should preserve substance while adopting the regulator’s vocabulary, format, and emphasis. Re-writing from scratch loses the evidentiary weight of the underlying internal artefact.

Annex Strategy

Most regulators accept lengthy supporting annexes alongside a focused main document. Use annexes liberally for the underlying internal documentation; keep the main document focused on the required structure with cross-references to annex evidence.

Specific Regulator Patterns

Each regulator has distinctive expectations.

U.S. Food and Drug Administration

The FDA’s Software as a Medical Device (SaMD) framework, with the AI/ML Action Plan published at https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device, expects a particular emphasis on the validation regimen, the predetermined change control plan, and the post-market surveillance approach. The pre-submission Q-Sub program is highly recommended for novel AI submissions.

European Banking Authority and National Banking Regulators

For credit risk AI in EU banking, the EBA Discussion Paper on Machine Learning at https://www.eba.europa.eu/sites/default/documents/files/document_library/Publications/Discussions/2021/Discussion%20on%20machine%20learning%20for%20IRB%20models/1023883/Discussion%20paper%20on%20machine%20learning%20for%20IRB%20models.pdf and the underlying Internal Ratings-Based Approach framework demand model documentation aligned to specific regulatory technical standards.

U.S. Federal Reserve and OCC

For U.S. banking AI, the long-standing Supervisory Letter SR 11-7 on Model Risk Management at https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm and OCC Bulletin 2021-39 at https://www.occ.gov/news-issuances/bulletins/2021/bulletin-2021-39.html define the model risk management expectations that AI submissions must satisfy.

EU AI Act Conformity

Discussed in detail in the previous article. The harmonised standards regime is the navigational aid.

Sector-Specific National Authorities

Energy, transportation, and telecommunications regulators are developing AI-specific submission expectations. The pattern is generally to extend existing safety case methodologies with AI-specific evidence requirements.

The Operational Rhythm

A program that submits regularly to regulators benefits from operational discipline that one-time submitters can adopt aspirationally.

Submission readiness as a program metric. The proportion of high-risk systems that could be submitted within 30 days if requested. A low number indicates documentation debt.
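The readiness metric reduces to a single proportion once each system has an estimated time-to-submission. A minimal sketch, assuming per-system estimates are maintained somewhere (the system names and day counts below are invented):

```python
# Hypothetical readiness data: for each high-risk system, the estimated
# number of days needed to assemble a complete submission package today.
days_to_submission = {
    "credit-scoring-v3": 12,
    "triage-assist":     45,   # documentation debt
    "fraud-screen":      28,
    "claims-router":     90,   # documentation debt
}

def readiness(days: dict, threshold_days: int = 30) -> float:
    """Proportion of systems submittable within the threshold window."""
    ready = sum(1 for d in days.values() if d <= threshold_days)
    return ready / len(days)

print(f"{readiness(days_to_submission):.0%}")  # prints "50%"
```

Tracked quarterly, the trend in this number is usually more informative than its absolute value: a falling proportion signals documentation debt accumulating faster than it is retired.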

Pre-submission rehearsal. An internal review team simulates the regulator’s evaluation, identifying weaknesses before submission. The rehearsal often catches more issues than the actual review.

Submission tracking. Each submission is tracked from preparation through filing to authorisation, with cycle time, finding response, and resource cost recorded.

Cross-jurisdictional template management. Where the organisation submits to multiple regulators, a master content base feeds jurisdiction-specific outputs. Maintaining the master is more efficient than parallel writing.
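One way to realise the master-content pattern is a single content base keyed by section, with per-regulator profiles selecting and ordering sections. The sketch below is purely illustrative; the section names, profile keys, and content are invented, and a real implementation would use a document templating system rather than string joins:

```python
# Hypothetical master content base: each section is written once.
master = {
    "system_description": "The system produces ... (single maintained text)",
    "risk_analysis":      "Foreseeable risks and treatments ...",
    "validation":         "Validation methodology and metrics ...",
    "post_market":        "Monitoring triggers and reporting ...",
}

# Hypothetical jurisdiction profiles: which sections each regulator's
# package structure requires, in the order that regulator expects.
profiles = {
    "fda":       ["system_description", "validation", "post_market"],
    "eu_ai_act": ["system_description", "risk_analysis", "post_market"],
}

def assemble(regulator: str) -> str:
    """Render a jurisdiction-specific draft from the master content base."""
    return "\n\n".join(master[section] for section in profiles[regulator])

fda_draft = assemble("fda")  # edits to `master` flow into every output
```

The design point is that a correction made once in `master` propagates to every jurisdiction-specific output, which is what makes maintaining the master cheaper than parallel writing.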

Post-submission learning. Findings from each submission feed back into the standard documentation and processes, raising the floor for future submissions.

Working with Regulators

Regulator relationships are themselves a strategic asset.

Pre-submission engagement. Most regulators welcome pre-submission discussions on novel or complex submissions. Pre-engagement reduces surprises and clarifies expectations.

Honest disagreement. Where the organisation believes a regulator's interpretation is incorrect, engaged dialogue is more productive than formal compliance paired with informal grumbling. Regulators generally prefer transparent engagement to surface-level compliance.

Industry coordination. Industry associations frequently engage regulators on emerging issues. Participation in these conversations exposes the organisation to interpretive trends before they hit individual submissions.

Documentation of regulator interactions. Every meaningful regulator interaction is documented and circulated internally. The institutional memory of regulator preferences and recurring concerns is valuable across the program.

Common Failure Modes

The first is narrative drift — the submission tells a story that does not match the operational reality of the system. Counter with internal pre-review where operational staff verify the submission against actual practice.

The second is scattered ownership — multiple teams contribute to the submission with no integrating editor, producing inconsistent voice and gaps. Counter with a single named submission lead.

The third is late starts — the submission preparation begins after the system is built rather than alongside development. Counter by integrating regulatory submission planning into the AI lifecycle from intake.

The fourth is incomplete post-submission planning — preparation focuses on the initial submission but not on the post-market obligations that follow approval. Counter by building post-market plans concurrently.

Looking Forward

The next article turns to ISO 42001 certification — the voluntary management system certification that increasingly accompanies regulatory submissions and provides a structural backbone that simplifies multi-regulator engagement.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.