
Transparency and Regulatory Obligations

Transformation Design & Program Architecture — Applied depth — COMPEL Body of Knowledge.

13 min read · Article 10 of 10

AITM-PEW: Prompt Engineering Associate — Body of Knowledge


A prompt engineer who has reached the final article of this credential has learned the craft. What remains is the regulatory perimeter in which the craft is practised. A prompt-configured feature is now, in most jurisdictions of consequence, a regulated artefact. The regulation does not demand that the practitioner become a lawyer; it demands that the practitioner know which controls the regulation expects, ensure those controls are in place or route the work to someone who can, and produce the evidence that demonstrates the controls work. This article covers the three instruments a practitioner must be able to name and apply: EU AI Act Article 50, NIST AI RMF’s GOVERN function, and ISO/IEC 42001 Clause 7.5. Each is translated into a concrete checklist item the practitioner can bring to a review meeting.

EU AI Act Article 50 — transparency

The Artificial Intelligence Act, Regulation (EU) 2024/1689, came into force on 1 August 2024 and became progressively applicable across 2025 and 2026.[1] Article 50 carries the transparency obligations for AI systems interacting with natural persons, for deepfakes, for AI-generated text intended to inform the public on matters of public interest, and for emotion recognition and biometric categorisation systems.

Article 50(1) requires providers to ensure that AI systems intended to interact directly with natural persons are designed and developed in such a way that the natural persons concerned are informed they are interacting with an AI system, unless that fact is already obvious. Article 50(2) extends the duty to marking synthetic audio, image, video, and text content in a machine-readable format so that it can be detected as AI-generated. Article 50(4) requires disclosure for deepfakes and, in its second subparagraph, for AI-generated text published to inform the public on matters of public interest. Article 50(5) requires that the information under the preceding paragraphs be provided in a clear and distinguishable manner, at the latest at the time of the first interaction or exposure.

For a practitioner, the Article 50 checklist on a prompt-configured feature reads as follows. A user-facing feature presents a visible disclosure that the user is interacting with an AI system; the disclosure is not buried in a help article or a footer; it is present at the first point of meaningful interaction. Content the feature generates that could be mistaken for human-generated is labelled in a machine-readable way where technically feasible. Audio, image, or video that the feature produces is watermarked if the content is synthetic. Documentation records the decisions taken, the rationale, and any exceptions.
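As a concrete illustration, the sketch below shows how the first two checklist items might look in code. It is a minimal sketch in Python, assuming a hypothetical chat endpoint; the metadata keys and the disclosure wording are illustrative choices, not a standard schema and not the Act's prescribed format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GeneratedContent:
    """A generated artefact plus the machine-readable marking Article 50(2) expects."""
    body: str
    metadata: dict = field(default_factory=dict)

def mark_as_ai_generated(content: str, model_id: str) -> GeneratedContent:
    # Attach provenance metadata in a machine-readable form. The exact
    # schema (a C2PA manifest, an IPTC field, a custom header) is a
    # deployment choice; these keys are illustrative, not a standard.
    return GeneratedContent(
        body=content,
        metadata={
            "ai_generated": True,
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    )

def first_interaction_payload(session_is_new: bool, reply: GeneratedContent) -> dict:
    # Article 50(1): surface the disclosure at the first point of
    # meaningful interaction, not in a help article or a footer.
    payload = {"reply": reply.body, "metadata": reply.metadata}
    if session_is_new:
        payload["disclosure"] = "You are interacting with an AI system."
    return payload
```

The design point the sketch carries is that disclosure and marking are properties of the response path, enforced in code, rather than copy added to a settings page after launch.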

The exception under Article 50(1) for cases where the AI nature is already obvious is narrow and should be interpreted conservatively. A labelled chatbot button on a website may make the AI nature obvious; a conversational feature embedded inside an otherwise human-facing workflow may not. A practitioner who applies the exception should be able to articulate, in writing, why the obviousness meets the standard; a practitioner who cannot articulate this should apply the default disclosure.

NIST AI RMF — the GOVERN function

The NIST AI Risk Management Framework 1.0, with its Generative AI Profile, structures AI risk management around four functions: GOVERN, MAP, MEASURE, and MANAGE.[2][3] The GOVERN function is the organisational practice of establishing policies, processes, and accountability for AI risk. GOVERN subcategories applicable to prompt-configured features include 1.1 (legal and regulatory requirements involving AI are understood, managed, and documented), 1.2 (the characteristics of trustworthy AI are integrated into organisational policies, processes, and practices), 2.1 (roles, responsibilities, and lines of communication are documented), 3.2 (policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and the oversight of AI systems), 4.1 (organisational policies and practices foster a critical-thinking, safety-first mindset), and 6.1 (policies and procedures address AI risks associated with third-party entities, software, and data).

A practitioner’s GOVERN checklist on a prompt-configured feature asks: has a policy been written that describes how prompt-configured features are authored, reviewed, and deployed in this organisation? Are the roles (author, reviewer, owner, security, governance, incident-response) defined for this feature? Does the organisation have a standing relationship with the model provider (or with the team managing the self-hosted model) that addresses the provider’s responsibilities and the organisation’s own? Has the feature’s risk tier been assessed against the organisation’s risk tolerance, and is that assessment recorded?
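A governance record can make those questions auditable. The sketch below is a minimal illustration, assuming hypothetical field names; it is not a prescribed NIST artefact, merely one way to hold the answers so a review meeting can check them mechanically.

```python
from dataclasses import dataclass

REQUIRED_ROLES = {"author", "reviewer", "owner", "security", "governance", "incident-response"}

@dataclass
class GovernanceRecord:
    """The answers to the GOVERN checklist for one prompt-configured feature."""
    feature_id: str
    policy_ref: str           # authoring/review/deployment policy (GOVERN 1.1)
    roles: dict[str, str]     # role name -> accountable person (GOVERN 2.1, 3.2)
    provider_agreement: str   # contract or internal SLA reference (GOVERN 6.1)
    risk_tier: str            # e.g. "low", "medium", "high"
    risk_assessment_ref: str  # where the recorded assessment lives

def governance_gaps(record: GovernanceRecord) -> list[str]:
    """Return the unanswered GOVERN questions, ready for a review meeting."""
    gaps = []
    missing = REQUIRED_ROLES - record.roles.keys()
    if missing:
        gaps.append(f"roles undefined: {sorted(missing)}")
    if not record.policy_ref:
        gaps.append("no authoring/review/deployment policy referenced")
    if not record.provider_agreement:
        gaps.append("no provider relationship documented")
    if not record.risk_assessment_ref:
        gaps.append("risk-tier assessment not recorded")
    return gaps
```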

GOVERN is distinct from MEASURE. A feature can be excellently measured (Article 8’s harness) and poorly governed (no policy, unclear ownership, no incident process). A feature can also be well-governed and poorly measured (clear policies, no actual quality data). A practitioner needs both.

ISO/IEC 42001 — documentation and management

ISO/IEC 42001:2023 is the international management-system standard for artificial intelligence.[4] It follows the pattern of ISO 9001 and ISO 27001: a set of management-system requirements an organisation can certify against. Clause 7.5 addresses documented information: the organisation’s AI management system must include documented information required by the standard and that the organisation determines as necessary for the effectiveness of the system.

For a prompt-configured feature, the documented information that is pertinent includes: the feature’s intended use and boundary; the model, retrieval source, and guardrail configuration that constitute the feature; the prompt registry entry; the evaluation harness design and results; the approval records for each change; the incident log; and the decisions taken in response to incidents or to monitoring findings. ISO 42001 does not mandate the specific format; it mandates that the information exists, is current, and is accessible to the people who need it.
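Because Clause 7.5 cares about existence, currency, and accessibility rather than format, a completeness check can be as simple as the sketch below. The document keys are lifted from the list above and are illustrative, not an ISO-mandated schema.

```python
REQUIRED_DOCUMENTS = [
    "intended_use_and_boundary",
    "model_retrieval_and_guardrail_configuration",
    "prompt_registry_entry",
    "evaluation_harness_design_and_results",
    "change_approval_records",
    "incident_log",
]

def missing_documented_information(bundle: dict[str, str]) -> list[str]:
    # Clause 7.5 mandates that the information exists, is current, and is
    # accessible; this check covers only existence. Currency and access
    # need their own checks (e.g. last-updated dates, permissions).
    return [doc for doc in REQUIRED_DOCUMENTS if not bundle.get(doc)]
```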

Clause 8.1 on operational planning and control, Clause 9.1 on monitoring, and Clause 10.2 on nonconformity and corrective action together frame the operational discipline around a prompt-configured feature. A feature that passes the documentation check but has no evidence of monitoring or of corrective action in response to incidents is non-conformant; a feature that has monitoring but no documentation is equally non-conformant. The three must move together.

A California note

The United States does not have a federal equivalent to the EU AI Act. It has a patchwork of state laws and executive actions. California has advanced farthest. AB 2013, enacted in 2024, requires developers of generative AI systems offered to Californians to publish high-level summaries of the data used to train the systems.[5] SB 942, the California AI Transparency Act, requires certain providers to offer a free AI-detection tool, to embed latent disclosures in AI-generated content, and to give users the option of a manifest disclosure.[6] Practitioners whose features reach California users should know these duties and route them to the teams responsible for compliance; the duties sit adjacent to Article 50’s disclosure obligations, and careful design can often satisfy both.

An Italian enforcement illustration

The Italian data-protection authority, Garante per la protezione dei dati personali, fined OpenAI €15 million in December 2024 following its 2023 investigation of ChatGPT.[7] The decision is instructive because the grounds included lawful basis for processing, transparency to users, and measures to protect minors. The transparency finding cited inadequate disclosure of the data-processing operations to users at the relevant moments. The practitioner’s takeaway is not that OpenAI acted unusually badly; the practitioner’s takeaway is that a regulator applied transparency duties to a generative-AI feature in a way that produced an eight-figure fine, and that the legal instrument was pre-AI-Act GDPR transparency duties, not a new AI-specific statute. The perimeter was already there; regulators applied it.

[DIAGRAM: Bridge — aitm-pew-article-10-regulation-to-controls — Left: regulatory instruments (EU AI Act Article 50, NIST AI RMF GOVERN, ISO 42001 Clause 7.5, California AB 2013 and SB 942, GDPR transparency); right: concrete controls (disclosure banner, watermark, audit log, change log, model card, training-data summary); bridge beams map regulation to control.]

The four Article 50 scenarios

Article 50 produces four scenarios that a practitioner can hold in mind as a mental model.

  1. Disclosed AI, aware user: the user interacts with a system and is informed of the AI nature of the interaction. This is the compliant baseline.
  2. Undisclosed AI, aware user: the user is not informed because the AI nature is obvious. This is the narrow exception, applied carefully.
  3. Undisclosed AI, unaware user: the user is not informed because the disclosure was omitted. This is non-compliance.
  4. Undisclosed AI content, unaware consumer: AI-generated content reaches a user without labelling. This is the content-marking gap that Article 50(2) and 50(4) address.

[DIAGRAM: Matrix — aitm-pew-article-10-transparency-matrix — 2x2: user informed (yes/no) on one axis, AI nature obvious (yes/no) on the other; cells label the four Article 50 scenarios with compliance status.]

The practitioner’s discipline is to aim explicitly at the first cell (disclosed AI, aware user) as the default, fall back to the second cell only with a written rationale, and never ship a feature in the third or fourth cell.
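The matrix is small enough to encode directly. The function below is a sketch covering the interaction cells only (content marking under Article 50(2) and 50(4) needs a separate check); the return strings are illustrative labels, not legal conclusions.

```python
def article50_interaction_status(disclosed: bool, ai_nature_obvious: bool) -> str:
    """Classify an interaction against the Article 50 transparency matrix."""
    if disclosed:
        return "compliant baseline: disclosed AI, aware user"
    if ai_nature_obvious:
        # The narrow Article 50(1) exception: keep a written rationale.
        return "exception: undisclosed AI, aware user (written rationale required)"
    return "non-compliant: undisclosed AI, unaware user"
```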

Jurisdictional sweep

The regulatory perimeter extends beyond the EU and the United States. The United Kingdom has taken a principles-based approach under its pro-innovation AI framework, assigning AI regulation across existing sector regulators rather than creating a new one; features operating in the UK still face duties under data-protection, consumer-protection, and sector-specific regulation. Singapore’s Model AI Governance Framework and AI Verify toolkit establish voluntary standards that many organisations adopt to signal maturity. Canada’s proposed Artificial Intelligence and Data Act lapsed with the bill that carried it, but the Office of the Privacy Commissioner has already applied PIPEDA to generative-AI features in several investigations. Brazil’s AI legal framework has advanced through the Senate. Japan’s AI Guidelines for Business, issued jointly by the Ministry of Economy, Trade and Industry and the Ministry of Internal Affairs and Communications, take a soft-law approach that nonetheless shapes industry practice.

The practitioner’s task is not to master each of these jurisdictions; it is to know which apply to a given feature and to route the compliance work to the teams that can handle it. A feature reaching users in several jurisdictions is, in practice, a feature whose disclosure, evaluation, and documentation need to satisfy the strictest applicable regime across all users, because segmenting behaviour by jurisdiction is often more expensive than meeting the strictest uniformly.
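One way to operationalise the strictest-regime reasoning is sketched below. The numeric ordering is purely illustrative; strictness is a per-feature legal judgment made with counsel, not a constant lookup table, and the jurisdiction keys are hypothetical.

```python
# Illustrative ordering only: a real ranking is a per-feature legal judgment.
ILLUSTRATIVE_STRICTNESS = {
    "eu": 3,          # EU AI Act Article 50 plus GDPR transparency
    "california": 2,  # AB 2013 and SB 942
    "uk": 1,          # sector-regulator duties
}

def regime_to_satisfy(user_jurisdictions: set[str]) -> str:
    # Segmenting behaviour per jurisdiction is often more expensive than
    # meeting the strictest applicable regime uniformly for all users.
    return max(user_jurisdictions, key=lambda j: ILLUSTRATIVE_STRICTNESS.get(j, 0))
```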

Documentation as a deliverable, not an afterthought

A recurring theme across the three instruments is that documentation is a deliverable. The EU AI Act requires technical documentation for high-risk systems under Article 11 and Annex IV.[8] NIST AI RMF’s GOVERN demands that policies, procedures, and decisions be documented and accessible. ISO 42001 Clause 7.5 specifies documented information as a management-system element. None of the three treats documentation as optional.

A practitioner who has shipped the prompt, the retrieval pipeline, the guardrails, and the evaluation harness but has not written the corresponding documentation has built two-thirds of a compliant feature and one-third of a liability. The registry entry from Article 9, the evaluation plan from Article 8, the test-case sets and their results, the change records, the incident log, and the regulatory-posture section of the registry together constitute the documentation. They are produced as part of the work, not collated at the end.

A practitioner-sized checklist

For any prompt-configured feature the practitioner is shipping:

  1. Disclosure that the user is interacting with an AI system is present at the first meaningful interaction.
  2. Content the feature generates is marked, where technically feasible, in a form a downstream detector can read.
  3. The feature’s risk tier has been assessed against the organisation’s risk tolerance, and the assessment is recorded.
  4. The feature’s prompt registry entry, evaluation results, and guardrail configuration are linked and accessible to reviewers.
  5. The feature’s incident-response runbook names the owner, the escalation path, and the communication duties.
  6. For features reaching California users, training-data summary and AI-detection tool duties (AB 2013, SB 942) have been routed to the compliance team.
  7. For features reaching EU users, the Article 50 disclosure has been reviewed against the Article 50(1) obviousness exception.
  8. Documented information under ISO 42001 Clause 7.5 (intended use, model and retrieval configuration, registry entry, harness design and results, change approvals, incident log) is present.

A checklist of this size is not exhaustive. It is a practitioner’s baseline. A higher-risk feature adds items (data-protection impact assessment, fundamental-rights impact assessment under Article 27 for high-risk AI systems, vendor contract review). A lower-risk feature might not need every item, but omissions are decided in writing, not by forgetting.
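A checklist only keeps teams honest if each item carries evidence or a written waiver. The sketch below is one minimal way to enforce the "decided in writing, not by forgetting" rule; the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    number: int
    description: str
    evidence_ref: str | None         # link to the artefact proving the control
    waived_in_writing: bool = False  # a recorded decision to omit the item

def review_findings(items: list[ChecklistItem]) -> list[str]:
    # An item passes with evidence or with a written waiver; silence fails.
    return [
        f"item {item.number} has neither evidence nor a written waiver"
        for item in items
        if item.evidence_ref is None and not item.waived_in_writing
    ]
```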

From checklist to practice

A checklist is a starting point; the practice is producing the evidence against the checklist on an ongoing basis. Three disciplines keep the practice honest.

The first is periodic review. The checklist is not applied once at launch and forgotten. Every release, and at minimum every quarter, the checklist is re-applied to verify that the controls remain in place, the disclosures remain visible, the documentation remains current, and the regulatory landscape has not changed in ways that invalidate a prior conclusion.

The second is incident-driven update. When an incident occurs, the checklist item that would have prevented or detected the incident is reviewed; if the item was present but ineffective, the item is strengthened; if the item was absent, it is added. The adversarial probe set, the harness, the disclosure wording, and the runbook each evolve under this discipline.

The third is regulatory-horizon scanning. The regulatory landscape is moving. A team assigns an owner for horizon-scanning, whose job is to track the EU AI Act’s implementing acts and guidance, NIST’s ongoing AI RMF publications, ISO’s new AI-related standards, and relevant national laws, and to surface changes that affect the feature. The owner does not need to be a lawyer; the owner needs to know when to bring in a lawyer.
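The first discipline, periodic review, is easy to automate at the scheduling level. The sketch below assumes a hypothetical record of last-review dates per feature and flags features overdue for the quarterly re-application of the checklist.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # "at minimum every quarter"

def reviews_overdue(last_reviewed: dict[str, date], today: date) -> list[str]:
    # Return the feature IDs whose checklist has not been re-applied
    # within the review interval.
    return [
        feature_id
        for feature_id, reviewed_on in last_reviewed.items()
        if today - reviewed_on > REVIEW_INTERVAL
    ]
```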

Summary

A prompt-configured feature is a regulated artefact. EU AI Act Article 50 governs transparency in interaction and content marking. NIST AI RMF’s GOVERN function demands clear policies, roles, and accountability around prompt-driven decisions. ISO 42001 Clause 7.5 demands documented information sufficient to demonstrate the system works as declared. California’s AB 2013 and SB 942 add specific US duties; the Italian Garante decision shows that transparency duties can carry eight-figure consequences. A practitioner who can name the instruments, apply the checklist, and produce the evidence has graduated from prompt engineering as a craft into prompt engineering as an enterprise discipline. The capstone for this credential tests exactly that graduation.

Further reading in the Core Stream: Ethical Foundations of Enterprise AI and Produce: Executing the Transformation.



© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.

Footnotes

  1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 (Artificial Intelligence Act), Article 50. https://eur-lex.europa.eu/eli/reg/2024/1689/oj — accessed 2026-04-19.

  2. NIST AI Risk Management Framework 1.0. National Institute of Standards and Technology, January 2023. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf — accessed 2026-04-19.

  3. Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, NIST AI 600-1, July 2024. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf — accessed 2026-04-19.

  4. ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system. International Organization for Standardization. https://www.iso.org/standard/81230.html — accessed 2026-04-19.

  5. California AB 2013 (Generative Artificial Intelligence: Training Data Transparency). California Legislature, 2024. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2013 — accessed 2026-04-19.

  6. California SB 942 (California AI Transparency Act). California Legislature, 2024. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB942 — accessed 2026-04-19.

  7. Provvedimento del 30 dicembre 2024 (n. 755), sanction against OpenAI. Garante per la protezione dei dati personali. https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/10085455 — accessed 2026-04-19.

  8. Regulation (EU) 2024/1689, Article 11 and Annex IV (technical documentation for high-risk AI systems). https://eur-lex.europa.eu/eli/reg/2024/1689/oj — accessed 2026-04-19.