AITF M1.26-Art04 v1.0 Reviewed 2026-04-06 Open Access
AITF · Foundations

External Communications: AI Transparency to Customers


7 min read Article 4 of 4

This article describes the principal categories of external AI communications, the legal and ethical frameworks that constrain them, and the operational practices that prevent the most common failures: AI-washing, under-disclosure, and mismatched messaging across channels.

The Categories of External AI Communication

Five categories make up the external AI communications surface.

1. Product Disclosure

What customers are told about the AI capabilities of the products they use. Includes interface notices (“This response was generated by AI”), terms of service language, product documentation, and customer support content. The European Union AI Act Article 50 at https://artificialintelligenceact.eu/article/50/ codifies specific disclosure obligations, including the requirement that synthetic content be marked as such and that natural persons interacting with AI systems be informed.

2. Marketing Claims

How AI capabilities are positioned in advertising, sales materials, website content, and earnings communications. Marketing claims are subject to consumer protection law and, for public companies, securities law. The U.S. Securities and Exchange Commission has pursued multiple enforcement actions on so-called “AI washing” — material misstatements about AI capabilities — at https://www.sec.gov/news/press-release/2024-36.

3. Regulatory Disclosures

Filings, notifications, and responses to regulators. Includes general filings (EU AI Act conformity declarations, periodic reports) and event-driven filings (incident notifications, change control submissions). Each regulatory regime has specific requirements; departures from required form or content trigger findings.

4. Affected Person Notices

Notices to people whose data is used by AI systems or whose lives are affected by AI decisions. Includes privacy notices under the General Data Protection Regulation (GDPR), notices of automated decision-making under GDPR Article 22, and analogous notices under sectoral privacy and consumer protection rules.

5. Public Reporting

Sustainability reports, AI transparency reports, ethics reports, and other voluntary public disclosures. The Stanford Center for Research on Foundation Models has shown through the Foundation Model Transparency Index at https://crfm.stanford.edu/fmti/ that the substance and quality of voluntary AI reporting varies enormously, even as the practice spreads.

The Constraint Framework

External AI communications operate under multiple overlapping constraints.

Truthfulness. Statements must be accurate as of the time made and must remain accurate or be updated. Past statements that become misleading because of new facts can themselves create liability.

Materiality. For public companies, statements about AI must not mislead investors. The U.S. SEC guidance on cybersecurity disclosure at https://www.sec.gov/news/press-release/2023-139 established a materiality-based disclosure model that AI disclosure practice is now following.

Specificity over generality. Consumer protection law has consistently treated specific quantifiable claims as more enforceable than generic capability claims. “Reduces error rates by 30 percent” exposes the speaker more than “improves accuracy.”

Regulatory specifics. Sectoral regulators (financial, healthcare, transportation) have specific disclosure requirements. The U.S. Federal Trade Commission has issued multiple guidance pieces on AI claims at https://www.ftc.gov/business-guidance/blog that constrain marketing.

Privacy and confidentiality. External communications must not inadvertently disclose customer data, third-party intellectual property, or operational secrets that would compromise security.

Disclosure Patterns That Work

Several patterns have proven effective across organisations.

The Layered Disclosure

A short, prominent notice (“This response was generated by AI”) with a link to longer detail (how the system works, its limitations, what data informs it). The layered approach respects the user’s time while making depth available.

The Data Use Map

For privacy-relevant AI, a structured description of what data is used, for what purpose, on what legal basis, and with what choices the user has. The format originated in privacy notice design and has matured in AI-specific contexts.
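As a rough illustration, the map can live as structured data from which plain-language notice text is rendered, so that every channel draws from one source of truth. The schema below is a hypothetical sketch, not drawn from any standard:

```python
from dataclasses import dataclass

@dataclass
class DataUseEntry:
    # Field names are illustrative; a real schema would follow the
    # organisation's privacy notice design.
    data_category: str   # what data is used
    purpose: str         # for what purpose
    legal_basis: str     # on what legal basis
    user_choice: str     # what choices the user has

def render_notice(entries):
    """Render the map as plain-language notice lines, one per entry."""
    return "\n".join(
        f"We use {e.data_category} for {e.purpose} "
        f"(legal basis: {e.legal_basis}; your choices: {e.user_choice})."
        for e in entries
    )

data_use_map = [
    DataUseEntry("chat transcripts", "service quality monitoring",
                 "legitimate interest (GDPR Art. 6(1)(f))",
                 "opt out via account settings"),
    DataUseEntry("account email address", "service notifications",
                 "performance of a contract (GDPR Art. 6(1)(b))",
                 "required to provide the service"),
]

print(render_notice(data_use_map))
```

Keeping the map as data rather than prose makes it straightforward to check that the privacy notice, product documentation, and marketing all describe the same uses.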

The Capability and Limitation Pair

Marketing claims that name a capability paired with a limitation maintain credibility better than claims that name only the capability. “Generates first drafts in seconds; outputs require human review for accuracy” is more durable than “AI-powered content generation.”

The Periodic Transparency Report

Annual or semi-annual reports that disclose AI portfolio composition, governance structure, key incidents and remediation, and plans for the coming period. The U.S. NIST AI RMF at https://www.nist.gov/itl/ai-risk-management-framework recommends transparency reporting as a governance maturity indicator.

The Incident Statement Template

A pre-developed template for AI incident communications that can be adapted quickly when needed. Templates should be reviewed by legal, communications, and the AI governance function before adoption.
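A minimal sketch of such a template, with a fail-fast check so a statement cannot go out with a field left blank. The field names are illustrative assumptions; the actual template text would come out of the legal and communications review described above:

```python
from string import Template

# Hypothetical skeleton; real wording is drafted and pre-cleared by legal,
# communications, and the AI governance function.
INCIDENT_STATEMENT = Template(
    "On $date we identified an issue affecting $system. "
    "Impact: $impact. "
    "What we have done: $remediation. "
    "What affected users should do: $user_action. "
    "Contact: $contact."
)

REQUIRED_FIELDS = {"date", "system", "impact",
                   "remediation", "user_action", "contact"}

def draft_statement(fields):
    """Refuse to draft if any required field is missing, then fill the template."""
    missing = REQUIRED_FIELDS - fields.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return INCIDENT_STATEMENT.substitute(fields)
```

The point of the check is operational: under incident time pressure, a template that silently tolerates gaps produces exactly the mismatched messaging this article warns about.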

Coordination Across Channels

A perennial failure mode is mismatched messaging across channels: marketing says one thing, the privacy notice says another, the incident communications say a third. Coordination requires institutional infrastructure.

The AI message map. A central document maintained by AI governance and communications that captures the organisation’s authoritative position on AI capabilities, limitations, governance, and incidents. All channel-specific communications draw from the message map.

Pre-clearance for public statements. Major external statements about AI (press releases, earnings remarks, regulatory testimony) are pre-cleared by AI governance, legal, and communications.

Inventory of public claims. A register of every significant public statement the organisation has made about its AI, with date, channel, and current accuracy status. The register supports both refresh decisions and incident response.

Cross-functional channel reviews. Quarterly reviews where the AI governance function, legal, communications, and the relevant business owners review what has been said, what should be updated, and what new disclosures should be issued.

Specific Disclosure Topics

Generative AI in customer-facing channels. The fact that a chatbot is AI-powered, the limitations on its accuracy, the path to a human agent, and the data flows. Many jurisdictions are moving toward mandatory disclosure here.

Algorithmic decision-making in consequential contexts. Decisions about credit, employment, housing, education, and similar consequential domains carry specific notice requirements under existing law and emerging AI regulation.

Synthetic content. The Coalition for Content Provenance and Authenticity (C2PA) at https://c2pa.org/ has published technical standards for synthetic content marking; the EU AI Act Article 50 requires disclosure of AI-generated content in many contexts.

Foundation model dependencies. For consumer-facing services, disclosing which foundation model is in use is increasingly expected, both for transparency and so that user concerns can be routed to the right party.

Energy and environmental impact. Some jurisdictions and many voluntary frameworks expect disclosure of the environmental cost of AI services. The EU AI Act (Article 53 and Annex XI) requires general-purpose AI model providers to publish a summary of training content and to document energy consumption.

Common Failure Modes

The first is AI-washing — describing existing analytics, automation, or rule-based systems as “AI.” The U.S. Federal Trade Commission and Securities and Exchange Commission have both signalled enforcement attention here.

The second is aspirational marketing — describing capabilities the system might have in the future as if it has them now. Counter with strict gate review for marketing claims by AI governance.

The third is under-disclosure of consequential decision-making — failing to inform people that a consequential decision was AI-influenced. Counter with mandatory disclosure templates for decision categories.

The fourth is omission of incident communication — declining to communicate publicly about an AI incident in the hope that no one notices. Counter with documented thresholds that trigger external communication.

The fifth is over-disclosure that compromises security or privacy. Detail about model architecture, training data, or operational defences may itself be a security risk. Counter with security review of significant external AI documentation.
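The documented thresholds named as the counter to the fourth failure mode can be sketched as a simple predicate over incident attributes. The trigger names and values below are hypothetical; a real policy would set them with legal and communications and review them periodically:

```python
def requires_external_statement(incident):
    """Return True when any documented trigger for public communication is met.

    `incident` is a dict of attributes; unknown attributes default to the
    non-triggering value, so absence of data never suppresses a known trigger.
    """
    return (
        incident.get("customer_data_exposed", False)
        or incident.get("consequential_decisions_affected", 0) > 0
        or incident.get("regulator_notified", False)
        or incident.get("media_inquiries", 0) > 0
    )
```

Writing the thresholds down, even this crudely, removes the discretionary "hope no one notices" option: either a trigger is met and the organisation communicates, or the thresholds themselves are revised in the open.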

Looking Forward

Module 1.26 closes here. Module 1.27 turns to AI conformity assessment under the EU AI Act and the related compliance work that constitutes the formal regulatory layer. External communication is the public face of the organisation's AI; conformity assessment is the hidden architecture that makes that face credible.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.