Inter-annotator agreement
A statistical measure of consistency between human labelers annotating the same items. Common statistics include Cohen's kappa (two annotators), Fleiss' kappa (more than two annotators, nominal labels), and Krippendorff's alpha (any measurement scale, tolerant of missing data).
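For the two-annotator case, Cohen's kappa corrects observed agreement for the agreement expected by chance. A minimal sketch, using only the standard library (the function name `cohens_kappa` is illustrative, not part of COMPEL):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labeling the same items.

    a, b: equal-length sequences of nominal labels, one per item.
    Returns (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each annotator's label frequencies.
    """
    if len(a) != len(b) or not a:
        raise ValueError("annotations must be non-empty and equal length")
    n = len(a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability both pick the same label independently.
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[lab] * cb[lab] for lab in set(a) | set(b)) / (n * n)
    if p_e == 1.0:
        return 1.0  # degenerate case: both annotators use a single identical label
    return (p_o - p_e) / (1 - p_e)
```

For example, annotators who agree on 3 of 4 binary labels with balanced-vs-skewed label use score well below their raw 75% agreement, which is the point of chance correction. In production pipelines, library implementations such as scikit-learn's `cohen_kappa_score` are the usual choice; the sketch above only shows the arithmetic.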
What this means in practice
Low IAA signals either ambiguous guidelines or insufficient annotator training; both are data-readiness gaps.
Synonyms
IAA, annotator agreement, labeler agreement
See also
- Data quality dimension — A measurable attribute of data integrity — accuracy, completeness, consistency, timeliness, validity, uniqueness, representativeness — used as a scoring axis in a data-readiness rubric.
- Fitness for purpose — The determination that a specific dataset is appropriate for a specific AI use case given the task, risk tier, and intended deployment context.
- Bias-relevant variable — A feature whose inclusion, exclusion, or proxy behavior affects fairness across protected groups — a direct sensitive attribute (race, gender) or an indirect proxy (postal code, device type).