Inter-annotator agreement

A chance-corrected statistical measure of consistency between human labelers annotating the same items — Cohen's kappa (two annotators), Fleiss' kappa (multiple annotators, nominal scale), Krippendorff's alpha (any measurement scale, tolerant of missing data).
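
A minimal sketch of the two-annotator case, assuming per-item labels are already aligned; the function name and sample labels are illustrative, not part of COMPEL. Cohen's kappa compares the observed agreement p_o against the agreement p_e expected by chance from each annotator's label frequencies alone:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same items."""
    if len(labels_a) != len(labels_b) or not labels_a:
        raise ValueError("need two equal-length, non-empty label sequences")
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: probability of agreeing by chance, from each
    # annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    if p_e == 1.0:
        return 1.0  # degenerate case: both annotators used one identical label throughout
    return (p_o - p_e) / (1 - p_e)

# Two annotators labeling the same ten items (illustrative data).
ann1 = ["spam", "ham", "ham", "spam", "ham", "spam", "ham", "ham", "spam", "ham"]
ann2 = ["spam", "ham", "spam", "spam", "ham", "spam", "ham", "spam", "spam", "ham"]
print(f"kappa = {cohens_kappa(ann1, ann2):.3f}")  # 8/10 raw agreement -> kappa ~ 0.615
```

For production use, established implementations exist for the statistics named above (e.g., sklearn.metrics.cohen_kappa_score for two annotators, statsmodels.stats.inter_rater.fleiss_kappa for the multi-annotator nominal case); the pure-Python version here is only meant to make the chance correction explicit.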

What this means in practice

Low IAA signals either ambiguous guidelines or insufficient annotator training; both are data-readiness gaps.
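
As a hedged illustration of how a readiness check might act on IAA, the sketch below gates on two thresholds; the 0.667/0.80 cut-offs echo Krippendorff's often-cited guidance for alpha and are assumptions to tune per task and risk tier, not COMPEL-mandated values.

```python
def iaa_readiness(agreement: float) -> str:
    """Map an agreement score to a readiness verdict.

    Thresholds are illustrative, echoing Krippendorff's often-cited
    guidance for alpha (>= 0.8 reliable, >= 0.667 tentative only).
    """
    if agreement >= 0.80:
        return "ready: agreement is acceptable"
    if agreement >= 0.667:
        return "tentative: tighten guidelines or add annotator training"
    return "not ready: revise guidelines and re-annotate a sample"

print(iaa_readiness(0.615))  # kappa from the example above -> "not ready: ..."
```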

Synonyms

IAA, annotator agreement, labeler agreement

See also

  • Data quality dimension — A measurable attribute of data integrity — accuracy, completeness, consistency, timeliness, validity, uniqueness, representativeness — used as a scoring axis in a data-readiness rubric.
  • Fitness for purpose — The determination that a specific dataset is appropriate for a specific AI use case given the task, risk tier, and intended deployment context.
  • Bias-relevant variable — A feature whose inclusion, exclusion, or proxy behavior affects fairness across protected groups — a direct sensitive attribute (race, gender) or an indirect proxy (postal code, device type).