COMPEL Glossary / prompt-engineering

Prompt Engineering

Prompt engineering is the practice of designing and refining the text inputs (prompts) given to a large language model to produce desired outputs.

What this means in practice

Effective prompt engineering can dramatically improve the quality, accuracy, and relevance of AI-generated content without modifying the underlying model. Common techniques include providing clear instructions, supplying examples (few-shot prompting), assigning the model a role, and structuring complex tasks into explicit steps.
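These techniques can be combined in a single prompt. A minimal sketch of a few-shot prompt with a role assignment (the `build_prompt` helper and the example texts are illustrative only, not part of any COMPEL tooling or specific model API):

```python
def build_prompt(role, examples, task):
    """Assemble a prompt: role assignment, few-shot examples, then the new task."""
    lines = [f"You are {role}.", ""]
    # Each (input, output) pair is a worked example for the model to imitate.
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
        lines.append("")
    # End with the unlabeled input; the model completes the final "Output:".
    lines.append(f"Input: {task}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_prompt(
    role="a customer-feedback classifier that answers 'positive' or 'negative'",
    examples=[
        ("Great service, will definitely return!", "positive"),
        ("The order arrived two weeks late.", "negative"),
    ],
    task="The staff were friendly and helpful.",
)
print(prompt)
```

The resulting string would be sent to the model as-is; the two labeled pairs anchor the output format, so the model's completion is far more likely to be a bare `positive`/`negative` label than free-form prose.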

Why it matters

Prompt engineering has democratized AI, enabling domain experts without ML expertise to accomplish tasks that previously required specialized teams. However, this democratization increases governance risk when non-technical users deploy AI capabilities without appropriate oversight. Organizations must balance the productivity gains of accessible AI against governance controls that prevent misuse of sensitive data and deployment of unreliable AI outputs.

How COMPEL uses it

COMPEL's governance framework addresses prompt engineering through acceptable use policies and lightweight governance pathways for prompt-based applications, designed during the Model stage. The People pillar includes AI literacy training that covers effective prompt engineering practices. The Governance pillar defines risk-proportionate oversight requirements, ensuring that even democratized AI usage operates within governance boundaries.
