
Prompt Engineering

The practice of designing and optimising inputs to AI models to elicit accurate, useful, and consistently structured outputs.

What is Prompt Engineering?

Prompt engineering is the craft of writing instructions (prompts) that guide large language models toward producing desired outputs. It sits at the intersection of technical writing, domain expertise, and an understanding of how LLMs process and generate text. A well-engineered prompt can be the difference between a vague, generic response and a precise, actionable answer.

Core Techniques

  • System prompts — Define the model's role, constraints, and output format upfront
  • Few-shot examples — Include input/output pairs that demonstrate the expected pattern
  • Chain of thought — Ask the model to reason step-by-step before answering
  • Output schemas — Specify JSON or structured formats to ensure parseable responses
  • Negative constraints — Explicitly state what the model should not do
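Several of the techniques above can be combined in a single prompt. The sketch below assembles a system prompt, few-shot examples, an output schema, and a negative constraint into the message format used by common chat-completion APIs; the classifier task, the `build_messages` helper, and the schema are illustrative assumptions, not part of any specific product.

```python
import json

# A minimal sketch combining core techniques: a system prompt with an
# output schema and a negative constraint, plus few-shot examples.
# The message shape mirrors common chat APIs; adapt it to your provider.

SYSTEM_PROMPT = (
    "You are a support-ticket classifier. "
    "Respond ONLY with JSON matching: "
    '{"category": "billing" | "bug" | "feature", "urgency": 1-5}. '
    "Do not add explanations or extra keys."  # negative constraint
)

FEW_SHOT = [  # input/output pairs demonstrating the expected pattern
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": '{"category": "billing", "urgency": 4}'},
    {"role": "user", "content": "The export button does nothing."},
    {"role": "assistant", "content": '{"category": "bug", "urgency": 3}'},
]

def build_messages(ticket: str) -> list[dict]:
    """Assemble the full prompt for a new ticket."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        *FEW_SHOT,
        {"role": "user", "content": ticket},
    ]

print(json.dumps(build_messages("Please add dark mode."), indent=2))
```

Because the few-shot answers are themselves valid instances of the schema, the model sees the format demonstrated rather than just described, which tends to make the output more reliably parseable.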

Beyond the Basics

Advanced prompt engineering involves prompt chaining (breaking complex tasks into sequential steps), self-consistency (generating multiple answers and selecting the most common), and tool use (teaching models to call external functions when they need real data).
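Self-consistency is simple enough to sketch in a few lines: sample several answers to the same question and keep the most common one. The `ask_model` callable below is a stand-in for a real LLM call sampled at a nonzero temperature; the stub used in the example is purely illustrative.

```python
from collections import Counter

# A minimal sketch of self-consistency: generate multiple answers and
# select the most common. `ask_model` stands in for a real LLM call.

def self_consistent_answer(ask_model, question: str, n_samples: int = 5) -> str:
    """Return the majority answer across n_samples independent generations."""
    answers = [ask_model(question) for _ in range(n_samples)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Usage with a stubbed model that answers correctly 3 times out of 5:
fake_outputs = iter(["42", "41", "42", "42", "17"])
answer = self_consistent_answer(lambda q: next(fake_outputs), "6 * 7 = ?")
print(answer)  # prints: 42
```

Majority voting only helps when errors are scattered: if the model makes the same mistake every time, every sample agrees and the vote simply ratifies it.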

The rise of agentic AI has made prompt engineering even more critical—agent system prompts must handle delegation, error recovery, and multi-step reasoning reliably.
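An agent's tool-use and error-recovery behaviour can be sketched as a loop: ask the model for its next action, execute the requested tool, feed the result (or the error) back, and repeat until the model produces a final answer. Everything here — `call_model`, the `TOOLS` registry, and the action format — is a hypothetical stand-in, not a real framework's API.

```python
import json

# A minimal sketch of an agent loop with tool use and error recovery.
# `call_model` returns either {"tool": ..., "args": ...} or {"final": ...}.

TOOLS = {  # hypothetical tool registry
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def run_agent(call_model, task: str, max_steps: int = 5):
    """Loop until the model emits a final answer or the step budget runs out."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)
        history.append({"role": "assistant", "content": json.dumps(action)})
        if "final" in action:
            return action["final"]
        tool = TOOLS.get(action.get("tool"))
        if tool is None:
            # Error recovery: report the bad call back instead of crashing,
            # so the model can try a different action on the next step.
            history.append({"role": "tool", "content": "error: unknown tool"})
            continue
        result = tool(**action.get("args", {}))
        history.append({"role": "tool", "content": json.dumps(result)})
    return None  # gave up after max_steps

# Usage with a scripted model: one tool call, then a final answer.
script = iter([
    {"tool": "lookup_order", "args": {"order_id": "A1"}},
    {"final": "Order A1 has shipped."},
])
print(run_agent(lambda h: next(script), "Where is order A1?"))
# prints: Order A1 has shipped.
```

The step budget and the "unknown tool" branch are the reliability points the surrounding text refers to: without them, a single malformed action or a looping model stalls the whole agent.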

The Blue Note Logic Perspective

We treat prompt engineering as a first-class engineering discipline, not an afterthought. Every CorpusAI deployment includes carefully versioned system prompts, tested against evaluation suites just like code. Our rule of thumb: if you're considering fine-tuning, first spend a week on prompt engineering. In our experience, 80% of “the model can't do this” problems are actually “we haven't told the model what we want” problems.
