Lesson 4 ยท Beginner

System Prompts and Instruction Hierarchy

Learn how system instructions, developer instructions, and user prompts interact to shape model behavior.

Read the explanation carefully, then review the examples and coding section. The goal is to understand both the concept and how it appears inside a real application workflow.

Explanation

Instruction hierarchy helps define which rules the model should prioritize.

A strong system prompt sets boundaries, tone, and output expectations for many requests.

Separating permanent instructions from user input improves consistency.

Why this topic matters in practice

In generative AI products, the model is only one part of the system. The surrounding workflow determines whether the output is useful, safe, and maintainable. This lesson matters because it helps you connect the idea to tasks such as tutoring, search, copilots, business assistants, and production automation.

Examples

Tutor mode

The system prompt can require simple explanations and examples in every answer.

Support mode

The system prompt can force escalation language when policy evidence is missing.

Structured outputs

A system prompt can instruct the model to return JSON-ready content.

Combining system and user instructions

The code below is intentionally concise so the underlying pattern stays clear. It focuses on the application logic you can reuse, even if you later switch model providers or deployment environments.

system_prompt = "You are a clear and careful tutor. Use simple explanations."
user_prompt = "Explain embeddings in one paragraph."

full_prompt = f"{system_prompt}\n\nUser: {user_prompt}"
print(full_prompt)

How the coding section works

  • Separating system and user instructions improves maintainability.
  • The system prompt often acts as your persistent product policy layer.
  • Applications should test system prompts like any other versioned asset.

Implementation advice

When turning this lesson into a real feature, think beyond the code snippet itself. Decide what inputs should be allowed, how you will validate outputs, how you will recover from errors, and how you will measure whether the feature is actually helping users. Those surrounding choices often determine whether an AI feature feels polished or unreliable.

Summary / key takeaways

  • Instruction hierarchy helps control behavior consistently.
  • System prompts define the stable contract of the application.
  • Good system prompts are clear, focused, and easy to revise.

Exercises

  1. Write a system prompt for an AI study coach.
  2. Why should permanent rules live outside the user prompt?
  3. Create a user prompt that works with the sample system prompt.