
What is Prompt Engineering?

DataScienceChampion · 5 min read

Based on DataScienceChampion's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Prompt engineering improves language model outputs by specifying instructions and context that match the task’s real requirements.

Briefing

Prompt engineering is the practice of crafting the exact instructions and context given to a language model so its answers match what a user actually needs. The core insight is simple: vague requests produce vague results, while carefully specified prompts—paired with the right background information—steer the model toward more relevant, accurate, and usable outputs. That matters because language models can otherwise “fill in the blanks” using general knowledge rather than the information a task depends on.

The transcript first grounds the idea of a “prompt” in a real-world analogy: asking a knowledgeable friend in a coffee shop for “the best way to grow tomatoes” is the prompt, and the friend’s tailored advice is the model’s output. In the same way, the clarity and structure of a question strongly influence the quality of the response. From there, prompt engineering is framed as the difference between telling a librarian “please provide me a book to read” versus giving a more detailed request like “suggest motivational books from the latest bestsellers.” The second request narrows interpretation and makes the recommendation more likely to fit preferences.

Next comes the breakdown of prompt types. A system prompt (also called an instruction prompt) acts like a rulebook for the model: it sets the role, tone, and boundaries for how the assistant should behave. A user prompt is the actual request coming from the person interacting with the system. The transcript emphasizes that system prompts can include predefined instructions and context, such as how to greet users, what domain to operate in, and what to do when information is missing.
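
In OpenAI's chat format, which the transcript's implementation uses, this split appears as a list of role-tagged messages. A minimal sketch (the wording of each message is illustrative, not quoted from the video):

    # The system prompt is the rulebook; the user prompt is the request.
    messages = [
        {
            "role": "system",
            "content": "You are a polite library assistant. Help users "
                       "find books, and say so when information is missing.",
        },
        {
            "role": "user",
            "content": "Suggest motivational books from the latest bestsellers.",
        },
    ]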

A key example shows how system prompts can force the model to use provided context rather than relying on its own memory. With a context snippet like “Patty loves animals… has two cats and one dog,” a question about how many cats Patty has should be answered strictly from that context. This distinction becomes central to building reliable applications.
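
A sketch of how such a context-grounding instruction might be phrased (the instruction text is an assumption; the Patty facts follow the transcript):

    context = "Patty loves animals. She has two cats and one dog."
    question = "How many cats does Patty have?"

    # Tell the model to answer strictly from the supplied context,
    # not from its general knowledge.
    user_prompt = (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context: {context}\n\nQuestion: {question}"
    )
    # Expected answer, taken from the context: two cats.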

The practical workflow is described as iterative. Requirements get translated into prompts (system/instruction prompt plus user prompt and any additional context). The model generates results, those results are checked against the original requirement, and then prompts or context are modified until the output is “good enough.”
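
Sketched as a loop, the cycle might look like the following; meets_requirement is a hypothetical stand-in for the verification step (which the video performs by manual inspection), and ask_assistant is the helper sketched in the implementation below:

    def meets_requirement(output: str) -> bool:
        # Hypothetical automated check; in the video this is a person
        # comparing the output against the original requirement.
        return "shelf" in output.lower()

    system_prompt = "You are a library assistant. Answer from the provided book list."
    user_query = "Where can I find a book on motivation?"  # illustrative query

    for _ in range(3):  # bound the retries
        output = ask_assistant(system_prompt, user_query)
        if meets_requirement(output):
            break
        # Not good enough: tighten the instructions and retry the same query.
        system_prompt += " Always include the shelf number in your answer."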

To demonstrate, the transcript builds a digital assistant for a library. The system prompt instructs the assistant to greet politely, help users find books, and, if a requested book isn’t available, collect the user’s phone number and promise a callback. The additional context is a small library database containing book names, authors, and shelf locations (the transcript calls these “book self numbers”). When a user asks for motivational books, the assistant returns titles from that database; when asked where to find a specific book, it responds with the shelf location (e.g., shelf D1), matching the provided context.
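
A sketch of what that system prompt and in-context database could look like (the book titles, authors, and shelf labels here are illustrative, not quoted from the video):

    # Small in-context "database" the assistant must answer from.
    library_books = [
        {"name": "Atomic Habits", "author": "James Clear", "shelf": "D1"},
        {"name": "The Alchemist", "author": "Paulo Coelho", "shelf": "B3"},
        {"name": "Deep Work", "author": "Cal Newport", "shelf": "A2"},
    ]

    system_prompt = f"""You are a library digital assistant.
    - Greet users politely and help them find books.
    - Answer only from the book list below.
    - If a requested book is not in the list, collect the user's
      phone number and promise a callback when it becomes available.

    Book list: {library_books}"""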

Finally, the implementation is shown in a Jupyter notebook hosted on Google Colab, using OpenAI’s API with the GPT-3.5 Turbo model and a low temperature (0.2). A helper function sends a message list containing the system prompt and the current user input to the model, and a loop keeps accepting new queries so the assistant’s responses can be checked against the library-finding requirement.
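
A minimal runnable sketch of that setup, written against the current openai Python client (the video predates this client version, so names like ask_assistant are assumptions; system_prompt is the one sketched above):

    # pip install openai
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_API_KEY")  # replace with your own key

    def ask_assistant(system_prompt: str, user_input: str) -> str:
        """Send the system prompt plus the user's query; return the reply."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            temperature=0.2,  # low temperature keeps answers close to the context
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_input},
            ],
        )
        return response.choices[0].message.content

    # Keep accepting queries until the user quits.
    while True:
        query = input("Ask the library assistant (or type 'quit'): ")
        if query.strip().lower() == "quit":
            break
        print(ask_assistant(system_prompt, query))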

Cornell Notes

Prompt engineering is about designing the instructions and context sent to a language model so its responses align with a specific goal. The transcript distinguishes system (instruction) prompts—which set role, tone, and rules—from user prompts, which contain the actual request. A system prompt can also include context that the model must use instead of guessing from general knowledge. Building a useful application follows an iterative loop: generate results, verify them against requirements, then adjust prompts or context until the output is acceptable. The example implements a library digital assistant that uses a small book database (title, author, shelf location) and includes a fallback behavior to collect a phone number when a book is unavailable.

How does a “prompt” differ from the model’s output, and why does prompt clarity matter?

A prompt is the user’s question or instruction given to the model. In the coffee-shop analogy, asking “how do I grow tomatoes successfully” is the prompt, while the friend’s advice (soil, sunlight, watering) is the output. The transcript stresses that clearer, more specific prompts lead to more relevant and detailed responses because the model can better interpret what task is being requested.

What role does a system (instruction) prompt play compared with a user prompt?

A system prompt functions like a rulebook: it defines the assistant’s role, tone, scope, and response behavior. A user prompt is the visitor’s request (e.g., asking for books on gardening or renewable energy papers). The transcript also notes that system prompts can steer behavior within boundaries like professionalism or creativity.

How can system prompts force answers to come from provided context rather than “memory”?

The transcript uses a context example about Patty having two cats and one dog. When the user asks how many cats Patty has, the model should answer from the supplied context (“two cats”) rather than from its own general knowledge. This is achieved by instructing the model to answer the user’s query only from the given context.

Why is prompt engineering described as iterative?

Requirements are translated into prompts and context, the model generates results, and then those results are checked against what the user actually needs. If the output isn’t good enough, the system prompt, the in-context data, or other prompt elements are modified, and the same user query is tried again until the response matches the requirement.

How does the library assistant use system prompt instructions and a book database?

The system prompt tells the assistant to greet politely, help users find books, and, if a requested book isn’t available, collect the user’s phone number and say it will call back later. The additional context is a small database listing each book’s name, author, and shelf location (shelf number). When users ask for motivational books, the assistant pulls titles from that database; when users ask where a specific book is located, it returns the shelf location (e.g., shelf D1) from the stored context.

What implementation details are used in the example assistant?

The transcript shows a Google Colab notebook that installs and imports the OpenAI library, sets an API key, defines the system prompt and library data, and uses a helper function to call the OpenAI API. It uses the GPT-3.5 Turbo model with temperature set to 0.2, sends a message list containing the system prompt plus the user’s input, and prints the model’s response inside a loop that accepts new user queries.

Review Questions

  1. What are the practical differences between system prompts and user prompts, and how do they affect model behavior?
  2. In the library assistant example, what specific instructions and context are included in the system prompt, and how do they change the assistant’s responses?
  3. Why does the workflow require verification and prompt modification, and what kinds of prompt elements might be changed during iteration?

Key Points

  1. Prompt engineering improves language model outputs by specifying instructions and context that match the task’s real requirements.

  2. A system (instruction) prompt sets the assistant’s role, tone, and rules, while a user prompt contains the actual request.

  3. Clear, detailed user requests reduce ambiguity and make results more aligned with expectations.

  4. System prompts can require the model to answer from provided context rather than guessing from general knowledge.

  5. Prompt engineering is iterative: generate results, verify against requirements, then adjust prompts or context until the output is acceptable.

  6. The library assistant example uses a system prompt for behavior (greeting, fallback phone-number collection) and a book database for factual retrieval (title/author/shelf location).

  7. The implementation uses OpenAI’s API with GPT-3.5 Turbo and a low temperature (0.2) to keep responses more controlled.

Highlights

A vague request like “please provide me a book” can lead to mismatched recommendations, while a specific prompt like “motivational books from latest bestsellers” narrows the model’s interpretation.
System prompts act like a rulebook—defining role, tone, and what to do when information is missing—while user prompts supply the task request.
The library assistant’s accuracy comes from pairing instructions with a concrete in-context database of book names, authors, and shelf locations.
The workflow is iterative: prompts and context are adjusted after checking whether outputs meet the original requirement.
In the implementation, GPT-3.5 Turbo is called via a message list that includes the system prompt and the evolving user input, with temperature set to 0.2.
