
Prompts in LangChain | Generative AI using LangChain | Video 4 | CampusX

CampusX
5 min read

Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Temperature near 0 makes LLM outputs repeatable for the same input; higher temperature (e.g., ~1.5) increases variation and creativity.

Briefing

LangChain prompts are the control layer that determines what an LLM produces, and the practical way to make that control reliable is to stop asking end users to type full prompts. Instead, the system should collect structured inputs (like paper title, explanation style, and length) and then assemble the final prompt using LangChain’s prompt templates. The transcript also corrects a common misunderstanding about the LLM “temperature” parameter: setting temperature to 0 (or near 0) makes outputs repeatable for the same input, while higher values (around 1.5 in the example) increase variation and creativity.

After a quick recap of earlier LangChain components—especially models—the walkthrough shifts into what “prompts” really mean in practice. Prompts are simply the messages sent to an LLM: text, images, audio, or video can all be used depending on the model. For the current lesson, the focus stays on text-based prompts, which remain the most common approach. Because LLM outputs are highly sensitive to prompt wording, prompt engineering has become its own job category, and LangChain provides tools to design prompts more safely and consistently.

A major distinction follows: static prompts versus dynamic prompts. Static prompting means the user writes the entire instruction (for example, “Summarize this research paper in five lines”). That approach is fragile—misspellings, wrong paper names, or even small wording changes can lead to unpredictable or undesirable results. The transcript argues that real applications need consistency across users, so the better pattern is dynamic prompting: a template is created once, and runtime variables fill in the blanks.

To demonstrate, the lesson builds a small Streamlit “research assistant” app. The UI collects three structured inputs: a paper selection (via dropdown), an explanation style (e.g., code-heavy, math-heavy, simple), and a summary length (short/medium/long). A LangChain PromptTemplate then injects these values into a prewritten template that also enforces requirements like including relevant mathematical equations when present, using intuitive explanations, and ensuring the output matches the requested style and length. The transcript further justifies PromptTemplate over raw f-strings: templates provide built-in validation (catching missing or extra placeholders early) and improve reusability by allowing templates to be saved and loaded from separate files.
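
A minimal sketch of that app, assuming the streamlit, langchain-core, and langchain-openai packages; the paper titles, template wording, and model choice here are illustrative, not the transcript's exact code:

    import streamlit as st
    from langchain_core.prompts import PromptTemplate
    from langchain_openai import ChatOpenAI  # assumed model wrapper; any chat model works

    # Structured inputs instead of a free-form prompt box
    paper = st.selectbox("Research paper", ["Attention Is All You Need", "BERT", "GPT-3"])
    style = st.selectbox("Explanation style", ["Simple", "Math-heavy", "Code-heavy"])
    length = st.selectbox("Summary length", ["Short", "Medium", "Long"])

    # Reusable template; the three placeholders are filled at runtime
    template = PromptTemplate(
        template=(
            "Summarize the research paper '{paper}' in a {style} style with a {length} length. "
            "Include relevant mathematical equations if present and use intuitive explanations."
        ),
        input_variables=["paper", "style", "length"],
    )

    if st.button("Summarize"):
        model = ChatOpenAI()
        prompt = template.invoke({"paper": paper, "style": style, "length": length})
        st.write(model.invoke(prompt).content)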

Next comes multi-turn chat. A simple console chatbot initially fails on follow-up questions because it sends only the latest user message, so a request like “multiply the bigger number by 10” can’t be resolved. The fix is to maintain chat history and send the full conversation each time. The transcript then introduces LangChain’s message types—SystemMessage, HumanMessage, and AIMessage—and shows how labeling each message solves ambiguity as the conversation grows.
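
A rough sketch of the fixed chat loop, assuming the langchain-core message types and an OpenAI chat model (the model choice is an assumption):

    from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
    from langchain_openai import ChatOpenAI  # assumed model; any chat model works

    model = ChatOpenAI()
    # Keep the whole conversation so a follow-up like
    # "multiply the bigger number by 10" still has context.
    chat_history = [SystemMessage(content="You are a helpful assistant.")]

    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            break
        chat_history.append(HumanMessage(content=user_input))
        result = model.invoke(chat_history)  # send the full history, not just the last turn
        chat_history.append(AIMessage(content=result.content))
        print("AI:", result.content)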

Finally, the lesson extends the same template philosophy to more advanced prompt construction: ChatPromptTemplate for dynamic sets of messages, and MessagesPlaceholder for inserting stored chat history into a prompt at runtime. The result is a blueprint for building LLM applications that remain consistent, context-aware, and maintainable as conversations and requirements scale.
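
One way this looks in code, assuming langchain-core's ChatPromptTemplate and MessagesPlaceholder; the support-agent scenario and stored history are illustrative:

    from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
    from langchain_core.messages import HumanMessage, AIMessage

    chat_template = ChatPromptTemplate.from_messages([
        ("system", "You are a helpful customer support agent."),
        MessagesPlaceholder(variable_name="chat_history"),  # stored history is dropped in here
        ("human", "{query}"),
    ])

    # Previously stored conversation, e.g. loaded from a file or database
    chat_history = [
        HumanMessage(content="I want a refund for my last order."),
        AIMessage(content="Your refund request has been received."),
    ]

    prompt = chat_template.invoke({"chat_history": chat_history, "query": "Where is my refund?"})
    print(prompt.messages)  # system message + history + latest question, ready to send to a model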

Cornell Notes

Prompts are the messages sent to an LLM, and small changes can strongly affect outputs—so production apps should avoid “static prompts” where users type the whole instruction. Instead, the transcript recommends “dynamic prompts” built with LangChain PromptTemplate: collect structured inputs (e.g., paper title, style, length) and fill a reusable template at runtime. It also corrects temperature usage: temperature near 0 yields repeatable outputs for the same input, while higher values (like ~1.5) increase creativity and variation. For chatbots, maintaining context requires sending chat history each turn, and LangChain’s SystemMessage, HumanMessage, and AIMessage types help the model understand who said what. MessagesPlaceholder and ChatPromptTemplate extend this pattern to insert stored history into prompts cleanly.

Why does temperature matter, and what does temperature=0 vs ~1.5 change in practice?

Temperature controls randomness in the LLM’s generation. With temperature set to 0 (or near 0), the same input tends to produce the same output on repeated runs. When temperature is increased (the transcript demonstrates 1.5), the output becomes more varied and creative even if the input prompt stays identical.
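
For illustration, the contrast can be reproduced with something like the following, assuming the langchain-openai wrapper (the model name and prompt are assumptions):

    from langchain_openai import ChatOpenAI

    deterministic = ChatOpenAI(model="gpt-4o-mini", temperature=0)    # same input -> (nearly) same output
    creative = ChatOpenAI(model="gpt-4o-mini", temperature=1.5)       # same input -> varied, more creative output

    prompt = "Write a one-line tagline for a coffee shop."
    print(deterministic.invoke(prompt).content)  # stable across repeated runs
    print(creative.invoke(prompt).content)       # changes noticeably between runs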

What’s the core problem with static prompts in real LLM apps?

Static prompting forces users to write the full instruction. That creates high variability: misspellings, wrong paper names, or even small wording differences can lead to different—and sometimes undesirable—LLM outputs. The transcript frames this as a consistency problem: applications typically need a stable user experience, not “whatever the user typed.”

How does dynamic prompting with PromptTemplate improve reliability?

Dynamic prompting uses a predefined template with placeholders. The app collects structured inputs via UI (e.g., dropdown for paper, dropdown for style, dropdown for length) and then fills the template at runtime. This reduces user error and keeps output formatting consistent. The transcript also highlights PromptTemplate’s validation: missing placeholders or extra variables can trigger errors during development rather than failing unpredictably at runtime.
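
A small sketch of the validation and reuse points, assuming langchain-core's PromptTemplate, its save method, and the load_prompt helper (the file name and template text are illustrative):

    from langchain_core.prompts import PromptTemplate, load_prompt

    template = PromptTemplate(
        template="Summarize the paper '{paper}' in a {style} style.",
        input_variables=["paper", "style"],
        validate_template=True,  # raises early if placeholders and input_variables disagree
    )

    template.save("summary_prompt.json")        # store the template for reuse across scripts
    reloaded = load_prompt("summary_prompt.json")
    print(reloaded.format(paper="Attention Is All You Need", style="simple"))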

Why did the first chat bot fail on follow-up questions, and how was it fixed?

The initial bot sent only the latest user message, so it lacked conversational context. When the user asked a follow-up like “multiply the bigger number by 10,” the model couldn’t know which numbers were discussed earlier. The fix was to maintain chat history and send the full list of prior messages (or the relevant history) with each new query.

How do SystemMessage, HumanMessage, and AIMessage help in multi-turn conversations?

They label each message by role: SystemMessage sets global instructions (e.g., “You are a helpful assistant”), HumanMessage represents user inputs, and AIMessage represents model replies. When chat history grows, these labels let the LLM distinguish who said what, preventing confusion that would otherwise arise if all messages were stored without role metadata.

What do ChatPromptTemplate and MessagesPlaceholder add beyond basic PromptTemplate?

ChatPromptTemplate supports building prompts that consist of multiple message objects (useful for multi-turn flows). MessagesPlaceholder is a special placeholder used inside a chat prompt to inject a list of stored messages (like chat history) at runtime—so the model receives prior context automatically when generating the next response.

Review Questions

  1. In your own words, contrast static prompts and dynamic prompts, and give one concrete failure mode for static prompting.
  2. What changes in a chatbot’s behavior when it starts sending full chat history instead of only the latest user message?
  3. How do SystemMessage, HumanMessage, and AIMessage roles affect the model’s understanding of a long conversation?

Key Points

  1. Temperature near 0 makes LLM outputs repeatable for the same input; higher temperature (e.g., ~1.5) increases variation and creativity.
  2. Prompts are the actual messages sent to an LLM; for text-focused apps, the prompt is typically the user instruction plus any formatting constraints.
  3. Static prompts are fragile because users control the full wording; dynamic prompting uses templates with runtime variables to enforce consistency.
  4. PromptTemplate provides placeholder validation and reusability (templates can be saved/loaded), reducing runtime surprises compared with ad-hoc string building.
  5. Multi-turn chat requires maintaining and sending chat history; otherwise follow-up questions lose the context needed to answer correctly.
  6. LangChain’s SystemMessage, HumanMessage, and AIMessage types label conversation turns so the model can reliably interpret who said what.
  7. MessagesPlaceholder enables injecting stored message lists (like chat history) into a chat prompt at runtime without manually concatenating everything.

Highlights

Temperature=0 yields the same output for the same input; raising temperature (the transcript uses 1.5) produces noticeably different, more creative responses.
Static prompts put too much control in the user’s hands; dynamic prompts shift control to a reusable template filled with structured inputs.
A chat bot that doesn’t send prior turns can’t resolve follow-ups—maintaining chat history fixes that.
Role-labeled messages (SystemMessage/HumanMessage/AIMessage) prevent confusion as conversations grow.
MessagesPlaceholder is the mechanism for inserting stored chat history into a prompt cleanly at runtime.
