Prompts in LangChain | Generative AI using LangChain | Video 4 | CampusX
Based on CampusX's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
LangChain prompts are the control layer that determines what an LLM produces, and the practical way to make that control reliable is to stop asking end users to type full prompts. Instead, the system should collect structured inputs (like paper title, explanation style, and length) and then assemble the final prompt using LangChain’s prompt templates. The transcript also corrects a common misunderstanding about the LLM “temperature” parameter: setting temperature to 0 (or near 0) makes outputs repeatable for the same input, while higher values (around 1.5 in the example) increase variation and creativity.
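The temperature effect can be illustrated with plain softmax math rather than an API call (a minimal sketch, not LangChain code): logits are divided by the temperature before softmax, so low temperatures sharpen the distribution toward a single token (repeatable output) while high temperatures flatten it (more variety).

```python
import math

def apply_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax.

    temperature -> 0 approaches argmax sampling (repeatable);
    higher temperature flattens the distribution (more creative/varied).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]
low = apply_temperature(logits, 0.1)   # near-deterministic: mass piles on token 0
high = apply_temperature(logits, 1.5)  # flatter: other tokens get real probability
```

At temperature 0.1 the top token's probability is effectively 1.0, which is why near-zero temperature yields repeatable outputs for the same input; at 1.5 the probability mass spreads across alternatives.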
After a quick recap of earlier LangChain components—especially models—the walkthrough shifts into what “prompts” really mean in practice. Prompts are simply the messages sent to an LLM: text, images, audio, or video can all be used depending on the model. For the current lesson, the focus stays on text-based prompts, which remain the most common approach. Because LLM outputs are highly sensitive to prompt wording, prompt engineering has become its own job category, and LangChain provides tools to design prompts more safely and consistently.
A major distinction follows: static prompts versus dynamic prompts. Static prompting means the user writes the entire instruction (for example, “Summarize this research paper in five lines”). That approach is fragile—misspellings, wrong paper names, or even small wording changes can lead to unpredictable or undesirable results. The transcript argues that real applications need consistency across users, so the better pattern is dynamic prompting: a template is created once, and runtime variables fill in the blanks.
To demonstrate, the lesson builds a small Streamlit “research assistant” app. The UI collects three structured inputs: a paper selection (via dropdown), an explanation style (e.g., code-heavy, math-heavy, simple), and a summary length (short/medium/long). A LangChain PromptTemplate then injects these values into a prewritten template that also enforces requirements like including relevant mathematical equations when present, using intuitive explanations, and ensuring the output matches the requested style and length. The transcript further justifies PromptTemplate over raw f-strings: templates provide built-in validation (catching missing or extra placeholders early) and improve reusability by allowing templates to be saved and loaded from separate files.
Next comes multi-turn chat. A simple console chatbot initially fails at context because it sends only the latest user message, so follow-up questions (like “multiply the bigger number by 10”) can’t be resolved. The fix is to maintain chat history and send the full conversation each time. The transcript then introduces LangChain’s message types—SystemMessage, HumanMessage, and AIMessage—and shows how labeling each message resolves ambiguity as the conversation grows.
Finally, the lesson extends the same template philosophy to more advanced prompt construction: ChatPromptTemplate for dynamic sets of messages, and MessagesPlaceholder for inserting stored chat history into a prompt at runtime. The result is a blueprint for building LLM applications that remain consistent, context-aware, and maintainable as conversations and requirements scale.
Cornell Notes
Prompts are the messages sent to an LLM, and small changes can strongly affect outputs—so production apps should avoid “static prompts” where users type the whole instruction. Instead, the transcript recommends “dynamic prompts” built with LangChain PromptTemplate: collect structured inputs (e.g., paper title, style, length) and fill a reusable template at runtime. It also corrects temperature usage: temperature near 0 yields repeatable outputs for the same input, while higher values (like ~1.5) increase creativity and variation. For chatbots, maintaining context requires sending chat history each turn, and LangChain’s SystemMessage, HumanMessage, and AIMessage types help the model understand who said what. MessagesPlaceholder and ChatPromptTemplate extend this pattern to insert stored history into prompts cleanly.
Why does temperature matter, and what does temperature=0 vs ~1.5 change in practice?
What’s the core problem with static prompts in real LLM apps?
How does dynamic prompting with PromptTemplate improve reliability?
Why did the first chatbot fail on follow-up questions, and how was it fixed?
How do SystemMessage, HumanMessage, and AIMessage help in multi-turn conversations?
What do ChatPromptTemplate and MessagesPlaceholder add beyond basic PromptTemplate?
Review Questions
- In your own words, contrast static prompts and dynamic prompts, and give one concrete failure mode for static prompting.
- What changes in a chatbot’s behavior when it starts sending full chat history instead of only the latest user message?
- How do SystemMessage, HumanMessage, and AIMessage roles affect the model’s understanding of a long conversation?
Key Points
1. Temperature near 0 makes LLM outputs repeatable for the same input; higher temperature (e.g., ~1.5) increases variation and creativity.
2. Prompts are the actual messages sent to an LLM; for text-focused apps, the prompt is typically the user instruction plus any formatting constraints.
3. Static prompts are fragile because users control the full wording; dynamic prompting uses templates with runtime variables to enforce consistency.
4. PromptTemplate provides placeholder validation and reusability (templates can be saved/loaded), reducing runtime surprises compared with ad-hoc string building.
5. Multi-turn chat requires maintaining and sending chat history; otherwise follow-up questions lose the context needed to answer correctly.
6. LangChain’s SystemMessage, HumanMessage, and AIMessage types label conversation turns so the model can reliably interpret who said what.
7. MessagesPlaceholder enables injecting stored message lists (like chat history) into a chat prompt at runtime without manually concatenating everything.