ChatGPT Prompt Engineering: Zero, One and Few Shot Prompting
Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Briefing
Prompt engineering in ChatGPT hinges on how much guidance the model gets about the exact output format. In zero-shot prompting, the model must “guess” what the user wants without any prior examples, so results can be close but often miss specific constraints. In one-shot prompting, a single example of the desired output teaches the model the target structure, improving consistency. Few-shot prompting goes further by providing several examples, which typically yields the most reliable adherence to both style and formatting—especially when the output must match a strict template.
The transcript walks through these three approaches using a practical Midjourney image prompt. The task is to generate an image description featuring “a female cyborg working in [a] winter landscape in Norway,” using adjectives and nouns, and ending with an aspect ratio parameter. With zero-shot prompting, ChatGPT (or GPT-3) receives only the plain instruction to write the description. Since it has no example of the exact format the user expects, it produces a strong first attempt—described as “a very good guess”—but not perfectly aligned with the intended structure.
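A zero-shot setup amounts to sending the bare instruction with no sample output. The sketch below builds such a prompt as a plain string; the exact wording is illustrative, not quoted from the transcript.

```python
# Hypothetical zero-shot prompt: the instruction alone, with no example
# of the desired output format. The model must guess the structure.
zero_shot = (
    "Write a Midjourney image description of a female cyborg working in a "
    "winter landscape in Norway. Use adjectives and nouns, and end with an "
    "aspect ratio parameter."
)
print(zero_shot)
```

Because no target format is shown, the model may return full sentences or place the aspect ratio anywhere, which is the "very good guess" behavior described above.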
To demonstrate the difference, the same prompt is then tested in Midjourney. The results are compared across prompting styles. One-shot prompting changes the setup: ChatGPT is given one explicit example of the desired output format, including the instruction to return only adjectives and nouns and to place the aspect ratio at the end. When the prompt is run again, the model produces a noticeably more compressed, more structured description that is “almost perfect,” though it may still omit a detail or two. The example is then copied into Midjourney to verify that the improved formatting translates into better image prompt behavior.
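One-shot prompting appends a single worked example to the instruction. A minimal sketch, with an invented example line (the `--ar 16:9` suffix is Midjourney's aspect-ratio parameter):

```python
# Hypothetical one-shot prompt: the instruction plus ONE example of the
# exact output format, so the model can imitate the structure.
one_shot = (
    "Write a Midjourney image description using only adjectives and nouns, "
    "ending with an aspect ratio parameter.\n\n"
    "Example:\n"
    "weathered lighthouse, stormy coast, crashing waves, moody sky --ar 16:9\n\n"
    "Now describe: a female cyborg working in a winter landscape in Norway."
)
print(one_shot)
```

The single example is what teaches the model to compress its answer into a comma-separated noun/adjective list with the parameter last.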
Few-shot prompting adds a small set of additional examples—three in the transcript. This gives the model a clearer pattern for what the final text should look like, including the consistent placement of the aspect ratio. The resulting output is described as starting similarly to the earlier versions and matching the expected format more reliably. The transcript frames few-shot prompting as especially useful when the user needs a highly specific output.
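Few-shot prompting extends the same idea to several examples, three in the transcript. The example lines below are invented placeholders; only the pattern (adjectives and nouns, aspect ratio last) follows the source.

```python
# Hypothetical few-shot prompt: three format examples (as in the transcript)
# reinforce the "adjectives and nouns, aspect ratio last" pattern.
examples = [
    "weathered lighthouse, stormy coast, crashing waves, moody sky --ar 16:9",
    "ancient forest, misty dawn, towering pines, golden light --ar 16:9",
    "neon city, rainy night, crowded streets, glowing signs --ar 16:9",
]
few_shot = (
    "Write a Midjourney image description using only adjectives and nouns, "
    "ending with an aspect ratio parameter.\n\n"
    + "\n".join(f"Example: {e}" for e in examples)
    + "\n\nNow describe: a female cyborg working in a winter landscape in Norway."
)
print(few_shot)
```

With several consistent examples, the model has less room to drift from the template, which is why this variant adheres to the format most reliably.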
In the final comparison of Midjourney outputs, all three look good, with the zero-shot result noted as surprisingly effective even though it initially reads like a “wall of text.” The few-shot result is presented as the favorite, while the one-shot result sits between the two. Overall, the takeaway is that adding examples—first one, then a few—reduces formatting drift and increases control over the final prompt text, which is crucial when downstream tools like Midjourney depend on precise prompt structure.
Cornell Notes
Zero-shot, one-shot, and few-shot prompting differ by how many examples the model receives of the desired output format. Zero-shot prompting relies on the model’s best guess, which can work well but may miss specific formatting constraints. One-shot prompting improves results by providing a single example that teaches the model the structure, such as returning only adjectives and nouns and placing an aspect ratio at the end. Few-shot prompting uses multiple examples (three in the transcript) to reinforce the pattern, producing the most consistent adherence to the requested template. This matters because Midjourney’s image outcomes depend heavily on the exact wording and formatting of the prompt text.
- What distinguishes zero-shot prompting from one-shot prompting in practical terms?
- Why does the transcript emphasize formatting constraints like “adjectives and nouns” and an “aspect ratio at the end”?
- How does few-shot prompting change the model’s behavior compared with one-shot prompting?
- What was the example task used to compare the three prompting methods?
- What conclusion does the transcript draw from comparing the Midjourney outputs?
Review Questions
- When would zero-shot prompting be sufficient, based on the transcript’s comparison?
- How would you modify a prompt to move from one-shot to few-shot prompting in this workflow?
- What specific formatting element (mentioned in the transcript) is most important for consistency in Midjourney prompts?
Key Points
1. Zero-shot prompting produces results by guessing the desired format without examples, which can be effective but less controlled.
2. One-shot prompting improves output consistency by providing a single example that teaches the target structure.
3. Few-shot prompting uses several examples (three in the transcript) to reinforce formatting patterns and reduce drift.
4. When prompts must end with an aspect ratio parameter, example-based prompting helps ensure it appears in the correct position.
5. Restricting output to adjectives and nouns can make prompts more compact and more aligned with downstream tool expectations.
6. Midjourney prompt quality can vary with how strictly the prompt text follows the requested template, even if all versions still generate usable images.