You Are Using ChatGPT The Wrong Way
Based on FromSergio's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
ChatGPT performs far better when users treat prompts like a brief for a human professional: spell out the goal and audience up front, then narrow the output with clear constraints. The central takeaway is that vague instructions produce broad, unpredictable results, while specific context and “do/don’t” boundaries sharply reduce the model’s degrees of freedom—making it much more likely to deliver what you actually want.
The first major technique is to include context by stating both the intended outcome and who the answer is for. Instead of jumping straight into a task, the prompt should begin with what the user wants to achieve (“get better at public speaking,” “reduce stress,” or “start writing a novel”) and then describe the audience in detail. The transcript emphasizes that tone and content shift dramatically depending on whether the target is a friend, a coworker, a 30-year-old learning programming, or a group of high school students with ADHD trying to improve public speaking. Even when the audience is “you,” the model needs enough information about the user’s background and preferences to tailor the response.
Next comes constraints—explicitly telling the model what to do and what to avoid. The transcript uses an online shopping analogy: filters narrow choices. For example, asking for “Japanese recipes” is broad, but specifying “Japanese recipe based on these ingredients” removes most irrelevant options. Adding a time limit (“ready in under 40 minutes”) narrows further, and excluding preferences (“not vegetarian,” “not spicy”) reduces the chance of an outcome that misses the mark. The same logic applies to other tasks: summarizing a 1,000-word article becomes more controllable when the user specifies a target length (200 words), a format (four bullet points), and even title requirements (e.g., 50 characters, avoiding complex words).
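The shopping-filter idea can be sketched as a small helper that assembles a prompt from explicit “do” and “don’t” constraints. This is a hypothetical illustration—the function name and structure are not from the transcript, only the recipe example is:

```python
def build_prompt(task, do, dont):
    """Compose a prompt from a task plus explicit inclusions and exclusions,
    the way shopping filters narrow a product search."""
    lines = [task]
    lines += [f"Do: {d}" for d in do]
    lines += [f"Don't: {d}" for d in dont]
    return "\n".join(lines)

prompt = build_prompt(
    "Suggest a Japanese recipe based on these ingredients: rice, salmon, nori.",
    do=["ready in under 40 minutes", "serves two people"],
    dont=["vegetarian", "spicy"],
)
print(prompt)
```

Each added filter line removes a slice of the model’s degrees of freedom, which is exactly why constrained prompts produce more predictable output.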
A third technique—priming—targets writing style. Users can provide examples such as YouTube scripts or past blog posts, ask ChatGPT to analyze them, and then instruct it to write new material in that same style. The transcript warns that this approach may not feel fully natural for producing original content, but it can still be useful, especially when paired with examples. Priming also works through examples of preferred titles or keyword focus: showing what you like often guides the model more powerfully than merely describing the desired outcome.
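The analyze-then-generate priming workflow can be sketched as a two-turn conversation. The message dictionaries below mirror the chat format used by common LLM APIs; no real API call is made, and the placeholder post texts are hypothetical:

```python
# Two-step priming: first request a style analysis of past writing,
# then ask for new material in that same style.
past_posts = ["<blog post 1 text>", "<blog post 2 text>"]

analyze = {
    "role": "user",
    "content": "Analyze the writing style of these posts:\n\n"
               + "\n---\n".join(past_posts),
}
generate = {
    "role": "user",
    "content": "Using that same style, write a 300-word post "
               "introducing meal prepping.",
}

conversation = [analyze]
# In a live session, the model's analysis reply would be appended here
# before the follow-up request, so the analysis itself primes generation.
conversation.append(generate)
```

Keeping the analysis in the conversation history is what makes the examples guide the model more strongly than a verbal description of the desired style would.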
The transcript then highlights personas, where ChatGPT is asked to adopt a specific role or viewpoint—such as acting as a Portuguese tutor for an intermediate learner, brainstorming as a creative expert, or impersonating a celebrity’s perspective. In all cases, users can ask “why” to get reasoning behind the output.
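A persona is typically set once, up front, so it shapes every subsequent reply. A minimal sketch, again using the system/user message convention of common chat-LLM APIs (no API is called; the tutor wording is illustrative):

```python
# Persona via a system message: the role instruction persists across turns.
persona = (
    "You are a patient Portuguese tutor working with an intermediate learner. "
    "Correct mistakes gently and explain why each correction applies."
)

messages = [
    {"role": "system", "content": persona},
    {"role": "user", "content": "Eu fui no mercado ontem. Esta frase está correta?"},
]
```

Because the persona lives in the system message rather than each user turn, follow-up questions like “why?” are answered from the same viewpoint without restating the role.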
Finally, the transcript stresses workflow discipline: avoid requesting everything at once. Draft section-by-section—perfect the first paragraph or two, then move forward—so each step becomes a new form of priming. Across techniques, the model’s only input is the prompt, so specificity matters, ambiguity should be minimized, and follow-ups are encouraged. Keeping a record of prompts that work (in tools like Notion or Obsidian) helps users reuse and improve their best-performing instructions over time.
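The section-by-section workflow can be sketched as a loop in which each approved section is fed back into the next request. The `draft_section` helper is a hypothetical stand-in for a model call, added only to show how the accumulating draft becomes the priming context:

```python
def draft_section(section, approved_so_far):
    """Placeholder for a model call: returns the prompt the model would see,
    with everything already approved included as context."""
    return (
        f"Draft only the '{section}' section.\n"
        "Approved draft so far:\n" + "\n\n".join(approved_so_far)
    )

outline = ["Hook", "Problem", "Solution", "Call to action"]
approved = []
for section in outline:
    prompt = draft_section(section, approved)
    # In practice the user iterates on the model's draft here, then accepts it.
    approved.append(f"[{section} text]")
```

Each accepted section narrows what a sensible next section can look like, so the draft itself does the priming.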
Cornell Notes
ChatGPT delivers better results when prompts include clear context, defined constraints, and targeted guidance like priming and personas. Start by stating the goal and the intended audience so the model can match tone and relevance. Then narrow outputs with “do” and “don’t” constraints—such as ingredient lists, time limits, format requirements, and exclusions—to reduce irrelevant options. For writing, priming can teach the model a preferred style by analyzing past scripts or blog posts, and examples can guide titles and keyword choices. Because ChatGPT is interactive, users should iterate: refine sections one at a time, correct what’s wrong, and document effective prompts for reuse.
- Why does adding context (goal + audience) change the quality of ChatGPT outputs?
- How do constraints reduce bad outputs?
- What is priming, and how can it be used for writing style?
- What does using personas accomplish?
- Why should users write or generate content section-by-section instead of all at once?
- What habits help users improve prompts over time?
Review Questions
- When would you include audience details in a prompt, and what specific details matter most?
- Give an example of three constraints you could add to improve a recipe request or a summarization request.
- How would you prime ChatGPT to match your writing style, and what follow-up steps would you take if the result still feels off?
Key Points
1. State the goal and intended audience at the start of a prompt to align tone, relevance, and expectations.
2. Describe the audience with concrete details (age, interests, learning needs) so the model can tailor its response.
3. Use constraints to narrow outputs: specify inputs, formats, lengths, time limits, and explicit exclusions.
4. Prime style and preferences by providing examples (past scripts/blog posts, sample titles, keyword focus) rather than only describing them.
5. Generate content iteratively—perfect early sections first—so the draft itself primes the next steps.
6. Keep prompts specific and low-ambiguity, then follow up to correct errors or request variations.
7. Document prompts that work in a system like Notion or Obsidian to reuse and improve them later.