
You Are Using ChatGPT The Wrong Way

FromSergio · 5 min read

Based on FromSergio's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

State the goal and intended audience at the start of a prompt to align tone, relevance, and expectations.

Briefing

ChatGPT performs far better when users treat prompts like a brief for a human professional: spell out the goal and audience up front, then narrow the output with clear constraints. The central takeaway is that vague instructions produce broad, unpredictable results, while specific context and “do/don’t” boundaries sharply reduce the model’s degrees of freedom—making it much more likely to deliver what you actually want.

The first major technique is to include context by stating both the intended outcome and who the answer is for. Instead of jumping straight into a task, the prompt should begin with what the user wants to achieve (“get better at public speaking,” “reduce stress,” or “start writing a novel”) and then describe the audience in detail. The transcript emphasizes that tone and content shift dramatically depending on whether the target is a friend, a coworker, a 30-year-old learning programming, or a group of high school students with ADHD trying to improve public speaking. Even when the audience is “you,” the model needs enough information about the user’s background and preferences to tailor the response.
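
The goal-then-audience pattern can be sketched as a small prompt template. This is a minimal illustration, not code from the video; the helper and field names are hypothetical:

```python
def build_context_prompt(task, goal, audience):
    """Prefix a task with the goal and intended audience,
    so the model can match tone and relevance from the start."""
    return (
        f"Goal: {goal}\n"
        f"Audience: {audience}\n\n"
        f"Task: {task}"
    )

prompt = build_context_prompt(
    task="Suggest three weekly exercises.",
    goal="get better at public speaking",
    audience="high school students with ADHD",
)
print(prompt)
```

The same template works when the audience is "you": swap in a short description of your own background and preferences.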

Next comes constraints—explicitly telling the model what to do and what to avoid. The transcript uses an online shopping analogy: filters narrow choices. For example, asking for “Japanese recipes” is broad, but specifying “Japanese recipe based on these ingredients” removes most irrelevant options. Adding a time limit (“ready in 40 minutes”) narrows further, and adding exclusions (“not vegetarian,” “not spicy”) reduces the chance of an outcome that misses the mark. The same logic applies to other tasks: summarizing a 1,000-word article becomes more controllable when the user specifies a target length (200 words), a format (four bullet points), and even title requirements (e.g., 50 characters, avoiding complex words).
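
The filter analogy translates directly into a do/don’t prompt builder. A minimal sketch, with hypothetical names (the video describes the idea in prose, not code):

```python
def add_constraints(base_request, do=(), dont=()):
    """Append explicit do/don't constraints to a request,
    narrowing the output the way shopping filters narrow a search."""
    lines = [base_request]
    if do:
        lines.append("Requirements:")
        lines.extend(f"- {c}" for c in do)
    if dont:
        lines.append("Avoid:")
        lines.extend(f"- {c}" for c in dont)
    return "\n".join(lines)

prompt = add_constraints(
    "Suggest a Japanese recipe based on these ingredients: rice, salmon, nori.",
    do=["ready in 40 minutes", "serves two"],
    dont=["vegetarian dishes", "spicy dishes"],
)
```

Each added line removes a slice of the model’s search space, just as each filter removes products from a results page.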

A third technique—priming—targets writing style. Users can provide examples such as YouTube scripts or past blog posts, ask ChatGPT to analyze them, and then instruct it to write new material in that same style. The transcript warns that this approach may not feel fully natural for producing original content, but it can still be useful, especially when paired with examples. Priming also works through examples of preferred titles or keyword focus: showing what you like often guides the model more powerfully than merely describing the desired outcome.
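
Priming with examples can be sketched as a few-shot prompt: show the style first, then make the request. The helper below is an illustrative assumption, not the video’s exact wording:

```python
def priming_prompt(examples, request):
    """Place style examples before the request so the model
    mimics the observed pattern rather than guessing at it."""
    parts = ["Here are examples of my writing style:"]
    for i, ex in enumerate(examples, 1):
        parts.append(f"Example {i}:\n{ex}")
    parts.append("Analyze the style of the examples above, then: " + request)
    return "\n\n".join(parts)

p = priming_prompt(
    ["Short, punchy sentences. Direct address. No jargon."],
    "write a 100-word intro about prompting.",
)
```

The same structure carries over to titles or keywords: replace the writing samples with titles you like and ask for more in that vein.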

The transcript then highlights personas, where ChatGPT is asked to adopt a specific role or viewpoint—such as acting as a Portuguese tutor for an intermediate learner, brainstorming as a creative expert, or impersonating a celebrity’s perspective. In all cases, users can ask “why” to get reasoning behind the output.
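
A persona prompt is just a role prefix plus an optional request for reasoning. A hypothetical sketch of the pattern:

```python
def persona_prompt(role, task, ask_reasoning=True):
    """Ask the model to answer from a specific role, optionally
    requesting the reasoning behind its answer."""
    prompt = f"Act as {role}. {task}"
    if ask_reasoning:
        prompt += " Explain why you answered the way you did."
    return prompt

p = persona_prompt(
    "a Portuguese tutor for an intermediate learner",
    "Hold a short conversation with me and correct my mistakes.",
)
```

The role steers tone and content at once; the reasoning request surfaces the “why” behind each correction or suggestion.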

Finally, the transcript stresses workflow discipline: avoid requesting everything at once. Draft section-by-section—perfect the first paragraph or two, then move forward—so each step becomes a new form of priming. Across techniques, the model’s only input is the prompt, so specificity matters, ambiguity should be minimized, and follow-ups are encouraged. Keeping a record of prompts that work (in tools like Notion or Obsidian) helps users reuse and improve their best-performing instructions over time.
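
The section-by-section loop can be sketched as a driver that feeds each approved section back as context, so the growing draft primes the next request. The `generate` callback stands in for a real model call; all names here are illustrative:

```python
def draft_iteratively(sections, generate):
    """Draft one section at a time; each accepted section is
    included as context, so the draft itself primes what follows."""
    draft = []
    for brief in sections:
        context = "\n\n".join(draft)
        prompt = (
            f"Draft so far:\n{context}\n\n"
            f"Write the next section in the same style: {brief}"
            if context
            else f"Write the opening section: {brief}"
        )
        draft.append(generate(prompt))
    return "\n\n".join(draft)

# Stub generator for demonstration: echoes the last line of each prompt.
sections = ["intro about constraints", "worked examples", "summary"]
result = draft_iteratively(sections, generate=lambda p: p.splitlines()[-1])
```

In practice `generate` would call the model once per section, and you would review (and fix) each section before the loop continues.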

Cornell Notes

ChatGPT delivers better results when prompts include clear context, defined constraints, and targeted guidance like priming and personas. Start by stating the goal and the intended audience so the model can match tone and relevance. Then narrow outputs with “do” and “don’t” constraints—such as ingredient lists, time limits, format requirements, and exclusions—to reduce irrelevant options. For writing, priming can teach the model a preferred style by analyzing past scripts or blog posts, and examples can guide titles and keyword choices. Because ChatGPT is interactive, users should iterate: refine sections one at a time, correct what’s wrong, and document effective prompts for reuse.

Why does adding context (goal + audience) change the quality of ChatGPT outputs?

Context tells the model what success looks like and who the answer is for. The transcript frames it like a consultation: the first questions are the goals and what the user wants to get out of the process. In practice, that means stating the objective in plain English (e.g., improve public speaking, reduce stress, start a novel) and describing the audience in detail (age, interests, and constraints). A prompt aimed at a 30-year-old learning programming should differ from one aimed at high school students with ADHD trying to become better public speakers.

How do constraints reduce bad outputs?

Constraints narrow the model’s search space, similar to filtering products online. Instead of “Japanese recipes,” specify “Japanese recipe based on these ingredients,” then add a time constraint (e.g., ready in 40 minutes) and exclusions (e.g., not vegetarian, not spicy). For summarization, constraints can include target length (200 words), structure (four bullet points), and title rules (50 characters, avoid complex words).

What is priming, and how can it be used for writing style?

Priming means giving examples so the model can mimic a desired pattern. For writing style, users can feed ChatGPT past material (YouTube scripts or blog posts), ask it to analyze the style, then request new writing in that style. The transcript notes it may not feel fully natural for producing original content, but it can still guide tone and structure. Priming also works with examples of preferred titles or keyword focus—showing what you like often outperforms describing it abstractly.

What does using personas accomplish?

Personas let ChatGPT answer from a specific role or viewpoint. Examples include acting as a Portuguese tutor for an intermediate learner and correcting mistakes during conversation, or adopting a creative expert/fellow artist persona to brainstorm ideas. The transcript also mentions impersonating a celebrity’s perspective and then asking for the reasoning behind the answer. This works because the persona steers both tone and content.

Why should users write or generate content section-by-section instead of all at once?

Requesting a full article in one shot can reduce control. The transcript recommends focusing on the first paragraph or first two paragraphs, getting them right, then moving to the next section. Once early sections match the desired style, they effectively prime the model for what comes next, lowering the chance of a mismatch later in the draft.

What habits help users improve prompts over time?

Specificity and iteration are key: avoid vague wording, minimize ambiguity, and follow up when outputs miss the mark. Users can correct the model by pointing to what was wrong and asking for more variations of the best option. The transcript also emphasizes documenting effective prompts in a note system such as Notion or Obsidian so they can be reused and refined.

Review Questions

  1. When would you include audience details in a prompt, and what specific details matter most?
  2. Give an example of three constraints you could add to improve a recipe request or a summarization request.
  3. How would you prime ChatGPT to match your writing style, and what follow-up steps would you take if the result still feels off?

Key Points

  1. State the goal and intended audience at the start of a prompt to align tone, relevance, and expectations.

  2. Describe the audience with concrete details (age, interests, learning needs) so the model can tailor its response.

  3. Use constraints to narrow outputs: specify inputs, formats, lengths, time limits, and explicit exclusions.

  4. Prime style and preferences by providing examples (past scripts/blog posts, sample titles, keyword focus) rather than only describing them.

  5. Generate content iteratively—perfect early sections first—so the draft itself primes the next steps.

  6. Keep prompts specific and low-ambiguity, then follow up to correct errors or request variations.

  7. Document prompts that work in a system like Notion or Obsidian to reuse and improve them later.

Highlights

The biggest quality jump comes from treating prompts like a consultation brief: goal first, then audience.
Constraints act like filters—ingredients, time limits, and exclusions dramatically reduce irrelevant answers.
Priming can transfer writing style by analyzing prior scripts or blog posts, and examples can guide titles better than descriptions alone.
Personas let ChatGPT adopt roles (tutor, creative expert, celebrity viewpoint) and users can ask for reasoning behind outputs.
Section-by-section drafting improves control and effectively primes the model as the style locks in.
