ChatGPT 101: The No BS Guide to How to Actually Make AI Work for You

5 min read

Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Add labeled context (situation), explicit deliverables (what to produce), constraints (tone/relationship limits/no fluff), and success criteria (what “good” looks like) to get usable outputs.

Briefing

AI becomes genuinely useful at work when prompts supply enough context, constraints, and success criteria—turning generic outputs into scripts, summaries, and decision-ready analysis. Across multiple real scenarios, the core pattern is consistent: short prompts often produce cold or incomplete results, while slightly richer prompts (with labeled inputs like situation, goal, constraints, and what “good” looks like) produce outputs people can actually paste into emails, trackers, and meeting follow-ups.

The clearest before-and-after comes from a workplace email: “Help me tell my manager to stop changing my deadlines.” A one-line prompt yields an overly formal, almost robotic message, and it even leaves the user wondering whether the AI’s suggested “freeze dates 48 hours before kickoff” policy is real or invented. Adding “meat on the bone” changes everything. The improved prompt specifies that deadlines shift weekly, asks for a simple process that doesn’t damage the relationship, and defines success as predictable delivery without sounding accusatory. The AI then offers usable options—such as a lightweight shared tracker, a “sandwich approach” talk track, and a weekly snapshot to verify deadlines—plus follow-up handling if the manager pushes back. The takeaway is not that “magic words” exist, but that the AI mirrors what’s provided: more relevant detail yields more actionable output.
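
To make the pattern concrete, here is a minimal sketch of how that labeled-context structure could be assembled before pasting it into ChatGPT. The video does not show code; the labels and wording below are illustrative assumptions, not its exact prompt.

```python
# Illustrative only: builds the labeled-context prompt structure
# described above. Labels and wording are assumptions, not the
# video's exact prompt.

def build_prompt(situation: str, goal: str, constraints: list[str], success: str) -> str:
    """Combine labeled sections into a single prompt string."""
    lines = [
        f"Situation: {situation}",
        f"Goal: {goal}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"What good looks like: {success}",
    ]
    return "\n".join(lines)

print(build_prompt(
    situation="My manager changes my deadlines every week.",
    goal="Draft an email proposing a simple process to keep deadlines stable.",
    constraints=["do not damage the relationship", "no accusatory tone", "no fluff"],
    success="Predictable delivery without sounding accusatory.",
))
```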

The same mechanism shows up in meeting-note summarization. A basic instruction—“Summarize my meeting notes”—can capture outcomes and action items, but it often misses the risks that quietly accumulate until launch day. When the prompt demands a structured summary that explicitly includes risks with likelihood, impact, and owners (and instructs the model to skip empty sections and avoid fluff), the output becomes operational. It surfaces concrete items like onboarding bottlenecks, performance fixes, GA launch details, pricing tiers, and—most importantly—risk assignments that prompt follow-up with specific people.
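
A minimal sketch of that structured-summary request, assuming the section names from the description above (the exact phrasing in the video may differ):

```python
# Illustrative sketch of the structured-summary request described above;
# section names follow the article, exact phrasing is assumed.

SECTIONS = ["Main events", "Key decisions", "Open questions", "Risks", "Next 7 days"]

def summary_prompt(notes: str) -> str:
    instructions = (
        "Summarize these meeting notes under the following headings: "
        + ", ".join(SECTIONS)
        + ". For each risk, state likelihood, impact, and an owner. "
        "Skip any empty section. No fluff."
    )
    return instructions + "\n\nNotes:\n" + notes

print(summary_prompt("Onboarding is bottlenecked; performance fixes are due before GA launch..."))
```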

The transcript also tackles how to handle tasks without an existing prompt template. For comparing two financial reports, the approach is to ask the AI to generate the prompt itself: provide the goal (compare reports, identify drivers), constraints (100 words or less), audience (financial professionals), and a verification step. The AI responds with two structured analysis methods—driver delta versus factor attribution (volume/price/mix/cost controls)—and can swap between them when the user specifies which method to use. The same “prompt-for-a-prompt” logic is demonstrated in a parenting scenario where the user needs conversational scripts for a teacher discussion; structured prompting produces collaborative and observation-based scripts, then even explains the method “like I’m 12.”
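
The “prompt-for-a-prompt” move can be sketched the same way; the field names and wording here are illustrative, not the transcript’s exact text:

```python
# Sketch of asking the AI to write the prompt itself. Field names and
# wording are illustrative, not the transcript's exact text.

def meta_prompt(goal: str, constraints: str, audience: str, verification: str) -> str:
    return (
        "Write a reusable prompt for the following task.\n"
        f"Goal: {goal}\n"
        f"Constraints: {constraints}\n"
        f"Audience: {audience}\n"
        f"Verification: {verification}"
    )

print(meta_prompt(
    goal="Compare two financial reports and identify the drivers of change.",
    constraints="100 words or less.",
    audience="Financial professionals.",
    verification="List the figures used so they can be checked against the reports.",
))
```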

Finally, the transcript emphasizes progressive, multi-turn prompting for delicate communication. Instead of accepting a first draft, the user iteratively asks for alternative ways to decline a rushed client request, chooses a preferred strategy (clear referral versus timing emphasis), then requests targeted edits like removing buzzwords, clarifying the subject line, and checking whether dates/steps are specific enough. The result is a near-ready email for a high-value client—arrived at through a short series of focused revisions rather than a single perfect prompt.
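
A sketch of what that multi-turn loop might look like as a sequence of follow-up messages; the wording is invented for illustration:

```python
# Invented example of a multi-turn edit loop for the decline email.
# Each string is a follow-up message in the same chat, not a new prompt.

follow_ups = [
    "Give me two ways to decline: one with a clear referral, one emphasizing timing.",
    "Go with the clear-referral version.",
    "Remove the buzzwords and make the subject line say exactly what this email is about.",
    "Are the dates and next steps specific enough? Use [PLACEHOLDERS] where I must fill in details.",
]

for turn, message in enumerate(follow_ups, start=1):
    print(f"Turn {turn}: {message}")
```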

Overall, the practical message is straightforward: AI outputs improve when users describe the situation, specify what they need, list constraints, and define success in clear terms. With that structure, even complex work—emails, risk tracking, financial driver analysis, and conversation scripts—becomes repeatable and dependable.

Cornell Notes

Work-ready AI results come from prompts that include more than a task name. When users add labeled context (situation), clear instructions (what to produce), constraints (tone, relationship limits, no fluff), and success criteria (what “good” looks like), outputs shift from generic or cold to usable—scripts, trackers, and structured summaries. Examples include rewriting a deadline-change email with process options, turning meeting notes into a risk-aware checklist with likelihood/impact/owner fields, and comparing financial reports by asking the AI to generate a tailored prompt for driver analysis methods. Multi-turn prompting matters too: iterating on drafts, requesting alternatives, and asking for targeted edits produces better delicate client communication than a single one-shot prompt.

Why does a one-line prompt like “Help me tell my manager to stop changing my deadlines” often fall short?

It tends to produce generic, overly formal language and may invent specifics without grounding. In the example, the first draft sounded cold and even suggested a “freeze dates 48 hours before kickoff” idea without evidence that it matched the user’s real constraints. The fix was adding context (deadlines change weekly), success criteria (predictable delivery without sounding accusatory), and boundaries (don’t damage the relationship), which led to concrete, usable options like a shared tracker and a weekly snapshot process.

How can meeting-note summarization be made more useful than a basic “summarize” request?

By explicitly requiring risk extraction in a structured format. The improved prompt asked for: main events, key decisions, open questions, risks, and next seven days—plus instructions to define risks with likelihood, impact, and owner, and to skip empty sections and avoid fluff. That structure turns “what happened” notes into “what could go wrong” action items, including assigning who should watch performance fixes to prevent launch-day surprises.

What’s the practical method for comparing two financial reports when you don’t already have a prompt template?

Ask the AI to generate the prompt for the task. Provide the goal (compare reports and explain drivers), constraints (e.g., 100 words or less), audience (financial professionals), and a verification step. The AI then returns a ready-to-use prompt and offers two analysis approaches: a “driver delta” method (highlighting changes in revenue/cost/margins/cash flows) or a “factor attribution” method (volume/price/mix/cost controls with quantified impact).
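
As a rough illustration of what a “driver delta” comparison computes, here is a small sketch with invented figures; the video does not show calculations, so treat this purely as an example of the method’s arithmetic:

```python
# Hypothetical "driver delta" arithmetic: line-by-line changes between
# two periods. All figures are invented for illustration.

q1 = {"revenue": 1_200_000, "cost_of_sales": 700_000, "operating_cash_flow": 150_000}
q2 = {"revenue": 1_350_000, "cost_of_sales": 810_000, "operating_cash_flow": 120_000}

for line_item, prior in q1.items():
    delta = q2[line_item] - prior
    print(f"{line_item}: {delta:+,} ({delta / prior:+.1%})")
```

Cost of sales growing faster than revenue, and cash flow falling while revenue rises, are exactly the kinds of drivers the generated prompt asks the model to explain.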

How does structured prompting improve a sensitive conversation request (like speaking to a teacher)?

It forces the model to follow a conversational plan rather than guessing tone. The example used constraints like maintaining the relationship and asked for two script styles (collaborative ask and observation ask), then merged them into a recommended version. The model also handled a typo and later explained the approach “like I’m 12,” reinforcing that structure tells the AI what question to answer and what actions to produce.

Why does multi-turn prompting outperform a single draft for a client email that needs to decline a rush request?

Because it lets the user steer strategy and then refine presentation. The example started with a short decline that felt wordy, then added context about the relationship and desired tone. The user then asked for two ways to handle the decline, chose a strategy (clear referral), and requested targeted edits: remove buzzwords, make the subject line clear, and check whether the message includes specific dates/steps using placeholders. The final email was “90–95% there” after several focused iterations.

Review Questions

  1. When should a user switch from a one-shot prompt to a multi-turn edit loop, and what kinds of follow-up questions are most effective?
  2. What elements should be included in a prompt to reliably extract risks from meeting notes, and why do likelihood/impact/owner fields matter?
  3. How can asking the AI to generate a prompt for a new task reduce the need to memorize many templates?

Key Points

  1. Add labeled context (situation), explicit deliverables (what to produce), constraints (tone/relationship limits/no fluff), and success criteria (what “good” looks like) to get usable outputs.
  2. Treat first drafts as starting points; request alternatives, then choose a strategy and iterate with targeted edits (e.g., subject line clarity, removing buzzwords).
  3. For meeting notes, require a structured risk section with likelihood, impact, and owner to prevent launch-day surprises.
  4. When comparing financial reports without a template, ask the AI to generate the prompt itself using your goal, audience, constraints, and a verification step.
  5. Use the AI’s tendency to mirror provided details: if you specify what matters (speed, flexibility, relationship preservation), the output will align with those priorities.
  6. Structured prompting can also produce better conversational scripts by specifying the conversation style (collaborative vs observation-based) and constraints on tone.

Highlights

Adding context and boundaries to a deadline-change email prompt turns a cold, generic draft into multiple practical talk tracks and process options (tracker, weekly snapshot).
Meeting-note summaries become genuinely operational when prompts demand risks with likelihood, impact, and owner—turning vague risks into follow-up assignments.
Asking the AI to write the prompt for a new task (like financial report comparison) yields ready-to-use, tailored methods such as driver delta versus factor attribution.
Progressive prompting—asking for alternatives, selecting one, then requesting targeted edits—produces a near-ready client email without needing a perfect first attempt.

Topics

  • Prompting Framework
  • Work Email Scripts
  • Meeting Risk Summaries
  • Financial Driver Analysis
  • Multi-Turn Editing
