
ChatGPT Prompt Engineering Level Up in 8 Minutes

All About AI · 5 min read

Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Generic diet prompts tend to produce broad, non-actionable guidance; adding specific goals and deadlines improves structure.

Briefing

Prompt engineering moves from generic requests to highly tailored plans by stacking three ingredients: clearer goals, richer personal context, and explicit instructions that define the model’s role and constraints. The biggest practical takeaway is that vague prompts produce vague guidance—while specific, structured prompts can yield actionable diet, exercise, and accountability plans.

At “Level 0,” the request is broad: “what is a good diet to lose weight.” The response stays general, offering common advice like focusing on whole foods, eating fruits, watching portion sizes, choosing lean proteins and healthy fats, and adding exercise. It also includes a typical limitation: it’s not a dietitian, so the guidance remains high-level.

“Level 1” improves results by adding measurable targets and a time horizon. Instead of asking generally about dieting, the prompt sets a goal—lose 11 pounds (5 kilograms) in 60 days. With that specificity, the model shifts from general tips to a more structured outline: estimating daily calorie needs, maintaining a calorie deficit, making healthier food choices, tracking intake, adding regular physical activity, and getting enough sleep—plus the reminder to consult healthcare professionals.
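The arithmetic behind that Level 1 target can be made concrete. As a minimal sketch: the ~7,700 kcal-per-kilogram figure is a common rule of thumb for body fat, an assumption here rather than a number stated in the video.

```python
# Rough calorie-deficit arithmetic for a "lose 5 kg in 60 days" goal.
# The ~7,700 kcal per kg of body fat is a common rule of thumb
# (an assumption here, not a figure from the video).
KCAL_PER_KG = 7700

def daily_deficit(kg_to_lose: float, days: int) -> float:
    """Average daily calorie deficit needed to hit the target on time."""
    return kg_to_lose * KCAL_PER_KG / days

deficit = daily_deficit(5, 60)
print(f"~{deficit:.0f} kcal/day deficit")  # ~642 kcal/day
```

This is why the deadline matters: it converts "eat less" into a specific daily number the model can plan meals around.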

“Level 2” goes further by personalizing the plan with background details and constraints. The prompt includes age (40), current weight (213 pounds / 97 kilos), a target weight (“under 200 pounds”), current activity level (cardio about 20 minutes per day), and the user’s concern about eating too much. It also asks for an eight-week diet plan with a realistic tone (“I don’t want to die” is used as a humorous constraint) and willingness to exercise more. The resulting output becomes schedule-based: weeks 1–2 emphasize breakfast/lunch/dinner meal suggestions and walking increases to 30 minutes, then weeks 3–4 keep the diet while raising exercise and strength training, and weeks 7–8 maintain the structure while emphasizing adjustment based on how the body responds.

The highest level shown (“Level 3”) combines role assignment, research grounding, and a final personalized deliverable. The prompt instructs the model to ignore prior instructions and adopt a “weight loss and diet expert” persona, then to produce a detailed, easy-to-follow diet and exercise plan plus an accountability system. It also injects external research—intermittent fasting findings sourced from Harvard—by pasting the material into the prompt and requiring confirmation that the model has read it. The final request asks for a plan to lose 13 pounds in 60 days based on all provided context.
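The three Level 3 layers map naturally onto the system/user/assistant convention used by chat-style LLM APIs. Below is a minimal sketch of that prompt stack; the helper name and the placeholder research text are illustrative, not taken from the video.

```python
# Sketch of the Level 3 prompt stack as chat messages, using the common
# system/user/assistant convention. Helper name and placeholder research
# text are illustrative assumptions, not from the video.
def build_level3_messages(research_excerpt: str) -> list[dict]:
    return [
        # (1) Role assignment: override prior behavior with an expert persona.
        {"role": "system",
         "content": "Ignore all previous instructions. You are a weight "
                    "loss and diet expert."},
        # (2) Research grounding: paste external material into the prompt.
        {"role": "user",
         "content": "Read the following research and confirm that you have "
                    f"read it by answering yes:\n\n{research_excerpt}"},
        # (3) Procedural check: the model's confirmation precedes the real ask.
        {"role": "assistant", "content": "Yes"},
        # Final deliverable request, tied to all of the context above.
        {"role": "user",
         "content": "Based on everything above, create a detailed, "
                    "easy-to-follow diet and exercise plan, plus an "
                    "accountability system, to lose 13 pounds in 60 days."},
    ]

messages = build_level3_messages("[pasted Harvard intermittent fasting findings]")
```

Inserting the assistant's "Yes" turn before the final request mirrors the video's confirmation step: the plan request only arrives after the research has been acknowledged.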

The output becomes notably more specific: it uses intermittent fasting with the 16:8 method (16 hours fasting, 8-hour eating window), proposes meal planning across days (lunch, snacks, dinner, additional snack), and lays out exercise elements including strength training, flexibility, and recovery. It also adds accountability via SMART goals, progress tracking, a support system, scheduled check-ins, and celebrating milestones.

Finally, the workflow extends beyond text prompting. GPT-4 is used to generate a narrated motivational speech, which is then fed into ElevenLabs Voice Studio to create an AI voice recording. The speech is paired with image generation to produce motivational visuals, which are then assembled into a cohesive end-to-end motivation package, showing how prompt engineering can drive both health planning and supportive content creation.

Cornell Notes

The transcript demonstrates a step-by-step upgrade in prompt engineering for weight-loss planning. Early prompts are broad and produce generic advice, but adding measurable goals and timeframes improves structure. Adding personal context (age, weight, activity level, constraints) turns guidance into an eight-week schedule with diet and exercise progression. The top tier adds a defined expert role, research grounding (intermittent fasting material from Harvard), and an explicit requirement to confirm reading before generating a plan. The result is a detailed, personalized program using the 16:8 method plus an accountability system, and the same prompting approach is extended to generate a motivational speech and AI voice content via ElevenLabs.

Why does a broad weight-loss prompt produce generic advice, and what changes at Level 1?

A broad prompt like “what is a good diet to lose weight” lacks constraints, so the model stays at general principles (whole foods, fruits, portion control, lean proteins, healthy fats, exercise). At Level 1, the prompt adds a measurable target and deadline—lose 11 pounds (5 kilograms) in 60 days—which pushes the model toward a more structured plan: calorie needs and a calorie deficit, healthier food choices, tracking intake, regular physical activity, and sleep, along with a healthcare-consultation reminder.

What specific personal details in Level 2 make the plan feel more actionable?

Level 2 includes age (40), current weight (213 pounds / 97 kilos), a target (“under 200 pounds”), current cardio (about 20 minutes/day), and a constraint that eating too much is a concern. It also asks for an eight-week plan and expresses willingness to exercise more. Those details let the model output week-by-week structure: weeks 1–2 focus on meal options and increasing daily walks to 30 minutes; weeks 3–4 keep the diet while increasing strength training; weeks 7–8 emphasize ongoing adjustments based on how the body responds.

What does Level 3 add beyond personalization?

Level 3 adds three layers: (1) a role/persona instruction (“weight loss and diet expert”) with a requirement to produce a detailed plan and an accountability system; (2) research grounding by pasting intermittent fasting findings sourced from Harvard; and (3) a procedural check requiring the model to confirm it has read the research (by answering “yes”). This combination yields a plan that is not just personalized, but also explicitly tied to the cited material.

How does the intermittent fasting plan work in the Level 3 output?

The plan uses the 16:8 intermittent fasting method: fast for 16 hours and eat within an 8-hour window. It also frames calorie intake around that window (reducing intake by controlling the eating period) and includes meal planning suggestions across the day (lunch, snack, dinner, and another snack).
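The window arithmetic of the 16:8 method is simple enough to sketch. The noon start time below is an example, not a recommendation from the video.

```python
from datetime import datetime, timedelta

# Sketch of 16:8 timing: choose when the eating window opens, and the
# 8-hour window (plus the 16-hour fast) follows. The 12:00 start time
# is an illustrative assumption, not advice from the video.
def eating_window(first_meal: str, window_hours: int = 8) -> tuple[str, str]:
    start = datetime.strptime(first_meal, "%H:%M")
    end = start + timedelta(hours=window_hours)
    return start.strftime("%H:%M"), end.strftime("%H:%M")

start, end = eating_window("12:00")
print(f"Eat {start}-{end}, fast the remaining 16 hours")
```

Framing calorie control as a time window, rather than per-meal counting, is what makes the Level 3 plan's meal schedule (lunch, snacks, dinner) fit inside one block of the day.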

What does the accountability plan include, and why is it part of the prompt?

The prompt explicitly demands accountability, so the output includes SMART goals, progress tracking, a support system, scheduled check-ins, and celebrating achievements. Those elements turn the plan from a static diet/exercise outline into something the user can manage over time.
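The progress-tracking and check-in pieces of that accountability system could be sketched as a weekly comparison against a target trajectory. The straight-line trajectory and the sample weigh-ins below are assumptions for illustration; only the start weight, target, and eight-week horizon come from the transcript.

```python
# Minimal progress-tracking sketch for the accountability system: compare
# weekly weigh-ins against a straight-line path from start weight to target.
# The linear trajectory and sample weigh-ins are illustrative assumptions.
def expected_weight(start_lbs: float, target_lbs: float,
                    weeks: int, week: int) -> float:
    """Weight the straight-line plan expects at the end of a given week."""
    return start_lbs - (start_lbs - target_lbs) * week / weeks

def check_in(start_lbs: float, target_lbs: float, weeks: int,
             weighins: list[float]) -> list[tuple]:
    """One (week, expected, actual, status) row per logged weigh-in."""
    report = []
    for week, actual in enumerate(weighins, start=1):
        expected = expected_weight(start_lbs, target_lbs, weeks, week)
        status = "on track" if actual <= expected else "behind"
        report.append((week, round(expected, 1), actual, status))
    return report

for week, exp, act, status in check_in(213, 200, 8, [211.0, 210.5]):
    print(f"Week {week}: expected {exp}, actual {act} -> {status}")
```

A scheduled check-in like this is what turns "celebrate milestones" into a concrete weekly decision: adjust the plan, or keep going.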

How does the transcript extend prompt engineering beyond diet planning?

After generating the health plan, the workflow uses GPT-4 to create a short narrated motivational speech. That text is then pasted into ElevenLabs Voice Studio to generate an AI voice recording in about 20 seconds. The speech is also paired with generated motivational images, and the final assembly combines voice and visuals into a motivation package.
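The hand-off from GPT-4 text to speech could also be scripted. The sketch below only builds the request payload and sends nothing; the endpoint path and header name follow ElevenLabs' public REST API as commonly documented, and the voice ID and API key are placeholders, so treat the exact shape as an assumption.

```python
# Hedged sketch of handing a GPT-written speech to a text-to-speech API.
# Endpoint path and "xi-api-key" header follow ElevenLabs' public REST API
# as commonly documented (an assumption); placeholders stand in for real
# credentials, and nothing is sent over the network.
def build_tts_request(speech_text: str, voice_id: str, api_key: str) -> dict:
    return {
        "url": f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": api_key,
                    "Content-Type": "application/json"},
        "json": {"text": speech_text},
    }

req = build_tts_request("You've got this. One week at a time.",
                        "VOICE_ID_PLACEHOLDER", "API_KEY_PLACEHOLDER")
```

In the video's workflow the pasted text is the motivational speech itself, so the same prompt-engineering output becomes the audio script with no manual rewriting.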

Review Questions

  1. How do measurable goals and timeframes change the structure of the model’s response compared with a generic diet question?
  2. Which three additions in Level 3 most strongly increase specificity (role/persona, research grounding, procedural confirmation, or something else)?
  3. What accountability mechanisms are included in the Level 3 output, and how do they support adherence over an eight-week period?

Key Points

  1. Generic diet prompts tend to produce broad, non-actionable guidance; adding specific goals and deadlines improves structure.

  2. Personal context (age, weight, activity level, constraints) enables week-by-week diet and exercise progression rather than one-size-fits-all tips.

  3. Defining a role/persona and requiring the model to produce both a plan and an accountability system increases completeness.

  4. Grounding prompts in external research (e.g., intermittent fasting material from Harvard) can make the output more targeted and defensible.

  5. Using explicit procedural checks (confirming the research was read) helps keep the model aligned with the provided material.

  6. The same prompting approach can generate supportive content, like motivational speeches, then convert it into audio via ElevenLabs Voice Studio.

  7. Combining diet, exercise, and accountability elements is presented as the path to better adherence than diet advice alone.

Highlights

A vague question (“good diet to lose weight”) yields generic wellness advice, while adding a measurable target (11 pounds in 60 days) shifts the response toward calorie deficit planning.
Level 2 turns advice into an eight-week schedule by injecting personal details like current weight, daily cardio, and willingness to increase exercise.
Level 3 produces a detailed plan by stacking role assignment, Harvard-sourced intermittent fasting research, and a requirement to confirm reading before generating the final program.
The 16:8 intermittent fasting method appears as a core mechanism in the personalized plan, paired with meal timing and exercise routines.
Prompt engineering extends into multimedia: GPT-4 generates a motivational speech, ElevenLabs Voice Studio creates an AI voice, and generated images reinforce the message.
