
Steal My 2-Prompt Blueprint: Turn ChatGPT Into Your Personal AI Tutor (Live Demo)

5 min read

Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Hard mode builds a custom AI tutor prompt by forcing a structured blueprint workflow: Purpose, Instructions, Reference, Output.

Briefing

A two-prompt “blueprint” turns ChatGPT into a personal AI tutor by treating prompting as a learning system—not a one-off request. The core move is to structure prompts around a repeatable workflow (diagnose → teach → practice → escalate) and to use “switches” (hard vs. easy, agentic vs. lighter modes, effort levels, output formats) so the model adapts to a learner’s level and pace. That matters because small wording and sequencing changes can dramatically reshape what the model does next—whether it quizzes methodically and gatekeeps until answers are complete, or it delivers micro-lessons immediately with single-question diagnostics.

The “hard mode” prompt is built to generate a custom learning tutor prompt for the user. It starts by assigning a role (“prompt coach”) and a shared mission: craft a prompt blueprint that quizzes methodically to diagnose the learner’s current level and then delivers progressively harder lessons. The prompt’s architecture follows a four-part framework—Purpose, Instructions, Reference, Output—and it insists on workflow discipline: go section by section, don’t skip ahead, ask the full question set, gatekeep until all answers are sufficiently clear, and carry confirmed answers forward using memory so the user isn’t repeatedly re-asked. It also forces the user to specify learning “stance” and constraints (e.g., interrogative tone, allowed references, output length in words or tokens, and formatting like Markdown or JSON). To deepen the model’s reasoning, it includes reference examples as placeholders that signal the desired depth across different domains (e.g., pricing strategy, content calendar, agentic monitoring), without pasting full prompts that could hijack the model into the wrong task.
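
To make that architecture concrete, here is a minimal sketch of what such a meta-prompt might look like, expressed as a Python string constant. The section headings follow the Purpose/Instructions/Reference/Output framework from the video, but the exact wording and the `HARD_MODE_PROMPT` name are illustrative assumptions, not the creator's verbatim prompt.

```python
# A minimal sketch of a hard-mode meta-prompt, assuming the four-part
# Purpose/Instructions/Reference/Output framework described above.
# The wording is illustrative, not the video's verbatim prompt.
HARD_MODE_PROMPT = """\
You are a prompt coach. Our shared mission: build a learning-tutor
prompt that quizzes me methodically to diagnose my current level,
then delivers progressively harder lessons.

PURPOSE
Work through the four sections below, in order, to assemble the blueprint.

INSTRUCTIONS
1. Go section by section; do not skip ahead.
2. Ask me the full question set for each section (tone, allowed
   references, output length in words or tokens, format such as
   Markdown or JSON).
3. Gatekeep: do not advance until every answer is sufficiently clear.
4. Remember confirmed answers; never re-ask them.

REFERENCE
Depth examples (placeholders only; do not execute them as tasks):
- a pricing-strategy tutor
- a content-calendar tutor
- an agentic-monitoring tutor

OUTPUT
Once all sections are complete, assemble and display the final
learning-tutor prompt.
"""
```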

The “easy mode” prompt targets learners who don’t want to complete a long setup. It keeps the same overall mission—diagnose current level and deliver progressively harder lessons—but adds constraints designed for quick consumption: single-question mode, micro-lessons, and a cap of no more than five questions at a time. Each micro-lesson follows a tight loop: ask a diagnostic question, teach a concept, give a practice task or code snippet, and optionally add a harder challenge only after the learner scores above 80% on the prior practice. It also introduces pacing and control features (batch up to three questions, shorten lessons, ask for progress summaries) and emphasizes active learning tactics like mini-projects, code snippets, thought experiments, and authoritative sources.
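
A comparable sketch of the easy-mode defaults, again assuming the constraints described above (single-question mode, micro-lessons, the 80% rule) rather than quoting the original prompt:

```python
# A sketch of easy-mode defaults; the phrasing is illustrative, built
# from the constraints summarized above rather than the source prompt.
EASY_MODE_PROMPT = """\
You are my personal tutor. Diagnose my current level and deliver
progressively harder micro-lessons.

Constraints:
- Single-question mode: ask one diagnostic question at a time, and
  never more than five questions at a time.
- Each micro-lesson: ask a diagnostic question, teach one concept,
  then give a practice task or code snippet.
- Add an optional harder challenge only if I score above 80% on the
  prior practice.
- Controls I may invoke: batch up to three questions, shorten
  lessons, summarize my progress.
- Prefer active learning: mini-projects, code snippets, thought
  experiments, and authoritative sources.
"""
```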

In a live run, hard mode first produces a table of many setup questions with example answers, then assembles the final “learning tutor” prompt once the user completes the required inputs. Easy mode, by contrast, begins teaching immediately: it responds to a learner’s answer, expands it with structured explanations (including flashcard-style framing), then moves on to the next diagnostic question; the demo illustrates this with backpropagation and the distinction between validation and test data.

The takeaway is practical: both prompts achieve the same tutoring goal, but flipping early “switches” (question cadence, micro-lesson structure, gatekeeping, and default effort) changes the learning experience. Hard mode effectively becomes a meta-prompt generator—prompting that builds a custom prompt—while easy mode fills in defaults so learners can start right away. The result is a reusable approach for crafting AI tutors tailored to different knowledge levels and time constraints.

Cornell Notes

Two prompting templates turn an AI assistant into a personal tutor by treating prompting as a learning system. “Hard mode” builds a custom tutor prompt from scratch: it quizzes the learner methodically, gatekeeps until answers are clear, carries confirmed answers forward, and then outputs a structured blueprint using Purpose/Instructions/Reference/Output. “Easy mode” keeps the same tutoring mission but adds beginner-friendly constraints—single-question diagnostics and micro-lessons—so learning starts immediately. Micro-lessons escalate only after the learner scores above 80% on practice, and the prompt includes pacing controls and active-learning tasks. Together, the prompts demonstrate how small changes in structure and wording reshape the model’s behavior and learning flow.

Why does the “hard mode” prompt insist on a role and a mission at the top?

The role (“prompt coach”) is used to steer the conversation into the right semantic space so the dialogue flows toward building a tutoring blueprint. The mission is explicit: craft a prompt blueprint that quizzes methodically to diagnose the learner’s current level and then delivers progressively harder lessons. That mission then drives the rest of the structure—Purpose, Instructions, Reference, Output—so the model knows both the end state (a usable tutor prompt) and the order of operations (diagnose first, then escalate).

What workflow rules make hard mode feel like a “system” rather than a single response?

Hard mode includes operational constraints: follow the four sections in order (Purpose → Instructions → Reference → Output), ask the full question set, gatekeep until all answers are sufficiently clear, and retain confirmed answers so the user isn’t repeatedly re-queried. It also requires the model to assemble and display the final prompt blueprint only after the sections are completed, turning the interaction into a repeatable build-and-iterate process.
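
Those rules amount to a simple state machine. The sketch below is our own illustration of the gatekeeping logic, not code from the video; `ask` and `is_clear` are hypothetical stand-ins for the conversational turn and the model's judgment of answer quality.

```python
# Illustrative gatekeeping loop for the four-section workflow.
SECTIONS = ["Purpose", "Instructions", "Reference", "Output"]

def run_blueprint_interview(questions, ask, is_clear):
    """ask(q) returns the user's answer; is_clear(a) judges sufficiency."""
    confirmed = {}  # memory: confirmed answers carried forward
    for section in SECTIONS:          # strict order, no skipping ahead
        for q in questions[section]:
            if q in confirmed:        # never re-ask a confirmed answer
                continue
            answer = ask(q)
            while not is_clear(answer):           # gatekeep until clear
                answer = ask(f"Please clarify: {q}")
            confirmed[q] = answer
    return confirmed  # inputs used to assemble the final blueprint
```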

How does easy mode reduce friction while still keeping learning adaptive?

Easy mode preloads defaults and limits interaction complexity: it uses single-question mode and micro-lessons, and it restricts clarification to one pointed question at a time (capped at no more than five questions). Instead of waiting for a full setup, it starts teaching immediately after each diagnostic answer, then continues with the next question, so the learner gets value during the process, not after it.

What exactly triggers escalation to harder material in easy mode?

Escalation is tied to performance: after a practice task (including code snippets), the prompt adds an optional harder challenge only when the learner scores more than 80% on the prior practice. This creates a feedback loop that adjusts difficulty based on demonstrated understanding rather than time spent or intuition.
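
In code terms, the rule reduces to a threshold gate inside the micro-lesson cycle. This is our own sketch; the function names and the 0.0-1.0 score scale are assumptions for illustration.

```python
# Illustrative feedback loop: difficulty rises only when the prior
# practice score clears the 80% bar (scores normalized to 0.0-1.0).
def should_escalate(practice_score: float) -> bool:
    return practice_score > 0.80

def micro_lesson(diagnose, teach, practice, level: int) -> int:
    """One easy-mode cycle; returns the level for the next cycle."""
    diagnose(level)          # one pointed diagnostic question
    teach(level)             # micro-lesson on a single concept
    score = practice(level)  # practice task or code snippet, scored
    return level + 1 if should_escalate(score) else level
```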

Why are example references included as placeholders instead of pasting full prompts?

The prompt uses example “depth signals” to teach the model how deeply to think across different subjects (e.g., pricing strategy, content calendar, agentic monitoring). It avoids pasting full prompts to reduce the risk of task hijacking—where the model might start acting like a pricing-strategy prompt instead of building a tutoring blueprint. The placeholders provide the desired reasoning depth without overwhelming the system.

Review Questions

  1. In hard mode, what are the four sections of the prompt blueprint, and how does the workflow enforce their order?
  2. Compare the diagnostic and teaching loops in hard mode vs. easy mode. What changes when the prompt switches from full question sets to single-question micro-lessons?
  3. In easy mode, how does the prompt decide when to escalate difficulty, and what role does the 80% threshold play?

Key Points

  1. Hard mode builds a custom AI tutor prompt by forcing a structured blueprint workflow: Purpose, Instructions, Reference, Output.

  2. Hard mode uses gatekeeping and memory to ensure the learner answers the full diagnostic set clearly before the final blueprint is assembled.

  3. Easy mode starts teaching immediately by using single-question diagnostics and micro-lessons, reducing setup burden for beginners.

  4. Easy mode escalates difficulty only when practice performance exceeds 80%, creating an adaptive learning loop.

  5. Example references are used as depth placeholders to steer reasoning without risking task hijacking from pasted full prompts.

  6. Prompt “switches” (question cadence, effort defaults, mode behavior) can produce materially different tutoring experiences even when the overall goal stays the same.

Highlights

The prompts treat tutoring as a repeatable learning system: diagnose first, then teach and escalate based on performance.
Hard mode gatekeeps until the user completes a full question set, then outputs a ready-to-use learning blueprint.
Easy mode delivers micro-lessons one question at a time and only escalates after scoring above 80% on practice.
Reference examples are included as placeholders to signal depth while avoiding model hijacking from full pasted prompts.
