Steal My 2-Prompt Blueprint: Turn ChatGPT Into Your Personal AI Tutor (Live Demo)
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A two-prompt “blueprint” turns ChatGPT into a personal AI tutor by treating prompting as a learning system—not a one-off request. The core move is to structure prompts around a repeatable workflow (diagnose → teach → practice → escalate) and to use “switches” (hard vs. easy, agentic vs. lighter modes, effort levels, output formats) so the model adapts to a learner’s level and pace. That matters because small wording and sequencing changes can dramatically reshape what the model does next—whether it quizzes methodically and gatekeeps until answers are complete, or it delivers micro-lessons immediately with single-question diagnostics.
The “hard mode” prompt is built to generate a custom learning tutor prompt for the user. It starts by assigning a role (“prompt coach”) and a shared mission: craft a prompt blueprint that quizzes methodically to diagnose the learner’s current level and then delivers progressively harder lessons. The prompt’s architecture follows a four-part framework—Purpose, Instructions, Reference, Output—and it insists on workflow discipline: go section by section, don’t skip ahead, ask the full question set, gatekeep until all answers are sufficiently clear, and carry confirmed answers forward using memory so the user isn’t repeatedly re-asked. It also forces the user to specify learning “stance” and constraints (e.g., interrogative tone, allowed references, output length in words or tokens, and formatting like Markdown or JSON). To deepen the model’s reasoning, it includes reference examples as placeholders that signal the desired depth across different domains (e.g., pricing strategy, content calendar, agentic monitoring), without pasting full prompts that could hijack the model into the wrong task.
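The gatekeeping discipline described above can be sketched in code. This is an illustrative model, not the actual prompt: the section names follow the Purpose/Instructions/Reference/Output framework, but the questions, the `assemble_blueprint` helper, and the answer format are all stand-in assumptions.

```python
# Hypothetical sketch of the hard-mode workflow: collect answers section by
# section and refuse to assemble the final tutor prompt until every required
# field is filled. Questions shown here are illustrative placeholders.

SECTIONS = {
    "Purpose": ["What topic do you want to learn?", "What is your current level?"],
    "Instructions": ["What tone should the tutor use?", "How should lessons escalate?"],
    "Reference": ["Which sources or examples may the tutor draw on?"],
    "Output": ["Preferred format (Markdown or JSON)?", "Length limit in words or tokens?"],
}

def assemble_blueprint(answers: dict) -> str:
    """Gatekeep: raise if any section is incomplete; otherwise build the prompt."""
    for section, questions in SECTIONS.items():
        got = answers.get(section, [])
        if len(got) < len(questions) or any(not a.strip() for a in got):
            raise ValueError(f"Section '{section}' is incomplete; keep asking.")
    # Carry confirmed answers forward into the final structured prompt.
    parts = []
    for section, questions in SECTIONS.items():
        lines = [f"- {q} {a}" for q, a in zip(questions, answers[section])]
        parts.append(f"## {section}\n" + "\n".join(lines))
    return "\n\n".join(parts)
```

The point of the sketch is the order of operations: no blueprint text is generated until the full diagnostic set has clear answers, mirroring the "don't skip ahead" rule.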
The “easy mode” prompt targets learners who don’t want to complete a long setup. It keeps the same overall mission—diagnose current level and deliver progressively harder lessons—but adds constraints designed for quick consumption: single-question mode, micro-lessons, and a cap of no more than five questions at a time. Each micro-lesson follows a tight loop: ask a diagnostic question, teach a concept, give a practice task or code snippet, and optionally add a harder challenge only after the learner scores above 80% on the prior practice. It also introduces pacing and control features (batch up to three questions, shorten lessons, ask for progress summaries) and emphasizes active learning tactics like mini-projects, code snippets, thought experiments, and authoritative sources.
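The easy-mode escalation rule (advance only after a practice score above 80%) can be expressed as a small loop. A minimal sketch, assuming a numeric score per practice task; the function names and the session format are hypothetical, not from the original prompt.

```python
# Illustrative model of the easy-mode micro-lesson loop: one diagnostic
# question, a lesson, a practice task, then escalation only when the prior
# practice score clears the 80% threshold.

def micro_lesson_step(level: int, practice_score: float, threshold: float = 0.8) -> int:
    """Return the next difficulty level based on the prior practice score."""
    # Escalate only when the learner clearly passed the last practice task.
    return level + 1 if practice_score > threshold else level

def run_session(scores: list) -> list:
    """Track the difficulty level after each practice task in a session."""
    level, history = 1, []
    for score in scores:
        level = micro_lesson_step(level, score)
        history.append(level)
    return history
```

Because the level only moves on a passing score, a learner who stumbles simply repeats material at the same difficulty, which is the adaptive loop the prompt's constraints are designed to produce.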
In a live run, hard mode first produces a table of many setup questions and example answers, then assembles the final “learning tutor” prompt once the user completes the required inputs. Easy mode, by contrast, begins teaching immediately: it responds to a learner’s answer, expands it with structured explanations (including flashcard-style framing), then continues with the next diagnostic question—illustrated with backpropagation and validation/test data.
The takeaway is practical: both prompts achieve the same tutoring goal, but flipping early “switches” (question cadence, micro-lesson structure, gatekeeping, and default effort) changes the learning experience. Hard mode effectively becomes a meta-prompt generator—prompting that builds a custom prompt—while easy mode fills in defaults so learners can start right away. The result is a reusable approach for crafting AI tutors tailored to different knowledge levels and time constraints.
Cornell Notes
Two prompting templates turn an AI assistant into a personal tutor by treating prompting as a learning system. “Hard mode” builds a custom tutor prompt from scratch: it quizzes the learner methodically, gatekeeps until answers are clear, carries confirmed answers forward, and then outputs a structured blueprint using Purpose/Instructions/Reference/Output. “Easy mode” keeps the same tutoring mission but adds beginner-friendly constraints—single-question diagnostics and micro-lessons—so learning starts immediately. Micro-lessons escalate only after the learner scores above 80% on practice, and the prompt includes pacing controls and active-learning tasks. Together, the prompts demonstrate how small changes in structure and wording reshape the model’s behavior and learning flow.
Why does the “hard mode” prompt insist on a role and a mission at the top?
What workflow rules make hard mode feel like a “system” rather than a single response?
How does easy mode reduce friction while still keeping learning adaptive?
What exactly triggers escalation to harder material in easy mode?
Why are example references included as placeholders instead of pasting full prompts?
Review Questions
- In hard mode, what are the four sections of the prompt blueprint, and how does the workflow enforce their order?
- Compare the diagnostic and teaching loops in hard mode vs. easy mode. What changes when the prompt switches from full question sets to single-question micro-lessons?
- In easy mode, how does the prompt decide when to escalate difficulty, and what role does the 80% threshold play?
Key Points
1. Hard mode builds a custom AI tutor prompt by forcing a structured blueprint workflow: Purpose, Instructions, Reference, Output.
2. Hard mode uses gatekeeping and memory to ensure the learner answers the full diagnostic set clearly before the final blueprint is assembled.
3. Easy mode starts teaching immediately by using single-question diagnostics and micro-lessons, reducing setup burden for beginners.
4. Easy mode escalates difficulty only when practice performance exceeds 80%, creating an adaptive learning loop.
5. Example references are used as depth placeholders to steer reasoning without risking task hijacking from pasted full prompts.
6. Prompt “switches” (question cadence, effort defaults, mode behavior) can produce materially different tutoring experiences even when the overall goal stays the same.