Create Your Perfect PhD Supervisor with GPT 5
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing to his content.
Briefing
ChatGPT 5 is positioned as a PhD productivity tool that reduces mental overload by shifting routine work—planning, rewriting, and drafting—into purpose-built “modes.” The core pitch is simple: PhD success doesn’t require more raw brain power; it requires saving limited attention for high-impact research tasks while using AI to handle the administrative and cognitive grunt work that otherwise drains focus.
The most emphasized feature is “study mode.” When a supervisor’s feedback arrives—often a messy mix of comments, priorities, and next steps—study mode can convert it into an actionable plan. Instead of staring at a long list of revisions, users can paste in the feedback (including text copied from an email or transcript) and receive a structured, prioritized set of tasks. The example workflow turns supervisor notes into a two-week action plan broken down into step-by-step items, including day-by-day sequencing such as restructuring sections, revising introductions, and justifying sampling decisions. Study mode also supports iterative refinement: users can highlight a specific part of the plan and ask for expanded detail, producing a more granular roadmap without starting from scratch.
A second major lever is “thinking mode,” which is framed as the antidote to low-effort outputs. Auto-style responses are described as tending toward the “lowest effort” result, while thinking mode forces deeper processing before generating answers. In an academic use case, a draft method section is fed into thinking mode with instructions to identify missing elements and risks. The output is presented as a checklist of what needs attention—critical blockers, ambiguous terms that could confuse readers, and hidden assumptions embedded in the writing. The practical takeaway is that tasks requiring judgment (what’s unclear, what’s missing, what assumptions are being made) benefit from a mode that spends time reasoning rather than responding instantly.
Beyond modes, the workflow includes personalization and dictation. Custom instructions in settings let users tailor how ChatGPT interacts—choosing a tone such as “listener,” “thoughtful and supportive,” or a more direct “straight-shooting” style—plus preferences like how the assistant should address the user and what motivational framing works best (e.g., “carrot and stick”). For capturing content quickly, dictation is highlighted as a way to offload transcription and drafting effort. The transcript also recommends pairing dictation with Text Blaze, a shortcut tool for repetitive writing tasks. A common example is using a short code to trigger a “respond to this email” template, then dictating the message content and copying the resulting text into place.
Taken together, the guidance is to match the task to the right tool: use study mode for turning feedback into plans, thinking mode for analysis-heavy writing improvements, dictation and automation for repetitive communication, and personalization to keep the assistant aligned with individual working styles. The result is a workflow designed to keep researchers moving forward—without burning cognitive energy on routine, draining tasks.
Cornell Notes
ChatGPT 5 can be configured to reduce PhD workload by routing different tasks into specialized modes. Study mode turns supervisor feedback into prioritized, step-by-step action plans (including timelines like a two-week breakdown) and can expand details on demand. Thinking mode is recommended for higher-stakes writing and analysis, because it produces more thorough outputs by spending time reasoning—such as flagging ambiguous terms, hidden assumptions, and critical blockers in a draft method section. Custom instructions let users choose a tone (e.g., listener vs. direct) and motivational style, while dictation and Text Blaze shortcuts help automate repetitive email drafting. The overall aim is to conserve brainpower for research decisions and idea generation.
How does study mode help when supervisor feedback feels overwhelming?
Why is thinking mode preferred over “auto” for academic writing tasks?
What does personalization change in how ChatGPT interacts with a PhD student?
How can dictation and Text Blaze reduce repetitive PhD communication work?
What’s the practical rule for choosing which mode to use?
Review Questions
- When would study mode be most useful in a PhD workflow, and what kind of output should it produce?
- What types of writing problems are better handled by thinking mode, according to the transcript’s examples?
- How do custom instructions and dictation work together to reduce friction in day-to-day research tasks?
Key Points
1. Use study mode to convert supervisor feedback into prioritized, time-bounded action plans with step-by-step execution.
2. Ask for expansion on specific highlighted parts of a study plan to turn broad guidance into detailed next actions.
3. Prefer thinking mode for analysis-heavy academic writing tasks where judgment is required (e.g., spotting ambiguous terms and hidden assumptions).
4. Avoid relying on instant/auto-style responses for work that needs deeper reasoning and careful critique.
5. Personalize ChatGPT’s tone and motivational style through custom instructions so guidance matches individual preferences.
6. Use dictation to reduce typing and mental load for drafting, especially for repetitive communication like emails.
7. Pair dictation with Text Blaze shortcuts to automate common response workflows and speed up email drafting.