How I Ranked 1st at Monash University - 2 Mindset Shifts
Based on Justin Sung's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Treat exam grades as feedback about hidden gaps, not as a direct scoreboard of learning ability.
Briefing
Finishing a Master’s while working full-time and still ranking first came down to two mindset shifts: stop treating exam grades as a measure of learning ability, and become aggressively critical about what’s being learned. The payoff was not just better marks, but a more reliable way to close knowledge gaps before they show up under exam pressure.
The first shift starts with a problem: study time and exam outcomes didn't track predictably. In undergrad, more hours sometimes produced only marginal gains, or even worse results, while at other times lighter studying still led to strong grades. That uncertainty bred a familiar pre-exam feeling: confusion, anxiety, and a sense of "hoping" the test wouldn't ask the wrong things. Instead of treating grades as the scoreboard, he reframed exam results as a symptom of underlying knowledge gaps.
The practical method was to replace delayed, outcome-based feedback with frequent, objective self-testing. The goal was to enter exams already knowing what performance to expect, because the knowledge check had happened earlier and often. He recommends building practice exams with a study buddy using the marking criteria, sitting them under exam-like test conditions, and making them part of a regular revision cycle rather than rereading notes. Crucially, he warns against leaning on answer keys to get through practice. If confidence is low, attempt the question anyway, surface the mistakes, and then extract what went wrong: why the error happened and which related concepts are likely to fail next. Each mistake becomes several learning opportunities, turning revision into targeted repair instead of broad re-study.
He also describes a workflow during his Master’s: intensive studying early on, then shifting to intermittent testing and challenging himself before submitting assignments. When he couldn’t ask for direct grading feedback on drafts, he sought deep conversations with lecturers—explaining his understanding and probing for misunderstandings—so errors would surface in advance.
The second shift is about building a foundation that can handle future demands, not just current assessments. Early on, he optimized for exam performance by consuming information, integrating it, and regurgitating it—achieving consistent A-minus averages. But that approach collapsed in real-world settings during medical training, where exam knowledge didn’t transfer to patient care. The lesson: “good learning” requires skepticism toward what’s taught and preparation for what will be needed later.
That skepticism became a deliberate habit: be the “annoying person” who challenges ideas with humility and open-mindedness. In practice, it means asking probing questions, fact-checking while reading, and forming independent judgments rather than waiting to be corrected. He links this to higher-level learning processes associated with Bloom’s revised taxonomy—particularly the critical evaluation step.
A final bonus tip ties both shifts together: set aside 30 minutes a day for learning and reflection (“priority time”). The point isn’t short-term exam cramming; it’s carving out space to plan, think, and adjust the system so improvements compound over weeks rather than fading after the next deadline.
Cornell Notes
The core insight is that exam grades should be treated as a symptom, not a direct measure of learning ability. Instead of waiting for results, frequent, objective self-testing reveals knowledge gaps early—before forgetting accumulates and revision turns into relearning. The second mindset shift is to be critically skeptical while learning, because exam-focused regurgitation can fail when real-world application arrives. Together, regular testing and critical evaluation build a stronger foundation that supports future performance. A daily 30-minute “priority time” for learning and reflection helps turn these ideas into consistent action.
Why does treating exam grades as “the measure” of learning create problems?
What replaces delayed feedback in this approach?
How should mistakes be handled to maximize learning?
What does being the "annoying person" who challenges ideas mean in the learning context?
How can a learner practice critical skepticism without access to lecturers?
What role does daily reflection (“priority time”) play?
Review Questions
- How would you redesign your study schedule if exam results are treated as a symptom rather than a measure of learning?
- What specific behaviors turn “critical skepticism” into a practical routine during reading or lectures?
- Why might frequent testing reduce both anxiety and wasted study time compared with rereading notes?
Key Points
1. Treat exam grades as feedback about hidden gaps, not as a direct scoreboard of learning ability.
2. Increase learning reliability by testing yourself frequently under exam-like conditions rather than waiting for final results.
3. Build practice exams using marking criteria and do them in test conditions, ideally with a study buddy.
4. Don't use answer keys as a shortcut: attempt first, then analyze mistakes to extract targeted fixes.
5. Use mistakes to generate a focused revision map of related concepts likely to fail next.
6. Be critically skeptical while learning; challenge ideas with humility and open-mindedness to build transferable understanding.
7. Protect daily time for learning and reflection (30 minutes) so improvements compound instead of resetting after each deadline.