
How to Study using Obsidian

Liam Gower · 5 min read

Based on Liam Gower's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Use a two-phase workflow: capture lecture context during learning, then curate and connect notes afterward.

Briefing

Studying effectively in Obsidian hinges on splitting note-taking into two phases: fast, context-based capture during learning, then deliberate, connected “long-term knowledge management” afterward. The payoff is a living knowledge base where key concepts evolve over time and link back to the original contexts that shaped them—so understanding doesn’t stay trapped inside one lecture or one folder.

In the first phase, notes look familiar: a course folder with subfolders by week, and within each week, notes for each lecture. While watching “Introduction to Machine Learning,” the workflow mirrors typical study behavior—write down definitions, formulas, and explanations that seem salient in the moment. For example, while covering regression models, the notes include supervised learning framing, training data, the mathematical form for linear regression, cost functions (including what they mean with diagrams), and gradient descent. Markdown structure—subheadings and sections—helps the notes stay readable and searchable later, but the emphasis remains on capturing what matters during the learning session.
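
To make the capture phase concrete, here is a minimal sketch of what such a lecture note could look like in Markdown. The file path, headings, and formulas are hypothetical stand-ins following the topics mentioned above, not a transcript of the video's notes.

```markdown
<!-- Introduction to Machine Learning/Week 2/Regression Models.md (hypothetical path) -->

## Supervised Learning
The model learns from labeled examples: training data with known outputs.

## Linear Regression
$$f_{w,b}(x) = wx + b$$

## Cost Function
Mean squared error measures how far predictions fall from the targets:
$$J(w,b) = \frac{1}{2m}\sum_{i=1}^{m}\bigl(f_{w,b}(x^{(i)}) - y^{(i)}\bigr)^2$$

## Gradient Descent
Repeatedly nudge $w$ and $b$ in the direction that lowers $J(w,b)$.
```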

The second phase is where Obsidian’s connected-note approach becomes central. After a class or at the end of a week, the notes get reviewed with a new goal: identify the key concepts worth turning into durable “atomic” notes, then link them together like a personal Wikipedia. Instead of leaving “training data” as a one-off definition inside a lecture note, the workflow creates a dedicated note for training data and back-links it to where the idea first appeared. Even if this feels duplicative at the start, it enables future updates: when later lectures reference training data in a new way, the atomic note can be expanded or refined, while the backlinks preserve the original context.
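
As a sketch of the pattern (note names and wording are illustrative, not quoted from the video), the atomic note can start out very small:

```markdown
<!-- Training Data.md (hypothetical atomic note) -->
Labeled examples a supervised model learns from; the algorithm fits its
parameters so that predictions match the known outputs in this data.

First encountered in: [[Regression Models]] (Week 2 of Introduction to Machine Learning)
```

Because the lecture note can also link out to the atomic note, Obsidian's backlinks pane will list every lecture that references the concept without any extra bookkeeping.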

This curation-and-linking process also forces deeper understanding. When the cost function section grows dense, the workflow extracts a clear definition in the learner’s own words, then links it back to the lecture material. For gradient descent, the workflow demonstrates a “live” state: some parts are curated and defined, while other items remain placeholders for later refinement—such as noting that other gradient descent variants exist even if they weren’t covered yet. Crucially, the notes can link to specific sections within source notes, not just entire documents.
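
In Obsidian, linking to a specific section is done with heading links; a short sketch with hypothetical note and heading names:

```markdown
<!-- inside Gradient Descent.md (hypothetical) -->
Definition in my own words here; for the full derivation and diagrams see
[[Regression Models#Gradient Descent]].

<!-- [[Note#Heading]] points at a heading; [[Note#^block-id]] can point at a single block -->
```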

As the course progresses, the network starts to pay off. Gradient descent reappears in “multiple linear regression,” bringing new details like the normal equation as an alternative to gradient descent and practical guidance on convergence checks via the learning curve. Those additions get folded back into the gradient descent atomic note, so knowledge accumulates in one place rather than fragmenting across separate lecture notes. The result is a continuously updated concept map: local graphs summarize what was learned in a course (e.g., training data, cost function, linear regression, gradient descent) and show how related concepts connect (e.g., learning curve, batch gradient descent, normal equation). The central claim is practical: connected note-taking turns study notes into a system that supports retrieval, revision, and long-term understanding across any subject—not just data science.

Cornell Notes

The workflow divides studying into two phases: capture context during learning, then curate long-term knowledge afterward. Lecture notes are organized by course, week, and lecture, using Markdown to record definitions, formulas, and key ideas. Afterward, the system extracts durable “atomic” concept notes (e.g., training data, cost function, gradient descent) and links them with backlinks to the original lecture sections. Later lectures add new angles—like the normal equation and convergence checks—so the atomic notes evolve instead of staying fragmented across separate classes. This matters because it preserves context while building a connected knowledge graph that improves recall and understanding over time.

How does the workflow distinguish between “context-based note-taking” and “long-term knowledge management”?

Context-based note-taking happens while learning: notes are captured in the moment inside a structured course folder (course → week → lecture). The goal is to record what seems important—definitions, formulas, and explanations—without over-optimizing structure. Long-term knowledge management begins after the session: the notes get reviewed to identify key concepts worth extracting into atomic notes, then those atomic notes are connected using links and backlinks to the lecture sources (including links to specific sections). This turns one-off lecture understanding into a reusable, updateable knowledge base.
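
A possible vault layout for the capture phase, with illustrative folder and file names:

```markdown
- Introduction to Machine Learning/
  - Week 1/
    - What is Machine Learning.md
  - Week 2/
    - Regression Models.md
    - Multiple Linear Regression.md
```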

Why create an atomic note for a concept like “training data” even if it duplicates what’s already written in a lecture note?

Duplication is temporary and strategic. The atomic note becomes the stable home for the concept’s definition and future refinements. The lecture note can link back to the atomic note via backlinks, preserving where the idea came from. When later material references training data differently, the atomic note can be updated while the backlinks keep the original context accessible—so the learner can see both the refined definition and the circumstances under which it was first learned.

What does “curating” look like when a lecture section is dense, such as cost functions?

Curating means extracting a clearer definition and organizing it as its own note. In the workflow, cost function content is turned into a dedicated atomic note with a definition written in the learner’s own words, plus a list of common cost functions learned so far (e.g., mean squared error). The atomic note then links back to the lecture material, so the learner can retrieve the deeper context while keeping the concept itself concise and durable.
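
A sketch of what that curated note could contain; the wording and the mean squared error formula are my own illustration of the pattern rather than the video's exact text:

```markdown
<!-- Cost Function.md (hypothetical atomic note) -->
A measure of how wrong the model's predictions are; training means finding
parameters that minimize it.

Common cost functions so far:
- Mean squared error: $J(w,b) = \frac{1}{2m}\sum_{i=1}^{m}(f_{w,b}(x^{(i)}) - y^{(i)})^2$

Context: [[Regression Models#Cost Function]]
```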

How does the workflow handle partial understanding, such as gradient descent variants not yet covered?

It supports incremental refinement. Gradient descent is turned into an atomic note with a solid definition and some curated details, while other items are left as placeholders or partial entries. For example, it records that other types of gradient descent exist even if they weren’t covered yet, and the note is set up so future lectures can revisit and update those sections. This prevents the knowledge base from being “frozen” at the time of first learning.
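
A sketch of such a partially curated note, with hypothetical names and a standard form of the update rule standing in for whatever the lecture used:

```markdown
<!-- Gradient Descent.md (hypothetical, partially curated) -->
Iteratively update the parameters in the direction that reduces the cost function:
$$w := w - \alpha \frac{\partial J(w,b)}{\partial w}$$

Variants:
- Batch gradient descent (see [[Regression Models#Gradient Descent]])
- Other variants exist (placeholder, not covered yet)
```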

What concrete updates happen when gradient descent is revisited in “multiple linear regression”?

New details get merged into the existing gradient descent atomic note. The workflow adds that the normal equation can serve as an alternative to gradient descent, along with a practical condition for choosing between them based on the number of features (a threshold around 10,000). It also adds guidance on convergence checking: using the learning curve to determine whether gradient descent is approaching its optimum. These additions reduce fragmentation by keeping gradient descent knowledge centralized.
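
Continuing the hypothetical note from the earlier sketch, the merged additions might look like this; the 10,000-feature threshold follows the summary above and is not a verified quote:

```markdown
<!-- Gradient Descent.md, additions after "Multiple Linear Regression" -->
Alternatives:
- [[Normal Equation]]: solves for the parameters directly instead of iterating;
  which approach to use depends on the number of features (threshold around 10,000).

Convergence:
- Plot the [[Learning Curve]] (cost versus iterations) to check whether gradient
  descent is approaching its optimum.
```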

How does the local graph reinforce learning in this system?

The local graph visualizes the concept network built from links and backlinks. It can summarize what the learner took away from a course (e.g., training data, cost function, linear regression, gradient descent) and then show the connected concepts around a specific topic (e.g., learning curve, multiple linear regression, normal equation, batch gradient descent). Over time, the graph becomes a map of how ideas relate, helping the learner reason about connections rather than treating notes as isolated pages.

Review Questions

  1. What steps would you take after a lecture to convert context notes into long-term atomic notes with backlinks?
  2. Give one example of how later material should update an existing atomic note rather than creating a new, separate fragment.
  3. How does linking to specific sections (not just whole notes) improve retrieval and context preservation?

Key Points

  1. Use a two-phase workflow: capture lecture context during learning, then curate and connect notes afterward.
  2. Organize initial notes by course → week → lecture so retrieval stays simple while studying.
  3. Extract durable atomic notes for key concepts (e.g., training data, cost function, gradient descent) and connect them with links and backlinks.
  4. Write concept definitions in your own words during curation to force understanding, not just transcription.
  5. Link atomic notes back to the exact lecture sections where the ideas were learned to preserve context.
  6. When later lectures add new details, update the existing atomic note so knowledge accumulates in one place.
  7. Use Obsidian's local graph to see how concepts connect and to reinforce what you learned at both course and topic levels.

Highlights

The workflow’s core move is turning lecture-specific notes into atomic concept notes that can be updated as understanding grows.
Backlinks aren’t just decoration: they preserve the original learning context while letting definitions evolve over time.
Gradient descent becomes a living hub—first defined from regression models, then expanded with normal equation alternatives and convergence checks from multiple linear regression.
Local graphs act like a concept map, summarizing course takeaways and showing how related ideas connect across weeks.
