Flexible Systems for Synthesis in Roam Research with Professor Joel Chan
Based on Robert Haisfield's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Briefing
A synthesis workflow built around Roam Research is aimed at turning the “black box” of literature review into a learnable, debuggable process—so researchers can trace claims back to evidence, revisit earlier conclusions as new context arrives, and share a reusable structure with others. The core idea is to separate notes into distinct roles—observation notes (grounded, contextualized results), synthesis notes (claims that generalize or interpret across observations), and context snippets (the specific details—tables, quotes, figures, page locations—that justify each observation). That separation creates a disciplined path from reading to insight, while still leaving room for creativity and later correction.
Joel Chan, an assistant professor of information science at the University of Maryland, frames knowledge synthesis as a “hidden dark art” that many researchers struggle to systematize. His broader research focuses on tools and social practices for creative knowledge work, especially how people integrate information across disciplines to form coherent problem frames, theories of change, and actionable recommendations. In that setting, synthesis isn’t just summarizing papers; it’s constructing something new—like Darwin’s natural selection—by reorganizing relationships among observations into an explanation or design argument. He contrasts “assemblage” reviews that list what different authors found with synthesis that produces a new viewpoint, model, or decision-relevant set of conditions.
The workflow’s practical target audience is researchers who feel stuck between reading too many papers and reaching the endpoint where a literature review yields promising angles of attack. Chan’s goal is to publish a conceptual model and practical guide that lowers the barrier for others to replicate the method, adapt it to their constraints, and stress-test it. He emphasizes that the system should not force copy-paste behavior; instead, it should make the rationale and theoretical grounding visible enough that users can modify the process when it doesn’t fit.
A key mechanism is the “evidence ladder” from specific to general: observation notes capture results in past tense and context-specific language (e.g., a contact-tracing study finding lower secondary attack rates for children under 10). Synthesis notes then state a more general claim (e.g., children are about half as likely to become infected given equivalent exposure), while also allowing counter-claims when evidence conflicts. This structure supports both rigor and creativity: it slows down premature conclusions and makes it possible to backtrack to the underlying observations when new studies, different contexts, or longer time horizons change the interpretation.
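To make the evidence ladder concrete, the three note types and their traceability can be sketched as a small data model. This is an illustrative sketch, not part of Chan's workflow: the class and field names (ContextSnippet, ObservationNote, SynthesisNote, trace) are assumptions chosen to mirror the roles described above.

```python
from dataclasses import dataclass


@dataclass
class ContextSnippet:
    """Concrete justification: a quote, table, figure, or page location."""
    source: str
    detail: str


@dataclass
class ObservationNote:
    """Past-tense, context-specific result, grounded in snippets."""
    text: str
    context: list  # list of ContextSnippet


@dataclass
class SynthesisNote:
    """Generalized claim that cites one or more observations."""
    claim: str
    observations: list  # list of ObservationNote


def trace(synthesis: SynthesisNote):
    """Walk a synthesis claim back down the ladder to its evidence."""
    return [(obs.text, snip.source)
            for obs in synthesis.observations
            for snip in obs.context]


snippet = ContextSnippet("Contact-tracing study",
                         "Table of secondary attack rates by age")
obs = ObservationNote(
    "Children under 10 showed lower secondary attack rates.", [snippet])
syn = SynthesisNote(
    "Children are about half as likely to become infected "
    "given equivalent exposure.", [obs])
```

Calling `trace(syn)` returns the observation text paired with its source, which is the "backtracking" move the workflow relies on when new evidence forces reinterpretation.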
Questions act as the navigational layer. Instead of treating notes as a flat library, the system organizes work around open questions; sources and observations feed those questions, and syntheses accumulate as provisional answers. Speculation is treated as a first-class activity too, but it lives in linked references and question-linked spaces until it earns grounding. Chan also highlights a Roam-specific implementation detail: indentation and block structure can represent argument steps even when doing so reduces certain query conveniences, so the system balances writing ergonomics with retrieval.
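The question-first organization, including the gating of speculation, can also be sketched in code. Again a hypothetical illustration: the names (Question, Note, grounded, provisional_answers) are assumptions, not Chan's terminology; the point is that speculation stays attached to its question but is excluded from answers until it is grounded in evidence.

```python
from dataclasses import dataclass, field


@dataclass
class Note:
    text: str
    grounded: bool  # True once backed by observation notes


@dataclass
class Question:
    """An open question acting as the navigational layer."""
    prompt: str
    notes: list = field(default_factory=list)

    def add(self, text: str, grounded: bool = False):
        """Attach a synthesis or a speculation to this question."""
        self.notes.append(Note(text, grounded))

    def provisional_answers(self):
        """Only grounded syntheses count as (provisional) answers."""
        return [n.text for n in self.notes if n.grounded]


q = Question("How does age affect susceptibility given equivalent exposure?")
q.add("Maybe school closures explain the difference.")  # speculation
q.add("Children are roughly half as susceptible.", grounded=True)
```

Here `q.provisional_answers()` surfaces only the grounded claim, while the speculation remains linked to the question for later stress-testing.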
Overall, the approach aims for long-term reuse and collective intelligence: other researchers can start from a structured subset of questions, observations, syntheses, and context rather than extracting everything from scratch. The method is presented as simple in principle but powerful in practice—designed to help ideas “marinate” over time until they crystallize into claims worth sharing and revisiting.
Cornell Notes
The synthesis model centers on a disciplined pipeline from reading to insight: observation notes (contextualized, past-tense results) feed synthesis notes (generalized claims or interpretations), and context snippets (tables, quotes, figures, page locations) ground each observation. Questions sit at the top of the structure, acting as the navigational layer that determines what gets collected and how provisional answers evolve. This separation reduces the temptation to rush to conclusions and makes it possible to debug past reasoning by tracing syntheses back to their evidence and context. The approach is designed for long-term reuse—so future work (including by other people) can recontextualize, challenge, and extend earlier conclusions as new evidence arrives.
How does the workflow distinguish observation from synthesis so that claims remain traceable?
Why are context snippets treated as essential rather than optional “extra detail”?
What role do questions play compared with tags or concept pages?
How does the system handle speculation without letting it contaminate evidence-based claims?
What is the practical benefit of using block references and indentation to represent argument structure?
How does the model support long-term reuse and collective intelligence?
Review Questions
- What specific note types (observation, synthesis, context snippet, question) would you create for a new literature review, and what would each contain?
- How would you backtrack from a synthesis claim to verify whether it still holds under new evidence or different study conditions?
- In what ways could the question-first structure change how you decide what to read next and when to stop reading?
Key Points
1. Treat synthesis as a pipeline: observation notes (contextualized results) must feed synthesis notes (generalized claims), and each observation should be grounded with context snippets.
2. Use questions as the top-level organizing layer so reading and note-taking are driven by what remains unknown, not by a static concept library.
3. Add friction to prevent premature conclusions: require evidence and context before upgrading an idea into a synthesis note.
4. Store concrete justification next to claims (tables, quotes, figures, page locations) so future reinterpretation is possible when methods, samples, or time horizons change.
5. Represent argument structure in the note graph (e.g., indentation/sibling blocks) to preserve traceability from claims back to sources.
6. Allow speculation to exist, but keep it linked to questions and separate from evidence-based syntheses until grounded.
7. Design for long-term reuse by making claims debuggable, so students and collaborators can inherit reasoning, not just conclusions.