
Literature Review with Litmaps, July 2024 Webinar

Litmaps · 6 min read

Based on Litmaps' video on YouTube. If you find this content useful, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Litmaps frames literature review as a loop: search, evaluate, read/analyze, organize, write, then return to search as new gaps appear.

Briefing

Litmaps positions literature review as a structured, visualization-driven workflow for finding the most relevant papers fast—then staying organized and expanding coverage without getting buried in thousands of results. The core idea is to start with a clear research question, identify one strong “starting paper,” and use Litmaps to generate recommendations based on how papers interconnect through citations and references. Instead of treating search as a keyword lottery, Litmaps turns it into a repeatable strategy that helps researchers prioritize what to read first, especially when entering a new or interdisciplinary field.

The webinar walks through the standard literature review steps—defining the topic, searching for literature, evaluating sources, reading and analyzing, organizing notes, and writing with proper citation—but stresses that these steps often loop rather than follow a straight line. That looping is where researchers typically lose track. Litmaps is presented as a way to prevent that drift by combining three functions: (1) relevant literature discovery, (2) source evaluation support before full reading, and (3) organization using tags plus lightweight citation/export features.

For “simple search” in a brand-new area, the recommended workflow is straightforward: create a workspace for the project, run a quick search by keywords or by selecting a known paper, then click “explore related articles.” Litmaps builds an interactive map where recommended papers are ranked by “top shared citations and references”—meaning the system looks for papers that are highly interconnected with the input paper(s). The visualization is not decorative; it’s used to triage. Node connections help identify papers most linked to the starting set, while the map’s axes (publication date on the x-axis and citation count on the y-axis) and node size (reference count by default) make it easy to spot historically influential work (top-left) versus newer developments (right side). The webinar’s example on forest carbon sequestration and biodiversity illustrates how a highly connected review-like paper (“10 golden rules for reforestation…”) can serve as a launch point for a broader network of related literature.

As researchers move from discovery to management, Litmaps’ tagging system is framed as the practical antidote to overwhelm. Papers can be grouped into subtopics as they’re reviewed, so the growing collection stays navigable and can later support more comprehensive searching.

The second major workflow—“comprehensive searching”—targets the moment when researchers already have a curated library and need to check for what’s missing, including newer work. Litmaps supports this by importing large sets of papers (from files or existing collections), then running recommendation generation across the entire library. Filters help narrow results: date ranges to focus on recent years, keyword filters to target specific terms in titles/abstracts, and algorithm switching when citation networks are sparse. A “monitor” feature adds automation by re-running searches on a schedule (weekly or monthly) as the Litmaps database updates.

Finally, the webinar addresses practical constraints and expectations: Litmaps relies on open-access metadata aggregated from Semantic Scholar, OpenAlex, and Crossref (with a corpus reported as 270M+ records, updated weekly). It exports citations/metadata and supports basic citation workflows, while more complex bibliography formatting is better handled by dedicated reference managers. Differences in citation counts across tools are attributed to how citations are calculated and deduplicated. Pricing is split into Free, Pro, Team, and Institutional plans, with Pro enabling advanced inputs/filters and offering performance advantages via a priority queue for computationally heavy searches. A discount code and country-based pricing are also mentioned, along with the note that sanctioned countries may be limited to Free.

Cornell Notes

Litmaps is presented as a literature review system that turns paper discovery into a structured workflow: start with a research topic, pick a strong starting paper, then generate recommendations using citation/reference interconnections. The interactive “map” helps triage results by showing connections, publication date, citation counts, and reference counts, so researchers can quickly find foundational work or newer advances. As collections grow, tagging keeps papers organized by subtopic, reducing the risk of losing track. For deeper coverage, Litmaps supports comprehensive searching by importing a library of papers, applying date/keyword filters, and using different recommendation algorithms when citation networks are thin. A monitor feature can automatically alert researchers to newly published, connected papers on a weekly or monthly schedule.

How does Litmaps recommend papers when starting a new literature review?

The simple workflow begins with a quick search (keywords or a known paper), followed by selecting a starting paper and clicking “explore related articles.” Litmaps generates recommendations by ranking papers that are highly interconnected with the input set, using “top shared citations and references.” In the map, recommended papers appear as nodes connected through highlighted lines, reflecting shared citation/reference relationships rather than direct citation counts alone.
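The shared-link idea can be illustrated with a toy sketch. This is not Litmaps' actual algorithm, and the paper IDs and link sets are hypothetical; it only shows the general principle of ranking candidates by how many cited/citing papers they share with a starting set.

```python
# Toy sketch (not Litmaps' actual algorithm): rank candidate papers by how
# many citation/reference links they share with a starting paper.

def shared_link_score(candidate_links, input_links):
    """Count links (cited or citing papers) shared with the input set."""
    return len(candidate_links & input_links)

# Hypothetical paper IDs, each mapped to the set of papers it links to.
links = {
    "start": {"a", "b", "c"},
    "cand1": {"a", "b", "x"},   # shares 2 links with the start paper
    "cand2": {"x", "y"},        # shares 0
    "cand3": {"a", "b", "c"},   # shares 3
}

input_links = links["start"]
ranked = sorted(
    (p for p in links if p != "start"),
    key=lambda p: shared_link_score(links[p], input_links),
    reverse=True,
)
print(ranked)  # → ['cand3', 'cand1', 'cand2']
```

A candidate that shares many links with the starting set sits in the same citation neighborhood, which is why it ranks above a paper that merely has a high overall citation count.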

What does the Litmaps visualization help a researcher do beyond listing papers?

The map supports prioritization. Connections indicate how many links a paper has to the input set (more connections generally signal stronger relevance). The axes provide quick context: the x-axis is publication date (older to newer, left to right), and the y-axis is citation count (lower to higher, bottom to top). Node size is tied to reference count by default, and the legend allows changing what node size represents. This lets researchers target highly cited historical papers in the top-left or newer work on the right side.

Why are tags emphasized during the literature review process?

Tags are presented as the organization layer that prevents a growing set of papers from becoming unmanageable. As researchers review recommendations, they can assign papers to subtopic categories (e.g., a niche aspect within a broader theme). The goal is to maintain curated lists that are useful both for personal tracking and for later comprehensive searches that depend on the quality of the imported library.

What changes when moving from simple search to comprehensive search?

Comprehensive search assumes the researcher already has a curated library (possibly hundreds of papers). Litmaps imports that library and generates recommendations across the entire set, then uses filters to narrow results—especially date filters (e.g., 2022 and later) and keyword filters (matching terms in titles/abstracts). This reduces noise and helps identify what might be missing, including recent developments.
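The date and keyword filters described above can be sketched as a small function. The data and function name here are hypothetical, not the Litmaps API; the sketch only illustrates narrowing a recommendation list by a year cutoff and a term matched against titles and abstracts.

```python
# Illustrative sketch (hypothetical data, not the Litmaps API): narrow a
# recommendation list with a date cutoff and a keyword filter on title/abstract.

papers = [
    {"title": "Forest carbon storage", "abstract": "long-term stocks", "year": 2019},
    {"title": "Reforestation and biodiversity", "abstract": "soil carbon recovery", "year": 2023},
    {"title": "Urban heat islands", "abstract": "city microclimates", "year": 2023},
]

def filter_papers(papers, min_year=None, keyword=None):
    out = []
    for p in papers:
        if min_year is not None and p["year"] < min_year:
            continue  # date filter: drop anything older than the cutoff
        if keyword is not None:
            text = (p["title"] + " " + p["abstract"]).lower()
            if keyword.lower() not in text:
                continue  # keyword filter: require a match in title/abstract
        out.append(p)
    return out

recent_carbon = filter_papers(papers, min_year=2022, keyword="carbon")
print([p["title"] for p in recent_carbon])  # → ['Reforestation and biodiversity']
```

Applying both filters together is what keeps a recommendation run over hundreds of imported papers from drowning the researcher in results.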

When would a researcher switch recommendation algorithms in Litmaps?

The webinar notes that citation/reference interconnection works best when papers are well connected. For newer or poorly connected papers with few citations, interconnection-based recommendations may be less effective. In those cases, Litmaps offers an alternative algorithm based on semantic similarity of titles and abstracts (AI-based similarity search), which can surface relevant papers even when citation links are sparse.
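The fallback idea can be illustrated with a crude stand-in. Litmaps' actual similarity search is AI-based; this bag-of-words overlap is only a minimal sketch of matching on title/abstract text instead of citation links, with hypothetical paper IDs.

```python
# Crude stand-in for text-similarity recommendation: when citation links are
# sparse, compare title/abstract text instead. (Litmaps uses its own AI-based
# similarity; this word-overlap measure only illustrates the idea.)

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Overlap between two token sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

query = tokens("reforestation effects on biodiversity")
candidates = {
    "p1": tokens("golden rules for reforestation and biodiversity recovery"),
    "p2": tokens("deep learning for image classification"),
}
best = max(candidates, key=lambda p: jaccard(query, candidates[p]))
print(best)  # → p1
```

Because this comparison needs no citation data at all, it can surface a brand-new paper with zero citations, which is exactly the case where interconnection-based ranking fails.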

What is “monitor” and how does it relate to staying up to date?

Monitor is described as an automated scheduled search. After enabling it, Litmaps re-runs the researcher’s search on a weekly or monthly cadence against an updated database. If new papers appear that connect to the topic, the system alerts the researcher without requiring manual re-searching each time.
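Conceptually, a scheduled monitor reduces to diffing each new search run against the previous one. This helper is hypothetical, not the Litmaps feature itself; it only shows the set difference behind "alert me about newly connected papers."

```python
# Sketch of the idea behind a scheduled monitor (hypothetical helper, not the
# Litmaps feature): diff the current run's results against the previous run's
# and report anything not seen before.

def new_papers(previous_ids, current_ids):
    return current_ids - previous_ids

last_run = {"paper_a", "paper_b"}
this_run = {"paper_a", "paper_b", "paper_c"}
print(sorted(new_papers(last_run, this_run)))  # → ['paper_c']
```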

Review Questions

  1. In a new field, what combination of starting paper selection and map-based prioritization would help you decide what to read first?
  2. How do date and keyword filters change the output of a comprehensive search compared with relying on interconnections alone?
  3. What trade-offs might lead you to use semantic similarity (title/abstract) instead of citation/reference interconnection for recommendations?

Key Points

  1. Litmaps frames literature review as a loop: search, evaluate, read/analyze, organize, write, then return to search as new gaps appear.
  2. A simple starting strategy is to find one well-connected review/top-level paper, then use “explore related articles” to generate recommendations via shared citations and references.
  3. The interactive map supports triage using connections, publication date (x-axis), citation count (y-axis), and reference count (node size by default).
  4. Tagging is the mechanism for keeping large paper collections organized by subtopic, enabling later, higher-quality comprehensive searches.
  5. Comprehensive searching imports an existing library and uses date and keyword filters to narrow results to what’s missing or most recent.
  6. The monitor feature automates updates by re-running searches on a schedule as the database grows.
  7. Litmaps’ coverage is based on open-access metadata aggregated from Semantic Scholar, OpenAlex, and Crossref, so citation counts and availability can differ from tools like Google Scholar.

Highlights

Litmaps’ recommendations are driven by “top shared citations and references,” turning a single starting paper into a network of highly interconnected literature.
The map’s axes (date and citation count) plus connection density make it possible to quickly target foundational work versus the newest research.
Comprehensive searching works by importing a curated library, then filtering by date range and keyword to reduce overwhelm.
Monitor automates staying current by checking for newly published, connected papers on a weekly or monthly schedule.
Citation counts can differ across platforms because of deduplication and how citations are calculated, not necessarily because one source is “wrong.”
