Find Research Gaps with Litmaps

Litmaps · 5 min read

Based on Litmaps’ video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

Start with a focused research question and use Litmaps’ “Quick search” to find a specific seed paper before judging whether a gap exists.

Briefing

Litmaps helps researchers spot three common kinds of research gaps—underexplored topics, interdisciplinary overlaps that don’t get connected, and missing papers within an existing literature set—by turning starting references into expanding “literature maps” of highly connected work. The practical payoff is straightforward: better gap detection can steer what to study next, strengthen a literature review, and reduce the risk of overlooking relevant publications.

The first gap type is the broadest: an area where no clear body of literature answers a specific question. The workflow starts by choosing a focused research question and then using Litmaps’ “Quick search” to find a valid starting paper using keywords (or a known paper title). From that seed, Litmaps generates a lit map that places the starting article in a shaded circle and adds recommended papers based on how strongly they connect to it. Because the map emphasizes interconnectedness—not just citation lists—researchers can quickly judge whether the surrounding literature is thin. In the Zoroastrianism example, the map ends up with only six articles once keyword filtering is applied to require Zoroastrianism in titles or abstracts, signaling that the intersection with ecology/environmentalism may be relatively underexamined.
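The keyword-filtering step described above can be sketched in a few lines of Python. This is a minimal illustration of the idea only, not Litmaps’ actual implementation; the `Paper` records and the keyword are invented for the example.

```python
# Keep only papers whose title or abstract mentions the key concept,
# mimicking the "require keyword in title or abstract" filter.
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str

def keyword_filter(papers, keyword):
    """Return papers that mention `keyword` in the title or abstract."""
    kw = keyword.lower()
    return [p for p in papers
            if kw in p.title.lower() or kw in p.abstract.lower()]

# Hypothetical map contents, not real Litmaps results.
papers = [
    Paper("Zoroastrian ritual and ecology",
          "Explores Zoroastrianism and attitudes toward nature."),
    Paper("Urban environmentalism",
          "City-level policy; no mention of the religion."),
]

thin = keyword_filter(papers, "zoroastrianism")
print(len(thin))  # a small count relative to the unfiltered map hints at a thin literature
```

The signal is the same as in the transcript’s example: if a large unfiltered map collapses to a handful of papers once the strict keyword constraint is applied, the intersection is probably underexplored.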

The second gap type targets intersectionalities: cases where two fields study similar phenomena but use different terms, leaving overlaps hard to find. Litmaps supports this by letting researchers build separate maps for each domain, tag papers by topic, and then visualize both sets together. A shared example centers on “moral injury” in veterans versus “moral distress” in nurses and other healthcare workers. After collecting and tagging papers from each side, the combined visualization shows that papers in the veterans stream tend to cite within that stream, while links to the healthcare space are scarce. To test the connection further, the workflow narrows one side (e.g., healthcare workers) and adds a keyword constraint like “veterans”; the small number of resulting cross-domain papers becomes another indicator that the literature is not well integrated.

The third gap type is internal to a researcher’s own work: missing links or missing papers that should have been captured when assembling a literature library. Instead of treating gaps as entirely new questions, this approach uses an existing paper’s reference list as a starting point and then uses Litmaps to find connected literature that might not have been included. The transcript’s example uses a well-cited 2017 paper on urban light pollution and nocturnally migrating birds. By uploading the paper’s references into Litmaps and limiting results to 2017 or earlier, researchers can check whether additional relevant papers appear that could plausibly have been cited at the time. This doesn’t prove an omission, but it’s a useful stress test during literature reviews.
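The year-restricted audit above can be sketched as a simple filter: keep candidate papers published no later than the target paper, then drop anything it already cites. The titles and years here are invented placeholders, not the actual references from the 2017 paper.

```python
# Hypothetical candidate papers surfaced as "connected literature".
candidates = [
    ("Light pollution and bird migration corridors", 2015),
    ("Nocturnal migration and city skyglow", 2016),
    ("Skyglow trends in North America", 2019),  # too recent to have been cited in 2017
]

# Titles already present in the target paper's reference list.
already_cited = {"Nocturnal migration and city skyglow"}

TARGET_YEAR = 2017  # publication year of the paper being audited

plausible_omissions = [
    (title, year) for title, year in candidates
    if year <= TARGET_YEAR and title not in already_cited
]
print(plausible_omissions)  # papers that could plausibly have been cited at the time
```

As the briefing notes, survivors of this filter are only candidates for review; the check suggests a possible omission but does not prove one.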

Finally, Litmaps includes a “stay up to date” monitor feature. Enabling monitoring can alert researchers when new papers are added to the database that connect to their existing literature library—helping keep the gap-finding process current as new publications arrive.

Cornell Notes

Litmaps can identify research gaps by building “lit maps” from one or more starting papers and then expanding, filtering, and visualizing the connected literature. For underexplored topics, researchers seed the map with a focused paper and use keyword filters to see whether few recommendations remain, as in the Zoroastrianism–ecology example where the map shrinks to six articles. For interdisciplinary gaps, researchers create separate tagged maps for each field (e.g., moral injury in veterans vs. moral distress in nurses) and then visualize the overlap; sparse cross-links suggest a missing bridge between domains. For gaps inside an existing literature set, researchers upload references from a key paper and check for additional connected papers, optionally restricting by publication year, to see what might have been overlooked. Monitoring can keep the search alive as new papers enter the database.

How can a researcher tell whether a topic is genuinely underexplored rather than just hard to search?

Seed Litmaps with a highly specific starting paper using “Quick search” and relevant keywords, then generate a lit map of recommended papers based on connectivity. Next, apply the keyword filter (via the target cursor) so results must mention the key concept in the title or abstract. In the Zoroastrianism example, the map initially contains some related work, but after filtering for Zoroastrianism the recommendations drop sharply—an early signal that the intersection with ecology/environmentalism may be thin.

What makes an “intersectionality gap” different from a generic research gap?

Intersectionality gaps arise when two disciplines study similar phenomena but label them differently, so overlap stays hidden. The workflow is to build separate maps for each side of the intersection, tag papers by domain, and then visualize both tags together. If the combined visualization shows that papers mostly cite within their own cluster—rather than linking across clusters—that pattern suggests the literature hasn’t been integrated.

How does Litmaps help test whether two fields are actually connected in the literature?

After tagging papers from each domain, researchers can select both tags and use “visualize” to create a combined lit map. Then they can hover over papers to observe citation/connection patterns. The transcript’s moral injury vs moral distress case shows veterans-related papers rarely connecting to healthcare-related papers. A second test narrows one side using keyword search (e.g., adding “veterans” while focusing on healthcare workers); few cross-domain results reinforce the gap signal.

How can Litmaps be used to find missing papers inside an existing literature review?

Use an existing paper’s references as the starting set. In the example, the researcher uploads a BibTeX file of references (via “upload file”) and runs “explore related articles,” which searches for connected literature that may not be in the reference list. The map’s outer ring provides recommendations; researchers then review and decide what might plausibly have been cited.

Why limit results to a publication year when checking for missing references?

Limiting to the publication year (or earlier) turns the exercise into a time-appropriate plausibility check. The transcript describes a “cheating” experiment: for a 2017 paper, results are restricted to 2017 or earlier to ask whether additional relevant papers existed at the time. If many plausible candidates appear, it suggests the reference set may be incomplete—though it still doesn’t prove an actual omission.

How can researchers keep gap-finding from going stale during a long literature review?

Enable the “stay up to date” monitor. By opening “Monitor” and clicking enable, Litmaps can alert researchers when new papers are added that connect to their existing literature library. That keeps the search current up to submission or project completion.

Review Questions

  1. When would keyword filtering be most useful for diagnosing an underexplored research area in Litmaps?
  2. Describe a step-by-step method for detecting an interdisciplinary gap using tags and visualization.
  3. How does the “upload file” approach differ from starting with “Quick search,” and what kind of gap does each method target?

Key Points

  1. Start with a focused research question and use Litmaps’ “Quick search” to find a specific seed paper before judging whether a gap exists.
  2. Use keyword filtering to test whether recommendations remain when the topic constraint is strict; a sharp drop can indicate underexplored intersections.
  3. For interdisciplinary gaps, build separate tagged maps for each field and then visualize them together to check whether cross-domain connections are sparse.
  4. Use keyword constraints (e.g., adding “veterans” while focusing on nurses) as a second check for whether overlap exists in the literature.
  5. To find missing papers within your own work, upload an existing paper’s reference list and review Litmaps’ recommendations for plausible omissions.
  6. Run year-restricted checks (e.g., limiting to 2017 or earlier) to make missing-reference searches temporally realistic.
  7. Enable “stay up to date” monitoring so new connected papers trigger alerts during an ongoing literature review.

Highlights

Litmaps’ lit maps emphasize interconnected papers, not just citation lists, making it easier to spot when a research area has few surrounding connections.
Sparse cross-links in a combined visualization (e.g., veterans’ moral injury vs. nurses’ moral distress) can signal an interdisciplinary research gap.
Uploading a key paper’s references and limiting by publication year turns gap-finding into a practical “what might have been missed?” audit.
Keyword filtering can quickly distinguish a genuinely thin literature from a broad search that returns irrelevant material.

Topics

  • Research Gaps
  • Litmaps Workflow
  • Keyword Filtering
  • Interdisciplinary Overlap
  • Missing References
