
Finding Research Gaps with Free AI Tools || How to find Paper Limitations || Hindi || 2024

eSupport for Research · 5 min read

Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use AI tools to extract candidate limitations and research gaps from a paper’s PDF, but verify every claim directly in the paper’s methods, dataset, and results sections.

Briefing

Research gaps and paper limitations can be identified faster and more systematically by feeding a paper’s PDF into a set of free (or freemium) AI tools—then cross-checking the outputs instead of copying generated text into a thesis.

The workflow starts with a clear research goal: literature review isn’t just about summarizing findings; it’s about locating what a study can’t fully address—its limitations—and translating those gaps into the “next step” for a project. The transcript warns against a common shortcut: using AI-generated text as-is. Instead, the outputs should be used to understand the paper’s constraints and then rewritten in the researcher’s own words with proper citation, because unedited reuse can create plagiarism risk.

To demonstrate the approach, the process is built around four main tool paths. First is PDF-focused AI interaction using PDFGear with GPT-enabled features. After installing PDFGear and activating its GPT/AI options, the user can chat with the uploaded PDF and ask targeted prompts such as “What are the limitations of this paper?” The results may vary by prompt, so the transcript suggests trying different prompt styles and using example-question formats. In one case, the tool initially didn’t return a direct “limitations” section, but it still surfaced potential constraints indirectly—based on the methods, dataset, and other factors discussed in the paper.

Second is ChatPDF, where a PDF is uploaded for analysis. The transcript notes that ChatPDF can generate limitation-style answers even when the paper doesn’t explicitly list them, by inferring constraints from the study design. A concrete example mentioned is a limitation tied to the study’s focus—such as diagnostic measurement limited to a specific subject group—along with the suggestion to verify such claims in the paper’s dataset or methods sections.

Third is Elicit, positioned as a research assistant for extracting structured “gap” and “limitation” signals across papers. The transcript describes searching by topic or keywords, filtering by year, and then using a “limitations” query to generate specific limitation bullets (e.g., modest sample size). Elicit also supports uploading a particular PDF for direct analysis, returning a compact abstract-style summary and then generating limitation and research-gap points. The key instruction is to treat these as hypotheses and cross-verify against the paper itself.

Fourth is a Microsoft Copilot path (GPT-powered chat through Microsoft Bing) and SCI Space (including its free Copilot browser extension). With Microsoft Bing/Copilot, the paper content is read and limitation-style points are generated to support decision-making and interpretation. With SCI Space, the transcript emphasizes uploading papers to a library or reading PDFs directly, then using built-in question prompts (e.g., “Is there a limitation of the paper?”) to extract limitation and future-scope items. The transcript repeatedly returns to the same standard: outputs should be cross-checked across tools and then mapped to the researcher’s own thesis framing.

Overall, the central takeaway is practical: use AI tools to accelerate extraction of limitations and research gaps, but verify every claim in the original PDF and rewrite the final thesis content in original language with citations. The payoff is a faster, more defensible literature review that leads to clearer “what’s missing” and what the next research contribution should be.
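Because every AI-extracted limitation must be verified against the original PDF, a simple keyword pre-screen can speed up that manual check by pointing at the sentences most likely to contain limitation language. The sketch below is an illustration, not part of the video's workflow: the cue list and function name are assumptions, and real papers phrase limitations in many ways this crude filter will miss.

```python
import re

# Phrases that often introduce limitation statements in papers.
# This cue list is an illustrative assumption, not from the video.
LIMITATION_CUES = [
    "limitation", "limited to", "sample size", "future work",
    "did not consider", "small dataset", "out of scope",
]

def find_limitation_sentences(paper_text: str) -> list[str]:
    """Return sentences from extracted paper text that contain common
    limitation cues, as candidates to verify against the actual PDF."""
    sentences = re.split(r"(?<=[.!?])\s+", paper_text)
    hits = []
    for sentence in sentences:
        lowered = sentence.lower()
        if any(cue in lowered for cue in LIMITATION_CUES):
            hits.append(sentence.strip())
    return hits
```

The output is only a starting point: each flagged sentence still has to be read in context in the methods, dataset, or results section before it can justify a research-gap claim.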

Cornell Notes

The transcript lays out a repeatable method for finding research gaps and paper limitations by using AI tools to extract constraints from a paper’s PDF, then cross-checking those points in the original text. It emphasizes speed for literature review and thesis work, but also warns against copy-pasting AI output directly into academic writing due to plagiarism risk. Tools highlighted include PDFGear (chat with a PDF), ChatPDF (upload-and-analyze), Elicit (structured limitation/gap extraction and filtering), Microsoft Copilot via Bing (paper interpretation), and SCI Space (question-driven limitation and future-scope extraction). The key learning is to treat AI-generated limitations and gaps as drafts that must be verified in the paper’s methods, dataset, and results sections before being used to justify a new research contribution.

Why does identifying “limitations” matter for finding a research gap in a literature review?

Limitations define what a study can’t fully support—often due to the methods used, dataset scope, sample size, or what variables were (or weren’t) considered. Once those constraints are clear, the “gap” becomes the next logical research step: what remains unanswered or underexplored given the study’s boundaries. The transcript frames this as the basis for a thesis contribution—building on what exists while addressing what the literature hasn’t adequately covered.

How can a PDF-chat tool still help when a paper doesn’t explicitly list limitations?

The transcript describes cases where the tool didn’t return a direct “limitations” section, but still produced limitation-style points indirectly by analyzing the paper’s methods, dataset, and other discussed factors. For example, it suggests that limitations can be inferred from study design—such as whether the dataset includes only a certain subject group or whether the diagnostic measurements are limited in scope—then verified by checking the paper’s dataset/methods sections.

What makes Elicit’s approach useful for research gaps and limitations?

Elicit supports both keyword/topic search with filters (including year sorting) and direct PDF upload for analysis. It can generate structured outputs like limitations (e.g., “number of samples is modest”) and research-gap bullets, plus it provides an abstract-style summary. The transcript stresses that these outputs should be cross-verified against the original paper to ensure they match the study’s actual claims.

What is the ethical risk the transcript repeatedly warns about?

Using AI output verbatim in a thesis is flagged as a plagiarism risk. The transcript advises using AI to generate understanding and bullet points, then rewriting the final content in the researcher’s own words and citing the paper properly. It also recommends articulating points independently rather than copy-pasting AI phrasing.

How do multiple tools improve reliability when extracting limitations and future scope?

The transcript treats tool outputs as hypotheses and recommends cross-checking across platforms. If one tool suggests a limitation (like dataset scope or modest sample size), the researcher should confirm it in the paper. If multiple tools converge on similar limitations or future-scope items, confidence increases; if they diverge, the paper must be re-examined.
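The cross-checking idea above can be sketched as a small triangulation step: collect limitation bullets from each tool and keep only the claims that more than one tool reports. This is a minimal sketch under stated assumptions — the tool names are placeholders, and the exact-match normalization is a simplification, since real tools word the same limitation differently.

```python
from collections import defaultdict

def normalize(bullet: str) -> str:
    """Crude normalization so near-identical bullets compare equal.
    Real outputs would need fuzzier matching than this."""
    return " ".join(bullet.lower().split())

def triangulate(tool_outputs: dict[str, list[str]],
                min_tools: int = 2) -> list[str]:
    """Return limitation bullets reported by at least `min_tools`
    different tools; everything else sends you back to the PDF."""
    seen_by = defaultdict(set)
    for tool, bullets in tool_outputs.items():
        for bullet in bullets:
            seen_by[normalize(bullet)].add(tool)
    return sorted(claim for claim, tools in seen_by.items()
                  if len(tools) >= min_tools)
```

Converging bullets raise confidence, exactly as the transcript describes, but even a claim confirmed by every tool still has to be located and verified in the paper itself.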

What should a researcher do after extracting limitations and research gaps?

Map the extracted limitations and gaps into thesis framing: identify what is missing, what future work could address, and how the researcher’s project will contribute. The transcript also suggests using follow-up questions (e.g., drawbacks, disadvantages, future work) and then translating those into a coherent “next step” argument for the literature review and proposal.

Review Questions

  1. When a tool doesn’t find an explicit “limitations” section, what types of evidence can still be used to infer limitations, and how should those inferences be verified?
  2. How would you turn an AI-generated research gap bullet into a thesis contribution statement without copy-pasting the AI’s wording?
  3. Which parts of a paper (methods, dataset, results, variables) should be checked first to confirm a limitation claim extracted by AI tools?

Key Points

  1. Use AI tools to extract candidate limitations and research gaps from a paper’s PDF, but verify every claim directly in the paper’s methods, dataset, and results sections.

  2. Avoid copy-pasting AI-generated text into a thesis; rewrite in original wording and cite the source paper properly to reduce plagiarism risk.

  3. Try multiple prompt styles (and example prompts) when a PDF-chat tool returns weak or indirect limitation answers.

  4. Use Elicit’s structured outputs and filtering (e.g., year/keyword search) to quickly locate relevant papers and generate limitation/gap bullets, then cross-check them.

  5. Combine tool outputs for reliability: if multiple platforms point to the same limitation or future-scope issue, confidence increases.

  6. Translate extracted limitations into a clear “next research step” so the literature review directly supports the thesis contribution.

  7. When extracting limitations, focus on concrete study constraints such as sample size, dataset scope, subject inclusion, and what variables or practical applications were not explored.

Highlights

The transcript’s core workflow is: upload a paper → ask targeted questions for limitations/gaps → cross-verify in the PDF → rewrite in original thesis language with citations.
Even when a paper doesn’t explicitly list limitations, AI tools can infer constraints from methods and dataset details—then those inferences must be checked in the original text.
Elicit can generate structured limitation and research-gap points (including modest sample size examples), but the reliability depends on confirming them against the paper.
The repeated ethical warning is clear: AI output should accelerate understanding, not replace academic writing through verbatim copy-paste.
Using multiple platforms (PDFGear, ChatPDF, Elicit, Copilot/Bing, SCI Space) helps triangulate limitations and future-scope items.