
AI Tools for #Literature - How to use #ChatGPT and #Elicit for Literature Search and Writing

Research With Fawad
4 min read

Based on Research With Fawad's video on YouTube.

TL;DR

Use ChatGPT to draft and refine argument ideas, but rely on reading to integrate evidence and maintain academic rigor.

Briefing

AI-assisted literature search can speed up the early stages of a literature review, but it can’t replace reading, critical judgment, or the work of integrating sources into a coherent argument. The core workflow presented pairs ChatGPT for drafting and argument development with Elicit for evidence gathering—using each tool for what it does best.

The session begins with a caution: AI tools can summarize and surface relevant papers, yet they won’t tell researchers how to place specific evidence in the right part of a thesis, how to connect findings across sections, or how to write with academic rigor. Reading remains essential because it’s the only way to understand how studies relate, evaluate quality, and decide what belongs in an introduction, literature review, or argument chain.

To demonstrate the approach, the example research topic is servant leadership and its influence on environmental behavior, framed within leadership-focused journals. A researcher first asks ChatGPT about the value and importance of servant leadership for modern organizations, then poses the same question in Elicit. Elicit returns a more research-oriented output: it aggregates referenced papers, provides summaries (including abstracts and measured outcomes), and can flag missing bibliographic details such as study type or funding source—signals that help in judging whether a paper is trustworthy.

Elicit also supports deeper steps needed for a systematic or structured review. The workflow includes asking follow-up questions about a specific paper, requesting PDFs when available, and extracting practical metadata such as interventions, outcomes, and participant counts. Beyond summarizing individual studies, Elicit can be used to check whether a concept has already been studied—helpful when a researcher initially believes there is “no research” on a given relationship. In the example, searching for servant leadership and environmental behavior yields results and even journal ranking information, which helps with assessing where the evidence comes from.

The session then shows how to combine tools for stronger writing. ChatGPT can generate argument content—such as how servant leadership might increase employee engagement and motivation—but it often lacks robust referencing. The workaround is to use Elicit to find supporting citations for each claim. For instance, after drafting an argument about engagement and motivation, the researcher queries Elicit for references that connect servant leadership to those outcomes. The same pattern is applied to environmental behavior: ChatGPT helps articulate mechanisms (e.g., collaboration, empowerment, sustainability-oriented teamwork), while Elicit supplies the papers to substantiate those mechanisms.

Overall, the method is a practical division of labor: Elicit for locating, summarizing, and retrieving research evidence; ChatGPT for reconceptualizing text, developing argument phrasing, and building the narrative structure. The payoff is a faster, more informed literature review—so long as the final step still depends on reading the original studies and integrating them critically into the thesis.

Cornell Notes

The workflow pairs ChatGPT and Elicit to accelerate literature review work while keeping reading and critical evaluation at the center. ChatGPT is used to draft and refine argument ideas—such as why servant leadership matters and how it could affect outcomes like employee engagement or environmental behavior. Elicit is used to retrieve research evidence: it returns paper summaries, abstracts, measured outcomes, and sometimes missing details (like study type or funding), and it can provide PDFs when available. When ChatGPT produces claims, Elicit is then queried to find supporting citations. This division of labor helps researchers situate their topic, test whether prior studies exist, and build a reference-backed literature review without relying on AI to do the integration work.

Why is reading still necessary even when AI tools provide summaries and references?

Summaries and citation lists don’t automatically tell a researcher where evidence belongs in the thesis structure or how to link findings across sections. The transcript emphasizes that AI tools can supply information, but only reading enables critical judgment—evaluating study quality, understanding context, and deciding how to connect results to the specific argument being built.

How does Elicit help researchers assess whether a paper is trustworthy before using it in a review?

Elicit can surface paper-level details such as the abstract, outcomes measured, and citation information. It may also reveal missing elements—like whether study type or funding source is mentioned—so researchers can flag gaps in reporting. The workflow also includes opening PDFs when available and checking how many times a work has been cited.

What is the role of Elicit when a researcher suspects there may be little or no prior research on a topic?

Elicit can be used to search for relationships directly (e.g., servant leadership and environmental behavior) and return evidence that the relationship has been studied. It can also provide journal ranking information, helping researchers situate their topic and adjust their framing based on what the literature already shows.

How should researchers handle the reference limitation of ChatGPT?

ChatGPT can generate argument text, but it may not provide enough solid references. The transcript’s workaround is to copy or paraphrase the claim and then query Elicit for papers that specifically support that mechanism or outcome (e.g., servant leadership and employee engagement/motivation). Those retrieved papers become the citations used to back the argument.

How do the tools work together to build a literature review on environmental behavior?

ChatGPT helps articulate mechanisms and narrative phrasing (such as collaboration, empowerment, and sustainability-oriented teamwork). Then Elicit is used to find supporting studies for each mechanism and outcome. The result is an argument that is both well-written and anchored in research evidence.

Review Questions

  1. In what ways can Elicit’s paper summaries help with early-stage literature review decisions, and what limitations remain?
  2. Describe a two-step workflow for turning a ChatGPT-generated claim into a reference-backed argument using Elicit.
  3. What kinds of missing bibliographic or methodological details should researchers look for when evaluating whether to trust a study?

Key Points

  1. Use ChatGPT to draft and refine argument ideas, but rely on reading to integrate evidence and maintain academic rigor.
  2. Use Elicit to locate relevant papers, generate research-oriented summaries, and extract details like interventions, outcomes, and participant counts.
  3. Check for missing reporting elements (such as study type or funding source) surfaced through Elicit before treating a paper as reliable.
  4. When unsure whether prior research exists, run targeted Elicit searches (e.g., servant leadership and environmental behavior) to verify what has already been studied.
  5. For each claim produced by ChatGPT, query Elicit for supporting citations so the literature review is reference-backed.
  6. Use Elicit's PDF retrieval and citation information to move from summaries to primary-source evaluation.

Highlights

Elicit can flag missing study details (like study type or funding source), helping researchers decide whether a paper is usable in a review.
A practical workflow emerges: ChatGPT drafts mechanisms and claims; Elicit supplies the citations that substantiate them.
Targeted searches in Elicit can quickly reveal whether a supposed “gap” actually exists in the literature.
The method still depends on reading original papers to correctly link evidence across thesis sections.
