AI Tools for Literature - How to Use ChatGPT and Elicit for Literature Search and Writing
Based on Research With Fawad's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.
Use ChatGPT to draft and refine argument ideas, but rely on reading to integrate evidence and maintain academic rigor.
Briefing
AI-assisted literature search can speed up the early stages of a literature review, but it can’t replace reading, critical judgment, or the work of integrating sources into a coherent argument. The core workflow presented pairs ChatGPT for drafting and argument development with Elicit for evidence gathering—using each tool for what it does best.
The session begins with a caution: AI tools can summarize and surface relevant papers, yet they won’t tell researchers how to place specific evidence in the right part of a thesis, how to connect findings across sections, or how to write with academic rigor. Reading remains essential because it’s the only way to understand how studies relate, evaluate quality, and decide what belongs in an introduction, literature review, or argument chain.
To demonstrate the approach, the example research topic is servant leadership and its influence on environmental behavior, framed within leadership-focused journals. A researcher first asks ChatGPT for the “value/importance of servant leadership for modern organizations,” then repeats the question in Elicit. Elicit returns a more research-oriented output: it aggregates referenced papers, provides summaries (including abstracts and measured outcomes), and can flag missing reporting details such as study type or funding source, signals that help decide whether a paper is trustworthy.
Elicit also supports deeper steps needed for a systematic or structured review. The workflow includes asking follow-up questions about a specific paper, requesting PDFs when available, and extracting practical metadata such as interventions, outcomes, and participant counts. Beyond summarizing individual studies, Elicit can be used to check whether a concept has already been studied, which is helpful when a researcher initially believes there is “no research” on a given relationship. In the example, searching for servant leadership and environmental behavior yields results and even journal ranking information, which helps assess where the evidence comes from.
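One way to keep this extraction step systematic is to maintain a simple screening log with one record per paper. The sketch below is illustrative only: the field names are assumptions drawn from the details mentioned above (intervention, outcome, participant count, study type, funding, journal ranking) and are not part of Elicit's interface or any Elicit API; the records would be filled in by hand from Elicit's summaries and the papers themselves.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaperRecord:
    """One row of a manual screening log, filled in from Elicit's summaries.

    Field names are illustrative assumptions, not Elicit terminology.
    """
    title: str
    intervention: Optional[str] = None    # e.g., servant-leadership practices studied
    outcome: Optional[str] = None         # e.g., employee environmental behavior
    participants: Optional[int] = None    # sample size, if reported
    study_type: Optional[str] = None      # leave None if Elicit flags it as missing
    funding_source: Optional[str] = None  # another trust signal to check
    journal_rank: Optional[str] = None    # journal ranking info, if surfaced
    pdf_available: bool = False
    notes: str = ""

# Example entry; all values are placeholders, not real study data.
record = PaperRecord(
    title="(paper title from Elicit)",
    outcome="employee environmental behavior",
    study_type=None,  # missing -> read the paper before trusting the claim
)
print(record)
```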
The session then shows how to combine tools for stronger writing. ChatGPT can generate argument content—such as how servant leadership might increase employee engagement and motivation—but it often lacks robust referencing. The workaround is to use Elicit to find supporting citations for each claim. For instance, after drafting an argument about engagement and motivation, the researcher queries Elicit for references that connect servant leadership to those outcomes. The same pattern is applied to environmental behavior: ChatGPT helps articulate mechanisms (e.g., collaboration, empowerment, sustainability-oriented teamwork), while Elicit supplies the papers to substantiate those mechanisms.
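As a rough illustration of this two-step pattern, the sketch below drafts argument claims programmatically and then prints the Elicit searches a researcher would run by hand to find supporting citations. This is an assumption-laden sketch, not the method shown in the video: the original workflow uses the ChatGPT and Elicit web interfaces, the OpenAI SDK call and model name are stand-ins requiring an API key, and the Elicit step stays manual because no Elicit API is assumed.

```python
# Sketch only: the workflow described above uses the ChatGPT and Elicit web
# interfaces; this scripted variant is an assumption, not the original method.
from openai import OpenAI  # requires an OpenAI API key in the OPENAI_API_KEY env var

client = OpenAI()

# Step 1: draft the argument (ChatGPT's role). The model name is a placeholder.
draft = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Draft a short argument for how servant leadership could increase "
            "employee engagement, motivation, and environmental behavior. "
            "List each causal claim as a separate bullet."
        ),
    }],
)
print(draft.choices[0].message.content)

# Step 2: for each drafted claim, note the Elicit search to run manually
# so every claim ends up backed by citations from the literature.
elicit_queries = [
    "servant leadership AND employee engagement",
    "servant leadership AND employee motivation",
    "servant leadership AND environmental behavior",
]
for query in elicit_queries:
    print(f"Elicit query to run by hand: {query}")
```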
Overall, the method is a practical division of labor: Elicit for locating, summarizing, and retrieving research evidence; ChatGPT for reconceptualizing text, developing argument phrasing, and building the narrative structure. The payoff is a faster, more informed literature review—so long as the final step still depends on reading the original studies and integrating them critically into the thesis.
Cornell Notes
The workflow pairs ChatGPT and Elicit to accelerate literature review work while keeping reading and critical evaluation at the center. ChatGPT is used to draft and refine argument ideas—such as why servant leadership matters and how it could affect outcomes like employee engagement or environmental behavior. Elicit is used to retrieve research evidence: it returns paper summaries, abstracts, measured outcomes, and sometimes missing details (like study type or funding), and it can provide PDFs when available. When ChatGPT produces claims, Elicit is then queried to find supporting citations. This division of labor helps researchers situate their topic, test whether prior studies exist, and build a reference-backed literature review without relying on AI to do the integration work.
- Why is reading still necessary even when AI tools provide summaries and references?
- How does Elicit help researchers assess whether a paper is trustworthy before using it in a review?
- What is the role of Elicit when a researcher suspects there may be little or no prior research on a topic?
- How should researchers handle the reference limitation of ChatGPT?
- How do the tools work together to build a literature review on environmental behavior?
Review Questions
- In what ways can Elicit’s paper summaries help with early-stage literature review decisions, and what limitations remain?
- Describe a two-step workflow for turning a ChatGPT-generated claim into a reference-backed argument using Elicit.
- What kinds of missing bibliographic or methodological details should researchers look for when evaluating whether to trust a study?
Key Points
1. Use ChatGPT to draft and refine argument ideas, but rely on reading to integrate evidence and maintain academic rigor.
2. Use Elicit to locate relevant papers, generate research-oriented summaries, and extract details like interventions, outcomes, and participant counts.
3. Check for missing reporting elements (such as study type or funding source) surfaced through Elicit before treating a paper as reliable.
4. When unsure whether prior research exists, run targeted Elicit searches (e.g., servant leadership and environmental behavior) to verify what has already been studied.
5. For each claim produced by ChatGPT, query Elicit for supporting citations so the literature review is reference-backed.
6. Use Elicit's PDF retrieval and citation information to move from summaries to primary-source evaluation.