Use these FREE AI tools in your Literature Review / SciSpace, ChatGPT, Google Gemini
Based on qualitative researcher Dr Kriukow's video on YouTube. If you find this content useful, support the original creator by watching, liking, and subscribing.
Briefing
Free AI tools can speed up a literature review, but their reliability varies sharply—especially when the task shifts from generating article lists to providing usable links and citations. In a side-by-side test focused on literature about “migrant anxiety when speaking English with native English speakers,” ChatGPT, Google Gemini (formerly Google Bard), and typeset.io were prompted with the same request: find relevant studies, identify which ones use qualitative methods, and provide citations (and ideally links) for the articles.
ChatGPT produced strong, relevant results quickly and often returned detailed outputs, including which articles used qualitative methods. It also sometimes supplied full citations that could be pasted directly into a reference list. However, link availability was inconsistent. In most attempts, ChatGPT responded that it couldn’t provide direct links, offering guidance on where to find the articles instead. On at least one run, it did provide links and exact citations—yet the overall pattern was described as unpredictable.
Google Gemini delivered similarly detailed initial results and also identified qualitative methods in the retrieved articles. The performance was again characterized as “random,” with outcomes changing across trials. Where Gemini tended to differ was in citation and link behavior: it was described as more likely than ChatGPT to provide specific links and full citations usable in a reference list. Still, Gemini often failed to provide direct access to the actual articles. The likely reason offered was practical rather than technical—many of the articles surfaced were not open access, making direct linking or full retrieval difficult. Gemini also sometimes added photos or images, making outputs feel more visually varied.
The biggest separation came from typeset.io, a tool designed specifically for literature searching. Using the free version, it returned not only a list of relevant articles but also a higher-level “main summary” of what the set of studies suggests—useful for quickly orienting a researcher to a new topic. It also provided per-article mini-summaries and formatted outputs in a citation-friendly way. The workflow extended beyond discovery: users could ask follow-up questions about individual articles, click through when available, and read content within the platform. Additional built-in functions included summarizing text, explaining text, and translating it.
By the end, the practical takeaway was clear: typeset.io performed best for literature searching and synthesis in the free tier, while ChatGPT and Gemini remained viable but less dependable—particularly for getting direct links to the full articles. The recommended strategy was to try both general-purpose tools on a given day, since results could swing, and rely on the literature-first tool when the goal is structured discovery plus rapid understanding.
Cornell Notes
The test compared three free AI tools—ChatGPT, Google Gemini (formerly Google Bard), and typeset.io—for finding literature on migrant anxiety in English conversations with native speakers. All three could generate relevant article lists and identify which studies used qualitative methods, but their reliability differed. ChatGPT and Gemini produced detailed outputs yet were inconsistent about providing direct links to articles; citations were sometimes complete and sometimes not. Gemini was described as more likely than ChatGPT to include specific links and full citations. typeset.io, built for literature search, delivered the most useful workflow: main topic summaries, per-article summaries, citation-friendly formatting, and convenient follow-up Q&A and reading when articles were available.
- How did ChatGPT perform when asked to find studies on migrant anxiety in English with native speakers?
- What changed when the same prompt was run through Google Gemini (formerly Google Bard)?
- Why did typeset.io stand out compared with ChatGPT and Gemini?
- What does “random” performance mean in this comparison, and where did it matter most?
- What practical strategy does the comparison recommend for researchers starting a literature review?
Review Questions
- When the prompt required both qualitative-method identification and usable citations, which tool was most consistent, and why?
- What were the main differences between ChatGPT and Google Gemini regarding links and citations?
- How does typeset.io’s workflow (main summary, per-article summaries, and follow-up Q&A) change the early stages of a literature review?
Key Points
1. typeset.io’s free version performed best for literature searching because it combined article discovery with synthesis (main summary) and per-article summaries.
2. ChatGPT and Google Gemini could identify qualitative methods in retrieved studies, but their outputs, especially links, were inconsistent across repeated runs.
3. ChatGPT often declined to provide direct links, though it sometimes returned full citations and, occasionally, links.
4. Google Gemini was more likely than ChatGPT to include specific links and full citations, but direct access still depended on whether articles were open access.
5. For early literature review work, researchers can use typeset.io to quickly orient themselves and then use ChatGPT/Gemini as supplementary discovery tools.
6. Because link and citation behavior can vary day to day, running multiple attempts with ChatGPT or Gemini can improve the odds of getting usable references.