
Use these FREE AI tools in your Literature Review / SciSpace, ChatGPT, Google Gemini

5 min read

Based on qualitative researcher Dr Kriukow's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

typeset.io’s free version performed best for literature searching because it combined article discovery with synthesis (main summary) and per-article summaries.

Briefing

Free AI tools can speed up a literature review, but their reliability varies sharply—especially when the task shifts from generating article lists to providing usable links and citations. In a side-by-side test focused on literature about “migrant anxiety when speaking English with native English speakers,” ChatGPT, Google Gemini (formerly Google Bard), and typeset.io were prompted with the same request: find relevant studies, identify which ones use qualitative methods, and provide citations (and ideally links) for the articles.

ChatGPT produced strong, relevant results quickly and often returned detailed outputs, including which articles used qualitative methods. It also sometimes supplied full citations that could be pasted directly into a reference list. However, link availability was inconsistent. In most attempts, ChatGPT responded that it couldn’t provide direct links, offering guidance on where to find the articles instead. On at least one run, it did provide links and exact citations—yet the overall pattern was described as unpredictable.

Google Gemini delivered similarly detailed initial results and also identified qualitative methods in the retrieved articles. The performance was again characterized as “random,” with outcomes changing across trials. Where Gemini tended to differ was in citation and link behavior: it was described as more likely than ChatGPT to provide specific links and full citations usable in a reference list. Still, Gemini often failed to provide direct access to the actual articles. The likely reason offered was practical rather than technical—many of the articles surfaced were not open access, making direct linking or full retrieval difficult. Gemini also sometimes added photos or images, making outputs feel more visually varied.

The biggest separation came from typeset.io, a tool designed specifically for literature searching. Using the free version, it returned not only a list of relevant articles but also a higher-level “main summary” of what the set of studies suggests—useful for quickly orienting a researcher to a new topic. It also provided per-article mini-summaries and formatted outputs in a citation-friendly way. The workflow extended beyond discovery: users could ask follow-up questions about individual articles, click through when available, and read content within the platform. Additional built-in functions included summarizing text, explaining text, and translating it.

By the end, the practical takeaway was clear: typeset.io performed best for literature searching and synthesis in the free tier, while ChatGPT and Gemini remained viable but less dependable—particularly for getting direct links to the full articles. The recommended strategy was to try both general-purpose tools on a given day, since results could swing, and rely on the literature-first tool when the goal is structured discovery plus rapid understanding.

Cornell Notes

The test compared three free AI tools—ChatGPT, Google Gemini (formerly Google Bard), and typeset.io—for finding literature on migrant anxiety in English conversations with native speakers. All three could generate relevant article lists and identify which studies used qualitative methods, but their reliability differed. ChatGPT and Gemini produced detailed outputs yet were inconsistent about providing direct links to articles; citations were sometimes complete and sometimes not. Gemini was described as more likely than ChatGPT to include specific links and full citations. typeset.io, built for literature search, delivered the most useful workflow: main topic summaries, per-article summaries, citation-friendly formatting, and convenient follow-up Q&A and reading when articles were available.

How did ChatGPT perform when asked to find studies on migrant anxiety in English with native speakers?

ChatGPT returned relevant literature quickly and often produced detailed outputs. It also identified which retrieved articles used qualitative methods. For citations, it sometimes provided exact, paste-ready references. Links were the weak point: most attempts resulted in a refusal to provide direct links, with alternative guidance on where to find the articles instead. On at least one attempt, it did provide links and exact citations, but the overall behavior was described as random.

What changed when the same prompt was run through Google Gemini (formerly Google Bard)?

Gemini produced similarly detailed initial results and could again identify qualitative methods in the retrieved articles. Performance was still described as variable across runs. Gemini was more likely than ChatGPT to provide full citations and specific links, and it sometimes added photos/images to the output. Even so, it often did not provide direct access to the full articles, likely because many results were not open access.

Why did typeset.io stand out compared with ChatGPT and Gemini?

typeset.io was designed specifically for literature searching, and the free version reflected that. It provided a main summary synthesizing what can be learned from the set of articles, plus a short summary next to each article. Outputs were formatted for citations, and users could ask follow-up questions about individual articles. When articles were available, users could click through and read them within the platform, with extra tools for summarizing, explaining, and translating text.

What does “random” performance mean in this comparison, and where did it matter most?

“Random” referred to how results and especially link/citation behavior changed across repeated trials. The variability mattered most when the task required actionable next steps—getting direct links to articles and providing fully usable citations. ChatGPT and Gemini could both be strong on relevance and qualitative-method identification, but their ability to deliver links and exact references fluctuated.

What practical strategy does the comparison recommend for researchers starting a literature review?

Use typeset.io first when the goal is structured literature discovery plus rapid synthesis, since it delivered main and per-article summaries and a convenient Q&A workflow in the free tier. For ChatGPT and Gemini, treat them as useful but inconsistent: run them and compare outputs on a given day, especially for citations and link retrieval, because results can swing depending on the attempt.

Review Questions

  1. When the prompt required both qualitative-method identification and usable citations, which tool was most consistent, and why?
  2. What were the main differences between ChatGPT and Google Gemini regarding links and citations?
  3. How does typeset.io’s workflow (main summary, per-article summaries, and follow-up Q&A) change the early stages of a literature review?

Key Points

  1. typeset.io’s free version performed best for literature searching because it combined article discovery with synthesis (main summary) and per-article summaries.

  2. ChatGPT and Google Gemini could identify qualitative methods in retrieved studies, but their outputs—especially links—were inconsistent across repeated runs.

  3. ChatGPT often declined to provide direct links, though it sometimes returned full citations and, occasionally, links.

  4. Google Gemini was more likely than ChatGPT to include specific links and full citations, but direct access still depended on whether articles were open access.

  5. For early literature review work, researchers can use typeset.io to quickly orient themselves and then use ChatGPT/Gemini as supplementary discovery tools.

  6. Because link and citation behavior can vary day-to-day, running multiple attempts with ChatGPT or Gemini can improve the odds of getting usable references.

Highlights

typeset.io delivered a main summary of what the retrieved studies collectively suggest—useful for quickly framing a new research topic.
ChatGPT and Gemini were strong at generating relevant results and identifying qualitative methods, yet both were described as “random” when it came to links and exact citation usability.
Gemini was more likely than ChatGPT to provide full citations and specific links, but it still often couldn’t provide direct access to the full articles.
The most convenient workflow came from typeset.io: click-through reading when available plus built-in summarizing, explaining, and translating tools.
