
5 Mind blowing AI tools every researcher should know about *but doesn't*

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Sourcely generates topic-driven publication lists and produces citation-ready summaries, but it still requires researchers to read the original papers before relying on claims.

Briefing

Five AI tools aimed at researchers are presented as practical assistants for literature work, data analysis, manuscript review, and model selection, each with a distinct workflow and tradeoffs around cost, access, and output quality.

Sourcely is positioned as a source-finding and citation helper that can summarize and format academic papers for use in writing. Users paste an essay or an essay title, then generate a set of publications tied to the topic. The tool produces short summaries that can be cited, while still pushing users to read the original papers themselves. Pricing is framed as accessible at $7 per month, and the service emphasizes “quality control” by surfacing sources from reputable outlets. The pitch is that reliable reference discovery is often the hardest early-stage research task, and Sourcely reduces that friction, especially when paired with other literature-search tools such as Elicit.

Mirrorthink (described as “general AI for science”) targets the full research lifecycle: literature reviews, fact checking, mathematical accuracy via Wolfram Alpha, technology scouting, science funding, and experimental protocols. It also includes “find papers” and “find patents” features. A key limitation appears during testing: free access is restricted (“no longer eligible”), pushing users toward a “pay what you want” upgrade. After upgrading, the workflow shifts toward long-form, agent-like generation. When asked to write a literature review on organic photovoltaic devices, it pulls multiple papers across sub-areas, reads them, summarizes each, and then composes a longer synthesis with paragraph-level references. The output is more expansive than typical chat-style responses.
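The agent-like flow described here — fetch papers, summarize each, then compose a synthesis with references — can be sketched generically. This is a minimal illustration, not Mirrorthink's actual pipeline; the paper titles are invented and `summarize()` is a stub standing in for whatever model the service uses.

```python
# Generic sketch of the described agent loop: per-paper summaries feed a
# final synthesis with paragraph-level references. summarize() is a stub.

papers = {
    # invented titles, for illustration only
    "[1]": "Non-fullerene acceptors in organic photovoltaics",
    "[2]": "Stability of organic photovoltaic devices",
}

def summarize(title: str) -> str:
    """Stub summarizer; a real agent would call a language model here."""
    return f"Key findings of '{title}'."

def literature_review(papers: dict) -> str:
    """Compose one paragraph per paper, each ending with its reference tag."""
    paragraphs = [f"{summarize(title)} {ref}" for ref, title in papers.items()]
    return "\n\n".join(paragraphs)

if __name__ == "__main__":
    print(literature_review(papers))
```

The point of the structure is that references stay attached to the paragraph they support, which matches the "references embedded in the paragraphs" behavior described in the transcript.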

Julius is introduced as an AI data analysis tool that accepts file uploads and acts as a “personal AI data analyst.” Data handling is addressed directly: uploaded files are said to remain available for the current session and to persist for about an hour after last use, after which they are permanently removed from servers. Use cases span marketing, healthcare, and academia. In a demonstration, a dataset described as the 2022 General Social Survey is analyzed to identify which features contribute most to happiness; the interaction is compared to ChatGPT’s code interpreter but was available for free at the time of testing.

Hey Science is presented as an AI research assistant that can read millions of scientific papers, though it is not yet publicly available. The standout feature is an “AI Reviewer” that functions like a supervisor or peer reviewer, flagging strengths and weaknesses, suggesting keywords, and offering journal recommendations before a manuscript reaches a human desk. The tool also provides guidance on novelty concerns (including similarity to existing work) and revision priorities, and it can be used before talks to anticipate likely critique.

Finally, versaill.ai is framed as a real-time comparison layer for multiple language models. By sending the same prompt to different models (including OpenAI and Meta Llama), researchers can compare summaries side-by-side and choose the model that best fits their question type. The core value is reducing guesswork about which model performs best for a specific research task.
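The side-by-side comparison described above — one prompt, several models, answers reviewed together — can be sketched generically. The model functions below are stand-in stubs, not versaill.ai's API; a real setup would call each provider's endpoint (OpenAI, a hosted Meta Llama, etc.) in their place:

```python
# Hypothetical sketch of same-prompt model comparison: fan one prompt out
# to every registered model and collect the answers for side-by-side review.

def fake_openai(prompt: str) -> str:
    return f"[openai] summary of: {prompt}"  # stub for a real API call

def fake_llama(prompt: str) -> str:
    return f"[llama] summary of: {prompt}"  # stub for a real API call

MODELS = {"openai": fake_openai, "meta-llama": fake_llama}

def compare(prompt: str) -> dict:
    """Send the identical prompt to every model and keep answers keyed by name."""
    return {name: model(prompt) for name, model in MODELS.items()}

if __name__ == "__main__":
    results = compare("Summarize organic photovoltaic devices.")
    for name, answer in results.items():
        print(f"{name}: {answer}")
```

Keeping the prompt identical across models is the whole design point: any difference in the answers is then attributable to the model, not the question.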

Cornell Notes

The transcript highlights five AI tools built for research workflows, from finding and summarizing papers to analyzing datasets and stress-testing drafts. Sourcely helps generate citation-ready summaries and formatted sources from an essay topic, with an emphasis on reputable references and low-cost access. Mirrorthink focuses on science-specific tasks—literature reviews, fact checking, math accuracy via Wolfram Alpha, and experimental protocols—while also offering long-form, agent-like literature synthesis after a paywall. Julius provides file-based data analysis with session-limited storage (about an hour after use). Hey Science (not yet broadly available) aims to deliver reviewer-style feedback and journal guidance, and versaill.ai compares outputs across multiple language models using the same prompt.

How does Sourcely turn a research topic into usable academic references?

Sourcely lets users paste an essay or essay title, then generates a list of publications tied to that topic. For each paper, it can provide a summary intended to be citation-ready, while still warning that users should read the original work. The transcript also emphasizes pricing ($7/month), a “quality control” angle (sources are described as coming from reputable outlets), and support for PDF downloads and reference-manager workflows.

What makes Mirrorthink different from a general chat assistant for scientific work?

Mirrorthink is presented as science-focused and tool-rich: it includes literature reviews, fact checking, and mathematical accuracy using Wolfram Alpha, plus features like technology scouting, science funding, and experimental protocols. It also supports “find papers” and “find patents.” After upgrading (“pay what you want”), it behaves more like an agent: it reads multiple papers across sub-sections, summarizes them, and then composes a long-form literature review with references embedded in the paragraphs.

What access and output limitations appear when trying Mirrorthink for free?

During testing, the user is blocked from free use (“no longer eligible to use Mirrorthink for free”). The workflow then continues only after upgrading (“pay what you want”). The transcript frames the paid tier as unlocking additional options such as web research and literature reviews, which are central to producing the long-form literature synthesis.

What data privacy claim is made for Julius, and why does it matter?

Julius is described as storing uploaded files only for the current session and keeping them available for about an hour after the last use, after which files are permanently removed from servers. That matters because researchers often handle sensitive or unpublished data; the transcript directly addresses security concerns by tying the tool’s usefulness to session-limited retention.

How does Hey Science’s “AI Reviewer” aim to help before human peer review?

Hey Science’s AI Reviewer is pitched as a pre-submission critique similar to what supervisors or peers would deliver, without the usual back-and-forth of waiting on responses. It can analyze a manuscript, recommend a journal (the transcript mentions Polymers, Solar Energy Materials and Solar Cells, and Nature Communications as outcomes), suggest keywords for the abstract and other sections, and flag novelty issues by pointing to similarity with existing work. It also provides strengths and weaknesses that can guide revisions or even slide preparation for talks.

Why would a researcher use versaill.ai instead of sticking with one model?

versaill.ai compares multiple language models in real time by sending the same prompt to each (the transcript mentions OpenAI and Meta Llama). The goal is to see which model performs better for a specific task—like summarizing organic photovoltaic devices—since different models can have different strengths depending on the question type.

Review Questions

  1. Which specific Mirrorthink features are described as covering the research lifecycle beyond literature review (name at least three)?
  2. What retention window does Julius claim for uploaded files, and how is that framed in relation to user trust?
  3. How does the transcript suggest using Hey Science’s AI Reviewer differently for manuscripts versus conference talks?

Key Points

  1. Sourcely generates topic-driven publication lists and produces citation-ready summaries, but it still requires researchers to read the original papers before relying on claims.

  2. Sourcely’s pricing is presented as low-cost ($7/month) and the service emphasizes reputable sources plus formatting and download/reference-manager workflows.

  3. Mirrorthink bundles science-specific capabilities (fact checking, Wolfram Alpha-backed mathematical accuracy, funding and protocol support) rather than only chat-style answers.

  4. Free access to Mirrorthink can be restricted; upgrading (“pay what you want”) unlocks additional research and literature-review features.

  5. Julius supports file upload for data analysis and claims session-limited storage, with files removed permanently about an hour after last use.

  6. Hey Science’s AI Reviewer is designed to deliver supervisor/peer-style feedback early, including journal recommendations, keyword guidance, and novelty/overlap concerns.

  7. versaill.ai helps researchers compare outputs across models (e.g., OpenAI vs Meta Llama) by running the same prompt and selecting the best response for the task.

Highlights

Sourcely turns an essay title into a set of publications with summaries intended for citation—positioned as a shortcut for the hardest part of early research: finding reliable sources.
Mirrorthink’s paid workflow is described as agent-like: it reads multiple papers, summarizes them, and then composes a long-form literature review with in-paragraph references.
Julius addresses a common barrier—data security—by claiming uploaded files persist only for the session and are deleted after roughly an hour.
Hey Science’s AI Reviewer is pitched as a pre-submission critique that can recommend journals and flag novelty problems before a supervisor ever sees the manuscript.
versaill.ai’s core utility is side-by-side model comparison using the same prompt, reducing uncertainty about which LLM works best for a given research question.

Topics

  • Academic Source Discovery
  • Science Literature Review Agents
  • AI Data Analysis
  • Manuscript Peer Review
  • LLM Model Comparison
