5 Mind-blowing AI tools every researcher should know about *but doesn't*
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Five AI tools aimed at researchers are presented as practical workmates for literature work, data analysis, manuscript review, and model selection—each with a distinct workflow and tradeoffs around cost, access, and output quality.
Sourcely is positioned as a source-finding and citation helper that can summarize and format academic papers for use in writing. Users paste an essay or an essay title, then generate a set of publications tied to the topic. The tool produces short summaries that can be cited, while still encouraging users to read the original papers themselves. Pricing is framed as accessible at $7 per month, and the service emphasizes “quality control” by surfacing sources from reputable outlets. The pitch is that reliable reference discovery is often the hardest early-stage research task, and Sourcely reduces that friction, especially when paired with other literature-search tools such as Elicit (elicit.org).
Mirrorthink (described as “general AI for science”) targets the full research lifecycle: literature reviews, fact checking, mathematical accuracy via Wolfram Alpha, technology scouting, science funding, and experimental protocols. It also includes “find papers” and “find patents” features. A key limitation appears during testing: free access is restricted (“no longer eligible”), pushing users to “pay what you want.” After upgrading, the workflow shifts toward long-form, agent-like generation. When asked to write a literature review on organic photovoltaic devices, it pulls multiple papers across sub-areas, reads them, summarizes each, and then composes a longer synthesis with paragraph-level references. The output is presented as more expansive than typical chat-style responses.
Julius is introduced as an AI data analysis tool that accepts file uploads and acts as a “personal AI data analyst.” Data handling is addressed directly: uploaded files are said to remain available for the current session and persist for about an hour after last use, after which they’re permanently removed from servers. Use cases span marketing, healthcare, and academia. In a demonstration, a dataset described as the 2022 General Social Survey is used to analyze which features contribute most to happiness—an interaction compared to ChatGPT’s code interpreter, but described as available for free at the time of testing.
Hey Science is presented as an AI research assistant that can read millions of scientific papers, though it is not yet available publicly. The standout feature is an “AI Reviewer” that functions like a supervisor or peer review—flagging strengths and weaknesses, suggesting keywords, and offering journal recommendations before a manuscript reaches a human desk. The tool also provides guidance on novelty concerns (including similarity to existing work) and revision priorities, with the added suggestion that it can be used before talks to anticipate critique.
Finally, versaill.ai is framed as a real-time comparison layer for multiple language models. By sending the same prompt to different models (including OpenAI and Meta Llama), researchers can compare summaries side-by-side and choose the model that best fits their question type. The core value is reducing guesswork about which model performs best for a specific research task.
Cornell Notes
The transcript highlights five AI tools built for research workflows, from finding and summarizing papers to analyzing datasets and stress-testing drafts. Sourcely helps generate citation-ready summaries and formatted sources from an essay topic, with an emphasis on reputable references and low-cost access. Mirrorthink focuses on science-specific tasks—literature reviews, fact checking, math accuracy via Wolfram Alpha, and experimental protocols—while also offering long-form, agent-like literature synthesis after a paywall. Julius provides file-based data analysis with session-limited storage (about an hour after use). Hey Science (not yet broadly available) aims to deliver reviewer-style feedback and journal guidance, and versaill.ai compares outputs across multiple language models using the same prompt.
How does Sourcely turn a research topic into usable academic references?
What makes Mirrorthink different from a general chat assistant for scientific work?
What access and output limitations appear when trying Mirrorthink for free?
What data privacy claim is made for Julius, and why does it matter?
How does Hey Science’s “AI Reviewer” aim to help before human peer review?
Why would a researcher use versaill.ai instead of sticking with one model?
Review Questions
- Which specific Mirrorthink features are described as covering the research lifecycle beyond literature review (name at least three)?
- What retention window does Julius claim for uploaded files, and how is that framed in relation to user trust?
- How does the transcript suggest using Hey Science’s AI Reviewer differently for manuscripts versus conference talks?
Key Points
1. Sourcely generates topic-driven publication lists and produces citation-ready summaries, but it still requires researchers to read the original papers before relying on claims.
2. Sourcely’s pricing is presented as low-cost ($7/month), and the service emphasizes reputable sources plus formatting and download/reference-manager workflows.
3. Mirrorthink bundles science-specific capabilities—fact checking, Wolfram Alpha-backed mathematical accuracy, funding and protocol support—rather than only chat-style answers.
4. Free access to Mirrorthink can be restricted; upgrading (“pay what you want”) unlocks additional research and literature-review features.
5. Julius supports file upload for data analysis and claims session-limited storage, with files removed permanently about an hour after last use.
6. Hey Science’s AI Reviewer is designed to deliver supervisor/peer-style feedback early, including journal recommendations, keyword guidance, and novelty/overlap concerns.
7. versaill.ai helps researchers compare outputs across models (e.g., OpenAI vs Meta Llama) by running the same prompt and selecting the best response for the task.