5 Mind-Blowing AI Tools for Research You’ve Never Heard Of
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A new wave of AI research tools is moving beyond “chatbots” into systems that search literature, organize references, and even draft review-style outputs—often with free tiers. The most practical takeaway is that researchers can now run end-to-end workflows: pose a question, retrieve relevant papers, visualize influence and citation patterns, and generate structured summaries that can be refined rather than built from scratch.
Neural Lumi (spelled in the transcript as “Neural Lumia”/“Noral Loomi”) is presented as an all-in-one platform for researchers. After logging in, it offers multiple output modes—Search Paper, Spark, and Pulse—where users enter a research question and receive generated material such as a topic, a “read the full report” summary, and an organized set of references. The interface emphasizes relevance scoring and a paper list with year and abstracts, sorted to surface highly relevant work first. A “research assistant” workspace is also mentioned as a place to link papers into the workflow, aiming to streamline reference discovery and organization.
Underminded is positioned as an agentic research assistant that “condenses weeks of research into minutes,” with one major caveat: it dives deep quickly, which can overwhelm users who are new to a field. The tool asks clarifying questions, prompts the user to answer follow-ups, and then produces a detailed literature-review-style output. For users already familiar with a field, the results are described as strong and highly structured—using accordion-style sections, tables, and a distinctive visualization called “Top references over time.” Hovering reveals citation relationships (red markers indicate cited-by links), helping users quickly identify influential papers and foundational work. A chat area then allows deeper interrogation of findings and comparison of top results.
Future House targets “automating scientific discovery” with an “AI scientist” mission. Users create a new task, choose from multiple models (including Crow, Falcon, Phoenix, and Owl), and submit a scientific question—the example given is the gut microbiome and colorectal cancer. Outputs are described as a dense wall of text with small typography and limited formatting, but the tool provides references and DOIs so users can verify claims directly. The transcript also notes reference counts (e.g., 23 references) and that some results carry quality signals such as publication venue (e.g., a top-quality item in the journal Gut). The emphasis is on experimenting with different model and task types (including “deep search” and “chemistry task”).
Inra.ai focuses on structured review workflows. It offers options for narrative literature review, systematic literature review, meta-analysis, and gap analysis, each with an estimated time-to-complete. Users can type a research question and upload papers, with Zotero integration promised “soon.” The tool generates review-style text, including an executive-summary-like abstract, methods, key findings, and an introduction with linked references. For systematic reviews, it can produce a PRISMA flow diagram. Users can also continue via conversation prompts, such as asking it to explain methodology and limitations.
Smartress (smartressresearch-ai.com) is highlighted for discovery. Its “discover” search scans millions of papers and returns an outline of relevant works, with citation information and links to where papers appear on Semantic Scholar. It also lets users filter sources such as Semantic Scholar, arXiv, or OpenAlex. The transcript closes with a practical warning: performance varies by research field, so trying multiple tools is encouraged to find the best fit.
Cornell Notes
Several AI tools are presented as research workbenches that do more than summarize text: they search literature, organize references, and generate review-style outputs. Neural Lumi emphasizes relevance-scored paper discovery with multiple generation modes. Underminded stands out for agentic, highly structured literature synthesis plus a “Top references over time” visualization that helps identify influential and foundational papers. Future House and Inra.ai focus on scientific discovery and structured review workflows (including systematic-review artifacts like a PRISMA flow diagram). Smartress adds a “discover” search across millions of papers with relevance outlines and citation/source pointers (e.g., Semantic Scholar). Together, they suggest researchers can assemble literature reviews faster—then verify and refine using the provided references and DOIs.
What workflow does Neural Lumi aim to streamline for researchers?
Why does Underminded feel powerful to some users but confusing to others?
What is the distinctive “Top references over time” feature used for?
How does Future House support verification of its generated scientific outputs?
What structured capabilities does Inra.ai offer for literature reviews?
What does Smartress emphasize in its “discover” search?
Review Questions
- Which tool(s) provide explicit citation-relationship visualizations, and what does that visualization help a researcher do?
- How do Inra.ai and Future House differ in their approach to structuring outputs and supporting verification (e.g., PRISMA flow diagram vs. DOIs)?
- Why might an agentic tool like Underminded be less suitable for researchers who are new to a field?
Key Points
1. Neural Lumi combines question-based generation with relevance-scored paper lists that include year and abstracts to speed up reference discovery.
2. Underminded’s agentic workflow can produce highly structured literature synthesis, but it may overwhelm newcomers because it dives deep quickly.
3. Underminded’s “Top references over time” visualization helps identify influential and foundational papers using citation markers.
4. Future House positions itself as an “AI scientist” and provides DOIs and references so users can verify generated claims.
5. Inra.ai supports multiple review types (narrative, systematic, meta-analysis, gap analysis) and can generate systematic-review artifacts like a PRISMA flow diagram.
6. Smartress emphasizes large-scale literature discovery with relevance outlines and source/citation pointers (including Semantic Scholar and OpenAlex filters).
7. Tool performance likely varies by research field, so testing multiple options can be necessary to find the best fit.