
5 Mind-Blowing AI Tools for Research You’ve Never Heard Of

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Neural Lumi combines question-based generation with relevance-scored paper lists that include year and abstracts to speed up reference discovery.

Briefing

A new wave of AI research tools is moving beyond “chatbots” into systems that search literature, organize references, and even draft review-style outputs—often with free tiers. The most practical takeaway is that researchers can now run end-to-end workflows: pose a question, retrieve relevant papers, visualize influence and citation patterns, and generate structured summaries that can be refined rather than built from scratch.

Neural Lumi (spelled in the transcript as “Neural Lumia”/“Noral Loomi”) is presented as an all-in-one platform for researchers. After logging in, it offers multiple output modes—Search Paper, Spark, and Pulse—where users enter a research question and receive generated material: a topic, a summary with a “read the full report” option, and an organized set of references. The interface emphasizes relevance scoring, showing each paper with its year and abstract and sorting the list so highly relevant work surfaces first. A “research assistant” workspace is also mentioned as a place to link papers into the workflow, aiming to streamline reference discovery and organization.
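
The video doesn’t show Neural Lumi’s internals, but the kind of relevance-ranked paper list it describes (title, year, abstract) can be approximated against the public Semantic Scholar Graph API. A minimal Python sketch, purely illustrative and not a description of Neural Lumi’s actual method:

```python
# Minimal sketch: a relevance-ranked paper list (title, year, abstract)
# via the public Semantic Scholar Graph API. The search endpoint returns
# results ordered by relevance to the query.
import requests

def search_papers(question: str, limit: int = 10):
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": question,
            "limit": limit,
            "fields": "title,year,abstract,citationCount",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for paper in search_papers("gut microbiome and colorectal cancer"):
    print(paper["year"], paper["title"])
```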

Undermind (rendered in the transcript as “Underminded”) is positioned as an agentic research assistant that “condenses weeks of research into minutes,” with one caveat: it dives deep quickly, which can overwhelm users new to a field. The tool asks clarifying questions and prompts the user to answer follow-ups, then produces a detailed literature-review-style output. For users already familiar with a field, the results are described as strong and highly structured—using accordion-style sections, tables, and a distinctive visualization called “Top references over time.” Hovering reveals citation relationships (red markers indicate cited-by links), helping users quickly identify influential papers and foundational work. A chat area then allows deeper interrogation of findings and comparisons of top results.
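
Undermind’s chart is interactive, but the underlying idea (plot each reference by year and flag heavily cited ones) is easy to picture with a static sketch. The data below is hypothetical and the 500-citation threshold is arbitrary; this only mimics the concept, not the tool’s implementation:

```python
# Minimal sketch of a "top references over time" style chart, assuming you
# already have (title, year, citation_count) tuples for each reference.
import matplotlib.pyplot as plt

references = [  # hypothetical data for illustration
    ("Foundational survey", 1998, 4200),
    ("Key method paper", 2012, 950),
    ("Recent follow-up", 2021, 40),
    ("Uncited preprint", 2023, 0),
]

years = [r[1] for r in references]
cites = [r[2] for r in references]
# red = heavily cited (an arbitrary cutoff for this sketch)
colors = ["red" if c > 500 else "gray" for c in cites]

plt.scatter(years, cites, c=colors)
plt.xlabel("Publication year")
plt.ylabel("Citations")
plt.title("Top references over time (sketch)")
plt.show()
```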

Future House targets “automating scientific discovery” with an “AI scientist” mission. Users create a new task, choose from multiple models (including Crow, Falcon, Phoenix, and Owl), and submit a scientific question—example given: gut microbiome and colorectal cancer. Outputs are described as a dense wall of text with small typography and limited formatting, but the tool provides references and DOIs so users can verify claims directly. In the example, 23 references were included, some labeled with quality signals such as publication venue (e.g., a top-quality item in the journal Gut). The emphasis is on experimenting with different model/task types (including “deep search” and a “chemistry task”).
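
Since Future House’s value here is verifiability, it’s worth noting that any returned DOI can be checked programmatically. A minimal sketch using the public Crossref REST API; the DOI below is a placeholder (it resolves to the NumPy paper in Nature), not one from the video:

```python
# Minimal sketch: verifying a generated reference by resolving its DOI
# through the public Crossref REST API.
import requests

def resolve_doi(doi: str) -> dict:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    return resp.json()["message"]

record = resolve_doi("10.1038/s41586-020-2649-2")  # placeholder example DOI
print(record["title"][0])
venue = record.get("container-title") or ["(no venue)"]
print(venue[0])  # e.g. the journal name
```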

Inra.ai focuses on structured review workflows. It offers options for narrative literature review, systematic literature review, meta-analysis, and gap analysis, each with an estimated time to complete. Users can type a research question and upload papers, with Zotero integration promised “soon.” The tool generates review-style text, including an executive-summary-like abstract, methods, key findings, and an introduction with linked references. For systematic reviews, it can produce a PRISMA flow diagram. Users can also continue via conversation prompts, such as asking it to explain methodology and limitations.
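
For readers unfamiliar with the artifact: a PRISMA flow diagram tracks how many records survive each stage of a systematic review. A trivial sketch of that bookkeeping, with hypothetical counts (Inra.ai draws the diagram itself):

```python
# Minimal sketch of the counts behind a PRISMA flow diagram: records
# identified, screened, assessed for eligibility, and included.
stages = [  # hypothetical counts for illustration
    ("Records identified", 1240),
    ("Records screened (after duplicates removed)", 980),
    ("Full texts assessed for eligibility", 150),
    ("Studies included in review", 42),
]

# Print each stage with how many records were excluded before the next one.
for (name, count), (_, next_count) in zip(stages, stages[1:]):
    print(f"{name}: {count}  (excluded: {count - next_count})")
print(f"{stages[-1][0]}: {stages[-1][1]}")
```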

Smartress (smartressresearch-ai.com) is highlighted for discovery. Its “discover” search scans millions of papers and returns an outline of relevant works, with citation information and links to where papers appear on Semantic Scholar. It also lets users filter sources such as Semantic Scholar, arXiv, or OpenAlex. The transcript closes with a practical warning: performance varies by research field, so trying multiple tools is encouraged to find the best fit.
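
OpenAlex, one of the sources Smartress can filter by, exposes a free public API, so a “discover”-style scan can also be reproduced directly. A minimal sketch (the mailto parameter is OpenAlex’s polite-usage convention; substitute your own address):

```python
# Minimal sketch: a "discover"-style search against the public OpenAlex API,
# returning relevance-ranked works with citation counts and DOIs.
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={
        "search": "gut microbiome colorectal cancer",
        "per-page": 5,
        "mailto": "you@example.org",  # replace with your own address
    },
    timeout=30,
)
resp.raise_for_status()

for work in resp.json()["results"]:
    print(work["publication_year"], work["display_name"])
    print("  cited by:", work["cited_by_count"], "| doi:", work.get("doi"))
```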

Cornell Notes

Several AI tools are presented as research workbenches that do more than summarize text: they search literature, organize references, and generate review-style outputs. Neural Lumi emphasizes relevance-scored paper discovery with multiple generation modes. Undermind stands out for agentic, highly structured literature synthesis plus a “Top references over time” visualization that helps identify influential and foundational papers. Future House and Inra.ai focus on scientific discovery and structured review workflows (including systematic-review artifacts like a PRISMA flow diagram). Smartress adds a “discover” search across millions of papers with relevance outlines and citation/source pointers (e.g., Semantic Scholar). Together, they suggest researchers can assemble faster literature reviews—then verify and refine using provided references and DOIs.

What workflow does Neural Lumi aim to streamline for researchers?

Neural Lumi is described as an all-in-one platform where users enter a research question and choose output modes such as Search Paper, Spark, or Pulse. The generated results include a topic, a summary with a “read the full report” option, and an organized set of references. Papers are shown with relevance scoring, year, and abstracts, sorted to surface highly relevant work first. A “research assistant” workspace is also mentioned as a place to organize papers and link them into the workflow.

Why does Undermind feel powerful to some users but confusing to others?

Undermind behaves like an agent: it asks clarifying questions, then quickly goes deep with follow-up prompts that the user must answer. The transcript warns that newcomers to a field may be overwhelmed by technical detail (the example given involves polymer terminology), making the output harder to interpret. For researchers already familiar with the domain, the tool’s structured output—accordion sections, tables, and citation visualizations—makes the literature easier to navigate.

What is the distinctive “Top references over time” feature used for?

Undermind’s “Top references over time” visualization provides a quick, visual snapshot of influential papers. The transcript describes hover behavior that shows whether a paper is cited (red markers indicate citation relationships). One referenced item is described as being cited by many papers (shown in red), while another shows no citations, helping users rapidly spot foundational work and highly cited studies.

How does Future House support verification of its generated scientific outputs?

Future House outputs a dense text response, but it includes references and DOIs so users can check sources directly. In the gut microbiome/colorectal cancer example, the tool provides 23 references and indicates quality signals such as journal venue (e.g., a top-quality item in Gut). The transcript notes that references are available even when the main text’s formatting is minimal.

What structured capabilities does Inra.ai offer for literature reviews?

Inra.ai provides multiple review types—narrative literature review, systematic literature review, meta-analysis, and gap analysis—each with an estimated time to complete. Users can enter a research question and upload papers, with Zotero integration promised “soon.” The tool generates structured review text (abstract, methods, key findings, introduction) with linked references, and for systematic reviews it can produce a PRISMA flow diagram. It also supports follow-up conversation prompts, such as explaining methodology and limitations.

What does Smartress emphasize in its “discover” search?

Smartress highlights discovery across millions of papers. The “discover” feature returns an outline/list of relevant papers for a user’s query, including relevance information and citation details. It also indicates where papers appear on Semantic Scholar and allows filtering by sources such as Semantic Scholar, arXiv, or OpenAlex. Users can add results to a library for later use.

Review Questions

  1. Which tool(s) provide explicit citation-relationship visualizations, and what does that visualization help a researcher do?
  2. How do Inra.ai and Future House differ in their approach to structuring outputs and supporting verification (e.g., PRISMA flow diagram vs. DOIs)?
  3. Why might an agentic tool like Undermind be less suitable for researchers who are new to a field?

Key Points

  1. Neural Lumi combines question-based generation with relevance-scored paper lists that include year and abstracts to speed up reference discovery.
  2. Undermind’s agentic workflow can produce highly structured literature synthesis, but it may overwhelm newcomers because it dives deep quickly.
  3. Undermind’s “Top references over time” visualization helps identify influential and foundational papers using citation markers.
  4. Future House positions itself as an “AI scientist” and provides DOIs and references so users can verify generated claims.
  5. Inra.ai supports multiple review types (narrative, systematic, meta-analysis, gap analysis) and can generate systematic-review artifacts like a PRISMA flow diagram.
  6. Smartress emphasizes large-scale literature discovery with relevance outlines and source/citation pointers (including Semantic Scholar and OpenAlex filters).
  7. Tool performance likely varies by research field, so testing multiple options can be necessary to find the best fit.

Highlights

Undermind pairs deep, agentic literature synthesis with a “Top references over time” view that visually flags citation influence.
Future House’s outputs may be hard to read due to formatting, but DOIs and reference lists enable source checking.
Inra.ai’s workflow targets review structure—complete with PRISMA flow diagrams for systematic reviews.
Smartress’s “discover” mode is built for fast scanning across millions of papers, then saving results to a library.
