
Manus AI Might Be the Most Powerful Tool for Researchers Yet

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Manus AI is an agentic research tool that performs multi-step tasks (searching, synthesizing, and structuring outputs) rather than only generating text.

Briefing

Manus AI (Manus.im) is positioning itself as an “agentic” research assistant for academia—one that doesn’t just draft text, but runs multi-step tasks, searches for sources, and produces structured research outputs that look ready for scholarly use. In testing, it delivered unusually comprehensive literature-gap analysis, an in-depth literature review on organic photovoltaic (OPV) devices, and even a first-draft peer-reviewed-style paper built from uploaded figures—while showing its intermediate steps along the way.
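To make the “agentic” framing concrete: rather than answering in one pass, an agentic tool loops through planning, acting (searching, reading, synthesizing), and recording intermediate results until it can emit a structured report. The toy sketch below illustrates that loop in general terms; every function name here is a placeholder and does not correspond to any real Manus AI API.

```python
# Toy illustration of an agentic research loop (plan -> act -> record),
# purely conceptual. These helpers are hypothetical stand-ins, not real tools.

def plan_next_action(notes):
    """Pick the next step based on what has been done so far."""
    order = ["search", "synthesize", "report"]
    return order[len(notes)] if len(notes) < len(order) else None

def execute(action, task):
    """Stand-in for real tool use (web search, paper reading, drafting)."""
    return f"{action} result for: {task}"

def agentic_research(task):
    notes = []
    while (action := plan_next_action(notes)) is not None:
        notes.append({"action": action, "result": execute(action, task)})
    # Structured output keeps the intermediate steps visible, mirroring how
    # the tool shows the sequence of actions it performed.
    return {
        "task": task,
        "steps": [n["action"] for n in notes],
        "report": notes[-1]["result"],
    }

out = agentic_research("research gaps in OPV stability")
```

The key property is that each iteration's output feeds the next planning step, which is what distinguishes this pattern from single-shot text generation.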

For literature-gap analysis, the workflow started with three provided papers and a request for research gaps and future directions. Manus AI then generated research questions, assessed whether the proposed gaps were appropriate, and produced a “comprehensive report” with an executive summary, key findings, paper-by-paper overviews, and—most importantly—specific potential research directions and recommendations. The process took several minutes and ran in the background on cloud compute, with the interface showing the sequence of actions it performed. The output wasn’t limited to generic suggestions; it included concrete future-research recommendations spanning areas like materials, stability, enhancement, and manufacturing.

The literature review test pushed further. With a straightforward prompt—“write a literature review for a thesis about OPV devices”—Manus AI executed an extended research cycle that included searching the internet, synthesizing findings, and organizing the review around performance metrics, fabrication methods, and research challenges. The resulting document was described as a 105-page PDF with a full table of contents, fundamentals, materials development, measurement methods, degradation imaging techniques, and spectroscopic methods. While the language sometimes leaned toward grandiosity, the structure and breadth were treated as academically usable, including extensive references intended to support the write-up.

The most striking demonstration involved turning figures into a paper. After uploading five figures (with captions, though low resolution), Manus AI produced a “story structure” and then a formatted academic paper draft. The draft included a title, abstract, keywords placeholders, an introduction with reference placeholders, experimental methods derived from figure captions (including materials and fabrication process details), and a results-and-discussion section that followed the figure order it deemed appropriate. The draft also included an “application” framing—arguing that the proposed flexible transparent electrodes could support flexible solar cells, citing an approximate power conversion efficiency around 7% and emphasizing mechanical durability. The output was presented as close to a peer-reviewed first draft, though it still lacked some reference completeness and nuanced interpretation.

Despite the strong results, the testing also surfaced friction points: export/download bugs, occasional document-upload failures, and weak image generation for a graphical abstract compared with ChatGPT’s output. Manus AI’s capabilities appear strongest for deep research, literature review, and figure-to-paper structuring, but it’s also described as expensive—consuming hundreds to nearly a thousand credits for major tasks, with a potential cost reaching $200 per month for large credit allotments. Overall, Manus AI is framed as a powerful, agentic research workflow tool for academics, with clear promise and clear rough edges still in early access.

Cornell Notes

Manus AI (Manus.im) functions as an agentic research assistant for academia, running multi-step tasks rather than only generating text. In tests, it produced (1) a detailed literature-gap analysis from three uploaded papers, including executive summaries, key findings, and specific future research directions; (2) a long, structured literature review on OPV devices after an extended web search and synthesis; and (3) a peer-reviewed-style paper draft built from uploaded figures, including an outline, narrative flow, and a formatted manuscript with methods drawn from figure captions. The tool’s outputs are academically structured and show intermediate steps, but it still has early-access bugs and weaker image generation than specialized tools. Cost is also a factor because major tasks consume large numbers of credits.

How did Manus AI handle literature-gap analysis when given multiple papers?

It accepted three uploaded papers and generated research gaps by first producing research questions it believed needed answering, then performing additional research to judge whether each proposed gap was appropriate. The workflow took several minutes and produced a comprehensive report with an executive summary, key findings, and overviews of each provided paper. The most useful section was the set of potential research directions and recommendations for future work, including themes like stability, enhancement, and manufacturing, plus more specific, gap-focused suggestions.

What made the OPV literature review output stand out compared with typical AI drafts?

The OPV review was generated after an extended process (about half an hour) that included searching the internet, identifying and synthesizing relevant papers, and organizing the review around fundamentals, materials, performance metrics, fabrication methods, and research challenges. The final deliverable was described as a 105-page PDF with a table of contents, an abstract, and extensive coverage of measurement and degradation analysis methods (including imaging and spectroscopic techniques). It also included a large reference list intended to support the academic structure.

How did Manus AI turn uploaded figures into a paper draft?

After uploading five figures (with captions but low resolution), Manus AI analyzed the visual content and produced a paper “story structure” before generating a formatted academic manuscript. The draft included sections such as an abstract, keywords placeholders, an introduction with reference placeholders, and experimental methods derived from figure captions (including materials and fabrication process details like diameters). In results and discussion, it organized the narrative in the order it judged appropriate for the figures, and it added an application framing that connected the findings to flexible solar cells and mechanical durability.

Where did Manus AI underperform in the testing?

Image generation for an academic graphical abstract was weak: the produced graphic didn’t resemble an AFM tip and didn’t match the expected scientific visual. The tool also showed early-access reliability issues, including export/download failures (PDF export not working at the time), document uploads sometimes getting stuck, and other interface bugs such as elements not collapsing properly.

What cost signals appeared during the tests, and why do they matter?

Manus AI uses credits tied to task complexity. The OPV literature review consumed roughly 920 credits, while the “story structure” task used about 40 credits. The tester noted that credits can add up quickly, and an upgrade tier was described as potentially costing up to $200 per month for nearly 20,000 credits. That makes the tool most practical for high-value deep research tasks rather than frequent lightweight drafting.
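The quoted figures allow a rough per-task dollar estimate. Assuming the $200/month tier buys exactly 20,000 credits (the source says "nearly 20,000", so this is an approximation):

```python
# Back-of-envelope cost per task, assuming $200/month for ~20,000 credits.
PRICE_PER_CREDIT = 200 / 20_000          # ~$0.01 per credit (approximation)

review_cost = 920 * PRICE_PER_CREDIT     # OPV literature review: ~$9.20
story_cost = 40 * PRICE_PER_CREDIT       # figure "story structure": ~$0.40

print(f"review ~${review_cost:.2f}, story structure ~${story_cost:.2f}")
```

At roughly a cent per credit, a 920-credit deep-research run costs on the order of $9, which supports the conclusion that the tool suits high-value research tasks rather than frequent lightweight drafting.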

Review Questions

  1. What specific sections and recommendation types appeared in Manus AI’s literature-gap report, and how did it justify gaps beyond listing ideas?
  2. How did the OPV literature review process (search, synthesis, structure) influence the usefulness of the final 105-page document?
  3. In the figure-to-paper workflow, what information was successfully extracted from figure captions, and what key elements were still incomplete (e.g., references)?

Key Points

  1. Manus AI is an agentic research tool that performs multi-step tasks (searching, synthesizing, and structuring outputs) rather than only generating text.
  2. Literature-gap analysis from multiple uploaded papers produced executive summaries, key findings, and specific future research directions with justification steps.
  3. An OPV literature review was generated as a long, structured academic-style PDF after an extended web research and synthesis cycle.
  4. Uploaded figures can be converted into a peer-reviewed-style paper draft, including methods and narrative flow, though results-and-discussion nuance may lag behind a human researcher.
  5. Early-access reliability issues included export/download bugs and occasional document-upload failures.
  6. Image generation for academic graphics (e.g., graphical abstracts) was weaker than alternatives like ChatGPT in the tester’s comparison.
  7. Credits-based pricing can make deep research outputs expensive, with major tasks consuming hundreds to nearly a thousand credits.

Highlights

Manus AI’s literature-gap workflow generated research questions, then ran additional research to validate whether proposed gaps were appropriate—producing a structured report with actionable directions.
The OPV literature review output was described as a 105-page, academically structured PDF with detailed coverage of measurement and degradation analysis methods.
From five uploaded figures, Manus AI produced both a narrative “story structure” and a formatted paper draft, including methods derived from figure captions and an application-focused framing.
