Manus AI Might Be the Most Powerful Tool for Researchers Yet
Based on Andy Stapleton's YouTube video. If you like this content, support the original creator by watching, liking, and subscribing.
Manus AI is an agentic research tool that performs multi-step tasks (searching, synthesizing, and structuring outputs) rather than only generating text.
Briefing
Manus AI (Manus.im) is positioning itself as an “agentic” research assistant for academia: one that doesn’t just draft text, but runs multi-step tasks, searches for sources, and produces structured research outputs that look ready for scholarly use. In testing, it delivered an unusually comprehensive literature-gap analysis, an in-depth literature review on organic photovoltaic (OPV) devices, and even a first draft of a journal-style paper built from uploaded figures, while showing its intermediate steps along the way.
For literature-gap analysis, the workflow started with three provided papers and a request for research gaps and future directions. Manus AI then generated research questions, assessed whether the proposed gaps were appropriate, and produced a “comprehensive report” with an executive summary, key findings, paper-by-paper overviews, and—most importantly—specific potential research directions and recommendations. The process took several minutes and ran in the background on cloud compute, with the interface showing the sequence of actions it performed. The output wasn’t limited to generic suggestions; it included concrete future-research recommendations spanning areas like materials, stability, enhancement, and manufacturing.
The literature review test pushed further. With a straightforward prompt—“write a literature review for a thesis about OPV devices”—Manus AI executed an extended research cycle that included searching the internet, synthesizing findings, and organizing the review around performance metrics, fabrication methods, and research challenges. The resulting document was described as a 105-page PDF with a full table of contents, fundamentals, materials development, measurement methods, degradation imaging techniques, and spectroscopic methods. While the language sometimes leaned toward grandiosity, the structure and breadth were treated as academically usable, including extensive references intended to support the write-up.
The most striking demonstration involved turning figures into a paper. After uploading five figures (with captions, though at low resolution), Manus AI produced a “story structure” and then a formatted academic paper draft. The draft included a title, an abstract, keyword placeholders, an introduction with reference placeholders, experimental methods derived from the figure captions (including materials and fabrication-process details), and a results-and-discussion section that followed the figure order it deemed appropriate. The draft also included an “application” framing, arguing that the proposed flexible transparent electrodes could support flexible solar cells, citing a power conversion efficiency of roughly 7% and emphasizing mechanical durability. The output was presented as close to a first draft ready for peer review, though its references were still incomplete and its interpretation lacked nuance.
Despite the strong results, the testing also surfaced friction points: export/download bugs, occasional document-upload failures, and weak image generation for a graphical abstract compared with ChatGPT’s output. Manus AI’s capabilities appear strongest for deep research, literature review, and figure-to-paper structuring, but it is also described as expensive, consuming hundreds to nearly a thousand credits for major tasks, with costs reaching $200 per month for large credit allotments. Overall, Manus AI is framed as a powerful, agentic research-workflow tool for academics, with clear promise and equally clear rough edges while still in early access.
Cornell Notes
Manus AI (Manus.im) functions as an agentic research assistant for academia, running multi-step tasks rather than only generating text. In tests, it produced (1) a detailed literature-gap analysis from three uploaded papers, including an executive summary, key findings, and specific future research directions; (2) a long, structured literature review on OPV devices after an extended web search and synthesis cycle; and (3) a journal-style paper draft built from uploaded figures, including an outline, a narrative flow, and a formatted manuscript with methods drawn from the figure captions. The tool’s outputs are academically structured and show intermediate steps, but it still has early-access bugs and weaker image generation than alternatives such as ChatGPT. Cost is also a factor, because major tasks consume large numbers of credits.
- How did Manus AI handle literature-gap analysis when given multiple papers?
- What made the OPV literature review output stand out compared with typical AI drafts?
- How did Manus AI turn uploaded figures into a paper draft?
- Where did Manus AI underperform in the testing?
- What cost signals appeared during the tests, and why do they matter?
Review Questions
- What specific sections and recommendation types appeared in Manus AI’s literature-gap report, and how did it justify gaps beyond listing ideas?
- How did the OPV literature review process (search, synthesis, structure) influence the usefulness of the final 105-page document?
- In the figure-to-paper workflow, what information was successfully extracted from figure captions, and what key elements were still incomplete (e.g., references)?
Key Points
1. Manus AI is an agentic research tool that performs multi-step tasks (searching, synthesizing, and structuring outputs) rather than only generating text.
2. Literature-gap analysis from multiple uploaded papers produced an executive summary, key findings, and specific future research directions with justification steps.
3. An OPV literature review was generated as a long, structured, academic-style PDF after an extended web research and synthesis cycle.
4. Uploaded figures can be converted into a journal-style paper draft, including methods and narrative flow, though the nuance of the results and discussion may lag behind a human researcher's.
5. Early-access reliability issues included export/download bugs and occasional document-upload failures.
6. Image generation for academic graphics (e.g., graphical abstracts) was weaker than ChatGPT's output in the tester's comparison.
7. Credits-based pricing can make deep-research outputs expensive, with major tasks consuming hundreds to nearly a thousand credits.