The AI That's Changing Academia? Must-See for Researchers!
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A new wave of research-focused AI is moving beyond one-task-at-a-time tools and toward a single workspace that can ingest papers, organize a library, and answer questions across documents. One example highlighted here, Petal, positions itself as generative AI for researchers: upload PDFs or import citations, store and manage a research library, and then chat with either entire collections or specific documents to extract summaries, limitations, and even data-driven details.
The workflow starts with building a document library. After signing up on a free plan for testing, the pricing shown for an edu account is $255 per month (with a student account described as cheaper). The dashboard supports multiple ingestion paths: uploading from a computer, importing via BibTeX, importing via DOI/identifiers, and a web importer (with cloud-drive import described as “coming soon”). Once documents are in place, the interface separates core functions into tabs: a Documents tab for the library, an “AI table” for structured extraction and comparison, and “multi doc chat” for asking questions across selected sources.
A key feature is document-level interaction. Selecting an individual paper opens a side panel with properties and annotation tools, including the ability to highlight specific text and attach comments. From there, the AI can generate responses tied to the highlighted sections. In practice, the transcript shows mixed results: when asked for limitations, the AI returned that the provided document did not mention specific limitations—suggesting it may not infer beyond what’s explicitly present in the selected text. When asked for key points, it produced notes based on the highlighted excerpts rather than the full study, reinforcing that what the user selects can constrain what the AI summarizes.
For cross-paper synthesis, the "AI table" feature is presented as a way to emulate tools like Elicit (elicit.org, mentioned as a comparison point). Users can create a table with custom columns, such as "main conclusions" or "further work", and run AI queries per document. The transcript notes a limitation: answers appear to be generated per paper rather than in one "run everything" batch, with a suggestion that credit usage may be part of why bulk generation isn't immediate.
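The per-paper behavior described above can be sketched in plain Python. Everything here is hypothetical: the `ask_model` stub and the column names stand in for whatever Petal does internally, which the transcript does not expose. The sketch only illustrates the structure of an AI table, one model query per (document, column) cell rather than a single batch run.

```python
# Hypothetical sketch of an "AI table": one AI query per (document, column)
# cell, mirroring the per-paper generation described in the transcript.

def ask_model(document_text: str, prompt: str) -> str:
    # Stub: a real implementation would send the document and prompt to an LLM.
    return f"[answer to '{prompt}' from a {len(document_text)}-char document]"

def build_ai_table(documents: dict[str, str], columns: list[str]) -> dict[str, dict[str, str]]:
    """Fill a papers-by-prompts grid, issuing one model call per cell."""
    table = {}
    for title, text in documents.items():
        table[title] = {col: ask_model(text, col) for col in columns}
    return table

docs = {
    "Solar cell paper A": "full text of paper A",
    "Solar cell paper B": "full text of paper B",
}
table = build_ai_table(docs, ["main conclusions", "further work"])
for title, row in table.items():
    print(title, "->", list(row))
```

The cost implication falls out of the structure: two papers with two columns means four separate calls, which is consistent with the transcript's guess that credit usage discourages an immediate "run everything" button.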
The multi-document chat is where the tool’s promise becomes most tangible. Using a set of solar-cell papers, it generated research directions and categorized solar-cell types. At first, it struggled to interpret a question about “best” solar cells and didn’t extract efficiencies. After follow-up prompting for efficiencies, it returned numeric ranges (power conversion efficiency from 0.0039 to 3.9) and pulled specific values associated with device variants (e.g., 0.4 for devices with an aluminium cathode and 0.8 for a calcium aluminium cathode). The overall takeaway is that Petal is already useful for structured extraction and targeted synthesis, but performance depends heavily on how precisely questions are framed—especially when pulling quantitative data from dense, multi-document research collections.
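Why prompt precision helps with quantitative extraction can be illustrated with a plain regex pass. This is a rough stand-in, not Petal's method, and the sentence below is invented from the transcript's reported numbers: a generic question like "which is best" gives the system nothing concrete to match, while "power conversion efficiency" names a phrase that numeric values cluster around.

```python
import re

# Illustrative only: pull numeric values from a sentence that explicitly
# mentions "power conversion efficiency", the kind of anchor a precise
# question gives an extraction system. Example sentence is invented.
text = ("The power conversion efficiency was 0.4 for devices with an "
        "aluminium cathode and 0.8 for a calcium aluminium cathode.")

# Collect decimal numbers appearing in the efficiency-bearing sentence.
values = [float(v) for v in re.findall(r"\b\d+\.\d+\b", text)]
print(values)  # -> [0.4, 0.8]
```

The same principle applies beyond solar cells: naming the exact metric (accuracy, yield, latency) in the question gives the model a concrete target to locate across documents.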
The transcript closes by framing this as part of a broader shift: researchers are starting to see AI systems that combine ingestion, organization, annotation, and multi-source Q&A in one place, with expectations that such “single assistant” workflows could substantially boost productivity across fields in the near term.
Cornell Notes
Petal is presented as an AI research assistant that combines document ingestion, library organization, annotation, and question-answering across one or many papers. It supports multiple import methods (PDF upload, BibTeX, DOI/identifiers, and a web importer), then lets users chat with either a single document or a selected set via "multi doc chat." Annotation and highlighting can constrain what the AI summarizes, so results may reflect only the selected text rather than the entire study. An "AI table" feature enables structured extraction by running custom prompts per paper, useful for identifying research gaps and drafting proposals. Quantitative extraction improves with more specific questions, such as pulling power conversion efficiency ranges and device-specific values from solar-cell literature.
How does Petal help researchers move from collecting papers to actually extracting usable information?
What role do highlights and annotations play in the quality of AI summaries?
How does the AI table differ from chat-based workflows?
Why did the tool initially struggle with “best solar cells,” and what changed after follow-up?
What does the multi doc chat workflow look like in practice?
Review Questions
- When would you prefer multi doc chat over the AI table, and what kinds of questions fit each best?
- How might highlight-based scoping change the reliability of AI-generated “limitations” or “key points” summaries?
- What question-phrasing strategy helped the AI extract efficiencies from solar-cell papers, and how could that apply to other technical domains?
Key Points
1. Petal is positioned as an end-to-end research workspace that combines document ingestion, organization, annotation, and both single-document and multi-document Q&A.
2. Multiple import paths (PDF upload, BibTeX, DOI/identifiers, and a web importer) are designed to reduce friction when building a literature library.
3. Document-level chat can be tightly controlled using highlights and annotations, but answers may reflect only the selected text rather than the full paper.
4. The AI table enables structured extraction by letting users define custom columns and prompts (e.g., main conclusions, further work) for each uploaded study.
5. Multi doc chat supports synthesis across many papers, including generating research directions, but early answers may miss quantitative details unless questions are specific.
6. In the solar-cell example, asking for efficiencies directly produced numeric ranges and device-specific values, showing that precision in prompts can unlock better data extraction.
7. Bulk "answer everything" behavior appears limited in the demonstration, suggesting that per-document processing and AI credit usage may shape how researchers interact with the tool.