The AI That's Changing Academia? Must-See for Researchers!

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Petal is positioned as an end-to-end research workspace that combines document ingestion, organization, annotation, and both single-document and multi-document Q&A.

Briefing

A new wave of research-focused AI is moving beyond one-task-at-a-time tools toward a single workspace that can ingest papers, organize a library, and answer questions across documents. The example highlighted here, Petal, positions itself as generative AI for researchers: upload PDFs or import citations, store and manage a research library, and then chat with either entire collections or specific documents to extract summaries, limitations, and even data-driven details.

The workflow starts with building a document library. The reviewer signs up on a free plan for testing; the pricing shown for an edu account is $255 per month, with a student account described as cheaper. The dashboard supports multiple ingestion paths: uploading from a computer, importing via BibTeX, importing via DOI/identifiers, and a web importer (with cloud-drive import described as "coming soon"). Once documents are in place, the interface separates core functions into tabs: a Documents tab for the library, an "AI table" for structured extraction and comparison, and "multi doc chat" for asking questions across selected sources.
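
Petal's import pipeline isn't shown in the video, but DOI-based ingestion generally reduces to resolving the identifier against a public metadata service. A minimal sketch of that step using Crossref's REST API; the `library` list and the field selection here are illustrative, not Petal's actual schema:

```python
import json
import urllib.request

def fetch_metadata_by_doi(doi: str) -> dict:
    """Resolve a DOI to citation metadata via Crossref's public REST API."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        message = json.load(resp)["message"]
    return {
        "title": (message.get("title") or [""])[0],
        "authors": [
            f"{a.get('given', '')} {a.get('family', '')}".strip()
            for a in message.get("author", [])
        ],
        "year": message.get("issued", {}).get("date-parts", [[None]])[0][0],
        "doi": doi,
    }

# Hypothetical library store: a plain list of metadata records.
library = []
library.append(fetch_metadata_by_doi("10.1038/s41586-020-2649-2"))
print(library[0]["title"])
```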

A key feature is document-level interaction. Selecting an individual paper opens a side panel with properties and annotation tools, including the ability to highlight specific text and attach comments. From there, the AI can generate responses tied to the highlighted sections. In practice, the transcript shows mixed results: when asked for limitations, the AI responded that the provided document did not mention specific limitations, suggesting it may not infer beyond what is explicitly present in the selected text. When asked for key points, it produced notes based on the highlighted excerpts rather than the full study, reinforcing that what the user selects can constrain what the AI summarizes.
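
The transcript doesn't reveal how Petal scopes its context, but the observed behavior ("no limitations mentioned") is consistent with prompting over only the selected excerpts. A hedged sketch of that pattern, where `ask_llm` is a hypothetical stand-in for the underlying model call:

```python
def ask_llm(prompt: str) -> str: ...  # placeholder for a real model API call

def ask_about_highlights(question: str, highlights: list[str]) -> str:
    # Only the highlighted excerpts go into the context window, so
    # anything outside them (e.g., a limitations section the user never
    # selected) is invisible to the model.
    context = "\n\n".join(highlights)
    prompt = (
        "Answer using ONLY the excerpts below. If the answer is not "
        "present, say the document does not mention it.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```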

For cross-paper synthesis, the "AI table" feature is presented as a way to emulate tools like Elicit (elicit.org), mentioned as a comparison point. Users can create a table with custom columns, such as "main conclusions" or "further work," and run AI queries per document. The transcript notes a limitation: answers appear to be generated per paper rather than in one "run everything" batch, with a suggestion that credit usage may be part of why bulk generation isn't immediate.
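
How the AI table generates its cells isn't documented, but the per-paper behavior described above is what a one-call-per-cell loop would produce, and it would also explain why credit usage scales with table size. A sketch under those assumptions (`ask_llm` again stands in for the real model call):

```python
def ask_llm(prompt: str) -> str: ...  # placeholder for a real model API call

def extract_cell(document_text: str, column_prompt: str) -> str:
    # One model call per (document, column) cell.
    prompt = f"{column_prompt}\n\nPaper:\n{document_text}"
    return ask_llm(prompt)

def build_ai_table(documents: dict[str, str], columns: list[str]) -> dict:
    table = {}
    for title, text in documents.items():
        table[title] = {col: extract_cell(text, col) for col in columns}
    return table

columns = ["Main conclusions", "Further work: what experiments could expand on this"]
# table = build_ai_table({"Paper A": paper_a_text, "Paper B": paper_b_text}, columns)
```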

The multi-document chat is where the tool's promise becomes most tangible. Using a set of solar-cell papers, it generated research directions and categorized solar-cell types. At first, it struggled to interpret a question about "best" solar cells and didn't extract efficiencies. After follow-up prompting for efficiencies, it returned a numeric range (power conversion efficiency from 0.0039 to 3.9) and pulled specific values associated with device variants (e.g., 0.4 for devices with an aluminium cathode and 0.8 for a calcium aluminium cathode). The overall takeaway is that Petal is already useful for structured extraction and targeted synthesis, but performance depends heavily on how precisely questions are framed, especially when pulling quantitative data from dense, multi-document research collections.
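
The vague-then-targeted exchange maps onto a standard chat pattern: the follow-up question is sent together with the earlier turns, so the model can refine its answer. A sketch of that loop, again with a hypothetical `ask_llm`:

```python
def ask_llm(prompt: str) -> str: ...  # placeholder for a real model API call

def multi_doc_chat(sources: list[str], history: list[dict], question: str) -> str:
    # All selected sources plus the running conversation are sent each turn,
    # which is how a targeted follow-up can build on a vague first answer.
    context = "\n\n---\n\n".join(sources)
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    prompt = f"Sources:\n{context}\n\n{transcript}\nuser: {question}\nassistant:"
    answer = ask_llm(prompt)
    history += [{"role": "user", "content": question},
                {"role": "assistant", "content": answer}]
    return answer

# history = []
# multi_doc_chat(papers, history, "What are the best solar cells?")  # vague
# multi_doc_chat(papers, history,
#                "List the power conversion efficiencies per device variant.")  # targeted
```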

The transcript closes by framing this as part of a broader shift: researchers are starting to see AI systems that combine ingestion, organization, annotation, and multi-source Q&A in one place, with expectations that such “single assistant” workflows could substantially boost productivity across fields in the near term.

Cornell Notes

Petal is presented as an AI research assistant that combines document ingestion, library organization, annotation, and question-answering across one or many papers. It supports multiple import methods (PDF upload, BibTeX, DOI/identifiers, and a web importer), then lets users chat with either a single document or a selected set via "multi doc chat." Annotation and highlighting can constrain what the AI summarizes, so results may reflect only the selected text rather than the entire study. An "AI table" feature enables structured extraction by running custom prompts per paper, useful for identifying research gaps and drafting proposals. Quantitative extraction improves with more specific questions, such as pulling power conversion efficiency ranges and device-specific values from solar-cell literature.

How does Petal help researchers move from collecting papers to actually extracting usable information?

It starts with building a research library through multiple ingestion routes: uploading PDFs, importing citations via BibTeX, importing by DOI/identifiers, and using a web importer (cloud-drive import is described as coming soon). Once documents are stored in the Documents tab, users can either (1) chat with a single document, (2) annotate and highlight specific passages, or (3) use multi doc chat to ask questions across a selected subset of sources. For structured synthesis, the AI table lets users define columns (e.g., “main conclusions” or “further work”) and run AI queries per document.

What role do highlights and annotations play in the quality of AI summaries?

Highlights appear to act like a scope filter. In the demonstration, the AI's answers about "limitations" and "key points" reflected what was present in the highlighted sections. When asked for limitations, the AI responded that the provided document did not mention specific limitations, suggesting it may not infer beyond the text it was effectively given. The key implication is that selecting the right passages can improve relevance, while overly narrow or off-target highlights can limit the AI's coverage.

How does the AI table differ from chat-based workflows?

The AI table is designed for structured extraction and comparison. Users create a table with custom prompts per column, such as asking for “main conclusions” or “further work” (including “what experiments could expand on this work”). The transcript notes that generation appears to happen per paper rather than a single “answer everything” batch, which can be inconvenient when many documents are uploaded. Still, it’s positioned as a practical way to turn a literature set into a proposal-ready matrix.

Why did the tool initially struggle with “best solar cells,” and what changed after follow-up?

When asked for “the best solar cells,” the AI initially returned categories/types of solar cells rather than identifying top performers or extracting efficiencies. After a more specific follow-up—asking directly for the efficiencies—the AI produced numeric results, including a power conversion efficiency range from 0.0039 to 3.9 and device-specific values (0.4 for aluminium cathode devices and 0.8 for calcium aluminium cathode devices). The transcript attributes the improvement to more precise question framing.

What does the multi doc chat workflow look like in practice?

Users upload a set of papers, then in multi doc chat they select which sources to include and ask a question that spans the collection. In the solar-cell example, the AI generated multiple research directions (five options) based on the individual articles. The workflow supports iterative refinement: if the first answer misses the intended metric (like efficiencies), the user can follow up with a targeted question to extract the needed data.

Review Questions

  1. When would you prefer multi doc chat over the AI table, and what kinds of questions fit each best?
  2. How might highlight-based scoping change the reliability of AI-generated “limitations” or “key points” summaries?
  3. What question-phrasing strategy helped the AI extract efficiencies from solar-cell papers, and how could that apply to other technical domains?

Key Points

  1. Petal is positioned as an end-to-end research workspace that combines document ingestion, organization, annotation, and both single-document and multi-document Q&A.
  2. Multiple import paths (PDF upload, BibTeX, DOI/identifiers, and a web importer) are designed to reduce friction when building a literature library.
  3. Document-level chat can be tightly controlled using highlights and annotations, but answers may reflect only the selected text rather than the full paper.
  4. The AI table enables structured extraction by letting users define custom columns and prompts (e.g., main conclusions, further work) for each uploaded study.
  5. Multi doc chat supports synthesis across many papers, including generating research directions, but early answers may miss quantitative details unless questions are specific.
  6. In the solar-cell example, asking for efficiencies directly produced numeric ranges and device-specific values, showing that precision in prompts can unlock better data extraction.
  7. Bulk "answer everything" behavior appears limited in the demonstration, suggesting that per-document processing and AI credit usage may shape how researchers interact with the tool.

Highlights

Petal’s strongest workflow combines library building with annotation and then Q&A—so researchers can extract meaning from specific passages, not just entire PDFs.
Highlighting can constrain summaries: the AI may report “no limitations mentioned” when limitations aren’t present in the selected text.
In solar-cell literature, a vague question about “best” cells returned categories, but a targeted follow-up for “efficiencies” unlocked numeric ranges and cathode-specific values (0.4 and 0.8).
The AI table turns literature review into a structured matrix by letting users define custom prompts per document (useful for identifying gaps and proposing experiments).
