WARNING: Your PhD Might Get DRASTICALLY Easier With This AI (Genspark Demo: Auto Drafts & Slides)

Andy Stapleton · 4 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

GenSpark can generate a structured literature review from a single prompt, with clickable references for source checking.

Briefing

GenSpark’s strongest academic pitch is one-prompt automation: it can turn a basic literature-review request into a structured, referenced review and then use uploaded figures to lay out a first paper draft in the style of a peer-reviewed article, complete with an abstract and figure placement. In a demo focused on nano-composite transparent electrodes, a simple prompt (“Write a literature review on nano composite transparent electrodes and their use”) produced a long, organized review with bullet-point structure, text typed out in real time, and a reference list that can be checked by clicking through to sources. The output looked solid enough to earn an “A minus” style grade from the reviewer, though the reference count of 18 sources fell short of what many science-focused literature reviews aim for (often 50–60 or more).

The more consequential leap came when the workflow shifted from writing to paper assembly. After being given only figures, the system generated a proposed paper structure and then a “full paper draft” view. The draft included an abstract and a caption-driven interpretation of the figures, and it went beyond merely mentioning figures by inserting them into the schematic where it judged they belonged. That figure ordering wasn’t strictly aligned with the original caption numbering (figure one appeared before figure three, with figure two later), but the reviewer framed this as a sign the tool is trying to build narrative logic rather than just reformat content. The overall takeaway: feeding data (figures, captions, and prompts) may soon be enough to produce a usable first draft, leaving humans to refine argumentation, methods detail, and compliance with journal expectations.

Not every academic task landed cleanly. When asked to create a graphical abstract, the generated image text was garbled and the scientific phrasing came out nonsensical (e.g., mangled terms and unreadable text). The reviewer suggested the image output is better treated as inspiration than as publication-ready material, even though GenSpark includes an image studio.

The most novel capability in the demo was agentic slide creation from raw experimental files. After four-point probe measurement data was uploaded, the system converted file formats, summarized large documents, generated figures, and assembled an HTML-based slide deck with animations and interactive elements. It produced slides that matched the implied experimental story: introduction and methodology, unannealed sample comparison, key findings, and treatment effects after annealing. It also calculated and presented results such as resistivity differences tied to material changes, and it reported a narrowed gap of 14.5% after annealing. The reviewer noted that exporting to PowerPoint works but looks misformatted in the current beta state; the HTML presentation was the smoothest experience.
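For context on the kind of numbers those slides contain: in a four-point probe experiment, resistivity is typically derived from measured voltage, measured current, and film thickness. The sketch below is purely illustrative and is not GenSpark’s actual pipeline; the sample readings and variable names are hypothetical, and it uses the standard thin-film correction factor π/ln 2.

```python
import math

# Standard collinear four-point probe correction factor for a thin film
# whose lateral size is much larger than the probe spacing: pi / ln(2) ≈ 4.532.
CORRECTION = math.pi / math.log(2)

def sheet_resistance(voltage_v: float, current_a: float) -> float:
    """Sheet resistance R_s (ohms per square) from one probe reading."""
    return CORRECTION * voltage_v / current_a

def resistivity(voltage_v: float, current_a: float, thickness_m: float) -> float:
    """Bulk resistivity (ohm·m) of a film of known thickness."""
    return sheet_resistance(voltage_v, current_a) * thickness_m

# Hypothetical readings for an unannealed vs. annealed sample (illustrative only).
rho_unannealed = resistivity(voltage_v=1.2e-3, current_a=1.0e-3, thickness_m=150e-9)
rho_annealed = resistivity(voltage_v=0.9e-3, current_a=1.0e-3, thickness_m=150e-9)

change = 100 * (rho_unannealed - rho_annealed) / rho_unannealed
print(f"Unannealed: {rho_unannealed:.2e} ohm·m, annealed: {rho_annealed:.2e} ohm·m")
print(f"Relative change after annealing: {change:.1f}%")
```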

Overall, the demo positions GenSpark as a one-prompt pipeline for academia: literature review drafting, figure-to-paper scaffolding, and raw-data-to-presentation generation. The promise is speed and reduced first-draft friction; the caveat is that publication-grade visuals (like graphical abstracts) may still require human correction and stronger validation of references and scientific wording.

Cornell Notes

GenSpark’s academic workflow centers on turning minimal inputs into structured research outputs. A prompt can generate an in-depth literature review with bullet-point organization and a clickable reference list, though the demo produced only 18 sources, fewer than many science reviews target. More impressively, uploaded figures can be used to create a paper draft in the format of a peer-reviewed article, with an abstract and figure placement, building narrative structure rather than just reformatting. The tool’s weakest spot in the demo was graphical abstracts, where generated text was often nonsensical. Its standout feature was agentic slide creation: raw four-point probe data files were converted into an HTML presentation with figures, key findings, and animated, supervisor-ready slides.

What did GenSpark produce from a simple literature-review prompt, and what limitation appeared in the references?

With a prompt to write a literature review on nano composite transparent electrodes, GenSpark generated a comprehensive, structured review with bullet points, typed out in real time. It included a reference section that could be checked by clicking through to the sources. The main limitation was the reference count: the review listed 18 sources, while the reviewer expected far more for science-focused literature reviews (often 50–60 or more).

How did the figure-to-paper workflow work, and what was notable about figure placement?

Given only figures, GenSpark first produced a proposed paper structure and then a “full paper draft” view. The draft included an abstract and a caption-based interpretation of the figures, and it inserted figures into the schematic locations it judged appropriate. Figure ordering didn’t match the original caption numbering (figure one appeared before figure three, with figure two later), suggesting the system is optimizing narrative flow rather than preserving label order.

Why was the graphical abstract result considered unreliable for academic use?

When GenSpark was asked to create a graphical abstract from the paper’s abstract, the text in the generated image was largely nonsensical and scientific terms were garbled. The reviewer concluded it wasn’t suitable for publication and could at best serve as inspiration, even though GenSpark has an image studio.

What makes GenSpark’s slide generation from raw data different from typical summarizers?

The demo showed an agentic pipeline: raw four-point probe data files were processed through file-format conversion, large-document summarization, figure generation, and then slide assembly. The output was an HTML deck with interactive, animated elements and slides that reflected the experimental story (introduction/methodology, unannealed comparisons, key findings, and annealing effects).

What happened when the demo attempted to export the generated slides to PowerPoint?

Exporting to PowerPoint produced a file, but the formatting was off: slide element sizes and layout didn’t carry over properly. The reviewer treated this as a beta limitation and recommended using the HTML presentation for the best visual fidelity.

Review Questions

  1. In the literature-review demo, what two signals suggested quality (structure and verification) and what one metric suggested incompleteness (source count)?
  2. What evidence from the peer-reviewed draft suggests the system is building narrative logic from figures rather than simply placing them in order?
  3. Which academic task in the demo produced publication-inappropriate output, and what specific failure mode was observed?

Key Points

  1. GenSpark can generate a structured literature review from a single prompt, with clickable references for source checking.
  2. The demo’s literature review used only 18 sources, which may be insufficient for many science literature reviews that expect 50–60+ references.
  3. Uploaded figures can be transformed into a scaffold for a paper draft in peer-reviewed format, including an abstract and figure placement within a schematic.
  4. Figure ordering in the draft may change relative to original caption numbering, indicating narrative optimization rather than strict label preservation.
  5. Graphical abstract generation produced garbled scientific text, making it unreliable for publication-ready graphics.
  6. Raw four-point probe data can be converted into an HTML slide deck with figures, key findings, and animations in a near one-prompt workflow.
  7. PowerPoint export appears beta-rough, with layout and sizing issues compared with the HTML output.

Highlights

A one-prompt request generated a long, structured literature review with bullet points and a checkable reference list, though it stopped at 18 sources.
Feeding only figures led to a paper draft in peer-reviewed format with an abstract and figure insertion, including re-ordered figure placement to match narrative flow.
Graphical abstracts came out with nonsensical text, limiting them to inspiration rather than academic use.
Uploading raw four-point probe files produced an HTML presentation with interactive animations and supervisor-ready structure, while PowerPoint export misformatted elements.
