How to Use Perplexity's Deep Research & Save HOURS on research

Andy Stapleton · 4 min read

Based on Andy Stapleton's video on YouTube. If you find this summary useful, support the original creator by watching, liking, and subscribing.

TL;DR

Perplexity’s Deep Research mode supports academic-only sourcing by letting users turn off web results and enable academic search.

Briefing

Perplexity’s “Deep Research” is positioned as a student-friendly way to produce literature reviews with academic-only sourcing—without the manual copy-paste and reference wrangling that often slows down research. The standout feature is control over search sources: users can disable web results and keep the workflow focused on academic material, then ask for tasks like a literature review or a structured research write-up. In practice, the tool generates a research plan, runs through staged steps, and returns a synthesized answer backed by a large set of citations.

A key test involved writing a literature review on nanomaterials for transparent electrodes, specifically targeting new materials, fabrication methods, figure of merit, and stability. After switching to Deep Research mode and setting sources to academic (with web turned off), the workflow produced a multi-section output that broke the topic into categories such as emerging nanomaterials, novel fabrication methods, figure of merit analysis, and stability/degradation mechanisms, ending with a conclusion. The result included 58 academic sources, which the user could expand, remove selectively, and open to view the underlying references.

The interface also supports verification and reuse. Clicking through references takes users directly to the cited material, and the output is organized so citations appear in context rather than forcing users to reconstruct a bibliography afterward. For deliverables, the tool offers export options including PDF, Markdown, and a “Perplexity page” view designed for easier reading and navigation. The PDF is described as fully linked—so references remain clickable—making it practical for study sessions or handing work in with traceable sourcing.

The comparison with ChatGPT is nuanced. Perplexity is praised for academic-only sourcing, structured readability, and export formats that preserve links to sources. ChatGPT is described as sometimes providing more raw information, but it can require more cleanup—copying references, dealing with non-academic citations, and reformatting. In short, Perplexity is framed as more usable for academic workflows, even if it may be less expansive.

One limitation surfaced during a “short 300-word intro for a peer-reviewed paper” prompt: the system handled the topic well but didn’t supply references for that shorter output. The user also felt Deep Research tends to produce “big and deep” results rather than compact, tightly referenced summaries. Still, with Deep Research available for free and capped at a limited number of enhanced queries per day, the overall takeaway is that Perplexity’s Deep Research is a practical research assistant for students—especially when the priority is credible academic citations and a structured literature review that can be exported and revisited later.

Cornell Notes

Perplexity’s Deep Research is built for academic workflows, emphasizing citation-backed outputs and controllable sourcing. With web search turned off and academic search enabled, it can generate a staged literature review that synthesizes findings into clear sections (e.g., emerging materials, fabrication methods, figure of merit, and stability). In a nanomaterials/transparent electrodes test, it produced an answer with 58 academic sources and let users expand, remove, and click through references. Export options (PDF, Markdown, and a navigable Perplexity page) keep citations linked for later verification. The main gap noted is that short, peer-reviewed-style outputs may omit references and Deep Research can skew toward longer, deeper results.

What makes Perplexity’s Deep Research feel different for academic work compared with general chat outputs?

Deep Research mode includes source controls—users can disable web results and rely on academic search. That setup is meant to keep citations academic-only, reducing the time spent filtering out non-scholarly references. The workflow also generates a research plan and then produces a structured synthesis with a large citation set, rather than a free-form answer that requires later cleanup.
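The video demonstrates this setup in the web UI, but the same academic-only configuration can be approximated programmatically. Below is a minimal sketch against Perplexity's OpenAI-compatible chat completions endpoint; the `sonar-deep-research` model name and the `search_mode: "academic"` parameter reflect Perplexity's published API docs at the time of writing, but treat both as assumptions and check the current documentation before relying on them.

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"  # OpenAI-compatible endpoint

def build_deep_research_request(question: str, api_key: str):
    """Build a request mirroring the video's setup: academic sources only.

    The model name and `search_mode` value follow Perplexity's API docs,
    but both are assumptions here -- verify against current docs.
    """
    payload = {
        "model": "sonar-deep-research",  # Deep Research model (assumed name)
        "search_mode": "academic",       # restrict search to scholarly sources
        "messages": [
            {"role": "user", "content": question},
        ],
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    return req, payload

req, payload = build_deep_research_request(
    "Write a literature review on nanomaterials for transparent electrodes, "
    "covering new materials, fabrication methods, figure of merit, and stability.",
    api_key="YOUR_KEY",
)
# urllib.request.urlopen(req) would send it; per the API docs, the response
# JSON carries a citations list alongside the synthesized answer text.
```

This mirrors the UI workflow described above: one parameter plays the role of the "web off, academic on" toggle, and the citation set comes back attached to the answer rather than needing to be collected separately.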

How did the nanomaterials/transparent electrodes test demonstrate the tool’s strengths?

The prompt asked for a literature review focused on nanomaterials for transparent electrodes, emphasizing new materials, fabrication methods, figure of merit, and stability. The output was organized into sections such as emerging nanomaterials, novel fabrication methods, figure of merit analysis, and stability/degradation mechanisms, followed by a conclusion. It included 58 academic sources, which could be expanded and individually reviewed.

What citation and verification features help users reuse the research?

Users can click to open the actual references used, and the export options preserve those links. The PDF export is described as fully linked, so references remain clickable for follow-up. Citations are presented in an order that matches where they’re referenced in the output, reducing the need to reconstruct a bibliography manually.
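Because the exports keep citations as inline links, pulling out a bibliography afterward is mostly string processing. Here is a small sketch that collects citation links from a Markdown export; the `[n](url)` citation format and the excerpt text are assumptions for illustration, not something documented in the video.

```python
import re

# Hypothetical excerpt of a Perplexity Markdown export; the exact
# citation format is an assumption for illustration.
exported_md = """
Silver nanowire networks remain a leading candidate [1](https://doi.org/10.0000/example-a),
though graphene hybrids close the gap in stability [2](https://doi.org/10.0000/example-b).
"""

# Match "[n](url)" style inline citation links.
CITATION = re.compile(r"\[(\d+)\]\((https?://[^)\s]+)\)")

def extract_citations(markdown: str) -> dict[int, str]:
    """Return {citation number: URL} for every inline citation link."""
    return {int(n): url for n, url in CITATION.findall(markdown)}

refs = extract_citations(exported_md)
for num, url in sorted(refs.items()):
    print(f"[{num}] {url}")
# → [1] https://doi.org/10.0000/example-a
#   [2] https://doi.org/10.0000/example-b
```

Since the citations already sit where they are used in the text, a pass like this yields a numbered reference list without manually reconstructing a bibliography.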

How do export formats change the usability of the output?

Perplexity offers export as PDF, Markdown, and a dedicated Perplexity page view. The Perplexity page is described as easier to read and navigate through sections, while the PDF is practical for printing or submitting with clickable references. Markdown export is presented as an option for users who prefer text-based formatting.

Where did Deep Research fall short in the “short peer-reviewed intro” attempt?

When asked for a short 300-word introduction for a peer-reviewed paper, the output was considered understandable and on-topic, but it didn’t provide references for that shorter response. The limitation suggests the system may not consistently attach citations when the requested output is brief, and it may favor longer, deeper responses overall.

What trade-off emerged in the Perplexity vs. ChatGPT comparison?

Perplexity was favored for academic-only sourcing, structured organization, and export formats that keep references linked. ChatGPT was described as sometimes giving more information, but it could be less usable for academic submission because references may require extra handling and the citations may include non-academic sources.

Review Questions

  1. When web search is disabled and academic search is enabled, what kinds of outputs does Deep Research produce, and how does that affect citation quality?
  2. In the nanomaterials/transparent electrodes example, which sections were used to structure the literature review, and what does that imply about how the tool organizes evidence?
  3. What specific limitation appeared when requesting a short (300-word) peer-reviewed-style introduction, and why does that matter for academic workflows?

Key Points

  1. Perplexity’s Deep Research mode supports academic-only sourcing by letting users turn off web results and enable academic search.

  2. Deep Research runs through a staged process (including a research plan) before returning a structured literature review.

  3. A nanomaterials/transparent electrodes test produced a multi-section review with 58 academic sources and clickable references.

  4. Export options (PDF, Markdown, and a navigable Perplexity page) aim to preserve usability and citation traceability.

  5. Perplexity is positioned as more submission-ready than chat-style outputs because references are integrated and exports keep links intact.

  6. A noted gap is that short peer-reviewed-style outputs may omit references, and Deep Research can skew toward longer, deeper responses.

Highlights

Deep Research can be configured to rely on academic sources only by disabling web search, reducing citation cleanup.
The nanomaterials/transparent electrodes literature review came back with 58 academic sources and a clear section structure (materials, fabrication, figure of merit, stability).
Exports like PDF are described as fully linked, so citations remain clickable for verification.
The short 300-word peer-reviewed intro request produced the right content but lacked references, highlighting an inconsistency for compact outputs.
