
How To Use Perplexity AI For Research - Terrifyingly SMART!

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Co-pilot can ask clarifying questions to refine what the user wants before searching, improving relevance.

Briefing

Perplexity AI is positioned as a research workflow tool that can do more than answer questions: it can generate literature starting points, refine searches through back-and-forth prompts, interpret figures from papers, and summarize uploaded PDFs—while letting users control whether their data is used for training. The core value is speed-to-understanding for academic work, especially when time is spent hunting sources, deciphering schematics, or turning scattered figures into a coherent narrative.

The walkthrough starts with Perplexity’s interface and its “Ask anything” prompt, then quickly moves to research-specific controls. “Co-pilot” is highlighted as a mode that asks clarifying questions to lock in what the user actually wants. On Pro, the user mentions receiving “600 co-pilot searches a day” (versus “five” on a lower tier), framing it as a practical limit for daily research. A “Focus” filter includes options like Academic, Writing, Wolfram Alpha, YouTube, and Reddit; the presenter often keeps Academic off early on to avoid overly narrow results, then narrows later when needed.

A first example asks for review papers on transparent electrodes from the past couple of years. The system returns a numbered, clickable list of sources and provides formatted answers with references that can be opened directly. When the results skew older than desired (e.g., a 2014 review), the workflow stays conversational: the user can issue follow-up instructions to tighten recency.

For targeted literature, the transcript describes a prompt from a postdoc perspective: finding “five recent papers on nanomaterials for transparent electrodes.” Co-pilot again requests preference details (including a selection related to “performance”), then returns recent, academic-leaning results—such as papers from 2022 and 2023—suggesting the tool can align search scope with the user’s intent.
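The same refinement loop can also be scripted outside the web UI. The sketch below is a minimal illustration assuming Perplexity's OpenAI-compatible chat-completions API; the endpoint, model name, and helper function are assumptions based on Perplexity's public API documentation, not details from the video. It only constructs the request payload for a recency-constrained literature query, without making a network call:

```python
# Sketch: building a recency-constrained literature query for
# Perplexity's chat-completions API. The endpoint and model name
# ("sonar") are assumptions based on Perplexity's public API docs,
# not details from the video; no network request is made here.

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint

def build_literature_query(topic: str, n_papers: int, since_year: int) -> dict:
    """Return a request payload asking for recent papers on a topic."""
    prompt = (
        f"Find {n_papers} recent review papers on {topic}. "
        f"Only include papers published in {since_year} or later, "
        "and list each with its year and a one-line summary."
    )
    return {
        "model": "sonar",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_literature_query("transparent electrodes", 5, 2022)
```

Encoding the recency constraint directly in the prompt mirrors the conversational follow-up shown in the video, where the user tightens the date range after seeing results that skew too old.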

The most striking capability is figure and image understanding. With “Vision” in Perplexity, the user uploads a schematic from a paper and asks for an explanation. The system reportedly identifies materials and steps that aren’t explicitly written out in the image, including carbon nanotube-related processes and even solvent identification (isopropyl alcohol), then offers follow-up questions like how to do the process and what to focus on. The workflow extends to writing: up to four images can be uploaded to help assemble a story in order, producing a draft-like narrative structure (development, characterization, performance evaluation) and suggesting future research directions.

Finally, Perplexity is used for PDF triage. A user uploads a paper and asks for key points, and the system summarizes methodology, performance, applications, limitations, and next steps—then supports additional Q&A on the same document. Settings are treated as a safety lever: users can opt out of having uploaded data used for training. The transcript also notes that Perplexity can surface relevant external material, including a YouTube video tied to an identified collaborator, reinforcing the tool’s ability to connect research threads beyond the uploaded text.

Cornell Notes

Perplexity AI is presented as a research assistant that speeds up four tasks: finding starting literature, narrowing searches through clarifying prompts, understanding figures, and summarizing uploaded PDFs. “Co-pilot” can ask follow-up questions to refine what the user wants, then returns numbered, reference-linked results. Vision support lets users upload paper schematics and get step-by-step explanations, and multiple figures can be turned into a narrative outline for a draft paper. The workflow also emphasizes control over data use: settings let users opt out of having uploads used for training. Together, these features aim to reduce time spent searching, deciphering, and drafting in academic research.

How does Co-pilot improve search quality compared with a single prompt?

Co-pilot is described as a companion mode that asks clarifying questions before searching, aiming to “make sure you get the perfect answer.” In the examples, the user starts broad (e.g., review papers on transparent electrodes) and then refines when results don’t match the desired recency. For targeted work (nanomaterials for transparent electrodes), Co-pilot requests preference details (including a selection tied to “performance”), then uses those constraints to return more relevant, recent academic sources.

What role do Focus filters play in controlling the breadth of results?

Focus includes options such as Academic, Writing, Wolfram Alpha, YouTube, and Reddit. The transcript says the user often keeps Focus broad early in research to see what’s out there, then switches to Academic when the results are too general or when specific literature is needed. This is framed as a practical way to balance discovery with precision.

What does Vision add to research workflows beyond text Q&A?

Vision enables figure-level understanding. The transcript describes uploading a schematic from a paper and asking for an explanation; the system identifies materials and process steps (including solvent identification like isopropyl alcohol) even when the text isn’t written out in the image. It also supports follow-up questions such as how to do the process and what key points matter for understanding the paper’s method.

How can multiple figures be used to support writing a paper draft?

The transcript claims Perplexity can accept up to four images at a time and then create a coherent story in order from the figures. In the example, the system constructs a narrative around development, characterization, performance evaluation, and suggests future research directions and a conclusion. It also appears to reorder elements based on inferred logical flow, not strictly the upload order.

How does PDF summarization work in practice, and what kinds of questions can follow?

A PDF can be attached, then the user asks for key points to avoid reading the entire paper. The system reportedly summarizes main focus, methodology, performance, applications, advantages, limitations, and next steps. After the initial summary, the user can ask targeted follow-ups like what the limitations are or what the next step should be in the research line.
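The summarize-then-interrogate pattern amounts to a multi-turn conversation: the document text is supplied once, and each follow-up question reuses the accumulated message history. A minimal sketch of that structure follows; the function names and example strings are illustrative assumptions, not part of Perplexity's actual interface:

```python
# Sketch: maintaining a multi-turn Q&A history over one document,
# mirroring the summarize-then-follow-up workflow described above.
# Function and variable names are illustrative assumptions.

def start_pdf_session(pdf_text: str) -> list:
    """Seed the conversation with the document and a summary request."""
    return [
        {"role": "user",
         "content": "Summarize the key points of this paper:\n" + pdf_text},
    ]

def add_followup(history: list, answer: str, question: str) -> list:
    """Record the model's previous answer, then append the next question."""
    history.append({"role": "assistant", "content": answer})
    history.append({"role": "user", "content": question})
    return history

history = start_pdf_session("Full text of an uploaded paper...")
history = add_followup(history, "(summary returned by the model)",
                       "What are the limitations of this approach?")
```

Keeping the full history in each request is what lets targeted follow-ups ("what are the limitations?", "what should the next step be?") resolve against the same document without re-uploading it.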

What privacy/control setting is emphasized when uploading data?

In settings, the transcript highlights a data control option: users can opt out of having uploaded content used for model training. This is presented as a key advantage for users who want to keep their documents out of model training.

Review Questions

  1. When results don’t match the desired criteria (e.g., recency), what conversational adjustment does the workflow rely on?
  2. What evidence in the transcript suggests Vision can infer process steps from a schematic rather than only reading explicit labels?
  3. How does the transcript describe using multiple uploaded figures to generate a paper-like narrative structure?

Key Points

  1. Co-pilot can ask clarifying questions to refine what the user wants before searching, improving relevance.
  2. Focus filters (including Academic) help balance broad discovery early on with tighter literature retrieval later.
  3. Perplexity returns numbered, reference-linked results that can be clicked to open specific sources.
  4. Vision support can interpret paper schematics, identify materials and steps, and answer follow-up questions about the process.
  5. Up to four images can be uploaded to generate an ordered narrative outline for a draft paper, including future research directions.
  6. PDF attachments can be summarized into key points, with follow-up Q&A covering focus, methods, limitations, and next steps.
  7. Settings include an option to keep uploaded data out of model training, addressing privacy concerns.

Highlights

Co-pilot’s clarifying questions are framed as the mechanism that turns a vague research request into a more precise literature search.
Vision reportedly extracts solvent and process details from a schematic image even when that information isn’t explicitly written out.
Uploading four figures can produce a draft-like story structure (development → characterization → performance evaluation → future work).
PDF uploads enable rapid triage into methodology, performance, applications, limitations, and next steps—then support deeper follow-ups.

Topics