How To Use Perplexity AI For Research - Terrifyingly SMART!
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Perplexity AI is positioned as a research workflow tool that can do more than answer questions: it can generate literature starting points, refine searches through back-and-forth prompts, interpret figures from papers, and summarize uploaded PDFs—while letting users control whether their data is used for training. The core value is speed-to-understanding for academic work, especially when time is spent hunting sources, deciphering schematics, or turning scattered figures into a coherent narrative.
The walkthrough starts with Perplexity’s interface and its “Ask anything” prompt, then quickly moves to research-specific controls. “Co-pilot” is highlighted as a mode that asks clarifying questions to lock in what the user actually wants. On Pro, the user mentions receiving “600 co-pilot searches a day” (versus “five” on a lower tier), framing it as a practical limit for daily research. A “Focus” filter includes options like Academic, Writing, Wolfram Alpha, YouTube, and Reddit; the presenter often keeps the Academic filter off at first to avoid overly narrow results, then applies it later when tighter literature retrieval is needed.
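The video demonstrates all of this through the web interface, but the same kind of scoped query can be scripted against Perplexity’s OpenAI-compatible chat API. Below is a minimal sketch, assuming the current `sonar` model name and the documented `search_recency_filter` parameter as a rough stand-in for the UI’s recency and Focus controls; the environment variable name is a placeholder.

```python
import os
import requests

# Minimal sketch of a scoped literature query against Perplexity's
# OpenAI-compatible chat endpoint. The model name ("sonar") and the
# search_recency_filter parameter are assumptions based on current docs;
# the video itself demonstrates the web UI, not the API.
API_URL = "https://api.perplexity.ai/chat/completions"
API_KEY = os.environ["PERPLEXITY_API_KEY"]  # placeholder env var name

payload = {
    "model": "sonar",
    "messages": [
        {"role": "system",
         "content": "You are a research assistant. Prefer peer-reviewed sources."},
        {"role": "user",
         "content": "Find recent review papers on transparent electrodes."},
    ],
    "search_recency_filter": "year",  # assumed analogue of the UI's recency control
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
data = resp.json()
print(data["choices"][0]["message"]["content"])

# Perplexity responses typically carry source URLs alongside the answer,
# mirroring the numbered references shown in the web UI.
for url in data.get("citations", []):
    print("source:", url)
```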
A first example asks for review papers on transparent electrodes published in the past couple of years. The system returns a numbered, clickable list of sources and provides formatted answers with references that can be opened directly. When the results skew older than desired (e.g., a 2014 review), the workflow stays conversational: the user can issue follow-up instructions to tighten recency.
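That conversational tightening can be reproduced in a script by appending the model’s answer and a follow-up instruction to the message list and re-sending. A sketch, continuing from the previous example (it reuses `API_URL`, `API_KEY`, `payload`, and `data` defined there):

```python
# Continuing the sketch above: when results skew older than desired,
# append the model's reply plus a follow-up instruction, then re-query.
answer = data["choices"][0]["message"]["content"]

payload["messages"] += [
    {"role": "assistant", "content": answer},
    {"role": "user",
     "content": "Some of these are too old. Restrict to reviews "
                "published in the last two years."},
]

followup = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
followup.raise_for_status()
print(followup.json()["choices"][0]["message"]["content"])
```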
For targeted literature, the transcript describes a prompt from a postdoc perspective: finding “five recent papers on nanomaterials for transparent electrodes.” Co-pilot again requests preference details (including a selection related to “performance”), then returns recent, academic-leaning results—such as papers from 2022 and 2023—suggesting the tool can align search scope with the user’s intent.
The most striking capability is figure and image understanding. With “Vision” in Perplexity, the user uploads a schematic from a paper and asks for an explanation. The system reportedly identifies materials and steps that aren’t explicitly written out in the image, including carbon nanotube-related processes and even the solvent involved (isopropyl alcohol), then suggests follow-up questions, such as how to carry out the process and what to focus on. The workflow extends to writing: up to four images can be uploaded to help assemble a story in order, producing a draft-like narrative structure (development, characterization, performance evaluation) and suggesting future research directions.
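Scripting the figure-explanation step is more speculative: the video uses the web UI’s Vision upload, and reproducing it programmatically is an assumption. The sketch below assumes Perplexity’s API accepts OpenAI-style `image_url` content parts with a base64 data URI, which its documentation describes for the sonar models; the file name is hypothetical.

```python
import base64
import os
import requests

# Hedged sketch of asking for a step-by-step explanation of a paper
# schematic. Assumes the API accepts OpenAI-style image_url content
# parts; the local file name is hypothetical.
with open("schematic.png", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "sonar",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Explain this fabrication schematic step by step, "
                     "including any materials or solvents it implies."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }],
}

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```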
Finally, Perplexity is used for PDF triage. A user uploads a paper and asks for key points; the system summarizes methodology, performance, applications, limitations, and next steps, then supports additional Q&A on the same document. Settings are treated as a safety lever: users can opt out of having uploaded data used for model training. The transcript also notes that Perplexity can surface relevant external material, including a YouTube video tied to an identified collaborator, reinforcing the tool’s ability to connect research threads beyond the uploaded text.
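The web UI accepts the PDF attachment directly; a rough scripted stand-in, assuming the same chat endpoint as the earlier sketches, is to extract the text locally (here with the third-party `pypdf` package) and ask for the same triage. The file name and the crude length cap are placeholders.

```python
import os
import requests
from pypdf import PdfReader  # third-party: pip install pypdf

# Rough stand-in for the web UI's PDF upload: extract the text locally,
# then ask for the same kind of key-point triage.
reader = PdfReader("paper.pdf")  # hypothetical local file
text = "\n".join(page.extract_text() or "" for page in reader.pages)

payload = {
    "model": "sonar",
    "messages": [{
        "role": "user",
        "content": (
            "Summarize the key points of this paper: methodology, "
            "performance, applications, limitations, and next steps.\n\n"
            + text[:20000]  # crude cap to stay within context limits
        ),
    }],
}

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
    json=payload,
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```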
Cornell Notes
Perplexity AI is presented as a research assistant that speeds up four tasks: finding starting literature, narrowing searches through clarifying prompts, understanding figures, and summarizing uploaded PDFs. “Co-pilot” can ask follow-up questions to refine what the user wants, then returns numbered, reference-linked results. Vision support lets users upload paper schematics and get step-by-step explanations, and multiple figures can be turned into a narrative outline for a draft paper. The workflow also emphasizes control over data use: settings let users opt out of having uploads used for training. Together, these features aim to reduce time spent searching, deciphering, and drafting in academic research.
- How does Co-pilot improve search quality compared with a single prompt?
- What role do Focus filters play in controlling the breadth of results?
- What does Vision add to research workflows beyond text Q&A?
- How can multiple figures be used to support writing a paper draft?
- How does PDF summarization work in practice, and what kinds of questions can follow?
- What privacy/control setting is emphasized when uploading data?
Review Questions
- When results don’t match the desired criteria (e.g., recency), what conversational adjustment does the workflow rely on?
- What evidence in the transcript suggests Vision can infer process steps from a schematic rather than only reading explicit labels?
- How does the transcript describe using multiple uploaded figures to generate a paper-like narrative structure?
Key Points
1. Co-pilot can ask clarifying questions to refine what the user wants before searching, improving relevance.
2. Focus filters (including Academic) help balance broad discovery early on with tighter literature retrieval later.
3. Perplexity returns numbered, reference-linked results that can be clicked to open specific sources.
4. Vision support can interpret paper schematics, identify materials and steps, and answer follow-up questions about the process.
5. Up to four images can be uploaded to generate an ordered narrative outline for a draft paper, including future research directions.
6. PDF attachments can be summarized into key points, with follow-up Q&A covering focus, methods, limitations, and next steps.
7. Settings include a way to opt out of having uploaded data used for training, addressing privacy concerns.