
Will THIS AI Tool Blow ChatGPT Out of the Water for Research?

Andy Stapleton · 4 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Perplexity.ai is positioned as research-first because it retrieves answers from multiple live sources, including academic databases and news.

Briefing

Perplexity.ai is positioning itself as a research-first alternative to ChatGPT by pulling answers from a mix of up-to-date online sources—academic databases, news, and even platforms like YouTube and Reddit—rather than relying on static training data. In practice, the tool’s “Enhanced” mode (built on GPT-4) aims to keep the same conversational power people expect from ChatGPT while adding a research workflow: it surfaces related questions and provides references that users can click through to verify where information came from.

The transcript’s core test is organic photovoltaic research. When asked for the latest papers, Perplexity returns recent literature and links out to sources such as the National Library of Medicine and Semantic Scholar. It also offers a way to refine the search without restarting from scratch—editing the query and revisiting sources when results don’t match expectations. The emphasis is less on generating a polished narrative and more on quickly locating relevant, current papers and the trails behind them.

A second comparison targets explanation. For a concept like “up conversion” in organic photovoltaic devices, ChatGPT Plus produces longer, more detailed explanations. Perplexity’s advantage shifts back to traceability: it includes references alongside the explanation, giving researchers a starting point for reading the underlying material. The speaker’s takeaway is nuanced: ChatGPT still tends to win for depth and usefulness as a scientist’s writing and reasoning assistant, while Perplexity earns its place when the priority is finding sources and reducing the friction of academic searching.

The transcript also highlights current limitations. Perplexity doesn’t read PDFs as effectively as hoped, and a browser extension meant to summarize a paper returns empty results—forcing users to copy and paste content into the model. Meanwhile, ChatGPT’s baseline experience is described as text-only and not inherently built for web search or PDF ingestion, though third-party layers can add those capabilities.

To illustrate that ecosystem, the transcript discusses “Hey GPT,” a paid tool that supports chatting with files (including PDFs) and promises additional capabilities like Wolfram Alpha integration. It also notes cost tradeoffs: using GPT-4 through document chat is slower and can be expensive, while GPT-3.5 can be far cheaper per query. Overall, the transcript frames the near-term landscape as a competition between research-native assistants (like Perplexity) and ChatGPT-based platforms augmented with plugins, browsing, and document handling.

The closing argument is that researchers will benefit most from tools that combine strong language generation with reliable access to the internet and primary sources. As ChatGPT gains more built-in or plugin-driven functionality, competitors that rely on similar add-ons may face pressure—but Perplexity’s reference-first approach remains a practical advantage for quickly gathering citations and current literature.

Cornell Notes

Perplexity.ai is presented as a research-oriented alternative to ChatGPT because it retrieves information from multiple live sources—academic databases, news, and community platforms—so answers can reflect recent literature. In organic photovoltaic examples, it returns clickable references (including sources like the National Library of Medicine and Semantic Scholar) and offers related questions plus query editing without restarting. ChatGPT Plus often produces longer, more detailed explanations and literature-review outlines, but it lacks built-in web search and doesn’t inherently read PDFs. The transcript concludes that ChatGPT remains preferable for depth, while Perplexity is especially useful when the main goal is finding and verifying academic sources quickly.

What makes Perplexity.ai feel more “research-ready” than ChatGPT in the transcript’s tests?

Perplexity.ai is described as drawing from the internet and multiple source types—academic sources, Wolfram Alpha, YouTube, Reddit, and news—so it can surface up-to-date material. It also provides references and related questions, and users can click back into sources or edit the query to refine results without starting over.

How did the organic photovoltaic paper search differ between Perplexity and ChatGPT?

When asked for the latest papers on organic photovoltaic devices, Perplexity returned recent literature and linked out to academic repositories (including the National Library of Medicine and Semantic Scholar). ChatGPT's citations were treated as less dependable, since earlier model versions were prone to producing inaccurate references, and the transcript suggests Perplexity's reference trail is more actionable for verification.

In the “up conversion” explanation test, which tool was favored and why?

ChatGPT Plus produced a longer, more in-depth explanation, which the transcript treats as valuable for understanding. Perplexity’s advantage was the inclusion of references alongside the explanation, giving researchers places to check the underlying material—so both tools had a “win,” but for different needs.

What limitations were noted for Perplexity’s document handling?

Perplexity was described as not reading PDFs as well as expected. A browser extension intended to summarize a paper returned an empty search result, and the workaround was to copy and paste paper text into the model to get answers.

How does the transcript frame cost and performance differences when using ChatGPT for document chat?

Using GPT-4 for chatting with documents was described as slower and more expensive per query. GPT-3.5 was cited as much cheaper (about eight cents for a query in the example), and the transcript suggests saving money by using GPT-3.5 repeatedly, while still preferring GPT-4 when answer quality matters.
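To make that tradeoff concrete, here is a minimal Python sketch of the per-query cost arithmetic. The per-1K-token prices and token counts below are illustrative assumptions for a document-chat query with a large pasted context, not a quote of actual OpenAI pricing:

```python
# Rough per-query cost comparison between two models.
# Prices are illustrative assumptions, NOT current OpenAI rates.
PRICE_PER_1K_TOKENS = {
    "gpt-4": {"input": 0.03, "output": 0.06},
    "gpt-3.5-turbo": {"input": 0.0015, "output": 0.002},
}

def query_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one query from its token counts."""
    p = PRICE_PER_1K_TOKENS[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# A document-chat query: ~2,000 tokens of pasted paper text, ~500-token answer.
ctx, ans = 2000, 500
print(f"gpt-4:   ${query_cost('gpt-4', ctx, ans):.4f}")
print(f"gpt-3.5: ${query_cost('gpt-3.5-turbo', ctx, ans):.4f}")
```

Under these assumed prices the same query costs roughly twenty times more on GPT-4, which matches the transcript's advice: iterate cheaply on GPT-3.5 and reserve GPT-4 for answers where quality matters most.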

What role do third-party layers like “Hey GPT” play in the ChatGPT ecosystem?

The transcript describes Hey GPT as enabling capabilities ChatGPT doesn’t provide by default—chatting with website content and files (including PDFs), with Wolfram Alpha integration coming soon. It also notes that it requires an API key and can become costly, but it demonstrates how researchers can add browsing and document workflows on top of ChatGPT.

Review Questions

  1. When is Perplexity’s reference-first approach more valuable than ChatGPT’s longer-form explanations?
  2. What specific limitations around PDFs and browser extensions were mentioned for Perplexity?
  3. How do cost and model choice (GPT-4 vs GPT-3.5) affect document-based research workflows?

Key Points

  1. Perplexity.ai is positioned as research-first because it retrieves answers from multiple live sources, including academic databases and news.
  2. Perplexity’s “Enhanced” mode uses GPT-4-level reasoning while adding clickable references and related questions for verification.
  3. For organic photovoltaic literature searches, Perplexity returns recent papers with source links (e.g., National Library of Medicine and Semantic Scholar).
  4. ChatGPT Plus tends to produce more detailed, long-form outputs like literature-review outlines and concept explanations.
  5. Perplexity’s PDF handling is limited in practice; a browser extension for summarization returned empty results, requiring copy-paste workarounds.
  6. ChatGPT’s baseline experience is described as text-only, but third-party tools (like Hey GPT) can add web and PDF/document workflows.
  7. Using GPT-4 for document chat can be slow and expensive, while GPT-3.5 can reduce per-query cost substantially.

Highlights

Perplexity’s main advantage in the transcript is not just answers—it’s the trail of references that can be clicked and checked.
ChatGPT Plus often wins on depth and length, but Perplexity is framed as more practical when citations and recency matter most.
A key workflow gap remains: Perplexity doesn’t reliably summarize PDFs via the mentioned extension, pushing users toward manual copy-paste.
The transcript frames the near-term future as ChatGPT gaining research capabilities through plugins and add-ons, reshaping competition.