Will THIS AI Tool Blow ChatGPT Out of the Water for Research?
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
Perplexity.ai is positioning itself as a research-first alternative to ChatGPT by pulling answers from a mix of up-to-date online sources—academic databases, news, and even platforms like YouTube and Reddit—rather than relying on static training data. In practice, the tool’s “Enhanced” mode (built on GPT-4) aims to keep the same conversational power people expect from ChatGPT while adding a research workflow: it surfaces related questions and provides references that users can click through to verify where information came from.
The transcript’s core test is organic photovoltaic research. When asked for the latest papers, Perplexity returns recent literature and links out to sources such as the National Library of Medicine and Semantic Scholar. It also offers a way to refine the search without restarting from scratch—editing the query and revisiting sources when results don’t match expectations. The emphasis is less on generating a polished narrative and more on quickly locating relevant, current papers and the trails behind them.
A second comparison targets explanation. For a concept like “up conversion” in organic photovoltaic devices, ChatGPT Plus produces longer, more detailed explanations. Perplexity’s advantage shifts back to traceability: it includes references alongside the explanation, giving researchers a starting point for reading the underlying material. The speaker’s takeaway is nuanced: ChatGPT still tends to win for depth and usefulness as a scientist’s writing and reasoning assistant, while Perplexity earns its place when the priority is finding sources and reducing the friction of academic searching.
The transcript also highlights current limitations. Perplexity doesn’t read PDFs as effectively as hoped, and a browser extension meant to summarize a paper returns empty results—forcing users to copy and paste content into the model. Meanwhile, ChatGPT’s baseline experience is described as text-only and not inherently built for web search or PDF ingestion, though third-party layers can add those capabilities.
To illustrate that ecosystem, the transcript discusses “Hey GPT,” a paid tool that supports chatting with files (including PDFs) and promises additional capabilities like Wolfram Alpha integration. It also notes cost tradeoffs: using GPT-4 through document chat is slower and can be expensive, while GPT-3.5 can be far cheaper per query. Overall, the transcript frames the near-term landscape as a competition between research-native assistants (like Perplexity) and ChatGPT-based platforms augmented with plugins, browsing, and document handling.
The closing argument is that researchers will benefit most from tools that combine strong language generation with reliable access to the internet and primary sources. As ChatGPT gains more built-in or plugin-driven functionality, competitors that rely on similar add-ons may face pressure—but Perplexity’s reference-first approach remains a practical advantage for quickly gathering citations and current literature.
Cornell Notes
Perplexity.ai is presented as a research-oriented alternative to ChatGPT because it retrieves information from multiple live sources—academic databases, news, and community platforms—so answers can reflect recent literature. In organic photovoltaic examples, it returns clickable references (including sources like the National Library of Medicine and Semantic Scholar) and offers related questions plus query editing without restarting. ChatGPT Plus often produces longer, more detailed explanations and literature-review outlines, but it lacks built-in web search and doesn’t inherently read PDFs. The transcript concludes that ChatGPT remains preferable for depth, while Perplexity is especially useful when the main goal is finding and verifying academic sources quickly.
- What makes Perplexity.ai feel more “research-ready” than ChatGPT in the transcript’s tests?
- How did the organic photovoltaic paper search differ between Perplexity and ChatGPT?
- In the “up conversion” explanation test, which tool was favored and why?
- What limitations were noted for Perplexity’s document handling?
- How does the transcript frame cost and performance differences when using ChatGPT for document chat?
- What role do third-party layers like “Hey GPT” play in the ChatGPT ecosystem?
Review Questions
- When is Perplexity’s reference-first approach more valuable than ChatGPT’s longer-form explanations?
- What specific limitations around PDFs and browser extensions were mentioned for Perplexity?
- How do cost and model choice (GPT-4 vs GPT-3.5) affect document-based research workflows?
Key Points
1. Perplexity.ai is positioned as research-first because it retrieves answers from multiple live sources, including academic databases and news.
2. Perplexity’s “Enhanced” mode uses GPT-4-level reasoning while adding clickable references and related questions for verification.
3. For organic photovoltaic literature searches, Perplexity returns recent papers with source links (e.g., National Library of Medicine and Semantic Scholar).
4. ChatGPT Plus tends to produce more detailed, long-form outputs like literature-review outlines and concept explanations.
5. Perplexity’s PDF handling is limited in practice; a browser extension for summarization returned empty results, requiring copy-paste workarounds.
6. ChatGPT’s baseline experience is described as text-only, but third-party tools (like Hey GPT) can add web and PDF/document workflows.
7. Using GPT-4 for document chat can be slow and expensive, while GPT-3.5 can reduce per-query cost substantially.