
ChatGPT Plugins: Are They Really Helping Researchers or Just Hype?

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

“Chat with PDF” worked best when the PDF was accessible via a shareable URL (e.g., Google Drive with link access).

Briefing

ChatGPT’s plugin and Bing browsing upgrades can help researchers—especially when the target material is accessible online—but the experience is still too error-prone and slow to replace specialized literature tools. In practical tests, the “agent-like” behavior of browsing and the third-party plugin ecosystem produced inconsistent results, with failures that were visible in real time.

The most promising outcome came from the “Chat with PDF” plugin. When a paper was uploaded to Google Drive and shared via a link, ChatGPT was able to retrieve the PDF through the plugin and generate a detailed summary that pulled out key information rather than stopping at generic bullet points. That workflow matters because it turns a common research task—reading and condensing a paper—into something closer to an interactive document analysis tool, provided the PDF is reachable through a shareable URL.

By contrast, literature-finding plugins showed early-stage reliability issues. With “Next Paper” and “Scholar AI” enabled (only three plugins can be active at once), attempts to fetch “latest” work on a topic ran into errors and date mismatches. One run returned a “client error not found,” and another produced papers that appeared to be from 2012 even when the query specified 2022. Checking a DOI confirmed at least some results were real, but the system still struggled to respect the requested publication window. The result was a frustrating loop: the tool returned plausible citations, yet the timing filters—critical for “recent research”—didn’t consistently work.
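That DOI spot-check can be scripted: a minimal sketch in Python that screens candidate identifiers syntactically before resolving them at doi.org. The `looks_like_doi` helper and its regex are illustrative assumptions (a simplified heuristic, not the full Crossref-recommended pattern):

```python
import re

# Most modern DOIs start with "10.", a 4-9 digit registrant code, a slash,
# then a suffix with no whitespace. This is a coarse filter, not validation.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(candidate: str) -> bool:
    """Quick syntactic check before spending a network call on doi.org."""
    return bool(DOI_PATTERN.match(candidate.strip()))

print(looks_like_doi("10.1038/s41586-021-03819-2"))  # True
print(looks_like_doi("not-a-doi"))                   # False
```

A match only means the string is DOI-shaped; confirming the paper exists still requires resolving it, as was done manually in the test.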

Bing browsing in “GPT-4” mode also delivered the core promise—finding and summarizing recent papers—but with visible breakdowns. The browsing process clicked through links, attempted to read pages, and then produced a final summary with a reference that could be opened to reach the underlying paper. Still, the run included repeated “click failed” and “reading content failed” messages, and the user had to wait while the system iterated through the web. That makes the workflow feel less like a dependable research assistant and more like a fragile automated search.

Overall, the takeaway is pragmatic: specialized repositories and dedicated paper-finding tools remain more efficient for researchers who need speed, accuracy, and control over filters like publication date. Tools such as Elicit are cited as “brilliant” for finding papers by date, and the broader conclusion is that the hype cycle for AI-in-research may be past its peak. For now, ChatGPT with plugins and browsing looks best as a supplement—particularly for summarizing accessible documents—rather than a full replacement for established academic search workflows.

Cornell Notes

ChatGPT’s plugins and Bing browsing can support research, but reliability is uneven. “Chat with PDF” performed best: when a paper was shared via an accessible Google Drive link, ChatGPT retrieved the PDF and produced a detailed summary that went beyond simple bullet points. Literature-focused plugins like “Next Paper” and “Scholar AI” struggled with core requirements such as publication-date filtering, sometimes returning older papers or failing with errors. Bing browsing in GPT-4 mode could locate and summarize papers and provide references, but it often showed click/read failures and required patience. The net effect: useful for document summarization, not yet dependable enough to replace specialized paper search tools.

Which plugin delivered the most reliable research value in the test, and why?

“Chat with PDF” was the standout. After uploading a paper to Google Drive and setting it so anyone with the link could access it, ChatGPT used the plugin to fetch the PDF via its URL. It then generated a thorough summary using the paper’s content, extracting key details in a way that felt more complete than the typical shallow bullet-point summaries seen in some other services. The key condition was accessibility: the PDF had to be reachable through a sharing link.
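That Drive setup can also be scripted. A minimal sketch, assuming the standard `/file/d/<id>/view` share-link shape and the common (unofficial) `uc?export=download` endpoint; `drive_direct_url` is a hypothetical helper for illustration, not part of the plugin:

```python
import re

def drive_direct_url(share_url: str) -> str:
    """Convert a Google Drive 'anyone with the link' share URL into a
    direct-download URL that a tool can fetch as raw PDF bytes."""
    # Share links usually look like:
    #   https://drive.google.com/file/d/<FILE_ID>/view?usp=sharing
    match = re.search(r"/d/([A-Za-z0-9_-]+)", share_url)
    if not match:
        raise ValueError("Could not find a file ID in the share URL")
    file_id = match.group(1)
    # uc?export=download serves the file itself (for publicly shared files)
    return f"https://drive.google.com/uc?export=download&id={file_id}"

url = drive_direct_url("https://drive.google.com/file/d/1AbC_dEf-123/view?usp=sharing")
print(url)  # https://drive.google.com/uc?export=download&id=1AbC_dEf-123
```

The accessibility condition from the test still applies: if link sharing is off, the rewritten URL returns a permissions page instead of the PDF.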

What went wrong with literature-finding plugins like Next Paper and Scholar AI?

The main issues were errors and date handling. With “Next Paper,” a request returned a “client error not found.” With “Scholar AI,” the system produced papers that didn’t match the requested publication window—returning items that appeared to be from 2012 even when the query specified 2022. Even when results included real papers (confirmed by checking a DOI), the failure to consistently honor date constraints undermined usefulness for “latest research.”
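One pragmatic workaround for unreliable date handling is to re-filter results client-side rather than trusting the plugin's filter. A minimal sketch (`filter_by_year` and the result-dict shape are assumptions for illustration):

```python
def filter_by_year(papers, year):
    """Keep only results whose reported publication year matches the
    requested one; plugins may return mislabeled or older items."""
    return [p for p in papers if p.get("year") == year]

# Simulated plugin output mixing an old paper into a "2022" query
results = [
    {"title": "Older survey", "year": 2012},
    {"title": "Recent study", "year": 2022},
]
print(filter_by_year(results, 2022))  # [{'title': 'Recent study', 'year': 2022}]
```

This only catches mislabeled dates the tool reports honestly; it cannot recover recent papers the plugin never surfaced.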

How did the “only three plugins enabled at a time” constraint affect the workflow?

It forced selective testing and limited combinations. The user enabled three plugins and then ran queries, which meant the system’s behavior depended on which subset was active. That constraint also made it harder to compare performance across many tools in a single run, since each additional plugin would require reconfiguration.

What did Bing browsing in GPT-4 mode accomplish, and what made it frustrating?

Bing browsing could act like a web-search agent: it clicked through links, attempted to read content, and ultimately produced a summary with at least one reference that could be opened to reach the paper. However, the process repeatedly failed during navigation and reading—showing “click failed” and “reading content failed” messages. The user had to watch the iteration and wait for completion, making it feel immature for real-world research use.
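The “click failed” / “reading content failed” loop is essentially a retry problem. A minimal sketch of how a script-side wrapper might absorb the same fragility (`fetch_with_retries` and the flaky fetcher are hypothetical stand-ins for real page fetches):

```python
import time

def fetch_with_retries(fetch, url, attempts=3, delay=0.5):
    """Call a fragile fetch function a few times before giving up,
    with a simple linear backoff between attempts."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch(url)
        except IOError as err:
            last_error = err
            time.sleep(delay * (attempt + 1))
    raise RuntimeError(f"All {attempts} attempts failed: {last_error}")

# Simulate a page that fails to load twice, then succeeds
calls = {"n": 0}
def flaky(url):
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("reading content failed")
    return "page text"

print(fetch_with_retries(flaky, "https://example.com/paper", delay=0.01))  # page text
```

The user in the test effectively played this retry loop by hand, which is exactly what made the browsing run feel slow.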

Given these results, what’s the practical recommendation for researchers?

Use ChatGPT plugins selectively. For summarizing papers, especially PDFs accessible via share links, “Chat with PDF” can be genuinely helpful. For finding recent papers with strict filters (like publication date), dedicated tools and repositories are still more efficient and dependable. The transcript points to date-focused paper discovery tools such as Elicit as examples of better fit for the “find the right recent literature fast” job.

Review Questions

  1. When does “Chat with PDF” become effective, and what specific setup made it work in the test?
  2. What evidence suggested that Scholar AI and Next Paper struggled with “latest” queries?
  3. Why might Bing browsing be less suitable than specialized literature tools for time-sensitive research?

Key Points

  1. “Chat with PDF” worked best when the PDF was accessible via a shareable URL (e.g., Google Drive with link access).
  2. Literature plugins produced inconsistent results, including visible errors and mismatches with requested publication dates.
  3. Even when DOIs corresponded to real papers, failing to respect date filters reduced usefulness for “latest research.”
  4. Bing browsing could find and summarize papers and provide references, but navigation and content retrieval failures slowed the workflow.
  5. The three-plugin limit constrained experimentation and made performance comparisons more segmented.
  6. Specialized paper-finding tools and repositories remain more reliable for date-filtered literature search (e.g., Elicit).
  7. AI-in-research may be moving past hype, with current strengths leaning toward document summarization rather than end-to-end discovery.

Highlights

“Chat with PDF” produced a detailed summary after retrieving a Google Drive-shared PDF, making it the most immediately useful plugin in the test.
Next Paper returned a “client error not found,” while Scholar AI sometimes ignored the requested publication year and surfaced older papers.
Bing browsing in GPT-4 mode could deliver a final paper summary with a reference, but it repeatedly hit “click failed” and “reading content failed” issues during the search process.
