Hottest NEW AI tools for Research: Must-Watch AI Apps
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Academic research is being reshaped by an AI "tool arms race," with new systems moving beyond chat into drafting, citation-building, and even automated multi-step workflows. The standout theme is speed: researchers can now generate grant outlines, literature-review starting points, and structured academic text by pulling in references and synthesizing across multiple documents, often from only a short prompt.
Sight (spelled "sight" in the transcript) is presented as a one-prompt research assistant that can draft sections of essays and grant proposals while drawing on research articles. A live example shows a grant-writing prompt ("explore how … affects chromosome …") producing a structured set of "specific aims" and then returning real references. The assistant appears to search and analyze documents in the background, assembling a grant proposal draft that includes cited sources. The presenter also notes that the generated literature reviews tend to be shorter than traditional ones, but they still reduce the "grunt work" of structuring and producing a first pass of relevant literature. The practical takeaway: better results come from being more specific about what the researcher wants, such as stating a target word count, subfield, and time window.
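That "be more specific" takeaway can be made concrete with a small sketch. The helper below simply composes a prompt string with explicit scope constraints; the field names and wording are illustrative assumptions, not Sight's actual interface.

```python
# Hypothetical sketch: tightening a research prompt with explicit
# constraints (subfield, time window, word count). These parameter
# names are assumptions for illustration, not part of any tool's API.

def build_prompt(topic: str, subfield: str, years: str, words: int) -> str:
    """Compose a grant-style prompt with explicit scope constraints."""
    return (
        f"Draft specific aims for a grant proposal on {topic}. "
        f"Restrict sources to {subfield} literature from {years}, "
        f"and keep the draft under {words} words, with citations."
    )

vague = "Write about chromosome biology."
specific = build_prompt(
    topic="how telomere shortening affects chromosome stability",
    subfield="molecular genetics",
    years="2018-2023",
    words=800,
)
print(specific)
```

The point is not the code but the habit: every constraint in the prompt (scope, sources, length) is one fewer decision the tool guesses at.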
Jenny AI is positioned as a more hands-on writing partner. After logging in, it keeps generating text continuously, letting users accept or request the next chunk without restarting from scratch. It also supports adding citations directly into the draft, and it offers features for creating section headings and subheadings, turning a rough topic request into an organized academic structure. A key promise is expanding compatibility with reference managers, with the CEO mentioning upcoming support for Mendeley and Zotero, which would make it easier to move from AI drafting to standard academic workflows.
Hey GPT is introduced as a paid tool that connects chat-based AI to the internet and to uploaded files. The transcript emphasizes "chat with PDFs" as a major research advantage: users can upload papers, then ask for outlines and "important findings" that are pulled from the documents. It also supports chatting with websites, though the process can be a bit clunky and may cost tokens depending on the amount of web content crawled. The overall pitch is consolidation: one place to query multiple sources (PDFs, HTML pages, and Google-linked information) rather than switching tools.
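The core idea behind "chat with PDFs" is grounding: retrieve the passage most relevant to the question, then answer from that passage rather than from the model's general knowledge. The stdlib-only toy below shows that retrieval step with naive word overlap; real tools like Hey GPT use far more sophisticated retrieval, and the sample chunks are invented for illustration.

```python
# Minimal sketch of document-grounded Q&A: pick the text chunk that
# shares the most words with the question. This is a toy stand-in for
# the retrieval step inside "chat with PDFs" tools, not their actual method.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_chunk(question: str, chunks: list[str]) -> str:
    """Return the chunk with the largest word overlap with the question."""
    q = tokens(question)
    return max(chunks, key=lambda c: len(q & tokens(c)))

# Invented example chunks standing in for extracted PDF text.
paper_chunks = [
    "Methods: we sequenced 40 samples using long-read technology.",
    "Results: telomere length declined sharply after passage ten.",
    "Discussion: limitations include the small sample size.",
]
answer_source = best_chunk(
    "What were the main results on telomere length?", paper_chunks
)
print(answer_source)
```

Answering only from the retrieved chunk is what makes the "important findings" actually traceable back to the uploaded paper.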
Finally, the transcript shifts from single-assistant tools to agentic automation. Agents are described as AI systems that spawn subtasks or other agents to pursue a goal. Auto GPT is shown as an example that can run a "science agent" to collect information and write it into a text document, but it currently gets stuck in loops and may require API access (and therefore can cost money) if it continuously searches the web. Agent GPT is also mentioned as a browser-based beta alternative. The message is cautious optimism: agent workflows look promising for academia, but reliability and cost control remain early-stage issues.
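The looping-and-cost problem described above comes from the basic shape of an agent: plan a step, act, check the goal, repeat. A hedged sketch of that loop, with a hard step budget as the simplest cost control, looks like this; the planner is a stub, not Auto GPT's real logic.

```python
# Hedged sketch of an agent loop: plan a step, execute it, and stop at a
# hard iteration cap so a stuck agent cannot loop (and burn API credits)
# forever. The stub planner below stands in for a real LLM-driven planner.

def run_agent(goal: str, plan_step, max_steps: int = 5) -> list[str]:
    """Run plan/act cycles until the planner says "done" or the budget runs out."""
    log: list[str] = []
    for _ in range(max_steps):
        action = plan_step(goal, log)
        log.append(action)
        if action == "done":
            break
    return log

def stub_planner(goal: str, log: list[str]) -> str:
    """Scripted stand-in: search twice, write once, then finish."""
    script = ["search web", "search web", "write notes.txt", "done"]
    return script[len(log)] if len(log) < len(script) else "done"

history = run_agent("summarise recent literature", stub_planner)
print(history)
```

With a real planner, the `max_steps` cap is what turns "may loop forever" into "fails cheaply after five tries", which is exactly the early-stage reliability concern the transcript raises.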
Cornell Notes
The transcript highlights a rapid shift in academic AI tools from simple Q&A toward end-to-end research assistance: drafting, citation support, and multi-source synthesis. Sight is showcased for generating grant proposal structures and literature-review starting points with real references. Jenny AI is framed as a continuous writing assistant that can generate sections/subheadings and insert citations, with planned integration for Mendeley and Zotero. Hey GPT is presented as a file-and-web-aware chat system that lets researchers query PDFs and websites for outlines and key findings. The final category—AI agents—aims to automate multi-step research tasks, but current versions can loop and may be expensive to run.
How does Sight turn a short research prompt into something usable for grant writing?
What makes Jenny AI feel different from a typical AI writing chat?
Why is “chat with PDFs” a big deal in Hey GPT?
What trade-off comes with using Hey GPT to chat with websites?
What are AI agents, and what’s the current limitation shown in the transcript?
What practical direction does the transcript suggest for getting better AI outputs?
Review Questions
- Which tool in the transcript is most directly demonstrated for generating grant “specific aims” with cited references, and what is the mechanism behind that output?
- How do Sight, Jenny AI, and Hey GPT differ in their approach to citations and source grounding?
- What does the transcript identify as the main barrier to agent-based research automation right now: reliability, cost, or both?
Key Points
1. Sight can generate grant-style drafts with structured "specific aims" and includes real references by searching and analyzing research articles.
2. AI-generated literature reviews may be shorter than traditional ones, but they can still save time by providing a first-pass structure and relevant citations.
3. Jenny AI supports continuous drafting with accept/continue behavior, plus one-click creation of section headings and subheadings.
4. Jenny AI's planned integration with Mendeley and Zotero is positioned as a key step toward smoother academic citation workflows.
5. Hey GPT enables grounded Q&A by letting users chat with uploaded PDFs and also query websites, with token-based cost considerations for web crawling.
6. Agentic tools like Auto GPT can automate multi-step research tasks, but current versions may loop and can be expensive depending on API usage.
7. Across tools, more specific prompts (scope, time window, word count, subfield) are presented as the fastest route to higher-quality outputs.