The Future of Research Is Here! - Chat with *ALL* your data like never before
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
AI tools are moving research from “search and skim” to “upload and interrogate,” letting researchers ask questions across many PDFs and other documents as if they had a dedicated second brain. The core promise is simple: build an AI knowledge base from a set of files, then use one conversational prompt to extract summaries, next steps, and literature gaps—while grounding answers in the uploaded sources rather than generic training data.
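The "upload and interrogate" pattern described above is, under the hood, retrieval-grounded question answering: documents are split into chunks, the chunks most similar to a question are retrieved, and the answer is generated from those chunks rather than from generic training data. The following is a minimal toy sketch of the retrieval step only, using bag-of-words cosine similarity in pure Python; it is an illustration of the concept, not the actual pipeline of any tool mentioned here (real systems use learned embeddings and an LLM for the final answer).

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Split a document into overlapping word chunks (half-size stride)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def vectorize(text):
    """Bag-of-words term counts; a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=2):
    """Return the k chunks most similar to the question."""
    chunks = [c for doc in documents for c in chunk(doc)]
    qv = vectorize(question)
    ranked = sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)
    return ranked[:k]

# Hypothetical paper snippets standing in for uploaded PDFs.
docs = [
    "The device deposits SDS onto the substrate; final location of SDS is unknown.",
    "Fabrication used standard lithography with no durability testing reported.",
]
print(retrieve("Where does the SDS end up in the device?", docs, k=1))
```

In a full pipeline, the retrieved chunks would be passed to a language model as context, which is what keeps answers tied to the uploaded sources.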
The first tool highlighted, Powerdrill.ai (in beta), focuses on bridging an individual’s data with AI through a code-free workflow. Users create an AI knowledge base by uploading multiple papers into a dataset, then switch the chat mode from a generic ChatGPT-style interface to “Andy research,” which targets the user’s specific database. In a demonstration, the user asks for “obvious next steps” and receives a research-oriented suggestion tied to the uploaded material, specifically pointing toward further investigation into the final location and distribution of SDS in a device. The assistant also provides a more detailed “summary of my research,” along with references pulled from the documents, aiming to reduce the risk of missing key points when dealing with large file collections. The tradeoff is that the workflow is still early: data sources are added one at a time (at least in the current beta flow), and the creator flags that a more robust approach exists.
That “better way” comes next with Quivr, described as a second brain for storing and retrieving unstructured information. Quivr is positioned as more flexible and more controllable than the simpler beta approach: it is open source, emphasizes secure data handling and user control, and can ingest a wide range of content types beyond PDFs, including Word documents, plain text, URLs, images, and code snippets. The online experience works by uploading files, then chatting with a “default brain” using academic-style prompts. In the example, the assistant identifies a literature gap: actual values for collection parameters and resolution are missing, even though those items are mentioned in the dataset.
For maximum control, the transcript recommends running Quivr locally rather than using the hosted web app. That setup is more technical: it involves installing Docker, using Supabase for chunking and storing the uploaded knowledge, and following the “getting started” steps on GitHub. The payoff is a local interface running on localhost (port 3000), where the brain is configured with an OpenAI key, a chosen model, and a “personality” such as acting like an academic researcher. When asked for a gap in the literature, the local brain returns actionable directions, suggesting areas like new materials, improved fabrication techniques, interface studies, device integration, and durability studies.
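The local setup described above roughly follows the clone-configure-compose pattern common to Dockerized apps. A hedged sketch of the steps is below; the repository path and exact commands are assumptions based on the Quivr project's typical GitHub instructions, so the project's own "getting started" guide should be treated as authoritative.

```shell
# Clone the Quivr repository (repo path assumed; confirm on the project's GitHub page)
git clone https://github.com/QuivrHQ/quivr.git
cd quivr

# Copy the example environment file, then edit it to add your
# OpenAI API key and Supabase connection settings
cp .env.example .env

# Pull images and start the stack; the web UI then serves on localhost:3000
docker compose pull
docker compose up
```

Once the containers are up, the brain's model, key, and personality are configured from the local web interface rather than the command line.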
Finally, the transcript frames the broader shift: researchers can create multiple “brains” segmented by topic, each with its own uploaded dataset and prompt style. That modular setup is presented as a practical way to keep different research threads organized while querying them with tailored academic guidance. The tools are pitched as early but powerful steps toward a future where research assistants work directly from a user’s own document library.
Cornell Notes
The transcript argues that research is getting easier because AI tools can ingest many files, especially PDFs, and answer questions grounded in that uploaded material. Powerdrill.ai offers a code-free beta workflow: users upload papers into a dataset, switch chat to a database-specific mode (“Andy research”), and get summaries, next steps, and references pulled from their sources. Quivr expands the idea into a more controllable “second brain” that supports many unstructured formats and can be run securely and openly. The most powerful setup is a local Quivr instance using Docker and Supabase for chunking and storage, with a brain configured with an OpenAI key and an academic-researcher personality. The practical takeaway is that researchers can create topic-specific brains and query them conversationally to surface gaps and next research directions.
How does Powerdrill.ai turn a pile of PDFs into usable research answers?
What kinds of gaps can Quivr identify from uploaded research, and what’s the example gap?
Why does the transcript recommend running Quivr locally instead of only using the web app?
What does Quivr’s “personality” setting change in practice?
How can researchers organize work across different topics using these tools?
Review Questions
- What specific workflow steps in Powerdrill.ai connect uploaded documents to database-grounded answers?
- What technical components are required to run Quivr locally, and what role does Supabase play?
- In the Quivr example, what kind of literature gap is detected, and how does it relate to missing data rather than missing topics?
Key Points
1. Powerdrill.ai offers a code-free beta workflow for building an AI knowledge base from multiple uploaded papers and querying it with database-specific chat settings.
2. Switching from generic chat to a dataset-bound mode (e.g., “Andy research”) is what makes answers reference the user’s uploaded sources.
3. Quivr is positioned as a more controllable second brain that supports many unstructured formats, including PDFs, Word documents, text, URLs, images, and code snippets.
4. Quivr’s local setup uses Docker and Supabase to chunk and store knowledge, then runs a configurable brain on localhost for private querying.
5. Durability studies and other next-step research directions can be surfaced when prompts are grounded in the uploaded literature and conversation context.
6. Creating multiple topic-specific brains helps researchers keep different research areas segmented and query them with tailored academic prompts.