
The Future of Research Is Here! - Chat with *ALL* your data like never before

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Powerdrill.ai offers a code-free beta workflow for building an AI knowledge base from multiple uploaded papers and querying it with database-specific chat settings.

Briefing

AI tools are moving research from “search and skim” to “upload and interrogate,” letting researchers ask questions across many PDFs and other documents as if they had a dedicated second brain. The core promise is simple: build an AI knowledge base from a set of files, then use one conversational prompt to extract summaries, next steps, and literature gaps—while grounding answers in the uploaded sources rather than generic training data.

The first tool highlighted, Powerdrill.ai (in beta), focuses on bridging an individual’s data with AI through a code-free workflow. Users create an AI knowledge base by uploading multiple papers into a dataset, then switch the chat mode from a generic ChatGPT-style interface to “Andy research,” which targets the user’s specific database. In a demonstration, the user asks for “obvious next steps” and receives a research-oriented suggestion tied to the uploaded material, specifically pointing toward further investigation into the final location and distribution of SDS in a device. The assistant also provides a more detailed “summary of my research,” along with references pulled from the documents, aiming to reduce the risk of missing key points when dealing with large file collections. The tradeoff is that the workflow is still early: data sources are added one at a time (at least in the current beta flow), and the creator notes that a more robust approach exists.

That “better way” comes next with Quivr, described as a second brain for storing and retrieving unstructured information. Quivr is positioned as more flexible and more controllable than the simpler beta approach: it’s open source, emphasizes secure data handling and user control, and can ingest a wide range of content types beyond PDFs, including Word documents, plain text, URLs, images, and code snippets. The online experience works by uploading files, then chatting with a “default brain” using academic-style prompts. In the example, the assistant identifies a literature gap: missing actual values for collection parameters and resolution, even though those items are mentioned in the dataset.

For maximum control, the transcript recommends running Quivr locally rather than using the hosted web app. That setup is more technical: it involves installing Docker, using Supabase for chunking and storing the uploaded knowledge, and following the GitHub “getting started” steps. The payoff is a local interface running on localhost (port 3000), where the brain is configured with an OpenAI key, a chosen model, and a “personality,” such as acting like an academic researcher. When asked for a gap in the literature, the local brain returns actionable directions, suggesting areas like new materials, improved fabrication techniques, interface studies, device integration, and durability studies.
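The local setup described above follows the shape sketched below. The repository URL, environment-file names, and exact commands are assumptions based on typical Docker-plus-Supabase projects rather than details confirmed by the transcript; the project’s own “getting started” guide on GitHub is authoritative.

```shell
# Sketch of a local Quivr-style setup (paths and variable names assumed).
git clone https://github.com/StanGirard/quivr.git
cd quivr

# Copy the example environment file, then edit it to add your OpenAI key
# and the Supabase credentials used for chunk storage.
cp .env.example .env

# Build and start the containers; Supabase handles storing the chunked
# knowledge so it can be queried later.
docker compose up --build

# Once the containers are running, open the interface in a browser:
#   http://localhost:3000
```

The transcript’s estimate of roughly two hours of troubleshooting mostly reflects Docker and environment-variable configuration rather than the steps themselves.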

Finally, the transcript frames the broader shift: researchers can create multiple “brains” segmented by topic, each with its own uploaded dataset and prompt style. That modular setup is presented as a practical way to keep different research threads organized while querying them with tailored academic guidance. The tools are pitched as early but powerful steps toward a future where research assistants work directly from a user’s own document library.

Cornell Notes

The transcript argues that research is getting easier because AI tools can ingest many files, especially PDFs, and answer questions grounded in that uploaded material. Powerdrill.ai offers a code-free beta workflow: users upload papers into a dataset, switch chat to a database-specific mode (“Andy research”), and get summaries, next steps, and references pulled from their sources. Quivr expands the idea into a more controllable “second brain” that supports many unstructured formats and can be run securely and openly. The most powerful setup uses a local Quivr instance with Docker and Supabase for chunking and storage, then configures a brain with an OpenAI key and an academic researcher personality. The practical takeaway is that researchers can create topic-specific brains and query them conversationally to surface gaps and next research directions.

How does Powerdrill.ai turn a pile of PDFs into usable research answers?

Users create a dataset, add multiple papers as data sources, then start a chat session that targets their uploaded library. In the demo, the chat mode is switched from a generic ChatGPT-style interface to “Andy research,” which is tied to the user’s database. Prompts like “based on the data set… any obvious next steps” produce research-oriented suggestions grounded in the uploaded content (e.g., further investigation into the final location and distribution of SDS in a device). The tool also provides references it pulls from the documents, aiming to prevent missing important details across many files.

What kinds of gaps can Quivr identify from uploaded research, and what’s the example gap?

Quivr is used to ask academic-style questions against an uploaded corpus. In the example, the assistant identifies a missing-value gap: the dataset mentions collection parameters and resolution but does not provide actual values for them. That kind of gap is actionable because it points to what future work should measure or report, not just what topics exist.

Why does the transcript recommend running Quivr locally instead of only using the web app?

The local approach is framed as the “best way” because it gives more control and keeps the system running on the user’s own computer. The setup uses Docker and Supabase to chunk and store the knowledge so it can be queried later. The transcript notes the process is trickier, about two hours of troubleshooting for the author, but results in a localhost interface (port 3000) where the brain is configured with an OpenAI key, a model choice, and a “Quivr personality” such as an academic researcher.
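The “chunk and store” step mentioned above is a standard retrieval pattern: documents are split into overlapping pieces small enough to embed and match against a question later. A minimal sketch of that idea in Python follows; the function name, chunk sizes, and overlap are illustrative defaults, not Quivr’s actual implementation.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character chunks for later retrieval.

    Overlap keeps sentences that straddle a chunk boundary findable
    from either neighbouring chunk. Sizes here are illustrative.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

paper = "x" * 1200  # stand-in for an uploaded paper's extracted text
pieces = chunk_text(paper)
# Each chunk (paired with its embedding) would then be written to a
# store such as Supabase and matched against the question at query time.
```

In practice, each chunk is embedded with a model tied to the configured OpenAI key, and the closest chunks to a question are fed to the model as context, which is why answers stay grounded in the uploaded sources.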

What does a “Quivr personality” change in practice?

The personality setting changes how the brain responds to prompts. In the demonstration, the brain is configured to act like an academic researcher and uses a basic research prompt. That configuration influences outputs such as identifying literature gaps and suggesting next study directions (e.g., new materials, improved fabrication techniques, interface studies, device integration, and durability studies).

How can researchers organize work across different topics using these tools?

Both tools are presented as enabling multiple “brains” or knowledge bases. The transcript highlights creating a new brain for a different domain (e.g., a YouTube channel) and then uploading files for that brain separately. Each brain can have its own personality and default prompt behavior, allowing topic-segmented querying rather than mixing all documents into one undifferentiated dataset.

Review Questions

  1. What specific workflow steps in Powerdrill.ai connect uploaded documents to database-grounded answers?
  2. What technical components are required to run Quivr locally, and what role does Supabase play?
  3. In the Quivr example, what kind of literature gap is detected, and how does it relate to missing data rather than missing topics?

Key Points

  1. Powerdrill.ai offers a code-free beta workflow for building an AI knowledge base from multiple uploaded papers and querying it with database-specific chat settings.

  2. Switching from generic chat to a dataset-bound mode (e.g., “Andy research”) is what makes answers reference the user’s uploaded sources.

  3. Quivr is positioned as a more controllable second brain that supports many unstructured formats, including PDFs, Word documents, text, URLs, images, and code snippets.

  4. Quivr’s local setup uses Docker and Supabase to chunk and store knowledge, then runs a configurable brain on localhost for private querying.

  5. Durability studies and other next-step research directions can be surfaced when prompts are grounded in the uploaded literature and conversation context.

  6. Creating multiple topic-specific brains helps researchers keep different research areas segmented and query them with tailored academic prompts.

Highlights

Powerdrill.ai’s dataset-based chat can generate “next steps” grounded in uploaded PDFs, including references pulled from the documents.
Quivr can identify a concrete gap where collection parameters and resolution are mentioned but actual values are missing.
Running Quivr locally requires Docker and Supabase, producing a localhost interface where the brain is configured with an OpenAI key and an academic researcher personality.
The transcript frames the future of research as conversational interrogation of a user’s own document library, not generic web searching.
