RAG Retrieval — Topic Summaries
AI-powered summaries of 9 videos about RAG Retrieval.
Hybrid Search RAG With Langchain And Pinecone Vector DB
Hybrid search for RAG is built on a simple but powerful idea: retrieve relevant chunks using both semantic similarity (dense vector search) and...
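The core idea can be sketched without LangChain or Pinecone: score each chunk with both a dense (semantic) similarity and a sparse (keyword) signal, then blend the two. This is a toy sketch, not the video's implementation — the dense vectors are hand-made stand-ins for real embeddings, and the sparse side is simple term overlap rather than BM25; the `alpha` weight and `hybrid_search` helper are illustrative names.

```python
import math

def cosine(a, b):
    # Dense side: cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, chunk):
    # Sparse side (stand-in for BM25): fraction of query terms found in the chunk.
    q_terms = set(query.lower().split())
    c_terms = set(chunk.lower().split())
    return len(q_terms & c_terms) / len(q_terms) if q_terms else 0.0

def hybrid_search(query, query_vec, chunks, alpha=0.5, k=2):
    # chunks: list of (text, dense_vector) pairs.
    # alpha=1.0 is pure semantic search; alpha=0.0 is pure keyword search.
    scored = []
    for text, vec in chunks:
        score = alpha * cosine(query_vec, vec) + (1 - alpha) * keyword_score(query, text)
        scored.append((score, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

chunks = [
    ("pinecone supports hybrid search indexes", [0.9, 0.1]),
    ("dense vectors capture semantic meaning", [0.8, 0.3]),
    ("cooking pasta requires boiling water", [0.1, 0.9]),
]
print(hybrid_search("hybrid search in pinecone", [0.85, 0.2], chunks))
```

The blend lets exact keyword matches rescue queries where embeddings alone miss rare terms (product codes, names), while the dense score still catches paraphrases.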
The 5 Types of LLM Apps
LLM apps can be sorted into five practical categories—ranging from chat-style assistants to fully autonomous agents—so builders can more clearly...
How to Compare Multiple Large PDF Files Using AI (w/ Jerry Liu, Co-Founder of LlamaIndex)
Comparing two huge PDFs with AI—like Uber and Lyft 10-K filings—breaks down when retrieval is done “all at once” in a single vector index. The core...
Advanced RAG 05 - HyDE - Hypothetical Document Embeddings
HyDE (Hypothetical Document Embeddings) improves retrieval in RAG by using a large language model to draft a “hypothetical answer,” embedding that...
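The HyDE flow — draft a hypothetical answer, embed it, retrieve against that embedding — can be sketched with both heavy pieces stubbed out: `draft_hypothetical_answer` stands in for the LLM call, and `embed` is a toy bag-of-words embedder over a fixed vocabulary rather than a real embedding model. All function names here are illustrative.

```python
from collections import Counter
import math

VOCAB = ["rag", "retrieval", "embedding", "index", "chunk", "llm"]

def embed(text):
    # Toy bag-of-words embedding over a fixed vocabulary;
    # a real system calls an embedding model here.
    counts = Counter(text.lower().split())
    return [counts[w] for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def draft_hypothetical_answer(question):
    # Stub for the LLM call: HyDE asks a model to write a plausible
    # (possibly wrong) answer whose wording resembles real documents.
    return "rag retrieval works by embedding each chunk into an index"

def hyde_retrieve(question, docs, k=1):
    # Key step: embed the hypothetical answer instead of the raw question,
    # then do ordinary similarity search against the document chunks.
    hypo_vec = embed(draft_hypothetical_answer(question))
    ranked = sorted(docs, key=lambda d: cosine(hypo_vec, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "an embedding maps each chunk to a vector stored in the index",
    "the llm generates an answer after retrieval",
    "bananas are rich in potassium",
]
print(hyde_retrieve("how does retrieval work in rag?", docs))
```

The point of the trick: an answer-shaped text usually lands closer in embedding space to the documents that contain the real answer than a short question does.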
Million Token Context Windows? Myth Busted—Limits & Fixes
Claims of “million-token context windows” are being sold as if they let large language models reliably read and reason over book-length prompts. In...
Build 100% Local AI Agent to Chat with Your Files | Private AI Knowledge Base with MCP & RAG
A fully local “private knowledge base” agent can chat with a user’s own files by combining a custom MCP tool server with retrieval-augmented...
Create Your Own Microsoft Recall AI Feature with RAG?
A practical “Recall”-style system can be built by combining automated screen capture, AI-based interpretation of what’s on-screen, and a retrieval...
What is Chunking in AI? The Beginners Guide. The Power of Chunking in LLMs & RAG Explained!
Chunking is the practical technique that lets AI systems handle information that’s too large to process in one go—by breaking text into smaller,...
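A minimal version of the idea is a fixed-size window with overlap, so a sentence cut at one chunk's boundary still appears whole in the next chunk. This is a character-level sketch with illustrative parameter names; production chunkers typically split on tokens, sentences, or document structure instead.

```python
def chunk_text(text, chunk_size=40, overlap=8):
    # Slide a fixed-size character window across the text.
    # Overlapping windows keep boundary-straddling content intact somewhere.
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
        if start + chunk_size >= len(text):
            break
    return chunks

text = ("Chunking breaks a long document into smaller overlapping pieces "
        "so each one fits in a model's context window.")
pieces = chunk_text(text, chunk_size=40, overlap=8)
for p in pieces:
    print(repr(p))
```

Each chunk is at most 40 characters, and the last 8 characters of one full chunk reappear as the first 8 of the next, so no span of text is lost at a boundary.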
How RAG Finds Answers in Millions of Documents | Embeddings, Vector Databases, LangChain & Supabase
Retrieval in RAG hinges on one practical step: turning a user question into a vector and then finding the most semantically similar document chunks...
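That step — embed the question, rank stored chunk vectors by similarity, return the top matches — can be sketched with an in-memory stand-in for the vector database (the video uses Supabase; real stores index millions of vectors for sub-linear lookup). The `ToyVectorStore` class and the hand-made three-dimensional vectors are illustrative, not any library's API.

```python
import math

class ToyVectorStore:
    # In-memory stand-in for a vector database: stores (text, vector)
    # rows and answers nearest-neighbor queries by brute force.
    def __init__(self):
        self.rows = []

    def add(self, text, vector):
        self.rows.append((text, vector))

    def query(self, query_vector, k=2):
        # Rank every stored chunk by cosine similarity to the query vector.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0
        ranked = sorted(self.rows, key=lambda r: cosine(query_vector, r[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]

store = ToyVectorStore()
store.add("refunds are processed within 5 days", [0.9, 0.1, 0.0])
store.add("shipping takes 2 weeks overseas",     [0.1, 0.9, 0.0])
store.add("passwords can be reset by email",     [0.0, 0.1, 0.9])

# In a real pipeline the question is embedded by the same model as the chunks;
# this vector is a pretend embedding of "how long do refunds take?".
question_vec = [0.8, 0.2, 0.1]
print(store.query(question_vec, k=1))
```

The retrieved chunks are then handed to the LLM as context, which is the "augmented generation" half of RAG.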