Nitesh — Person Summaries
AI-powered summaries of 32 videos about Nitesh.
GenAI Roadmap for Beginners | End-to-End GenAI Course 2025 | CampusX
Generative AI is moving from hype to a teachable, buildable skill set—so the real win is learning it through a structured roadmap rather than chasing...
LangChain Models | Indepth Tutorial with Code Demo | Video 3 | CampusX
LangChain’s “Models” component is built to give one common interface for working with different AI model providers—so code can switch between...
Tensors in PyTorch | Video 2 | CampusX
Tensors sit at the center of deep learning in PyTorch because they turn real-world data—images, text, audio, video—into efficient, hardware-friendly...
Attention Mechanism in 1 video | Seq2Seq Networks | Encoder Decoder Architecture
Attention-based encoder–decoder models fix two core weaknesses of the classic LSTM Seq2Seq setup: they stop forcing a single, static sentence summary...
Learn AI Coding the Right Way (No Vibe Coding) | New Playlist | CampusX
Anthropic’s “Claude Code” is being positioned as an emerging industry standard for AI-assisted software development—so the playlist’s core promise is...
LSTM | Part 3 | Next Word Predictor Using LSTM | CampusX
A next word predictor can be built as a text generator, but it becomes much easier to train when the problem is reframed as supervised learning: turn...
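The supervised reframing described in this summary can be sketched in plain Python: each growing prefix of a sentence becomes an input, and the token that follows it becomes the label. The function name and whitespace tokenization are illustrative assumptions, not the video's exact code.

```python
def make_training_pairs(tokens):
    """Turn a token sequence into supervised (context, next_word) pairs.

    Each growing prefix becomes an input; the token that follows it
    becomes the target -- the standard next-word-prediction framing.
    """
    pairs = []
    for i in range(1, len(tokens)):
        context = tokens[:i]   # everything seen so far
        target = tokens[i]     # the word the model must predict
        pairs.append((context, target))
    return pairs

sentence = "deep learning is fun".split()
for context, target in make_training_pairs(sentence):
    print(context, "->", target)
```

Once the corpus is expressed as (context, target) pairs, any classifier over the vocabulary can be trained on it.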
Path & Query Params in FastAPI | Video 4 | CampusX
FastAPI path parameters let clients pick a specific resource directly from the URL—turning one endpoint into a flexible “fetch/update/delete by ID”...
Chains in LangChain | Generative AI using LangChain | Video 7 | CampusX
LangChain chains turn a multi-step LLM workflow from a manual, “call-everything-separately” process into a connected pipeline where each step...
What are Runnables in LangChain | Generative AI using LangChain | Video 8 | CampusX
LangChain’s “runnables” are the missing abstraction that turns a pile of LLM-related components into a composable system. Instead of manually wiring...
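The composability idea behind runnables can be mimicked in a few lines of plain Python: one shared `invoke()` interface plus a `|` operator for piping. This is a toy stand-in for intuition only; `ToyRunnable` and the lambdas are invented and do not reproduce LangChain's real `Runnable` API.

```python
class ToyRunnable:
    """Minimal stand-in for the runnable idea: one invoke() interface,
    composable with | into a pipeline. (Toy sketch, not LangChain's API.)"""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Compose: the output of self feeds the input of other.
        return ToyRunnable(lambda x: other.invoke(self.invoke(x)))

prompt = ToyRunnable(lambda topic: f"Explain {topic} simply.")
fake_llm = ToyRunnable(lambda p: p.upper())      # stands in for a model call
parser = ToyRunnable(lambda text: text.rstrip("."))

chain = prompt | fake_llm | parser
print(chain.invoke("tensors"))
```

Because every component exposes the same `invoke()` contract, composition needs no per-component glue code, which is the point of the abstraction.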
What is Agentic AI? | Agentic AI using LangGraph | Video 2 | CampusX
Agentic AI is a software paradigm built to take a user’s goal and run toward it with minimal human input—planning, executing steps, adapting when...
LangGraph Core Concepts | Agentic AI using LangGraph | Video 4 | CampusX
LangGraph’s core promise is turning multi-step LLM workflows into an executable graph: each workflow step becomes a node, and edges define what runs...
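The node/edge model can be illustrated with a dict-based toy runner: nodes are functions that update a shared state, and edges (possibly conditional) name the next node. All names and the state shape here are invented for illustration and are not LangGraph's actual API.

```python
def run_graph(nodes, edges, state, start="start"):
    """Tiny illustration of the graph idea: nodes update a shared state
    dict; edges decide what runs next. (Toy sketch, not LangGraph.)"""
    current = start
    while current != "end":
        state = nodes[current](state)
        nxt = edges[current]
        current = nxt(state) if callable(nxt) else nxt
    return state

nodes = {
    "start": lambda s: {**s, "draft": f"draft about {s['topic']}"},
    "review": lambda s: {**s, "approved": len(s["draft"]) > 10},
}
edges = {
    "start": "review",
    # Conditional edge: route based on the state produced so far.
    "review": lambda s: "end" if s["approved"] else "start",
}

result = run_graph(nodes, edges, {"topic": "agents"})
print(result["approved"])
```

The conditional edge is what makes the workflow a graph rather than a straight chain: routing decisions depend on the state each node produces.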
Transformer Architecture | Part 1 Encoder Architecture | CampusX
Transformer encoder architecture is built from a repeating pattern: each encoder block takes token embeddings (augmented with positional...
Retrievers in LangChain | Generative AI using LangChain | Video 13 | CampusX
RAG systems live or die by retrieval quality, and LangChain’s retrievers are the modular “search engines” that pull the most relevant documents from...
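The retriever contract (take a query, return the most relevant documents) can be shown with a toy keyword-overlap ranker. Real LangChain retrievers use embeddings or search backends; this pure-Python version only illustrates the interface, and all names are invented.

```python
def retrieve(query, docs, k=2):
    """Toy retriever: rank documents by word overlap with the query,
    return the top k. Illustrates the 'return the most relevant
    documents' contract, not any real LangChain retriever."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]

docs = [
    "LangChain chains connect prompts and models",
    "Tensors store numerical data for deep learning",
    "Retrievers fetch relevant documents for RAG",
]
print(retrieve("relevant documents for RAG", docs, k=1))
```

Swapping this scoring function for vector similarity gives the standard dense-retrieval setup, with the calling code unchanged.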
Langchain Runnables - Part 2 | Generative AI using LangChain | Video 9 | CampusX
LangChain’s “runnables” are built to solve a practical integration problem: earlier LangChain components (prompt templates, LLM calls, parsers,...
Masked Self Attention | Masked Multi-head Attention in Transformer | Transformer Decoder
Transformer decoders generate text one token at a time during inference, yet training can still run fully in parallel—thanks to masked...
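The causal mask that makes this parallel training possible can be sketched in pure Python: position i is only allowed to attend to positions up to i, so future tokens never leak. A minimal sketch with made-up scores (real implementations do this on tensors):

```python
import math

def causal_attention_weights(scores):
    """Apply a causal mask, then softmax each row: position i may only
    attend to positions <= i. This is what lets decoder training run in
    parallel without leaking future tokens. (Illustrative sketch.)"""
    weights = []
    for i, row in enumerate(scores):
        masked = [s if j <= i else float("-inf") for j, s in enumerate(row)]
        exps = [math.exp(s) for s in masked]  # exp(-inf) == 0.0
        total = sum(exps)
        weights.append([e / total for e in exps])
    return weights

scores = [[1.0, 2.0, 3.0],
          [1.0, 2.0, 3.0],
          [1.0, 2.0, 3.0]]
for row in causal_attention_weights(scores):
    print([round(w, 2) for w in row])
```

Row i of the output has zeros beyond column i, so every training position computes its prediction as if the future had not been seen.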
Tool Calling in LangChain | Generative AI using LangChain | Video 17 | CampusX
LangChain tool calling turns an LLM from a text-only assistant into a system that can use external functions safely—by letting the model *suggest*...
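The suggest-then-execute split can be shown with a toy dispatcher: the "model" emits a structured suggestion, and the application validates and runs it. The JSON shape, tool names, and function names here are assumptions for illustration, not LangChain's actual tool-call format.

```python
import json

# Toy tool registry: the application, not the model, actually runs these.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def execute_tool_call(model_output):
    """The model only *suggests* a call as structured data; the app
    validates the tool name and executes it. (Sketch of the pattern;
    the JSON shape is an invented assumption.)"""
    call = json.loads(model_output)
    name, args = call["tool"], call["args"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**args)

# Pretend the LLM replied with this structured suggestion:
suggestion = '{"tool": "add", "args": {"a": 2, "b": 3}}'
print(execute_tool_call(suggestion))
```

Keeping execution on the application side is the safety property: the model never runs code directly, it only proposes calls the app is free to reject.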
Model Context Protocol - The Why | MCP Trilogy | CampusX
Model Context Protocol (MCP) is positioned as the missing layer that lets AI assistants work across many tools without the usual copy‑paste “context...
Self Attention Geometric Intuition | How to Visualize Self Attention | CampusX
Self-attention in Transformers can be visualized as a geometry-driven “pull” between word embeddings: each token’s new representation is a weighted...
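That geometric "pull" can be computed by hand in two dimensions: similarity scores (dot products) go through a softmax, and the new representation is the resulting weighted average of all embeddings. The vectors below are made up purely for intuition.

```python
import math

def attend(query, vectors):
    """Self-attention as a geometric pull: the new representation is a
    softmax-weighted average of all vectors, weighted by dot-product
    similarity with the query. (2-D toy for intuition only.)"""
    dots = [sum(q * v for q, v in zip(query, vec)) for vec in vectors]
    exps = [math.exp(d) for d in dots]
    total = sum(exps)
    weights = [e / total for e in exps]
    pulled = [sum(w * vec[dim] for w, vec in zip(weights, vectors))
              for dim in range(len(query))]
    return weights, pulled

embeddings = [[1.0, 0.0],   # token A
              [0.9, 0.1],   # token B, similar to A
              [0.0, 1.0]]   # token C, orthogonal to A
weights, new_a = attend(embeddings[0], embeddings)
print([round(w, 2) for w in weights])  # A and B dominate over C
```

Token A's new vector is pulled mostly toward itself and the similar token B, and only slightly toward the orthogonal token C, which is the geometric picture the video builds.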
LangSmith Crash Course | LangSmith Tutorial for Beginners | Observability in GenAI | CampusX
LangSmith is positioned as the missing “white-box” layer for LLM applications—turning opaque, non-deterministic behavior into traceable,...
Serving ML Models with FastAPI | Video 7 | CampusX
FastAPI is used to turn a trained machine-learning model into a working prediction service, then wrap that service with a simple Streamlit front end...
Hyperparameter Tuning using Optuna | Bayesian Optimization using Optuna
Hyperparameter tuning stops being a brute-force chore when Optuna replaces exhaustive search with Bayesian optimization that learns where accuracy is...
Complete Deep Learning Roadmap | CampusX
Deep learning is the foundational skill set behind today’s GenAI and LLM work—and the fastest path to becoming job-ready is a structured, six-month...
LangGraph + SQLite | Chatbot with Database Integration | CampusX
The core upgrade is replacing a RAM-based “memory saver” with a SQLite-backed checkpointer so a LangGraph chatbot can keep conversations permanently....
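The persistence idea behind that upgrade can be sketched with the standard-library `sqlite3` module: each turn is written to a table keyed by a thread id, so history survives restarts, unlike an in-RAM saver. The table schema and function names are invented for illustration and are not LangGraph's actual checkpoint format.

```python
import sqlite3

# Sketch of a SQLite-backed conversation store. The schema is an
# invented illustration, not LangGraph's real checkpointer format.
conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute("""CREATE TABLE IF NOT EXISTS checkpoints (
    thread_id TEXT, role TEXT, content TEXT)""")

def save_turn(thread_id, role, content):
    conn.execute("INSERT INTO checkpoints VALUES (?, ?, ?)",
                 (thread_id, role, content))
    conn.commit()

def load_history(thread_id):
    rows = conn.execute(
        "SELECT role, content FROM checkpoints WHERE thread_id = ?",
        (thread_id,))
    return list(rows)

save_turn("chat-1", "user", "Hi!")
save_turn("chat-1", "assistant", "Hello, how can I help?")
print(load_history("chat-1"))
```

Keying every row by `thread_id` is what lets one database serve many independent conversations at once.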
Advanced RAG: How Corrective RAG (CRAG) Solves Traditional RAG Problems | CampusX
Corrective RAG (CRAG) is presented as a fix for a core weakness in traditional RAG: it blindly trusts retrieved documents, so when retrieval returns...
How to build MCP Client using LangGraph | Agentic AI using LangGraph | CampusX
Agentic AI tool integrations get brittle fast when every chatbot hard-codes custom “tool” wrappers for each external service. MCP (Model Context...
Observability in LangGraph | LangSmith Integration with LangGraph
Observability for LangGraph agents becomes practical once every user turn is captured as an end-to-end trace in LangSmith—complete with timing, token...
Self-RAG Tutorial: How to Make Your AI Fact-Check Itself | Advanced RAG | CampusX
Self-RAG is built to stop retrieval-augmented generation from “going along for the ride” when it shouldn’t—by forcing the system to judge its own...
Long Term Memory in LangGraph
Long-term memory is the missing ingredient for chatbots that feel personal over time: instead of treating every conversation as brand-new, the system...
How To Implement Short Term Memory Using LangGraph
Short-term memory in LangGraph isn’t something LLMs can keep on their own—so the practical fix is to store conversation state outside the model and...
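The "store state outside the model" fix can be shown with a toy memory class: the app keeps the messages and replays only a recent window on every call, since the LLM itself remembers nothing between requests. The class and window size are illustrative assumptions, not LangGraph code.

```python
class ShortTermMemory:
    """Conversation state kept outside the model: the app stores the
    messages and replays the recent window on each call, because the
    LLM remembers nothing between requests. (Illustrative toy.)"""
    def __init__(self, window=4):
        self.messages = []
        self.window = window

    def add(self, role, content):
        self.messages.append((role, content))

    def context(self):
        # Only the last `window` messages go back to the model,
        # keeping the prompt inside the context budget.
        return self.messages[-self.window:]

memory = ShortTermMemory(window=2)
for i in range(3):
    memory.add("user", f"message {i}")
print(memory.context())   # only the last two messages survive
```

The windowing step doubles as crude context-length management: old turns fall out of the prompt instead of overflowing it.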
Context Window Management in Claude Code | CampusX
Claude Code’s context window is small enough to become the bottleneck for real development work—and managing it well is the difference between steady...
Claude.md | Claude Code — The Most Important File | CampusX
Claude.md (and its related “Claude” configuration files) exist to fix a practical limitation of agentic coding: LLM-based agents don’t retain past...
Spec-Driven Development in Claude Code | CampusX
Spec-driven development is presented as the antidote to "vibe coding," a fast but control-poor style of AI-assisted programming that often produces...