Nitesh — Person Summaries

AI-powered summaries of 32 videos about Nitesh.

GenAI Roadmap for Beginners | End-to-End GenAI Course 2025 | CampusX

CampusX · 3 min read

Generative AI is moving from hype to a teachable, buildable skill set—so the real win is learning it through a structured roadmap rather than chasing...

Generative AI Basics · Foundation Models · Transformer Curriculum

LangChain Models | Indepth Tutorial with Code Demo | Video 3 | CampusX

CampusX · 3 min read

LangChain’s “Models” component is built to give one common interface for working with different AI model providers—so code can switch between...

LangChain Models · LLM vs Chat Models · OpenAI Anthropic Gemini
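The "one common interface" idea can be sketched in plain Python. The class names below are invented stand-ins, not the real LangChain API (which exposes classes such as ChatOpenAI and ChatAnthropic); the point is that application code depends only on a shared method.

```python
# Toy sketch of a provider-agnostic model interface. Class names are
# invented for illustration; real LangChain classes differ.

class FakeOpenAIModel:
    def invoke(self, prompt: str) -> str:
        # A real implementation would call the OpenAI API here.
        return f"[openai] answer to: {prompt}"

class FakeAnthropicModel:
    def invoke(self, prompt: str) -> str:
        # A real implementation would call the Anthropic API here.
        return f"[anthropic] answer to: {prompt}"

def ask(model, question: str) -> str:
    # Application code depends only on the shared .invoke() method,
    # so swapping providers means changing one constructor call.
    return model.invoke(question)

print(ask(FakeOpenAIModel(), "What is LangChain?"))
print(ask(FakeAnthropicModel(), "What is LangChain?"))
```

Because `ask` never mentions a concrete provider, switching from one vendor to another touches a single line.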

Tensors in PyTorch | Video 2 | CampusX

CampusX · 3 min read

Tensors sit at the center of deep learning in PyTorch because they turn real-world data—images, text, audio, video—into efficient, hardware-friendly...

Tensors · PyTorch Basics · Tensor Operations
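Why tensors are "efficient and hardware-friendly" can be shown without PyTorch at all: an n-dimensional array is a flat buffer plus shape and strides, so indexing is pure arithmetic. This is a minimal sketch of that principle; PyTorch tensors add dtype and device handling on top.

```python
# A 2-D "tensor" as one contiguous buffer plus row-major strides.

def strides_for(shape):
    # How many buffer elements to skip per step along each axis.
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

shape = (2, 3)                 # a 2x3 matrix
data = [1, 2, 3, 4, 5, 6]      # stored flat, row by row
strides = strides_for(shape)   # [3, 1]: next row = +3, next column = +1

def at(data, strides, *index):
    # Element lookup is a dot product of index and strides.
    return data[sum(i * s for i, s in zip(index, strides))]

print(at(data, strides, 1, 2))  # element [1][2] -> 6
```

Reshaping or transposing such an array only changes shape and strides, never the buffer, which is why those operations are cheap.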

Attention Mechanism in 1 video | Seq2Seq Networks | Encoder Decoder Architecture

CampusX · 3 min read

Attention-based encoder–decoder models fix two core weaknesses of the classic LSTM Seq2Seq setup: they stop forcing a single, static sentence summary...

Attention Mechanism · Seq2Seq · Encoder Decoder

Learn AI Coding the Right Way (No Vibe Coding) | New Playlist | CampusX

CampusX · 3 min read

Anthropic’s “Claude Code” is being positioned as an emerging industry standard for AI-assisted software development—so the playlist’s core promise is...

Claude Code · Agentic Coding · Vibe Coding

LSTM | Part 3 | Next Word Predictor Using LSTM | CampusX

CampusX · 2 min read

A next word predictor can be built as a text generator, but it becomes much easier to train when the problem is reframed as supervised learning: turn...

Next Word Prediction · LSTM · Supervised Learning
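The reframing as supervised learning comes down to a windowing step: each prefix of the token sequence becomes an input X and the following token becomes the label y. A minimal sketch (real pipelines would tokenize to integer IDs and pad sequences):

```python
# Turn a token sequence into (prefix, next-word) training pairs.

def make_training_pairs(tokens):
    pairs = []
    for i in range(1, len(tokens)):
        pairs.append((tokens[:i], tokens[i]))  # (input X, label y)
    return pairs

tokens = "the cat sat on the mat".split()
for x, y in make_training_pairs(tokens):
    print(x, "->", y)
# ['the'] -> cat
# ['the', 'cat'] -> sat
# ...
```

Once the data has this shape, any sequence model (such as an LSTM) can be trained on it like an ordinary classification problem over the vocabulary.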

Path & Query Params in FastAPI | Video 4 | CampusX

CampusX · 2 min read

FastAPI path parameters let clients pick a specific resource directly from the URL—turning one endpoint into a flexible “fetch/update/delete by ID”...

Path Parameters · HTTPException 404 · FastAPI Path Metadata

Chains in LangChain | Generative AI using LangChain | Video 7 | CampusX

CampusX · 3 min read

LangChain chains turn a multi-step LLM workflow from a manual, “call-everything-separately” process into a connected pipeline where each step...

LangChain Chains · Sequential Pipelines · Parallel Execution
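The chain idea itself is just function composition: each step's output feeds the next step, so the workflow runs as one pipeline instead of separate manual calls. A toy sketch with invented stand-ins for the usual prompt, model, and parser steps (not the real LangChain API):

```python
# Compose steps into a single pipeline: output of each feeds the next.
from functools import reduce

def chain(*steps):
    return lambda x: reduce(lambda value, step: step(value), steps, x)

build_prompt = lambda topic: f"Explain {topic} in one line."
fake_llm     = lambda prompt: f"ANSWER({prompt})"   # stand-in for a model call
parse        = lambda text: text.strip()

pipeline = chain(build_prompt, fake_llm, parse)
print(pipeline("LangChain"))  # ANSWER(Explain LangChain in one line.)
```

LangChain expresses the same composition declaratively (for example with its pipe operator), which is what makes the intermediate wiring disappear.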

What are Runnables in LangChain | Generative AI using LangChain | Video 8 | CampusX

CampusX · 3 min read

LangChain’s “runnables” are the missing abstraction that turns a pile of LLM-related components into a composable system. Instead of manually wiring...

LangChain Runnables · LLM Chains · Retrieval QA

What is Agentic AI? | Agentic AI using LangGraph | Video 2 | CampusX

CampusX · 3 min read

Agentic AI is a software paradigm built to take a user’s goal and run toward it with minimal human input—planning, executing steps, adapting when...

Agentic AI · Goal Orientation · Planning and Execution

LangGraph Core Concepts | Agentic AI using LangGraph | Video 4 | CampusX

CampusX · 3 min read

LangGraph’s core promise is turning multi-step LLM workflows into an executable graph: each workflow step becomes a node, and edges define what runs...

LangGraph Core Concepts · Agentic AI Workflows · Nodes And Edges
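The node/edge model can be illustrated with a hand-rolled toy executor: nodes are functions that read and update a shared state, and edges say which node runs next. This is a concept sketch only, not the langgraph API.

```python
# Toy graph executor: nodes mutate a shared state dict, edges pick the
# next node. Node names and state fields are invented for illustration.

def draft(state):
    state["text"] = f"draft about {state['topic']}"
    return state

def review(state):
    state["text"] += " (reviewed)"
    return state

NODES = {"draft": draft, "review": review}
EDGES = {"draft": "review", "review": None}  # None marks the end

def run_graph(start, state):
    node = start
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node]
    return state

print(run_graph("draft", {"topic": "agents"}))
# {'topic': 'agents', 'text': 'draft about agents (reviewed)'}
```

LangGraph adds conditional edges, cycles, and checkpointing on top of this basic loop, but the mental model of "state flows through nodes along edges" is the same.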

Transformer Architecture | Part 1 Encoder Architecture | CampusX

CampusX · 3 min read

Transformer encoder architecture is built from a repeating pattern: each encoder block takes token embeddings (augmented with positional...

Transformer Encoder · Multi-Head Self-Attention · Positional Encoding

Retrievers in LangChain | Generative AI using LangChain | Video 13 | CampusX

CampusX · 3 min read

RAG systems live or die by retrieval quality, and LangChain’s retrievers are the modular “search engines” that pull the most relevant documents from...

Retrievers in LangChain · RAG Components · Wikipedia Retriever
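A retriever's contract is simple: query in, ranked documents out. A toy keyword-overlap version makes the shape concrete; real LangChain retrievers use embeddings or external APIs (Wikipedia, vector stores), but they are swappable precisely because they share this interface. The documents below are invented examples.

```python
# Toy retriever: rank documents by keyword overlap with the query.

DOCS = [
    "LangChain retrievers fetch relevant documents",
    "Tensors store numerical data for deep learning",
    "RAG combines retrieval with generation",
]

def retrieve(query, docs, k=2):
    q = set(query.lower().split())
    scored = [(len(q & set(d.lower().split())), d) for d in docs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for score, d in scored[:k] if score > 0]

print(retrieve("retrieval documents", DOCS))
```

Swapping this for an embedding-based retriever changes only the scoring inside `retrieve`, not the RAG pipeline around it, which is the modularity the summary describes.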

Langchain Runnables - Part 2 | Generative AI using LangChain | Video 9 | CampusX

CampusX · 3 min read

LangChain’s “runnables” are built to solve a practical integration problem: earlier LangChain components (prompt templates, LLM calls, parsers,...

LangChain Runnables · RunnableSequence · RunnableParallel

Masked Self Attention | Masked Multi-head Attention in Transformer | Transformer Decoder

CampusX · 2 min read

Transformer decoders generate text one token at a time during inference, but they can be trained in parallel—thanks to masked...

Transformer Decoder · Masked Self Attention · Autoregressive Inference
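The masking trick can be shown numerically: before the softmax, positions to the right of the current token are set to negative infinity, so each token's attention weights over future tokens come out as exactly zero. The score values below are made up; real models compute them from query/key dot products.

```python
# Causal (masked) attention weights from a toy score matrix.
import math

def softmax(row):
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

scores = [[0.5, 1.2, 0.3],
          [0.9, 0.1, 0.7],
          [0.2, 0.4, 0.6]]

# Mask out j > i: a token may only attend to itself and earlier tokens.
masked = [[s if j <= i else float("-inf") for j, s in enumerate(row)]
          for i, row in enumerate(scores)]
weights = [softmax(row) for row in masked]

print(weights[0])  # first token attends only to itself: [1.0, 0.0, 0.0]
```

Because the mask guarantees no token sees the future, all positions can be trained in one parallel pass while still matching the one-token-at-a-time behavior used at inference.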

Tool Calling in LangChain | Generative AI using LangChain | Video 17 | CampusX

CampusX · 3 min read

LangChain tool calling turns an LLM from a text-only assistant into a system that can use external functions safely—by letting the model *suggest*...

Tool Calling · Tool Binding · Tool Execution
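The suggest-then-execute split can be sketched without any LLM: the "model" only emits a structured tool-call suggestion, and application code decides whether to run it. The tool, its arguments, and the fake model are invented for illustration; real LangChain tool calling produces this structure via the model's function-calling support.

```python
# Toy tool-calling loop: the model suggests, the app executes.

def get_weather(city: str) -> str:
    return f"22C and sunny in {city}"  # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

def fake_model(user_message: str) -> dict:
    # A real LLM would emit this structured suggestion itself.
    return {"tool": "get_weather", "args": {"city": "Delhi"}}

suggestion = fake_model("What's the weather in Delhi?")
if suggestion["tool"] in TOOLS:  # the application stays in control
    result = TOOLS[suggestion["tool"]](**suggestion["args"])
    print(result)  # 22C and sunny in Delhi
```

Keeping execution on the application side is what makes this "safe": unknown tools are simply never run, and arguments can be validated before the call.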

Model Context Protocol - The Why | MCP Trilogy | CampusX

CampusX · 3 min read

Model Context Protocol (MCP) is positioned as the missing layer that lets AI assistants work across many tools without the usual copy‑paste “context...

Model Context Protocol · Context Assembly · Function Calling

Self Attention Geometric Intuition | How to Visualize Self Attention | CampusX

CampusX · 2 min read

Self-attention in Transformers can be visualized as a geometry-driven “pull” between word embeddings: each token’s new representation is a weighted...

Self Attention · Geometric Intuition · Query-Key-Value
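The geometric "pull" is just a weighted average: a token's new embedding is a convex combination of all token embeddings, so it moves toward the vectors it attends to most. The embeddings and weights below are chosen by hand for illustration; real models derive the weights from query/key similarity.

```python
# A token's updated embedding as a weighted average of all embeddings.

def attend(weights, embeddings):
    dim = len(embeddings[0])
    return [sum(w * e[d] for w, e in zip(weights, embeddings))
            for d in range(dim)]

embeddings = [[1.0, 0.0],   # "bank"
              [0.0, 1.0],   # "river"
              [0.5, 0.5]]   # "flow"
weights = [0.2, 0.7, 0.1]   # "bank" attends mostly to "river"

print(attend(weights, embeddings))  # pulled toward [0.0, 1.0]
```

Because the weights sum to 1, the result always lies inside the region spanned by the inputs, which is why attention can be pictured as each word being pulled toward its most relevant neighbors.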

LangSmith Crash Course | LangSmith Tutorial for Beginners | Observability in GenAI | CampusX

CampusX · 3 min read

LangSmith is positioned as the missing “white-box” layer for LLM applications—turning opaque, non-deterministic behavior into traceable,...

LangSmith Crash Course · Observability in GenAI · LangChain Integration

Serving ML Models with FastAPI | Video 7 | CampusX

CampusX · 3 min read

FastAPI is used to turn a trained machine-learning model into a working prediction service, then wrap that service with a simple Streamlit front end...

FastAPI Model Serving · Pydantic Validation · Feature Engineering

Hyperparameter Tuning using Optuna | Bayesian Optimization using Optuna

CampusX · 3 min read

Hyperparameter tuning stops being a brute-force chore when Optuna replaces exhaustive search with Bayesian optimization that learns where accuracy is...

Bayesian Optimization · Hyperparameter Tuning · Optuna Workflow

Complete Deep Learning Roadmap | CampusX

CampusX · 3 min read

Deep learning is the foundational skill set behind today’s GenAI and LLM work—and the fastest path to becoming job-ready is a structured, six-month...

Deep Learning Roadmap · Neural Networks · Convolutional Neural Networks

LangGraph + SQLite | Chatbot with Database Integration | CampusX

CampusX · 3 min read

The core upgrade is replacing a RAM-based “memory saver” with a SQLite-backed checkpointer so a LangGraph chatbot can keep conversations permanently....

LangGraph Checkpointing · SQLite Persistence · Streamlit Threads
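The core idea of a SQLite-backed checkpointer is writing conversation state to a database keyed by thread ID, so it survives restarts (unlike an in-RAM memory saver). This hand-rolled sketch with the stdlib `sqlite3` module illustrates the concept only; LangGraph ships its own SqliteSaver class for this.

```python
# Concept sketch: persist per-thread conversation state in SQLite.
import json
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute("CREATE TABLE IF NOT EXISTS checkpoints "
             "(thread_id TEXT PRIMARY KEY, state TEXT)")

def save(thread_id, state):
    conn.execute("REPLACE INTO checkpoints VALUES (?, ?)",
                 (thread_id, json.dumps(state)))
    conn.commit()

def load(thread_id):
    row = conn.execute("SELECT state FROM checkpoints WHERE thread_id = ?",
                       (thread_id,)).fetchone()
    return json.loads(row[0]) if row else None

save("thread-1", {"messages": ["hi", "hello!"]})
print(load("thread-1"))  # {'messages': ['hi', 'hello!']}
```

Keying on `thread_id` is also what lets a front end like Streamlit show multiple independent conversations: each thread loads and saves its own row.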

Advanced RAG: How Corrective RAG (CRAG) Solves Traditional RAG Problems | CampusX

CampusX · 3 min read

Corrective RAG (CRAG) is presented as a fix for a core weakness in traditional RAG: it blindly trusts retrieved documents, so when retrieval returns...

Corrective RAG · Retrieval Evaluation · Knowledge Refinement

How to build MCP Client using LangGraph | Agentic AI using LangGraph | CampusX

CampusX · 3 min read

Agentic AI tool integrations get brittle fast when every chatbot hard-codes custom “tool” wrappers for each external service. MCP (Model Context...

MCP Client · LangGraph Integration · Tool Maintenance

Observability in LangGraph | LangSmith Integration with LangGraph

CampusX · 2 min read

Observability for LangGraph agents becomes practical once every user turn is captured as an end-to-end trace in LangSmith—complete with timing, token...

LangSmith Integration · Observability · LangGraph Tracing

Self-RAG Tutorial: How to Make Your AI Fact-Check Itself | Advanced RAG | CampusX

CampusX · 3 min read

Self-RAG is built to stop retrieval-augmented generation from “going along for the ride” when it shouldn’t—by forcing the system to judge its own...

Self-RAG · Advanced RAG · LangGraph Implementation

Long Term Memory in LangGraph

CampusX · 3 min read

Long-term memory is the missing ingredient for chatbots that feel personal over time: instead of treating every conversation as brand-new, the system...

Long-Term Memory · LangGraph Memory Stores · Semantic Search

How To Implement Short Term Memory Using LangGraph

CampusX · 3 min read

Short-term memory in LangGraph isn’t something LLMs can keep on their own—so the practical fix is to store conversation state outside the model and...

LangGraph Short-Term Memory · Checkpointer Threads · PostgreSQL Persistence

Context Window Management in Claude Code | CampusX

CampusX · 3 min read

Claude Code’s context window is small enough to become the bottleneck for real development work—and managing it well is the difference between steady...

Context Window · Token Budgeting · Auto-Compaction

Claude.md | Claude Code — The Most Important File | CampusX

CampusX · 3 min read

Claude.md (and its related “Claude” configuration files) exist to fix a practical limitation of agentic coding: LLM-based agents don’t retain past...

Claude.md · Slash Init · .claude Folder

Spec-Driven Development in Claude Code | CampusX

CampusX · 3 min read

Spec-driven development is presented as the antidote to “vibe coding,” a fast but control-poor style of AI-assisted programming that often produces...

Spec-Driven Development · Vibe Coding · Acceptance Criteria