The Full Stack — Channel Summaries
AI-powered summaries of 103 videos from The Full Stack channel.
LLMOps (LLM Bootcamp)
LLMOps is less about picking the “best” language model and more about building a reliable production loop: choose a model with the right trade-offs,...
Launch an LLM App in One Hour (LLM Bootcamp)
Large language models are turning into general-purpose “next-word” engines that can power far more than chat—especially when paired with language...
LLM Foundations (LLM Bootcamp)
Large language models work because they turn text into numbers, then learn—via gradient-based training—to predict the next token using a Transformer...
Harrison Chase - Agents Masterclass from LangChain Founder (LLM Bootcamp)
Agent systems are built around a simple but consequential shift: use a language model as a reasoning engine that decides which tool to call next,...
Lecture 1: Deep Learning Fundamentals (Full Stack Deep Learning - Spring 2021)
Deep learning fundamentals hinge on a simple but powerful idea: neural networks are flexible function approximators whose weights can be trained by...
Lecture 01: When to Use ML and Course Vision (FSDL 2022)
Machine learning is moving into the mainstream, but the real challenge isn’t getting models to work—it’s deciding when ML is worth the added...
Augmented Language Models (LLM Bootcamp)
Augmented language models hinge on a simple constraint: modern LLMs are strong at language and instruction-following, but they lack up-to-date world...
Chip Huyen on Machine Learning Interviews (Full Stack Deep Learning - November 2019)
Machine learning hiring is less about “perfect” interviews and more about navigating a noisy, expensive, and often inconsistent process—so candidates...
Learn to Spell: Prompt Engineering (LLM Bootcamp)
Prompt engineering is the practical art of choosing the exact text you feed a language model so it behaves the way you need—often replacing what used...
Lecture 1: Introduction to Deep Learning - Full Stack Deep Learning - March 2019
Deep learning’s breakthrough in 2012 wasn’t just a better model—it replaced hand-crafted image features with learned representations, turning “what...
Lecture 6: Infrastructure & Tooling (Full Stack Deep Learning - Spring 2021)
Deep learning progress depends less on model code than on the surrounding “infrastructure and tooling” that turns raw data into continuously...
LangChain Demo + Q&A with Harrison Chase
LangChain’s core value is turning large language models from “text-in, text-out” into usable applications by providing the missing framework:...
UX for Language User Interfaces (LLM Bootcamp)
Language user interfaces are poised to become the next major step change in computing—replacing menus, forms, and command buttons with text-first...
1. Overview - ML Projects - Full Stack Deep Learning
Deep learning projects fail far more often than teams expect—one survey cited in the discussion found that 85% of AI projects at large companies...
Lecture 02: Development Infrastructure & Tooling (FSDL 2022)
Machine learning development runs on a “data flywheel,” but getting from an idea to a reliable system at scale depends on disciplined software...
Lecture 5: ML Projects (Full Stack Deep Learning - Spring 2021)
Machine learning projects fail less because models are “bad” and more because teams start with unclear goals, unrealistic feasibility assumptions,...
Lecture 2: Setting Up Machine Learning Projects - Full Stack Deep Learning - March 2019
Machine learning projects succeed or fail less on model choice than on how well teams plan, collect data, test beyond validation scores, and set...
Reza Shabani - How Replit Trained Their Own LLMs (LLM Bootcamp)
Replit’s Ghostwriter code-completion model is built through a tightly engineered pipeline designed to make smaller, cheaper, and more specialized...
Lecture 4: Transfer Learning and Transformers (Full Stack Deep Learning - Spring 2021)
Transfer learning is the bridge that lets large, pre-trained neural networks work on small, task-specific datasets—first in computer vision, then in...
What's Next? (LLM Bootcamp)
Multimodal large language models are rapidly turning into general-purpose “brains” for both software and physical machines—especially robotics—by...
2. Lifecycle - ML Projects - Full Stack Deep Learning
Machine learning projects follow a repeatable lifecycle—planning, data collection, training/debugging, and staged deployment—but progress often loops...
Lab 04: Experiment Management (FSDL 2022)
Experiment management is the difference between “useful training output” and “lost knowledge.” During model training, metrics like loss and...
Lecture 11A: Deploying ML Models (Full Stack Deep Learning - Spring 2021)
Machine learning models don’t become “production-ready” just because they work in a notebook; they need a deployment path that fits the latency,...
Lecture 7: Troubleshooting Deep Neural Networks (Full Stack Deep Learning - Spring 2021)
Troubleshooting deep neural networks is hard because the same drop in performance can come from many different causes—and many bugs don’t announce...
Labs 1-3: Introduction to the Text Recognizer Project - Full Stack Deep Learning - March 2019
Handwritten-text recognition is built as a full pipeline: a web backend accepts an encoded image, a deployed “compiled prediction model” runs...
Lecture 2A: Convolutional Neural Networks (Full Stack Deep Learning - Spring 2021)
Convolutional neural networks gained their edge in computer vision by replacing the “flatten an image and learn a giant matrix” approach with a...
Lecture 06: Continual Learning (FSDL 2022)
Continual learning in production is less about “retraining whenever something feels off” and more about running a structured retraining strategy that...
Lab 02: PyTorch Lightning and Convolutional NNs (FSDL 2022)
PyTorch Lightning is presented as the practical fix for the “sharp edges” of hand-rolling PyTorch training loops—especially when training needs to...
Lecture 3: Recurrent Neural Networks (Full Stack Deep Learning - Spring 2021)
Recurrent neural networks (RNNs) were built to handle sequence data efficiently by reusing the same weights across time and carrying information...
Jeremy Howard on Platform.ai and Fast.ai (Full Stack Deep Learning - March 2019)
Jeremy Howard argues that “augmented machine learning”—tight human–computer collaboration—beats fully automated ML pipelines for most practical...
Lecture 2B: Computer Vision Applications (Full Stack Deep Learning - Spring 2021)
Computer vision deep learning has advanced largely by swapping in better image-recognition backbones—then reusing those same building blocks for...
Lab 07: Web Deployment (FSDL 2022)
A practical deployment pipeline turns a trained PyTorch text recognizer into a portable, shareable model service—first by compiling it to...
Project Walkthrough: askFSDL (LLM Bootcamp)
A Discord bot built for askFSDL delivers retrieval-augmented question answering over a curated knowledge base, but the biggest gains come less from...
Lecture 8: Data Management (Full Stack Deep Learning - Spring 2021)
Data management is where most deep learning projects quietly win or fail: getting messy, distributed inputs into a GPU-ready training pipeline—and...
Lecture 10: ML Testing & Explainability (Full Stack Deep Learning - Spring 2021)
Machine-learning systems fail in ways that offline test scores can’t fully predict, so teams need a broader testing mindset: validate not just a...
Lecture 07: Foundation Models (FSDL 2022)
Foundation models are driving a shift in AI from task-specific systems toward general-purpose models built by scaling architecture, data, and...
Panel Discussion: Do I need a PhD to work in ML? (Full Stack Deep Learning - Spring 2021)
A PhD is not a universal requirement for working in machine learning; it depends on what kind of ML work someone wants to do and how employers...
Lecture 05: Deployment (FSDL 2022)
Model deployment is where machine learning stops being a lab exercise and starts proving it can solve real user problems—often revealing flaws that...
Lecture 4: Infrastructure and Tooling - Full Stack Deep Learning - March 2019
Deep learning success depends less on model architecture than on building an end-to-end system that can ingest data, train reliably, deploy safely,...
Lecture 03: Troubleshooting & Testing (FSDL 2022)
Troubleshooting and testing in software is about risk reduction, but testing never becomes a guarantee of correctness—so the practical goal is to...
Lecture 04: Data Management (FSDL 2022)
Data management is the hidden driver of machine-learning performance: spending far more time on data than on models—especially on dataset quality,...
3. Prioritizing - ML Projects - Full Stack Deep Learning
Picking the right machine learning projects comes down to a simple but disciplined tradeoff: pursue work that delivers high business impact while...
Lab 06: Data Annotation (FSDL 2022)
Data annotation is treated as a make-or-break step in the full machine-learning pipeline: rich, carefully structured labels—often at finer...
Lecture 11B: Monitoring ML Models (Full Stack Deep Learning - Spring 2021)
Monitoring deployed machine learning models is about catching silent performance decay—often driven by changes in data, user behavior, or sampling...
Lecture 13: ML Teams (Full Stack Deep Learning - Spring 2021)
Machine learning teams fail or succeed less on model quality alone and more on how organizations staff roles, structure accountability, and manage...
Overview (1) - Infrastructure and Tooling - Full Stack Deep Learning
Turnitin’s products sit at the intersection of writing support and academic integrity: Revision Assistant provides detailed, non-grading feedback to...
Lab 08: Monitoring (FSDL 2022)
Model monitoring for a production text recognizer has to go beyond infrastructure health checks and into “behavioral” signals—whether the system’s...
6. Baselines - ML Projects - Full Stack Deep Learning
Baselines act as a reality check for model performance by setting a lower bound on what a system can achieve. The tighter that lower bound, the more...
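The lower-bound idea in this summary can be made concrete: score a trivial predictor before training anything, and treat its accuracy as the floor any real model must beat. A minimal sketch with made-up labels (the helper name and data are illustrative, not from the video):

```python
from collections import Counter

def majority_class_baseline(train_labels, test_labels):
    """Predict the most common training label for every example and
    report accuracy: the floor any trained model must beat."""
    majority = Counter(train_labels).most_common(1)[0][0]
    correct = sum(1 for t in test_labels if t == majority)
    return correct / len(test_labels)

# Hypothetical imbalanced labels: 90% class 0, 10% class 1.
train = [0] * 90 + [1] * 10
test = [0] * 45 + [1] * 5
print(majority_class_baseline(train, test))  # prints 0.9
```

On imbalanced data like this, a model reporting "90% accuracy" has learned nothing beyond the baseline, which is exactly why the tightness of the lower bound matters.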
Lab 9: Web Deployment (Full Stack Deep Learning - Spring 2021)
Lab 9 turns a trained paragraph text recognizer into something that can be called over HTTP and packaged for deployment. The core move is speeding up...
Lecture 7: Machine Learning Teams - Full Stack Deep Learning - March 2019
Machine learning teams face a widening talent gap that makes hiring—and building effective teams—far harder than most companies expect. Estimates...
Livecoding: Getting Started with LLMs, by Jeremy Howard
The core takeaway is that strong performance on an LLM multiple-choice science benchmark comes less from clever prompting and more from disciplined...
Lab 1 - Introduction - Full Stack Deep Learning
The lab setup centers on building a production-minded deep learning pipeline for a text-recognition app—turning an uploaded page image into a clean...
Lab 05: Troubleshooting & Testing (FSDL 2022)
Testing and performance troubleshooting for deep learning systems hinge on two disciplines: automated quality gates for code and data, and a...
Richard Socher on NLP at Salesforce (Full Stack Deep Learning - March 2019)
Natural language processing is stuck in a cycle of single-task models that improve benchmarks but don’t add up to a general system. Richard Socher’s...
Peter Welinder - Fireside Chat with OpenAI VP Product (LLM Bootcamp)
Peter Welinder traces a career path from early confusion about “artificial intelligence” to product-focused machine learning—and credits a series of...
Andrej Karpathy on AI at Tesla (Full Stack Deep Learning - August 2018)
Deep learning for real-world autonomy is shifting the center of gravity from “clever algorithms” to “programming with data.” Andrej Karpathy,...
Software Engineering (2) - Infrastructure and Tooling - Full Stack Deep Learning
Python has become the default language for full-stack deep learning less because it’s inherently perfect for scientific computing and more because...
Lecture 8: Troubleshooting Deep Neural Networks - Full Stack Deep Learning - March 2019
Troubleshooting deep neural networks is hard not because training is mysterious, but because the same drop in performance can come from many...
Lab 3: RNNs (Full Stack Deep Learning - Spring 2021)
Sequence models for handwritten text recognition take a practical turn in Lab 3: a sliding-window CNN baseline quickly works when characters don’t...
Lecture 9: Testing and Deployment - Full Stack Deep Learning - March 2019
Machine learning systems need a different testing and deployment playbook than traditional software because the “running system” depends on both code...
Lecture 08: ML Teams and Project Management (FSDL 2022)
Machine-learning product teams face a structural problem: ML adds uncertainty, scarce talent, and stakeholder misunderstanding on top of the usual...
4. Archetypes - ML Projects - Full Stack Deep Learning
Machine learning projects tend to fall into three archetypes—improving an existing process, augmenting a manual workflow, or automating a manual...
Lecture 6: Data Management - Full Stack Deep Learning - March 2019
Data management in deep learning is less about model math and more about building a reliable pipeline for labels, storage, versioning, and...
Lukas Biewald on Founding Weights & Biases and FigureEight (Full Stack Deep Learning - March 2019)
Deep learning’s real bottleneck isn’t model architecture—it’s the messy, high-stakes work of turning training into reliable production systems. Lukas...
5. Metrics - ML Projects - Full Stack Deep Learning
Choosing the right metric is the make-or-break decision that determines whether an ML project can be steered toward real-world usefulness. Because...
Computing and GPUs (3) - Infrastructure & Tooling - Full Stack Deep Learning
Deep learning progress over the past five years has tracked compute growth closely enough that hardware choices now shape what experiments are even...
Labs 4-5: Tracking Experiments - Full Stack Deep Learning - March 2019
Handwriting line recognition is built from two linked pieces: a convolutional network that scans an input line image window-by-window, and a sequence...
Pieter Abbeel on Research Directions (Full Stack Deep Learning - November 2019)
Research frontiers in deep learning are increasingly about learning systems that can adapt quickly—often with only a few examples or trials—while...
Lecture 12: Research Directions (Full Stack Deep Learning - Spring 2021)
Deep learning research is shifting from “interesting ideas” to “rapidly deployable tools,” and the lecture’s through-line is that the fastest...
Lecture 9: Ethics (Full Stack Deep Learning - Spring 2021)
Ethics in machine learning isn’t about “feeling” that something is right or simply following the law. It’s about making defensible choices under...
Labeling (3) - Data Management - Full Stack Deep Learning
Data labeling hinges less on the annotation software’s feature list and more on the human decisions inside the labeling workflow—especially when...
Roles (2) - ML Teams - Full Stack Deep Learning
Machine learning teams split work across distinct roles—ML product management, DevOps, data engineering, ML engineering, ML research, and data...
Why you should always overfit a single batch to debug your deep learning model
Debugging a deep learning model becomes dramatically easier once training runs end-to-end and the system can overfit a single batch. The core idea is...
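The check described above can be sketched in a few lines. A toy linear model stands in for a real network here, since the point is the harness, not the architecture: if the training loop cannot drive the loss on one fixed batch to roughly zero, something in the pipeline is broken (the data, function names, and hyperparameters below are illustrative assumptions):

```python
def overfit_single_batch(X, y, lr=0.5, steps=200):
    """Run gradient descent on one fixed batch; return the loss curve."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    losses = []
    for _ in range(steps):
        preds = [sum(wj * xj for wj, xj in zip(w, row)) for row in X]
        err = [p - t for p, t in zip(preds, y)]
        losses.append(sum(e * e for e in err) / n)
        # MSE gradient step: dL/dw_j = (2/n) * sum_i err_i * X[i][j]
        for j in range(d):
            w[j] -= lr * (2.0 / n) * sum(err[i] * X[i][j] for i in range(n))
    return losses

# One small, fixed batch with exactly fittable targets: y = 2*x1 - 3*x2.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]]
y = [2.0, -3.0, -1.0, 5.0]

losses = overfit_single_batch(X, y)
assert losses[-1] < 1e-8  # the sanity check: loss collapses to ~0
```

In a real framework the same pattern applies: freeze one batch, loop the training step on it, and confirm the loss goes to near zero before scaling up to the full dataset.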
Lab 8: Testing and Continuous Integration (Full Stack Deep Learning - Spring 2021)
Lab 8 focuses on making a full-stack handwriting OCR project safer to change by adding automated linting, targeted tests, and continuous integration....
What machine learning role is right for you?
Machine learning teams hire for several distinct roles—DevOps, data engineering, machine learning engineering, machine learning research, and data...
ML Test Score (2) - Testing & Deployment - Full Stack Deep Learning
Machine learning systems accumulate “hidden technical debt” because the work doesn’t end at model training. Once a model is deployed, it becomes a...
Frameworks & Distributed Training (5) - Infrastructure & Tooling - Full Stack Deep Learning
Deep learning frameworks have shifted from “fast in production, painful in development” toward a convergence where developers write in Python with...
Managing (4) - ML Teams - Full Stack Deep Learning
Managing machine-learning teams is hard largely because progress is unpredictable: early gains often don’t translate into sustained improvement, and...
Project Structure (1) - Testing & Deployment - Full Stack Deep Learning
A practical full-stack deep learning setup hinges on separating three systems—prediction, training, and serving—and then testing each with the right...
Hyperparameter Tuning (7) - Infrastructure and Tooling - Full Stack Deep Learning
Hyperparameter tuning is often where deep-learning experiments stall: teams can guess a rough model size, but the real question is how to search the...
Overview (1) - ML Teams - Full Stack Deep Learning
Machine learning teams are unusually difficult to run because every core responsibility of technical management—hiring, alignment of work, long-term...
All in One (8) - Infrastructure and Tooling - Full Stack Deep Learning
The push toward “all-in-one” deep learning infrastructure is about replacing a patchwork of point tools with a single system that can take models...
Docker (4) - Testing & Deployment - Full Stack Deep Learning
Docker’s core value is that it packages an application with only the binaries and libraries it needs—no guest operating system—making deployments...
Resource Management (4) - Infrastructure & Tooling - Full Stack Deep Learning
Resource management in deep learning is about making shared compute usable: multiple people need to launch experiments quickly, with dependencies...
Sources (2) - Data Management - Full Stack Deep Learning
Deep learning in production often hinges less on flashy model design and more on how teams source, label, and multiply data. Label-hungry approaches...
Versioning (5) - Data Management - Full Stack Deep Learning
Versioning in machine learning isn’t just about saving model code—it’s about making the trained artifact reproducible by tracking the exact data used...
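One concrete piece of that reproducibility story is fingerprinting the exact data a run trained on, so the artifact can be traced back to a specific dataset snapshot. A minimal sketch (the `fingerprint` helper and the run-record shape are illustrative assumptions, not a specific tool from the lecture):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content hash that identifies an exact dataset snapshot; log it
    alongside the code commit and hyperparameters for each training run."""
    return hashlib.sha256(data).hexdigest()

run_record = {
    "data_sha256": fingerprint(b"abc"),  # stands in for real file bytes
    "code_commit": "<git commit hash>",  # placeholder
}
print(run_record["data_sha256"][:12])  # prints ba7816bf8f01
```

Because the hash changes whenever any byte of the data changes, matching hashes across two runs means they trained on byte-identical data.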
Debug (3) - Troubleshooting - Full Stack Deep Learning
Debugging deep learning starts with a practical goal: make the model run end-to-end, then prove it can learn by forcing it to overfit a single batch,...
Lecture 09: Ethics (FSDL 2022)
Ethics in tech and machine learning comes down to managing three recurring tensions—alignment failures, stakeholder trade-offs, and the need for...
Storage (4) - Data Management - Full Stack Deep Learning
Storage choices determine how data moves, how fast it can be read, and how safely it can be reused across training and production. The core takeaway...
Orgs (3) - ML Teams - Full Stack Deep Learning
Machine-learning organizations don’t have a single “correct” structure yet, but companies tend to evolve through a recognizable ladder: from ad hoc...
Start Simple (2) - Troubleshooting - Full Stack Deep Learning
Starting simple is the fastest way to find out whether poor model performance comes from a hard problem or from avoidable bugs in the pipeline. The...
Deep Learning Frameworks
Deep learning frameworks can be judged along two practical axes: how pleasant they are for building models and how well they scale once those models...
CI/Testing (3) - Testing & Deployment - Full Stack Deep Learning
Continuous integration is the backbone of reliable machine-learning development: every time code is pushed, an automated pipeline runs tests (and...
Lecture 10: Research Directions - Full Stack Deep Learning - March 2019
Research momentum in deep learning has accelerated to the point where thousands of papers arrive every month, making it impossible for any one person...
Monitoring (6) - Testing & Deployment - Full Stack Deep Learning
Monitoring for machine learning deployments isn’t just about keeping servers alive—it’s about catching data and model failures early, then feeding...
Processing (6) - Data Management - Full Stack Deep Learning
Building a photo popularity predictor that updates daily forces data pipelines to do more than just “run a model.” The core need is reliable data...
Hiring (5) - ML Teams - Full Stack Deep Learning
AI hiring is being squeezed by a widening talent gap: estimates suggest only thousands to a few hundred thousand people can build AI systems, far...
Evaluate (4) - Troubleshooting - Full Stack Deep Learning
Model improvement starts with evaluation, not guesswork: once a team is reasonably confident the model is bug-free, the next move is to measure...
Improve (5) - Troubleshooting - Full Stack Deep Learning
Model improvement starts with a simple priority order: fix underfitting first, then tackle overfitting, and only after both training and validation...
Tune hyper-parameters (6) - Troubleshooting - Full Stack Deep Learning
Hyper-parameter tuning is the last major lever after training and validation curves look “reasonably close,” but it’s hard because there are many...