WhyLabs — Channel Summaries
AI-powered summaries of 10 videos about WhyLabs.
Intro to LLM Security - OWASP Top 10 for Large Language Models (LLMs)
LLM security hinges on treating every prompt-and-response cycle as potentially hostile—then building monitoring and guardrails that catch failures...
Intro to ML Monitoring: Data Drift, Quality, Bias and Explainability
ML monitoring is positioned as the practical way to catch “bad data” and model failures early—by tracking data drift, data quality, bias across...
ML Monitoring CS329S Machine Learning Systems Design Stanford by guest Alessya Visnjic (WhyLabs)
Machine learning observability hinges on one practical bottleneck: telemetry. Alessya Visnjic argues that if teams don’t capture the right “vitals”...
Intro to LLM Monitoring in Production with LangKit & WhyLabs
Large language model monitoring in production is less about chasing a single “quality score” and more about tracking a set of privacy-preserving...
Intro to LLM Security - OWASP Top 10 for Large Language Models (LLMs)
Large language model security is increasingly about catching risky behavior before it reaches users—and doing it continuously once models go live. A...
Preventing Threats to LLMs: Detecting Prompt Injections & Jailbreak Attacks
LLM security hinges less on “better refusals” and more on stopping malicious instructions from ever turning into actions. Prompt injection attacks...
Intro to AI Observability: Monitoring ML Models & Data in Production
AI observability for machine learning boils down to one practical goal: keep models from silently degrading after they ship. In a hands-on workshop,...
From Eyeballing to Excellence: 7 Ways to Evaluate & Monitor LLM Performance
LLM evaluation shouldn’t start and end with “eyeballing” responses—fatigue, inconsistency, and high human cost make it unreliable for anything beyond...
Intro to LLM Monitoring in Production with LangKit & WhyLabs
LLM monitoring in production is less about chasing one “accuracy” number and more about tracking how prompts and model outputs drift over...
Monitoring ML Models & Data in Production
ML monitoring in production hinges on catching distribution and quality problems early—before they quietly degrade model performance. The session...