OpenAI — Channel Summaries — Page 2
AI-powered summaries of 110 videos about OpenAI.
Measuring Agents With Interactive Evaluations
Frontier AI progress needs more than “right answers” in fixed settings; it requires interactive benchmarks that measure how efficiently an agent...
Learning Dexterity | Alex Ray | 2018 Summer Intern Open House
A dexterous, underactuated five-finger robot hand learned to manipulate small objects in the real world using reinforcement learning trained entirely...
Model Behavior: The Science of AI Style
AI style—how a model’s values, tone, and “flair” show up in everyday responses—is treated as a core driver of trust and usefulness, not a cosmetic...
Studying Scaling Laws for Transformer Architecture … | Shola Oyedele | OpenAI Scholars Demo Day 2021
Scaling laws for language models can forecast how loss improves with compute, but it’s unclear whether those relationships hold across different...
Music Generation | Christine Payne | OpenAI Scholars Demo Day 2018
Christine Payne’s demo centers on a practical bottleneck in neural music generation: turning music—where multiple notes can occur at once and notes...
Semantic Parsing English to GraphQL | Andre Carerra | OpenAI Scholars Demo Day 2020
Semantic parsing from English into GraphQL is feasible with general-purpose encoder–decoder language models, but accuracy lags behind SQL-focused...
Large Scale Reward Modeling | Jonathan Ward | OpenAI Scholars Demo Day 2021
Large-scale reward modeling can be trained from cheap, naturally occurring preference signals on the internet—without the costly, researcher-guided...
OpenAI DevDay 2024 | Community Spotlight | Supabase
Supabase is pitching an AI-powered PostgreSQL playground that lets a model run real database operations end-to-end inside the browser—turning “code...
Words to Bytes: Exploring Language Tokenizations | Sam Gbafa | OpenAI Scholars Demo Day 2021
Language tokenization choices can materially change how well a language model learns, but the “best” granularity depends on data size and model...
Quantifying Interpretability of Models Trained on Coi… | Jorge Orbay | OpenAI Scholars Demo Day 2020
Neural networks trained on more diverse experience tend to develop features humans can interpret more often—and that relationship can be measured...