The Universal AI Skill: Good Taste
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Good taste is framed as the core judgment skill that determines what AI outputs are actually useful as models handle more routine work.
Briefing
Success with AI in 2025 won’t come from better prompts alone—it will come from “good taste”: the human ability to judge what’s right, what’s wrong, and what could be better. As AI systems take over more routine work—drafting spreadsheets, producing slide decks, generating analysis—what remains valuable is the judgment layer that decides which outputs are actually useful. That judgment isn’t a vague aesthetic preference; it’s a skill built from experience and sharpened over time, and it becomes the central “CPU” for how people steer increasingly capable models.
Taste starts as an internal signal. It’s the gut-level sense that something is right or off, and it grows as people accumulate enough experience in a domain to form strong opinions. That process can happen in everyday arenas—sports fandom, book collecting, fashion—long before it shows up at work. The key shift is recognizing that taste isn’t reserved for elite spaces where people “look like they belong.” It’s practiced constantly, and it can be used for career pathing: people should identify areas where they already have experience and opinions, then evolve those tastes as curiosity and circumstances change.
Taste matters more now because AI is rapidly moving from "grunt work" toward larger portions of the workday. Systems such as Claude and ChatGPT can generate credible artifacts quickly, which changes the leverage of human labor. Earlier in a career, value often centered on time spent producing information directly: typing, assembling documents, and in-person collaboration. Now the leverage shifts toward how humans use AI "work primitives" (thinking, drafting, structuring, iterating) in ways that reflect domain expertise and judgment. Model makers are also competing for attention by pulling people into model-centered workflows, asking users to think in Claude, code in code tools, or work through image-focused systems, much as social platforms once competed for time.
In that environment, taste becomes the mechanism for demanding useful work rather than chasing perfection. AI outputs can feel “hollow” when they miss context that only a human holds. Instead of either trusting everything or discarding everything after one mistake, the better approach is to bring taste into the interaction: give specific feedback on what to keep, what to change, and what to reject. That includes prompting with preferences (“I like this phrasing,” “don’t invent numbers,” “ask for missing information”) and using taste to calibrate how much to rely on a model versus how much to supply.
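The keep/change/reject feedback pattern described above can be made concrete. As a minimal sketch (the function name and structure are illustrative, not from the transcript), taste judgments can be written down as explicit lists and assembled into a revision prompt, rather than accepting or discarding a draft wholesale:

```python
# Hypothetical sketch: encode "taste" as explicit keep/change/reject
# criteria and fold them into a revision prompt for any AI assistant.

def build_revision_prompt(draft, keep, change, reject):
    """Assemble a feedback prompt from specific taste judgments."""
    lines = ["Revise the draft below. Apply this feedback:"]
    lines += [f"- KEEP: {item}" for item in keep]
    lines += [f"- CHANGE: {item}" for item in change]
    lines += [f"- REJECT: {item}" for item in reject]
    lines += ["", "Draft:", draft]
    return "\n".join(lines)

prompt = build_revision_prompt(
    draft="Q3 revenue grew 12%, driven by strong enterprise demand.",
    keep=["the plain, direct phrasing"],
    change=["ask for missing context instead of guessing"],
    reject=["any invented numbers"],
)
print(prompt)
```

The point of the sketch is the discipline, not the code: naming what stays, what changes, and what gets rejected is the "middle path" between blind trust and total rejection that the briefing recommends.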
Taste also has to adapt to acceleration. Models improve on timescales of months, so people must repeatedly recalibrate their instincts about what a new system can and can’t do. The speaker frames this as a world where models may outperform humans in narrow domains, but humans still interpret outputs, supply missing context, and decide what matters. The practical takeaway: treat AI as a toolkit, keep what fits your standards, discard what doesn’t, and use your judgment to steer multi-turn work—prompting, reviewing AI-assisted drafts, and pushing back when something doesn’t meet the bar for quality.
Cornell Notes
“Good taste” is presented as the universal AI skill that determines what’s useful as models get better at drafting and analysis. Taste is defined as an internal gut-level sense of what’s right, what’s wrong, and what could be improved—shaped by accumulating domain experience until strong opinions form. As AI takes over more of the work stack (spreadsheets, PDFs, slide decks, analysis), human value shifts toward judging and steering outputs, not producing everything from scratch. The recommended practice is to bring taste into model interactions with specific preferences and rejection criteria, rather than either trusting blindly or discarding after one error. Because model capabilities change quickly, taste must also be recalibrated over time to match what each system can and can’t do.
How does the transcript define “taste,” and where does it come from?
Why does taste become more important as AI systems improve?
What’s the recommended way to interact with AI when outputs feel “off”?
How does the transcript connect taste to career growth?
What does “models are gaining intelligence so quickly” imply for taste?
How does the transcript describe the role of taste versus model intelligence?
Review Questions
- What are the three components of taste as described (what's right, what's wrong, and what could be better), and how does experience turn into strong opinions?
- How should a person respond when AI output feels hollow or when one part is wrong—what’s the middle path between blind trust and total rejection?
- Why does taste need to be recalibrated as models improve rapidly, and what practical behaviors support that recalibration?
Key Points
1. Good taste is framed as the core judgment skill that determines what AI outputs are actually useful as models handle more routine work.
2. Taste is an internal gut-level signal shaped by domain experience until strong opinions form, and it can develop through hobbies as well as professional life.
3. Career decisions should be made through a taste lens by identifying areas of existing experience and evolving those tastes over time as curiosity changes.
4. AI increases leverage for humans by shifting value from producing everything directly to steering AI "work primitives" in ways that reflect expertise and judgment.
5. The recommended interaction style is to bring taste into prompts and feedback with specific preferences and rejection criteria, not binary trust or rejection.
6. As model intelligence accelerates, people must repeatedly recalibrate their instincts about what each system can and can't do.
7. AI should be treated as a toolkit: keep what matches your standards, discard what doesn't, and use taste to demand useful, not perfect, work.