
The Universal AI Skill: Good Taste

5 min read

Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Good taste is framed as the core judgment skill that determines what AI outputs are actually useful as models handle more routine work.

Briefing

Success with AI in 2025 won’t come from better prompts alone—it will come from “good taste”: the human ability to judge what’s right, what’s wrong, and what could be better. As AI systems take over more routine work—drafting spreadsheets, producing slide decks, generating analysis—what remains valuable is the judgment layer that decides which outputs are actually useful. That judgment isn’t a vague aesthetic preference; it’s a skill built from experience and sharpened over time, and it becomes the central “CPU” for how people steer increasingly capable models.

Taste starts as an internal signal. It’s the gut-level sense that something is right or off, and it grows as people accumulate enough experience in a domain to form strong opinions. That process can happen in everyday arenas—sports fandom, book collecting, fashion—long before it shows up at work. The key shift is recognizing that taste isn’t reserved for elite spaces where people “look like they belong.” It’s practiced constantly, and it can be used for career pathing: people should identify areas where they already have experience and opinions, then evolve those tastes as curiosity and circumstances change.

Taste matters more now because AI is rapidly expanding from "grunt work" into larger portions of the workday. Systems such as Claude and ChatGPT can generate credible artifacts quickly, which changes the leverage of human labor. Earlier in most careers, value centered on time spent producing information directly: typing, assembling documents, and in-person collaboration. Now the leverage shifts toward how humans use AI "work primitives" (thinking, drafting, structuring, iterating) in ways that reflect domain expertise and judgment. Model makers are also competing for attention by pulling people into model-centered workflows, asking users to think in Claude, code in dedicated coding tools, or work through image-focused systems, much as social platforms once competed for time.

In that environment, taste becomes the mechanism for demanding useful work rather than chasing perfection. AI outputs can feel “hollow” when they miss context that only a human holds. Instead of either trusting everything or discarding everything after one mistake, the better approach is to bring taste into the interaction: give specific feedback on what to keep, what to change, and what to reject. That includes prompting with preferences (“I like this phrasing,” “don’t invent numbers,” “ask for missing information”) and using taste to calibrate how much to rely on a model versus how much to supply.
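
To make that interaction pattern concrete, here is a minimal sketch of taste-driven, multi-turn refinement. It assumes the Anthropic Python SDK and an API key in the environment; the preference strings, the model name, and the quarterly-summary scenario are illustrative inventions, not details from the video.

```python
# A sketch of taste-driven, multi-turn refinement. Assumes the Anthropic
# Python SDK (pip install anthropic) and ANTHROPIC_API_KEY in the
# environment; preferences and scenario are illustrative, not from the video.
import anthropic

client = anthropic.Anthropic()

# Encode standing preferences ("taste") once, as a system prompt.
TASTE = (
    "Preferences for all drafts:\n"
    "- Keep phrasing plain; reject overdramatic language.\n"
    "- Never invent numbers; ask me when data is missing.\n"
    "- Tell me when you need more information before answering."
)

messages = [{"role": "user",
             "content": "Draft a one-paragraph summary of Q3 results."}]

first = client.messages.create(
    model="claude-sonnet-4-5",  # model name is an assumption; use any current model
    max_tokens=500,
    system=TASTE,
    messages=messages,
)

# Instead of accepting or discarding the draft wholesale, feed back specific
# judgments: what to keep, what to change, what to reject.
messages += [
    {"role": "assistant", "content": first.content[0].text},
    {"role": "user", "content": (
        "Keep the opening sentence. Change the middle: it reads hollow "
        "without our context, which I'll supply: churn fell because of the "
        "March pricing change. Reject the final claim; that figure was invented."
    )},
]

revised = client.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=500,
    system=TASTE,
    messages=messages,
)
print(revised.content[0].text)
```

The design point is that standing preferences live in the system prompt while per-draft judgments arrive as ordinary turns, so taste compounds across the conversation instead of being restated from scratch each time.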

Taste also has to adapt to acceleration. Models improve on timescales of months, so people must repeatedly recalibrate their instincts about what a new system can and can’t do. The speaker frames this as a world where models may outperform humans in narrow domains, but humans still interpret outputs, supply missing context, and decide what matters. The practical takeaway: treat AI as a toolkit, keep what fits your standards, discard what doesn’t, and use your judgment to steer multi-turn work—prompting, reviewing AI-assisted drafts, and pushing back when something doesn’t meet the bar for quality.

Cornell Notes

“Good taste” is presented as the universal AI skill that determines what’s useful as models get better at drafting and analysis. Taste is defined as an internal gut-level sense of what’s right, what’s wrong, and what could be improved—shaped by accumulating domain experience until strong opinions form. As AI takes over more of the work stack (spreadsheets, PDFs, slide decks, analysis), human value shifts toward judging and steering outputs, not producing everything from scratch. The recommended practice is to bring taste into model interactions with specific preferences and rejection criteria, rather than either trusting blindly or discarding after one error. Because model capabilities change quickly, taste must also be recalibrated over time to match what each system can and can’t do.

How does the transcript define “taste,” and where does it come from?

Taste is described as an internal sense—“your gut knows best”—that something is right or wrong and that something could be better. It grows as people accumulate enough experience in a domain to form strong opinions. That experience isn’t limited to work; it can develop in hobbies and everyday interests (e.g., fantasy football debates, book collecting, fashion). The point is that everyone practices taste daily, even if it isn’t labeled that way.

Why does taste become more important as AI systems improve?

As AI gets better at producing workable artifacts (like discounted cash flow sheets, polished PDFs, or slide decks), the remaining human leverage shifts to judgment: deciding what’s actually useful and high-quality. The transcript argues that AI can handle more “grunt work,” so the differentiator becomes the ability to steer models toward the right outcomes—demanding useful work rather than perfect work.

What’s the recommended way to interact with AI when outputs feel “off”?

Instead of trusting everything or throwing it out after one mistake, the transcript recommends using taste during the interaction. That means giving specific feedback on what to keep and what to change—e.g., liking certain numbering or phrasing, rejecting overdramatic language, and flagging invented numbers. It also includes prompting for missing context ("tell me when you need more information" or "go research and come back") and using stated preferences to guide multi-turn refinement.

How does the transcript connect taste to career growth?

Career pathing should be viewed through a taste lens: identify areas where you already have experience and can lean on formed opinions, then evolve those tastes as your interests and curiosity change. The transcript notes that tastes can shift dramatically over decades; the skill isn’t having one permanent opinion, but developing and updating judgment as you move into new domains.

What does “models are gaining intelligence so quickly” imply for taste?

Model capability changes on timescales of months, which makes taste feel jarring to apply at first. The transcript frames the future as one where people must depend even more on taste, not less, because humans still interpret outputs and supply context that models can’t fully “metabolize.” As models improve, the task becomes calibrating how to use each system effectively—recognizing where it’s strong and where it needs different inputs.

How does the transcript describe the role of taste versus model intelligence?

Even if a model can outperform a person in specific domains, humans still decide what’s correct given their private context. The transcript gives an example of seeing a model’s reasoning as plausible but feeling “off” because the model lacks the user’s internal context. The nuanced response is to adjust inputs and interaction style, not to assume the model is either always right or always wrong.

Review Questions

  1. What are the three components of taste as described (sensing what's right, what's wrong, and what could be better), and how does experience turn into strong opinions?
  2. How should a person respond when AI output feels hollow or when one part is wrong—what’s the middle path between blind trust and total rejection?
  3. Why does taste need to be recalibrated as models improve rapidly, and what practical behaviors support that recalibration?

Key Points

  1. Good taste is framed as the core judgment skill that determines what AI outputs are actually useful as models handle more routine work.

  2. Taste is an internal gut-level signal shaped by domain experience until strong opinions form, and it can develop through hobbies as well as professional life.

  3. Career decisions should be made through a taste lens by identifying areas of existing experience and evolving those tastes over time as curiosity changes.

  4. AI increases leverage for humans by shifting value from producing everything directly to steering AI "work primitives" in ways that reflect expertise and judgment.

  5. The recommended interaction style is to bring taste into prompts and feedback with specific preferences and rejection criteria, not binary trust or rejection.

  6. As model intelligence accelerates, people must repeatedly recalibrate their instincts about what each system can and can't do.

  7. AI should be treated as a toolkit: keep what matches your standards, discard what doesn't, and use taste to demand useful—not perfect—work.

Highlights

  • Taste is presented as the "sense" that something is right or wrong—and as the mechanism for deciding what could be better.
  • As AI drafts more artifacts, human value shifts toward judging and steering outputs rather than producing everything from scratch.
  • The transcript argues against two extremes—blind trust and total rejection—replacing them with specific, taste-driven feedback.
  • Because models improve on fast timescales, taste must be recalibrated continuously to match changing capabilities.

Topics

  • Good Taste
  • AI Workflows
  • Human Judgment
  • Career Pathing
  • Prompting Feedback
