The AI Trick That Finally Made Me Better at My Job (Not Just Faster)

6 min read

Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Shift career development from job titles to trainable skills that show up in the artifacts people produce.

Briefing

Knowledge work has long been treated like a job ladder—titles, compensation bands, and vague “strategic thinking” claims—yet real improvement requires something closer to athletic training: repeated practice, clear feedback, and outcome-based measurement. The core shift proposed is to move from jobs as the unit of career growth to skills as the unit of development, then use AI to make practice and coaching scalable.

The argument starts with a mismatch. Athletes and musicians train specific sub-skills through structured repetition, while knowledge workers mostly perform “live games” with fuzzy outcomes, delayed feedback, and low repetition. In knowledge work, success rarely flips from 0 to 1; quality can depend on speed, risk, politics, relationships, and multiple competing metrics. Feedback also arrives late and noisily—decisions made in one quarter may only be judged months later, after markets shift or competitors act. And the work itself is entangled with real people and real money, leaving few low-stakes sandboxes. The result is that most “reps” happen under pressure, not in deliberate practice.

AI changes the feasibility of training. The proposal is not that AI becomes a magic brain, but that it can read text, follow instructions, and apply a rubric consistently—turning written work into a practice surface. The method begins with humans defining “good.” Pick one artifact that matters (for example, a product decision document, an engineering architecture memo, a sales pipeline expectation, or a call summary). Then gather trusted colleagues and extract a concrete checklist of what “good” means—such as whether the decision is stated clearly, whether real options exist, whether stakes and metrics are explicit, and whether risks and trade-offs are surfaced. Turn that into a 1–5 rubric and annotate several real examples with red-pen style feedback.
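To make that checklist concrete, the rubric can live as plain structured data before any model is involved. The sketch below is illustrative only: the dimension names and the 1/5 anchors are assumptions modeled on the decision-doc criteria above, not a prescribed schema.

```python
# A hypothetical 1-5 rubric for a product decision document, kept as plain data.
# Dimension names and score anchors are illustrative, not a fixed schema.
DECISION_DOC_RUBRIC = {
    "decision_clarity": {
        "question": "Is the decision stated in one clear sentence?",
        "score_1": "No identifiable decision; the doc reads as a status update.",
        "score_5": "A single, unambiguous decision sentence appears near the top.",
    },
    "real_options": {
        "question": "Are at least two genuine options considered?",
        "score_1": "Only the preferred option is described.",
        "score_5": "Two or more viable options with honest pros and cons.",
    },
    "stakes_and_metrics": {
        "question": "Are the stakes and success metrics explicit?",
        "score_1": "No metrics; the impact is implied rather than stated.",
        "score_5": "Concrete metrics plus what is at risk if the call is wrong.",
    },
    "risks_and_tradeoffs": {
        "question": "Are risks and trade-offs surfaced rather than buried?",
        "score_1": "Risks are omitted or hand-waved.",
        "score_5": "Key risks are named, with mitigations or accepted trade-offs.",
    },
}
```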

Only after that human calibration comes the AI step: feed the rubric and annotated examples to a large language model so it can score new drafts, quote the relevant passages, justify each score, and recommend edits that move specific dimensions upward. The payoff is a consistent “film review” signal—like basketball going into the basket—so individuals can track patterns over a quarter and teams can reduce the guesswork that leads to vague reviews and endless iteration.
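As a rough sketch of that scoring step, the rubric and a new draft can be packed into a single prompt for a chat model. The assumptions here are mine, not the author's: the OpenAI Python SDK as the client (any LLM client works similarly), a placeholder model name, the DECISION_DOC_RUBRIC dict from the sketch above, and prompt wording invented for illustration.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK; any LLM client would work similarly

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_draft(draft: str, rubric: dict, annotated_examples: str) -> str:
    """Ask the model to score a draft against the rubric, quoting evidence for each score."""
    system = (
        "You are a writing coach. Score the draft 1-5 on each rubric dimension. "
        "For every score, quote the relevant passage, explain the score, and suggest "
        "one edit that would raise that dimension by 1-2 points."
    )
    user = (
        f"Rubric:\n{json.dumps(rubric, indent=2)}\n\n"
        f"Human-annotated calibration examples:\n{annotated_examples}\n\n"
        f"Draft to score:\n{draft}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your team has access to
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content
```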

From scoring to drills is the next leap. Each skill is treated as a repeatable pattern visible in artifacts, not as an abstract adjective. Five recurring skills are highlighted: judgment (framing decisions under uncertainty), orchestration (turning fuzzy goals into workflows for humans and AI), coordination (moving groups through ambiguity, including humans plus agents), taste (a quality bar and the ability to articulate it), and updating (changing priors as evidence shifts without being whiplashed by noise). With rubrics in place, weekly practice drills can be built—such as writing a one-page decision doc from a messy prompt and running it through the same rubric to identify what’s missing.

The approach extends to teams and hiring. Team leads can co-create rubrics, run AI-assisted pre-reviews before human approval, and hold short weekly practice sessions on the areas the rubric flags. In interviews, candidates can be evaluated using the same game as the job: take-home or live writing/repair of realistic artifacts, rubric-based critique of a deliberately mediocre AI-generated example, and live constraint changes to test reasoning under pressure. The goal isn’t to police AI use or treat scores as promotion-grade precision; it’s to build stable patterns of thought that remain visible even when AI accelerates drafting. The broader claim is that AI finally makes coaching and deliberate practice for knowledge work practical—answering a 2019 challenge that knowledge workers never truly trained in the way athletes do.

Cornell Notes

The proposal reframes career growth around skills rather than job titles, arguing that knowledge work lacks the deliberate practice athletes and musicians rely on. Knowledge work suffers from fuzzy outcomes, delayed/noisy feedback, and low repetition, so people mostly learn through high-stakes “live games.” AI makes a training loop possible by enabling consistent rubric-based critique of written artifacts. The workflow starts with humans defining what “good” looks like using a 1–5 rubric and annotated examples, then uses an LLM to score new drafts, quote evidence, and suggest edits. Skills are treated as repeatable patterns visible in artifacts—especially judgment, orchestration, coordination, taste, and updating—then practiced through weekly drills and, optionally, used to align hiring and development.

Why does knowledge work fail to produce the kind of training loop athletes get?

Three structural problems block deliberate practice: outcomes are fuzzy (multiple dimensions like speed, quality, politics, relationships, and risk rarely yield a clean yes/no), feedback is delayed and noisy (a decision in Q1 may only be judged in Q3 after market shifts or competitor actions), and repetition is low (consequential docs—specs, architecture memos, strategy writeups—are tied to real money and people, leaving few low-stakes sandboxes). The default becomes “performing in front of the crowd” rather than practicing sub-skills with rapid feedback.

What does “practicing” look like for knowledge work once AI is available?

Practicing means turning one important artifact type into a repeatable drill. First, humans define a rubric for what good looks like (e.g., for a decision doc: clear one-sentence decision, at least two real options, explicit stakes/metrics, a recommendation, and surfaced risks/trade-offs). Next, they annotate several real examples with scores and red-pen notes. Then an LLM uses that rubric and the annotated examples to score new drafts, quote the relevant text, explain why each score was given, and propose edits that raise specific dimensions by 1–2 points. That scoring becomes the feedback signal for weekly practice.
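One way to make that weekly signal trackable, sketched under the assumption that per-dimension scores are recorded for each drill (whether read off the AI critique or assigned by a human reviewer), is to log them and watch which dimensions stay below the bar. The numbers below are made-up placeholders for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical log of weekly drill scores; in practice these would come from the
# rubric review described above, not from invented numbers.
drill_log = [
    {"week": 1, "decision_clarity": 3, "real_options": 2, "stakes_and_metrics": 2, "risks_and_tradeoffs": 3},
    {"week": 2, "decision_clarity": 4, "real_options": 2, "stakes_and_metrics": 3, "risks_and_tradeoffs": 3},
    {"week": 3, "decision_clarity": 4, "real_options": 3, "stakes_and_metrics": 3, "risks_and_tradeoffs": 4},
]

def lagging_dimensions(log: list[dict], threshold: float = 3.0) -> dict[str, float]:
    """Average each rubric dimension across drills and flag the ones still below the bar."""
    by_dim: dict[str, list[int]] = defaultdict(list)
    for entry in log:
        for dim, score in entry.items():
            if dim != "week":
                by_dim[dim].append(score)
    return {dim: round(mean(scores), 1) for dim, scores in by_dim.items() if mean(scores) < threshold}

print(lagging_dimensions(drill_log))  # e.g. {'real_options': 2.3, 'stakes_and_metrics': 2.7}
```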

Which five skills are presented as especially repeatable for training?

The framework highlights five recurring, trainable skills: (1) Judgment—how decisions are framed and options chosen under uncertainty; (2) Orchestration—turning fuzzy goals into workflows that humans and AI can execute; (3) Coordination—moving groups through ambiguity, including coordinating agents as they improve; (4) Taste—maintaining a meaningful quality bar and articulating what “good” is; and (5) Updating—changing priors as evidence/context shifts using a rubric or heuristic to avoid being driven by noise.

How does the rubric-based approach avoid turning AI into an unearned authority?

The rubric is calibrated by trusted humans before any AI scoring. People extract concrete definitions of “good,” build a 1–5 scale, and mark up real examples to show what earns each score. Only after that human calibration does the LLM apply the rubric consistently. The intent is to create a consistent critique “wall” for practice, not to outsource judgment to a model.

How can teams operationalize this without turning it into surveillance?

The method recommends starting small and focusing on improvement rather than perfection or promotion-grade precision. Teams can co-create rubrics, use AI to critique drafts before human review, and run short weekly practice sessions on the growth areas the rubric flags. The emphasis stays on reducing iteration churn and improving clarity, not on scoring every document indefinitely or using noisy numbers for formal ratings.

How can the same skill framework improve hiring?

Hiring can mirror the job by using rubrics for the same artifacts candidates will be expected to produce. Instead of inferring skills from stories, candidates can complete a realistic take-home or live writing/repair task (e.g., drafting or fixing a decision document), then respond to a live constraint change (for example, legal blocking the preferred option or the timeline shrinking) to test their reasoning. A critique exercise can also be used: candidates evaluate a deliberately mediocre AI-generated doc and explain what’s wrong. The rubric and AI can provide consistent first-pass scoring, while interviewers still assess whether the candidate can articulate trade-offs and reasoning.

Review Questions

  1. What three structural factors make knowledge-work feedback and repetition unlike sports training, and how does the proposed rubric loop address each one?
  2. How would you design a rubric for “good” in one artifact type you regularly produce (e.g., decision docs, architecture memos, call summaries), and what evidence would you require for a 1 vs. a 5 score?
  3. Why does the framework treat judgment, orchestration, coordination, taste, and updating as patterns in artifacts rather than abstract traits?

Key Points

  1. Shift career development from job titles to trainable skills that show up in the artifacts people produce.
  2. Treat knowledge-work practice as deliberate repetition with narrow situations, specific feedback, and measurable outcomes—made feasible by AI rubric scoring.
  3. Define “good” with humans first: extract a concrete checklist from trusted colleagues, build a 1–5 rubric, and annotate real examples before involving an LLM.
  4. Use AI as a consistent film-review tool that quotes evidence, justifies scores, and recommends edits that move specific rubric dimensions upward.
  5. Convert rubric scoring into weekly drills by repeatedly producing the same artifact type from messy prompts and comparing drafts to stronger exemplars.
  6. Align team development and hiring by using the same rubrics to evaluate and practice the work candidates will do on the job.
  7. Start small and avoid over-precision: rubric scores should guide improvement, not become a surveillance system or the sole basis for promotions.

Highlights

Knowledge work lacks the training conditions of sports—fuzzy outcomes, delayed/noisy feedback, and low repetition—so people mostly learn through high-stakes performance rather than practice.
AI becomes useful once humans define a rubric for “good” and annotate examples; then an LLM can score new drafts consistently and suggest targeted edits.
Five repeatable skills—judgment, orchestration, coordination, taste, and updating—can be trained by treating them as patterns visible in written artifacts.
Teams can use AI-assisted pre-reviews and short weekly practice sessions to build shared standards and reduce iteration churn.
Hiring can be grounded by evaluating candidates with the same artifact rubrics and realistic constraints used for internal development.

Topics

  • Skills-Based Career Growth
  • Deliberate Practice
  • AI Rubrics
  • Knowledge Work Training
  • Hiring Assessments
