The AI Trick That Finally Made Me Better at My Job (Not Just Faster)
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Shift career development from job titles to trainable skills that show up in the artifacts people produce.
Briefing
Career growth in knowledge work has long been treated as a job ladder—titles, compensation bands, and vague “strategic thinking” claims—yet real improvement requires something closer to athletic training: repeated practice, clear feedback, and outcome-based measurement. The core shift proposed is to move from jobs as the unit of career growth to skills as the unit of development, then use AI to make practice and coaching scalable.
The argument starts with a mismatch. Athletes and musicians train specific sub-skills through structured repetition, while knowledge workers mostly perform “live games” with fuzzy outcomes, delayed feedback, and low repetition. In knowledge work, success rarely flips from 0 to 1; quality can depend on speed, risk, politics, relationships, and multiple competing metrics. Feedback also arrives late and noisily—decisions made in one quarter may only be judged months later, after markets shift or competitors act. And the work itself is entangled with real people and real money, leaving few low-stakes sandboxes. The result is that most “reps” happen under pressure, not in deliberate practice.
AI changes the feasibility of training. The proposal is not that AI becomes a magic brain, but that it can read text, follow instructions, and apply a rubric consistently—turning written work into a practice surface. The method begins with humans defining “good.” Pick one artifact that matters (for example, a product decision document, an engineering architecture memo, a sales pipeline expectation, or a call summary). Then gather trusted colleagues and extract a concrete checklist of what “good” means—such as whether the decision is stated clearly, whether real options exist, whether stakes and metrics are explicit, and whether risks and trade-offs are surfaced. Turn that into a 1–5 rubric and annotate several real examples with red-pen style feedback.
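Concretely, such a rubric can live as plain data so that humans and an LLM read the same definition of “good.” A minimal Python sketch: the dimension names follow the checklist above, but the 1-and-5 anchor wordings and the helper function are illustrative assumptions, not from the source.

```python
# A human-authored 1-5 rubric as data. Only the 1 and 5 anchors are
# written out here; intermediate scores interpolate between them.
RUBRIC = {
    "decision_clarity": {
        1: "No identifiable decision; the doc only describes a situation.",
        5: "The decision is stated plainly in one sentence up front.",
    },
    "real_options": {
        1: "A single option, or strawman alternatives.",
        5: "Two or more viable options with honest trade-offs.",
    },
    "stakes_and_metrics": {
        1: "No stakes or success metrics mentioned.",
        5: "Explicit stakes and a measurable success metric.",
    },
    "risks_and_tradeoffs": {
        1: "Risks are absent or hand-waved.",
        5: "Key risks named, with mitigations or accepted costs.",
    },
}


def validate_scores(scores: dict[str, int]) -> list[str]:
    """Return a list of problems with a proposed score sheet,
    so malformed reviews are caught before anyone acts on them."""
    problems = []
    for dim in RUBRIC:
        if dim not in scores:
            problems.append(f"missing dimension: {dim}")
        elif not 1 <= scores[dim] <= 5:
            problems.append(f"out-of-range score for {dim}: {scores[dim]}")
    return problems
```

Keeping the rubric as data rather than prose makes the later AI step mechanical: the same structure is pasted into the scoring prompt and used to validate what comes back.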
Only after that human calibration comes the AI step: feed the rubric and annotated examples to a large language model so it can score new drafts, quote the relevant passages, justify each score, and recommend edits that move specific dimensions upward. The payoff is a consistent “film review” signal—as unambiguous as whether the basketball goes through the hoop—so individuals can track patterns over a quarter and teams can reduce the guesswork that leads to vague reviews and endless iteration.
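The scoring step reduces to prompt assembly plus strict parsing. The sketch below is a hedged illustration: the prompt wording, the JSON output contract, and the function names are assumptions, and whichever chat-model API a team actually uses would sit between the two functions.

```python
import json


def build_review_prompt(rubric: dict, examples: list[str], draft: str) -> str:
    """Assemble a scoring prompt: rubric first, then annotated exemplars,
    then the draft, with a strict JSON output contract so scores come
    back machine-readable."""
    exemplar_text = "\n---\n".join(examples)
    contract = (
        '{"scores": {"<dimension>": {"score": 1-5, "quote": "...", '
        '"rationale": "...", "suggested_edit": "..."}}}'
    )
    return (
        "Score the draft below on each rubric dimension from 1 to 5.\n"
        "For every dimension, quote the passage that justifies the score,\n"
        "explain the score in one sentence, and suggest one edit that\n"
        "would raise it.\n\n"
        f"Rubric:\n{json.dumps(rubric, indent=2)}\n\n"
        f"Annotated examples of prior work:\n{exemplar_text}\n\n"
        f"Draft to review:\n{draft}\n\n"
        f"Respond with JSON only, in this shape:\n{contract}\n"
    )


def parse_review(raw: str) -> dict[str, int]:
    """Extract {dimension: score} from the model's JSON reply."""
    reply = json.loads(raw)
    return {dim: entry["score"] for dim, entry in reply["scores"].items()}
```

Forcing a JSON contract is what makes the signal consistent enough to track over a quarter: every review yields comparable per-dimension numbers rather than free-form commentary.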
From scoring to drills is the next leap. Each skill is treated as a repeatable pattern visible in artifacts, not as an abstract adjective. Five recurring skills are highlighted: judgment (framing decisions under uncertainty), orchestration (turning fuzzy goals into workflows for humans and AI), coordination (moving groups through ambiguity, including humans plus agents), taste (a quality bar and the ability to articulate it), and updating (changing priors as evidence shifts without being whiplashed by noise). With rubrics in place, weekly practice drills can be built—such as writing a one-page decision doc from a messy prompt and running it through the same rubric to identify what’s missing.
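Spotting which dimensions lag across a quarter of drills is simple arithmetic. A sketch, assuming each weekly drill yields a score sheet mapping dimension to a 1–5 score (the helper name is illustrative):

```python
from collections import defaultdict
from statistics import mean


def weakest_dimensions(weekly_scores: list[dict[str, int]], n: int = 2) -> list[str]:
    """Given per-week score sheets ({dimension: 1-5}), return the n
    dimensions with the lowest average score - the areas the next
    week's drill should target."""
    totals = defaultdict(list)
    for sheet in weekly_scores:
        for dim, score in sheet.items():
            totals[dim].append(score)
    averages = {dim: mean(vals) for dim, vals in totals.items()}
    return sorted(averages, key=averages.get)[:n]
```

Run against a few weeks of rubric output, this turns “what should I practice next?” into a lookup rather than a guess.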
The approach extends to teams and hiring. Team leads can co-create rubrics, run AI-assisted pre-reviews before human approval, and hold short weekly practice sessions on the areas the rubric flags. In interviews, candidates can be evaluated using the same game as the job: take-home or live writing/repair of realistic artifacts, rubric-based critique of a deliberately mediocre AI-generated example, and live constraint changes to test reasoning under pressure. The goal isn’t to police AI use or treat scores as promotion-grade precision; it’s to build stable patterns of thought that remain visible even when AI accelerates drafting. The broader claim is that AI finally makes coaching and deliberate practice for knowledge work practical—answering a 2019 challenge that knowledge workers never truly trained in the way athletes do.
Cornell Notes
The proposal reframes career growth around skills rather than job titles, arguing that knowledge work lacks the deliberate practice athletes and musicians rely on. Knowledge work suffers from fuzzy outcomes, delayed/noisy feedback, and low repetition, so people mostly learn through high-stakes “live games.” AI makes a training loop possible by enabling consistent rubric-based critique of written artifacts. The workflow starts with humans defining what “good” looks like using a 1–5 rubric and annotated examples, then uses an LLM to score new drafts, quote evidence, and suggest edits. Skills are treated as repeatable patterns visible in artifacts—especially judgment, orchestration, coordination, taste, and updating—then practiced through weekly drills and, optionally, used to align hiring and development.
Why does knowledge work fail to produce the kind of training loop athletes get?
What does “practicing” look like for knowledge work once AI is available?
Which five skills are presented as especially repeatable for training?
How does the rubric-based approach avoid turning AI into an unearned authority?
How can teams operationalize this without turning it into surveillance?
How can the same skill framework improve hiring?
Review Questions
- What three structural factors make knowledge-work feedback and repetition unlike sports training, and how does the proposed rubric loop address each one?
- How would you design a rubric for “good” in one artifact type you regularly produce (e.g., decision docs, architecture memos, call summaries), and what evidence would you require for a 1 vs. a 5 score?
- Why does the framework treat judgment, orchestration, coordination, taste, and updating as patterns in artifacts rather than abstract traits?
Key Points
1. Shift career development from job titles to trainable skills that show up in the artifacts people produce.
2. Treat knowledge-work practice as deliberate repetition with narrow situations, specific feedback, and measurable outcomes—made feasible by AI rubric scoring.
3. Define “good” with humans first: extract a concrete checklist from trusted colleagues, build a 1–5 rubric, and annotate real examples before involving an LLM.
4. Use AI as a consistent film-review tool that quotes evidence, justifies scores, and recommends edits that move specific rubric dimensions upward.
5. Convert rubric scoring into weekly drills by repeatedly producing the same artifact type from messy prompts and comparing drafts to stronger exemplars.
6. Align team development and hiring by using the same rubrics to evaluate and practice the work candidates will do on the job.
7. Start small and avoid over-precision: rubric scores should guide improvement, not become a surveillance system or the sole basis for promotions.