AI Interviews Are Crap: Here's How to Use AI to Get Hired (and Hire) in 2025
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
AI screening and AI-assisted applications are widespread, which increases generic responses and reduces human-signal quality during hiring.
Briefing
AI interviewing is producing a sameness problem—and a human-signal problem—so both candidates and hiring teams need to redesign how AI is used. With 83% of companies admitting they screen with AI and 65% of candidates saying they apply with AI, the process increasingly rewards generic, tool-generated answers. Even when candidates reach live interviews, AI-driven interview tools can derail conversations: interviewers miss the nuance of what a person is actually thinking, while candidates get cut off, hit confusing questions, and sometimes face recording or comprehension errors. The result is a worse candidate experience and, more importantly, weaker hiring decisions—because “efficiency” that removes human judgment doesn’t reliably produce the next champion.
For candidates, the prescription starts with tool strategy. Expensive AI interview services can encourage bland responses that employers detect as generic; the better approach is using AI as a partner to structure thinking rather than as a script generator. The guidance is blunt: the right tool won’t get you a job, and it won’t build a career. Instead, candidates should use AI to practice and refine—free options like Google Interview Warm-up work well here—then bring their own judgment to the room.
Next comes an artifact strategy: create proof-of-work packets that show how decisions were made under real constraints, not just polished outcomes. A “pretty website” that charts success isn’t enough; the most persuasive materials include trade-offs, constraints, iteration history, and even “ugly artifacts” like scratch notes, failed experiments, and setback negotiations. The same idea extends into interview storytelling via “STAR C”—Situation, Task, Action, Results, plus Constraints. Since AI answers often lack constraint detail, layering constraints into the STAR format helps candidates demonstrate judgment rather than recite a template.
Candidates are also urged to be transparent in 2025. Disclosing the AI stack used for research, verification, and drafting—along with where AI was wrong, where hallucinations were caught, and where the candidate disagreed—signals ownership. That transparency is framed as authenticity, not cheating.
For hiring managers, the focus shifts from detection to assessment. Penalizing candidates for using AI (especially when they’re open about it) undermines the culture needed for AI fluency. Instead of “AI-free” interview designs, teams should run AI assessment practices: provide AI tools during interviews, evaluate how candidates collaborate with them, and test agency—whether the candidate can direct AI toward useful outcomes. The strongest signal comes from messy, constraint-heavy problems that require negotiating conflicting requirements, because those are exactly where AI breaks down and human judgment matters.
A three-level AI fluency framework is proposed: AI literacy (tool choice, hallucination verification, awareness of limitations), AI integration (workflow design, error handling, metrics and evaluation philosophy), and AI leadership (strategic adoption, governance, team development, and accountability for outputs). For senior hires, the bar includes change management—how they’d move an organization toward strong integration fluency within a defined timeline.
Across both sides, the closing principles are consistent: enhancement beats replacement, transparency beats concealment, and prompts must be used with intent—supporting assessment and preparation rather than substituting for judgment. Hiring can improve only if AI is treated as a table-stakes participant in the process, with humans staying responsible for the final decisions.
Cornell Notes
AI interviewing is failing because heavy AI use pushes candidates toward generic answers and can even degrade live conversations, making human judgment harder to see. Candidates are advised to use AI as a thinking partner—then prove judgment through proof-of-work artifacts and STAR C storytelling that explicitly includes constraints. Transparency is framed as a strength: candidates should name their AI stack for research and verification and describe where they caught errors or disagreed with AI. Hiring managers should stop treating AI as something to detect and penalize; instead, they should assess AI fluency by giving candidates AI tools during interviews and evaluating how they direct, verify, and integrate AI into workflows. A three-level model—AI literacy, AI integration, and AI leadership—helps structure what “good” looks like at different seniorities.
Why does AI-heavy hiring risk producing “everyone sounds the same,” and what does that do to hiring quality?
What is an “artifact strategy,” and how should candidates build artifacts that demonstrate judgment rather than polish?
How does STAR C differ from standard STAR, and why does it matter in an AI-saturated interview environment?
What does “transparent AI use” look like for candidates, and why is it positioned as authenticity?
How should hiring managers assess candidates differently in 2025—detection or assessment?
What are the three levels of AI fluency, and what interview signals correspond to each?
Review Questions
- What specific elements should a proof-of-work artifact packet include to demonstrate judgment under constraints?
- How would you modify a STAR response to apply STAR C, and what signal does the added constraints detail provide?
- If you were designing an interview for AI fluency, what would you test to distinguish AI literacy from AI integration?
Key Points
1. AI screening and AI-assisted applications are widespread, which increases generic responses and reduces human-signal quality during hiring.
2. Candidates should use AI to structure thinking and practice, not to replace their own judgment; the right tool alone won’t secure a job or career.
3. Build proof-of-work artifact packets that show constraints, trade-offs, iteration, and even failed experiments—not just polished outcomes.
4. Use STAR C (Situation, Task, Action, Results, plus Constraints) to demonstrate judgment in a way AI templates often miss.
5. Candidates should be transparent about their AI stack and verification process, including where they caught hallucinations or disagreed with AI.
6. Hiring managers should shift from AI detection to AI assessment by giving candidates AI tools during interviews and evaluating agency, verification, and workflow integration.
7. AI fluency can be assessed in three tiers—literacy, integration, and leadership—so interview questions match the seniority of the role.