8 Ways to Use AI When Someone Is Trying to Screw You (Adversarial Prompting)
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Briefing
A widow’s medical bill was cut by $162,000 after an AI-assisted review found Medicare billing violations, an example of how large language models can help people fight back when institutions rely on confusion. The broader claim is that many high-stakes disputes (medical bills, debt collection, insurance denials, school services, and more) are structured around information asymmetry: hospitals, collectors, and other organizations count on families not knowing the relevant rules, deadlines, or regulatory language, and on expert help being priced out of reach. AI changes that cost equation by collapsing hours of investigative work into a few hours of user time, provided people use it with a disciplined method rather than treating it like casual “advice.”
The argument centers on “adversarial prompting,” meaning situations where the institution has incentives to overcharge, delay, deny, or otherwise pressure the individual. In those settings, the key isn’t just asking an LLM for an opinion. Instead, the method uses eight capabilities designed to (1) uncover the applicable rule book, (2) test claims against authoritative sources, and (3) convert findings into a defensible negotiation position.
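To make the distinction concrete, here is a minimal Python sketch (not from the video) contrasting a casual “advice” prompt with an adversarial investigation prompt; the wording and structure are illustrative assumptions rather than templates the transcript provides.

```python
# Illustrative sketch: the difference between asking for "advice" and
# running an adversarial investigation. Prompt wording is an assumption,
# not a template from the video.

advice_prompt = "Is this $184,000 hospital bill too high? What should I do?"

investigation_prompt = """You are helping me audit an itemized hospital bill
for rule violations, not giving general advice.

1. Identify the governing rule book for this bill (e.g., Medicare billing
   rules if Medicare was the payer) and name the current version.
2. For each line item below, state the billing code, what the rule book
   says about it, and whether the charge is categorically compliant or
   non-compliant ("they did X or they didn't").
3. List every claim you make that I must independently verify, with the
   exact citation to check.

Line items:
{line_items}
"""

# In practice this string would be sent to an LLM; here we only show the frame.
print(investigation_prompt.format(line_items="(paste itemized bill here)"))
```

The point of the second frame is that it demands rule identification, binary compliance calls, and a verification list up front, which the eight capabilities below build on.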
First, the LLM should parse intimidating technical frameworks, such as Medicare billing rules, debt-collection statutes like the FDCPA, or special education regulations, so the user can audit compliance without subject-matter expertise. Second, it should cross-reference multiple authority sources (for example, CPT codes against CMS bundling rules and Medicare fee schedules), because violations often hide in the gaps between documents; a sketch of that check follows below. Third, the user should “match institutional register” by drafting correspondence in a professional tone with regulatory citations and escalation logic, since institutions triage disputes based on perceived sophistication.
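As a minimal sketch of the cross-referencing idea, the toy check below flags “unbundling” (billing a bundled component separately from its parent code). The codes and the rule table are invented placeholders, not real CPT codes or CMS policy; a real audit would pull both from the current CMS documents.

```python
# Illustrative cross-reference check: billed codes vs. a bundling rule table.
# The codes and rules below are made-up placeholders, NOT real CPT codes or
# CMS policy; in a real audit these would come from the current CMS documents.

bundling_rules = {
    # parent_code: set of component codes that may not be billed separately
    "00100": {"00101", "00102"},
}

billed_codes = ["00100", "00101", "00299"]  # from the itemized bill

violations = []
for parent, components in bundling_rules.items():
    if parent in billed_codes:
        double_billed = components.intersection(billed_codes)
        if double_billed:
            violations.append(
                f"Code {parent} was billed together with its bundled "
                f"component(s) {sorted(double_billed)}: possible unbundling."
            )

for v in violations:
    print(v)  # -> flags 00101 as billed alongside its parent 00100
```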
Fourth, the LLM should identify the governing rule book(s) for the specific domain and locate current versions. Fifth, it should help find categorical, binary violations—clear “they did X or they didn’t” breaches—rather than vague “this seems too expensive” disagreements that institutions can dismiss. Sixth, it should calculate objective anchors from authoritative benchmarks (Medicare reimbursement rates, comparable sales, clinical guidelines) so the position doesn’t sound purely subjective.
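The benchmark-anchor idea reduces to simple arithmetic once an authoritative rate is in hand. The figures below are invented for illustration; real anchors would come from the published Medicare fee schedule, comparable sales, or clinical guidelines, and the 1.5x settlement multiplier is an assumption, not guidance from the video.

```python
# Illustrative anchor calculation: compare a billed charge to an authoritative
# benchmark. The dollar figures are invented for the example; real anchors
# would come from the Medicare fee schedule or similar published rates.

billed_charge = 18_400.00      # what the hospital billed for the procedure
medicare_rate = 2_300.00       # published reimbursement benchmark (assumed)

markup = billed_charge / medicare_rate
print(f"Billed at {markup:.1f}x the Medicare rate.")  # -> 8.0x

# A defensible opening position anchors to the benchmark, not to a feeling
# that the bill "seems too expensive":
proposed_settlement = medicare_rate * 1.5  # assumed negotiation multiplier
print(f"Proposed settlement anchored to benchmark: ${proposed_settlement:,.2f}")
```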
Seventh, AI should collapse investigative costs while keeping the user in control of verification. The method explicitly warns against relying on the model to absorb legal or medical liability; users should verify citations and code interpretations, especially when stakes are high. Eighth, it recommends using AI to generate verification prompts that catch the model’s own mistakes—flagging incorrect citations or misread codes before anything is sent.
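A verification prompt can be generated mechanically from the draft itself. The sketch below shows one plausible shape for such a prompt; its wording is an assumption, not a template from the transcript.

```python
# Illustrative second-pass verification prompt: have the model audit its own
# draft before anything is sent. The prompt wording is an assumption, not a
# template from the video.

def build_verification_prompt(draft_letter: str) -> str:
    return f"""Review the dispute letter below as a skeptical fact-checker.

For every regulatory citation, billing code, statute, or dollar figure:
- Quote it exactly.
- State whether you are certain it is real and correctly applied.
- If you are not certain, or you may have hallucinated it, say so and
  explain what the sender must verify against the primary source.

Do not soften the letter or rewrite it; only audit its factual claims.

--- LETTER ---
{draft_letter}
"""

print(build_verification_prompt("(paste AI-drafted dispute letter here)"))
```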
Underpinning the eight steps are three strategic ideas: investigation must come before negotiation; the user must control the conversation’s frame (rejecting institutional narratives like “charity assistance” when the issue is rule-based billing violations); and responses from the institution act like diagnostics—quick settlement can signal weakness in the institution’s position, while counteroffers suggest negotiation territory.
Ultimately, the message is that AI can erode institutions’ historical monopoly on complex information. The practical takeaway is that people can use LLMs to conduct institutional-grade investigations, but only by following a structured workflow that turns regulatory text into evidence, benchmarks, and verification-ready claims.
Cornell Notes
The central insight is that AI can help individuals fight back in “adversarial” disputes where institutions benefit from information asymmetry—especially when the stakes involve money, rights, or deadlines. The method relies on eight LLM capabilities: decode technical rule frameworks, cross-check multiple authority sources, match the institution’s professional register, identify the governing rule book, target categorical violations, build objective benchmark anchors, use AI to reduce investigative time while verifying outputs, and create verification prompts to catch hallucinations. The approach also emphasizes strategy: investigate before negotiating, control the frame of the dispute, and treat institutional responses as diagnostic signals about the strength of one’s position. This matters because it can collapse expert-level investigation costs from thousands of dollars to hours of user effort—without surrendering verification responsibility.
Why does the transcript treat “adversarial prompting” as different from asking for advice?
How does cross-referencing authority sources help uncover hidden violations?
What does “matching institutional register” mean in practice?
What’s the difference between “marginal disputes” and “categorical violations,” and why does it matter?
How does the transcript address hallucinations and verification responsibility?
What strategic principles guide the order and framing of actions?
Review Questions
- What are the eight LLM capabilities, and which ones directly support finding evidence versus drafting persuasive correspondence?
- Why does the transcript insist on categorical violations instead of subjective complaints, and how would you translate that into a prompt?
- How can “verification prompts” reduce the risk of hallucinated citations when stakes are high?
Key Points
1. AI can reduce the cost of investigating institutional wrongdoing by collapsing hours of document review into a few hours of user time—when used with a structured workflow.
2. Adversarial disputes require more than “advice”; they call for targeted tasks like rule-book identification, cross-referencing authorities, and evidence extraction.
3. Cross-checking multiple authoritative sources helps expose violations that hide in gaps between documents (e.g., billing codes versus bundling rules versus fee schedules).
4. Professional “register” in correspondence can change how institutions triage disputes, making documented, citation-backed claims harder to dismiss.
5. The strongest cases focus on categorical, binary rule violations and objective benchmark anchors rather than subjective “too expensive” arguments.
6. Users must stay responsible for verification—especially citations and code interpretations—while AI accelerates the investigative and drafting steps.
7. Investigation should come before negotiation, and institutional responses should be treated as diagnostic signals about the strength of each side’s position.