
How I Rehearsed a $200K Salary Battle with One AI Prompt (No Coding)

5 min read

Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

The prompt turns multi-party negotiations into a controlled digital-twin simulation by using a fixed setup sequence, one-question-at-a-time collection, and answer confirmation.

Briefing

A reusable “digital twin” prompt can turn messy, multi-party negotiations into a controlled simulation—without writing code—by forcing the AI to gather inputs in a fixed order, confirm them, then run a character-driven, round-based scenario with explicit rules. The practical payoff is the ability to rehearse high-stakes conversations (salary talks, product approvals, sales pitches, even job interviews) and then get a debrief and scorecard on what drove outcomes.

The core mechanism is a large, system-style prompt built around a deterministic workflow. It starts by setting a role and a four-part mission: extract the user’s details for realistic multi-stakeholder context, ask only one question at a time to avoid overload, confirm each answer to lock in the “world,” and then embed all confirmed data into a runnable simulation block. Once the information is collected, the prompt immediately transitions into simulation mode and begins with each “twin’s” opening statement—an intentional design choice to keep the model in character and prevent personality drift as the scenario unfolds.

To make the simulation repeatable, the prompt uses a scripted question sequence (including the negotiation situation, participants, win metrics, which twin the user plays, number of rounds, key numbers or attached deal data, constraints/policies, and output preferences like transcript style and word limits). After every user response, it uses a confirmation phrase and an echo/summary step so the AI’s internal world-building matches the user’s intent from the start. The runnable portion then includes an explicit “output contract” (purpose, mode, effort level, instructions, references, and output rules), plus “error rules” that stop the model from wandering—such as not asking further setup questions after the simulation begins and maintaining one-question-at-a-time behavior during the setup phase.
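The setup phase described above can be sketched as a small loop. This is a minimal illustration, not the creator's actual prompt: the question list, field names, and confirmation wording below are assumptions.

```python
# Sketch of the one-question-at-a-time setup loop with an echo/confirmation
# step after each answer. Questions and field names are illustrative.

SETUP_QUESTIONS = [
    ("situation", "What is the negotiation situation?"),
    ("participants", "Who are the participants (the digital twins)?"),
    ("win_metrics", "What counts as a win for you?"),
    ("user_twin", "Which twin will you play?"),
    ("rounds", "How many rounds should the simulation run?"),
    ("key_numbers", "What key numbers or deal data should I use?"),
    ("constraints", "What constraints or policies apply?"),
    ("output_prefs", "Any output preferences (transcript style, word limits)?"),
]

def run_setup(answer_fn, confirm_fn):
    """Ask exactly one question per turn; echo each answer back and only
    lock it into the 'world' state once the user confirms it."""
    world = {}
    for field, question in SETUP_QUESTIONS:
        while True:
            answer = answer_fn(question)
            # Echo/summary checkpoint: mismatches surface before simulation.
            if confirm_fn(f"Locking in {field}: {answer!r}. Confirm?"):
                world[field] = answer
                break
    return world
```

In a real chat session the two callbacks are the user's replies; the fixed question order plus the confirmation gate is what makes the resulting "world" deterministic rather than drifting with the conversation.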

In practice, the prompt is demonstrated with two negotiation arenas. First, a product-approval scenario for an “LLM to SQL” project pits fictional stakeholders against each other: CEO (business-focused, likes AI but doesn’t understand it), CTO (viability and schedule concerns), CFO (pipeline and close-rate worries), and a hostile-leaning engineering director. The simulation runs through rounds where tensions surface around schedule, enterprise prospects, legal/privacy issues, and pricing for usage. The debrief and scorecard highlight what the negotiation hinged on and where friction emerged.

Second, a compensation negotiation scenario uses a smaller group (Head of HR, CPO, and the candidate) over six rounds, with a concession ledger and clause-level bargaining (equity terms, bonus, and how finance might push back). The debrief emphasizes tactics like anchoring to market data and aligning incentives.

A key real-world insight comes from running the same scenario token-for-token across models. ChatGPT-4o produces more personality and sharper dialogue, but it also tends to be overly agreeable (conceding too quickly and even praising the candidate), which undermines the realism of the outcome. In the salary example, the 4o run yields a final cash figure about $6,000 higher than the o3 run, which the creator treats as evidence that o3 simulates tougher negotiation dynamics more realistically.

Overall, the prompt is positioned as a “super prompt” template: once the structure works, it can be reused beyond compensation—covering product proposals, sales negotiations, and job interviews—any time multiple parties must align under constraints.

Cornell Notes

A reusable digital-twin prompt can rehearse multi-party negotiations without coding by turning open-ended chat into a fixed, repeatable simulation. It first gathers user inputs through a scripted, one-question-at-a-time sequence, confirms each answer to “lock in” the scenario, then embeds all details into a runnable block with explicit rules and an output contract. The simulation begins with each stakeholder’s opening statement to keep characters consistent across rounds, and it ends with a scorecard and debrief that identify what drove the outcome. Running the same scenario across models shows meaningful differences: ChatGPT-4o can be more chatty and agreeable, while o3 tends to simulate deeper, tougher negotiation friction—affecting measurable results like final compensation.

How does the prompt make a negotiation simulation “repeatable” instead of drifting like normal chat?

It uses a deterministic workflow: a fixed question order for setup, one question at a time, and an echo/confirmation step after each user answer. After all inputs are confirmed, the prompt embeds them into a runnable simulation block with an explicit output contract (purpose, mode, effort level, instructions, references, and output rules). It also forces a predictable start by requiring each twin’s opening statement immediately after the delimiter, which helps prevent personality drift during multi-round play.

Why does asking one question at a time matter for digital twins?

The prompt treats setup as world-building that must be accurate from the start. By limiting to one question per turn and confirming the response, it reduces the chance the model “fills in” missing details incorrectly. The confirmation phrase and rephrased summary act like a checkpoint: if the world-building is wrong, the user sees it immediately before the simulation begins.

What does the “runnable prompt template” do once the user finishes answering setup questions?

It fills placeholders with the confirmed details (scenario, participants, win metrics, constraints, round count, and output preferences) and then switches into simulation mode. The template instructs the model to act as the negotiation arena host, print opening statements for each twin, and run the conversation for the specified number of rounds while staying in character and following the output rules.
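That placeholder-filling step can be sketched as simple template substitution. The contract fields and wording here are assumptions for illustration; the creator's actual template differs.

```python
# Illustrative sketch of embedding confirmed setup details into a runnable
# simulation block with an output contract and error rules.

SIMULATION_TEMPLATE = """\
PURPOSE: simulate the negotiation: {situation}
MODE: multi-round negotiation arena, {rounds} rounds
PARTICIPANTS: {participants}
USER PLAYS: {user_twin}
WIN METRICS: {win_metrics}
CONSTRAINTS: {constraints}
OUTPUT RULES: {output_prefs}
ERROR RULES: ask no further setup questions once the simulation begins;
stay in character for every twin across all rounds.
Begin with each twin's opening statement, then run the rounds in order.
"""

def build_runnable_block(world: dict) -> str:
    """Fill the template with confirmed details. str.format raises KeyError
    if any confirmed field is missing, which keeps the 'world' complete."""
    return SIMULATION_TEMPLATE.format(**world)
```

Treating missing fields as hard errors mirrors the prompt's intent: the simulation should only start from a fully confirmed world, never from silently guessed defaults.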

What kinds of negotiations were simulated, and what did the debrief add?

Two examples were used: (1) a product-approval negotiation for an “LLM to SQL” project with stakeholders like CEO, CTO, CFO, engineering director, and the user as a product director; (2) a compensation negotiation with Head of HR and CPO over six rounds. In both cases, the simulation ends with a scorecard and debrief identifying where tensions surfaced, what the outcome hinged on, and how tactics (like anchoring to market data or handling finance pushback) affected progress.

How did model choice change the negotiation outcome in the compensation example?

Running the same scenario token-for-token showed that ChatGPT-4o was more agreeable and chatty, including praise that would be atypical for a CPO, and it tended to reach agreement faster. The creator treats this as a realism problem: 4o gave up leverage more readily, producing a final cash outcome about $6,000 higher than o3, which simulated tougher bargaining and conceded less.

Review Questions

  1. What specific mechanisms in the prompt prevent character/personality drift during multi-round simulations?
  2. How do the confirmation and echo/summary steps function as a “world lock” before the simulation begins?
  3. In the compensation example, what evidence suggests ChatGPT-4o simulated negotiation toughness differently from o3?

Key Points

  1. The prompt turns multi-party negotiations into a controlled digital-twin simulation by using a fixed setup sequence, one-question-at-a-time collection, and answer confirmation.

  2. A runnable simulation block embeds all confirmed user details and enforces an explicit output contract (purpose, mode, effort level, instructions, references, and output rules).

  3. Starting the simulation with each twin’s opening statement immediately after the delimiter helps keep stakeholders in character across rounds.

  4. The prompt’s debrief and scorecard make it useful for learning—highlighting what tensions surfaced and what tactics influenced outcomes.

  5. Model choice materially affects realism: ChatGPT-4o can be overly agreeable, while o3 tends to simulate deeper friction, changing measurable results like final compensation.

  6. The same “super prompt” structure can be reused for product approvals, sales negotiations, and job interviews—any scenario requiring coordinated stakeholder decisions.

Highlights

The prompt’s “world lock” comes from confirming each user answer (via an echo/summary and confirmation phrase) before the simulation begins, reducing hallucinated context.
Immediate character entry—opening statements for every twin right after the delimiter—acts like memory management to prevent personality drift.
Token-for-token testing showed a measurable realism gap: ChatGPT-4o’s over-agreement produced a higher final cash outcome than o3 in the compensation scenario.
