How I Rehearsed a $200K Salary Battle with One AI Prompt (No Coding)
Based on AI News & Strategy Daily | Nate B Jones's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
The prompt turns multi-party negotiations into a controlled digital-twin simulation by using a fixed setup sequence, one-question-at-a-time collection, and answer confirmation.
Briefing
A reusable “digital twin” prompt can turn messy, multi-party negotiations into a controlled simulation—without writing code—by forcing the AI to gather inputs in a fixed order, confirm them, then run a character-driven, round-based scenario with explicit rules. The practical payoff is the ability to rehearse high-stakes conversations (salary talks, product approvals, sales pitches, even job interviews) and then get a debrief and scorecard on what drove outcomes.
The core mechanism is a large, system-style prompt built around a deterministic workflow. It starts by setting a role and a four-part mission: extract the user’s details for realistic multi-stakeholder context, ask only one question at a time to avoid overload, confirm each answer to lock in the “world,” and then embed all confirmed data into a runnable simulation block. Once the information is collected, the prompt immediately transitions into simulation mode and begins with each “twin’s” opening statement—an intentional design choice to keep the model in character and prevent personality drift as the scenario unfolds.
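A minimal sketch of what such a system-style header might look like; the wording and labels here are illustrative assumptions, not the original prompt's text:

```text
ROLE: You are a negotiation digital-twin simulator.
MISSION:
  1. Extract my details to build realistic multi-stakeholder context.
  2. Ask exactly ONE question at a time.
  3. Confirm each answer back to me before moving on.
  4. Embed all confirmed data into the runnable simulation block below.
After setup completes, switch immediately to SIMULATION MODE and open with
each twin's opening statement. Stay in character for all rounds; ask no
further setup questions after the simulation begins.
```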
To make the simulation repeatable, the prompt uses a scripted question sequence (including the negotiation situation, participants, win metrics, which twin the user plays, number of rounds, key numbers or attached deal data, constraints/policies, and output preferences like transcript style and word limits). After every user response, it uses a confirmation phrase and an echo/summary step so the AI’s internal world-building matches the user’s intent from the start. The runnable portion then includes an explicit “output contract” (purpose, mode, effort level, instructions, references, and output rules), plus “error rules” that stop the model from wandering—such as not asking further setup questions after the simulation begins and maintaining one-question-at-a-time behavior during the setup phase.
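The same deterministic setup flow can be pictured as a small state machine. The sketch below is a hypothetical illustration of the mechanics (question wording, field names, and contract text are invented for the example), showing one-question-at-a-time collection, the echo/confirmation step, and embedding confirmed answers into a runnable block:

```python
# Illustrative sketch of the setup workflow; not the original prompt's code.

QUESTIONS = [
    ("situation", "What negotiation are we simulating?"),
    ("participants", "Who are the participants (twins)?"),
    ("win_metrics", "What does a win look like for you?"),
    ("user_twin", "Which twin do you play?"),
    ("rounds", "How many rounds should the simulation run?"),
    ("key_numbers", "Any key numbers or attached deal data?"),
    ("constraints", "Any constraints or policies to respect?"),
    ("output_prefs", "Output preferences (transcript style, word limits)?"),
]

def collect_setup(answer_fn):
    """Ask one question at a time; echo each answer back to lock the world."""
    world = {}
    for key, question in QUESTIONS:
        answer = answer_fn(question)
        # Confirmation/echo step: restate the answer before moving on.
        print(f"Confirmed - {key}: {answer}")
        world[key] = answer
    return world

def build_simulation_block(world):
    """Embed all confirmed data into a runnable block with an output contract."""
    lines = ["=== SIMULATION BLOCK ==="]
    lines += [f"{k}: {v}" for k, v in world.items()]
    lines += [
        "OUTPUT CONTRACT: round-based transcript, then debrief and scorecard",
        "ERROR RULE: ask no further setup questions after this delimiter",
    ]
    return "\n".join(lines)
```

Because setup is a fixed sequence rather than open-ended chat, two runs with the same answers produce the same world, which is what makes the simulation repeatable.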
In practice, the prompt is demonstrated with two negotiation arenas. First, a product-approval scenario for an “LLM to SQL” project pits fictional stakeholders against each other: CEO (business-focused, likes AI but doesn’t understand it), CTO (viability and schedule concerns), CFO (pipeline and close-rate worries), and a hostile-leaning engineering director. The simulation runs through rounds where tensions surface around schedule, enterprise prospects, legal/privacy issues, and usage-based pricing. The debrief and scorecard highlight what the negotiation hinged on and where friction emerged.
Second, a compensation negotiation scenario uses a smaller group (Head of HR, CPO, and the candidate) over six rounds, with a concession ledger and clause-level bargaining (equity terms, bonus, and how finance might push back). The debrief emphasizes tactics like anchoring to market data and aligning incentives.
A key real-world insight comes from running the same scenario token-for-token across models. ChatGPT-4o produces more personality and sharper dialogue, but it also tends to be overly agreeable, conceding too quickly and even praising the candidate, which measurably softens the negotiation. In the salary example, the 4o run yields a final cash figure about $6,000 higher than the o3 run, which the creator treats as evidence that o3 simulates tougher negotiation dynamics more realistically.
Overall, the prompt is positioned as a “super prompt” template: once the structure works, it can be reused beyond compensation—covering product proposals, sales negotiations, and job interviews—any time multiple parties must align under constraints.
Cornell Notes
A reusable digital-twin prompt can rehearse multi-party negotiations without coding by turning open-ended chat into a fixed, repeatable simulation. It first gathers user inputs through a scripted, one-question-at-a-time sequence, confirms each answer to “lock in” the scenario, then embeds all details into a runnable block with explicit rules and an output contract. The simulation begins with each stakeholder’s opening statement to keep characters consistent across rounds, and it ends with a scorecard and debrief that identify what drove the outcome. Running the same scenario across models shows meaningful differences: ChatGPT-4o can be more chatty and agreeable, while o3 tends to simulate deeper, tougher negotiation friction—affecting measurable results like final compensation.
- How does the prompt make a negotiation simulation “repeatable” instead of drifting like normal chat?
- Why does asking one question at a time matter for digital twins?
- What does the “runnable prompt template” do once the user finishes answering setup questions?
- What kinds of negotiations were simulated, and what did the debrief add?
- How did model choice change the negotiation outcome in the compensation example?
Review Questions
- What specific mechanisms in the prompt prevent character/personality drift during multi-round simulations?
- How do the confirmation and echo/summary steps function as a “world lock” before the simulation begins?
- In the compensation example, what evidence suggests ChatGPT-4o simulated negotiation toughness differently from o3?
Key Points
1. The prompt turns multi-party negotiations into a controlled digital-twin simulation by using a fixed setup sequence, one-question-at-a-time collection, and answer confirmation.
2. A runnable simulation block embeds all confirmed user details and enforces an explicit output contract (purpose, mode, effort level, instructions, references, and output rules).
3. Starting the simulation with each twin’s opening statement immediately after the delimiter helps keep stakeholders in character across rounds.
4. The prompt’s debrief and scorecard make it useful for learning, highlighting which tensions surfaced and which tactics influenced outcomes.
5. Model choice materially affects realism: ChatGPT-4o can be overly agreeable, while o3 tends to simulate deeper friction, changing measurable results like final compensation.
6. The same “super prompt” structure can be reused for product approvals, sales negotiations, and job interviews, covering any scenario requiring coordinated stakeholder decisions.