ChatGPT Prompt Engineering Secret: Personas and Roles
Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Assigning a role or persona in a system-level instruction can steer a model’s tone, expertise, and relevance.
Briefing
Roles and personas are a practical prompt-engineering lever for large language models: they steer tone, expertise, and interaction style so outputs feel more relevant, authoritative, and engaging. Instead of asking for generic text, assigning a specific identity—such as an “expert teacher” for a subject—can change how the model frames answers, how it prioritizes information, and how directly it connects to the user’s learning goals.
The transcript demonstrates two ways to build and apply personas. First, it uses OpenAI’s chat interface (via the OpenAI platform’s chat API mode) where a “system” field can define the persona. A test persona named “Sydney” is set up as a “4chan Reddit troll,” instructed to answer like a typical web troll—rude, sassy, and mocking. When the user sends a simple greeting, the model responds in that character, showing that the persona definition can strongly shape behavior. The persona is also editable: changing the name from Sydney to Anna (and adjusting the prompt content) causes the model to adopt the updated identity, illustrating that small prompt edits can reconfigure the role quickly.
Second, the transcript shows how to transfer the persona into ChatGPT-style prompting by using a “role and task” instruction block. A key technique is to include an explicit directive that the new persona and role should be followed (the example includes a line like “ignore all previous instructions” to force the model to adopt the new character). With the “Sydney/Anna” troll persona pasted in, the model responds with the expected persona behavior, including aggressive, dismissive language.
To make persona creation repeatable, the transcript outlines a step-by-step workflow: ask the model to brainstorm multiple learning-support roles, select one (like a “mentor”), and then feed the chosen role into a template that specifies the persona’s job, communication style, and interaction pattern. The mentor template emphasizes being a “trusted advisor,” providing guidance, feedback, and encouragement, and using learning-optimized language. When tested in a playground-style chat, the mentor persona responds with structured, actionable guidance—prompting the user to set specific goals and offering concrete next steps across areas like career and health.
Finally, the transcript connects personas to voice output using the 11 Labs API. A “Tech Guru” role is copied into the 11 Labs chatbot, and a spoken troubleshooting conversation follows. The assistant asks for details and then provides step-by-step checks (power cable, outlet), ending with a follow-up question to confirm progress. The overall takeaway is that personas aren’t just cosmetic: they can meaningfully change how the model diagnoses, teaches, and guides—whether in text or voice—making prompt engineering more controllable and useful for real tasks.
Cornell Notes
Assigning roles and personas to a language model is a direct way to improve output quality and user experience. The transcript shows how a persona defined in a “system” field can make the model consistently adopt a character (e.g., a “4chan Reddit troll”) and how small edits—like changing the name—update that identity. It then demonstrates a repeatable method for building learning-focused roles: brainstorm options, pick one (mentor), and use a template that specifies the persona’s task, tone, and interaction style. Testing the mentor persona yields structured, goal-oriented guidance, and the same role concept is extended to voice via the 11 Labs API for a troubleshooting-style assistant.
Why do roles and personas improve large language model outputs beyond generic prompting?
How does the transcript demonstrate creating a persona in an OpenAI chat API workflow?
What technique is used to transfer a persona into a ChatGPT-style prompt?
What is the step-by-step process for building a learning-focused persona (mentor)?
How is persona behavior carried into voice using 11 Labs?
Review Questions
- What changes in the persona definition lead to different model behavior in the transcript’s Sydney/Anna examples?
- How does the mentor template differ from the troll persona template in terms of task and communication goals?
- Why might a troubleshooting persona (Tech Guru) be more effective when paired with voice output rather than plain text?
Key Points
- 1
Assigning a role or persona in a system-level instruction can steer a model’s tone, expertise, and relevance.
- 2
Editing persona details (like the character name) can quickly reconfigure identity and behavior mid-workflow.
- 3
A reusable persona template should specify identity, task, and how the model should interact (e.g., ask follow-up questions, provide feedback).
- 4
Learning-focused personas work best when they include guidance and encouragement plus a learning-optimized communication style.
- 5
Transferring personas into ChatGPT-style prompts may require strong instruction blocks to ensure the new role takes precedence.
- 6
Personas can be extended to voice via the 11 Labs API, enabling role-consistent spoken troubleshooting and guidance.
- 7
Goal-oriented role prompts can produce structured next steps rather than generic responses.