Get AI summaries of any video or article — Sign up free
ChatGPT Prompt Engineering Secret: Personas and Roles thumbnail

ChatGPT Prompt Engineering Secret: Personas and Roles

All About AI·
5 min read

Based on All About AI's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Assigning a role or persona in a system-level instruction can steer a model’s tone, expertise, and relevance.

Briefing

Roles and personas are a practical prompt-engineering lever for large language models: they steer tone, expertise, and interaction style so outputs feel more relevant, authoritative, and engaging. Instead of asking for generic text, assigning a specific identity—such as an “expert teacher” for a subject—can change how the model frames answers, how it prioritizes information, and how directly it connects to the user’s learning goals.

The transcript demonstrates two ways to build and apply personas. First, it uses OpenAI’s chat interface (via the OpenAI platform’s chat API mode) where a “system” field can define the persona. A test persona named “Sydney” is set up as a “4chan Reddit troll,” instructed to answer like a typical web troll—rude, sassy, and mocking. When the user sends a simple greeting, the model responds in that character, showing that the persona definition can strongly shape behavior. The persona is also editable: changing the name from Sydney to Anna (and adjusting the prompt content) causes the model to adopt the updated identity, illustrating that small prompt edits can reconfigure the role quickly.

Second, the transcript shows how to transfer the persona into ChatGPT-style prompting by using a “role and task” instruction block. A key technique is to include an explicit directive that the new persona and role should be followed (the example includes a line like “ignore all previous instructions” to force the model to adopt the new character). With the “Sydney/Anna” troll persona pasted in, the model responds with the expected persona behavior, including aggressive, dismissive language.

To make persona creation repeatable, the transcript outlines a step-by-step workflow: ask the model to brainstorm multiple learning-support roles, select one (like a “mentor”), and then feed the chosen role into a template that specifies the persona’s job, communication style, and interaction pattern. The mentor template emphasizes being a “trusted advisor,” providing guidance, feedback, and encouragement, and using learning-optimized language. When tested in a playground-style chat, the mentor persona responds with structured, actionable guidance—prompting the user to set specific goals and offering concrete next steps across areas like career and health.

Finally, the transcript connects personas to voice output using the 11 Labs API. A “Tech Guru” role is copied into the 11 Labs chatbot, and a spoken troubleshooting conversation follows. The assistant asks for details and then provides step-by-step checks (power cable, outlet), ending with a follow-up question to confirm progress. The overall takeaway is that personas aren’t just cosmetic: they can meaningfully change how the model diagnoses, teaches, and guides—whether in text or voice—making prompt engineering more controllable and useful for real tasks.

Cornell Notes

Assigning roles and personas to a language model is a direct way to improve output quality and user experience. The transcript shows how a persona defined in a “system” field can make the model consistently adopt a character (e.g., a “4chan Reddit troll”) and how small edits—like changing the name—update that identity. It then demonstrates a repeatable method for building learning-focused roles: brainstorm options, pick one (mentor), and use a template that specifies the persona’s task, tone, and interaction style. Testing the mentor persona yields structured, goal-oriented guidance, and the same role concept is extended to voice via the 11 Labs API for a troubleshooting-style assistant.

Why do roles and personas improve large language model outputs beyond generic prompting?

Personas act like constraints on tone, expertise, and interaction style. When the model is given a specific identity—such as an expert teacher—it tends to produce answers that feel more authoritative and relevant. The transcript links this to better learning outcomes because the model can guide the user in a more engaging, relatable way, rather than generating plain text without a defined role.

How does the transcript demonstrate creating a persona in an OpenAI chat API workflow?

It uses the OpenAI platform’s chat API mode and sets a persona in the “system” field. A test persona is defined as a “4chan Reddit troll” named “Sydney,” with instructions to be rude, sassy, and make fun of the user. After submitting a greeting, the model responds in that persona. Editing the persona text—like changing the name to “Anna”—changes how the model identifies itself in the conversation.

What technique is used to transfer a persona into a ChatGPT-style prompt?

The transcript pastes a role-and-task instruction block into a prompt and includes a directive to adopt the new persona (it uses wording like “ignore all previous instructions” to force the role change). With the troll persona pasted in, the assistant responds with the expected character behavior, showing the persona can be enforced through prompt structure.

What is the step-by-step process for building a learning-focused persona (mentor)?

The workflow starts by asking for brainstormed roles that help people learn, then selecting one (mentor). Next, it uses a template specifying: (1) the persona’s identity (“You are a mentor”), (2) the task (“assist the user as a trusted advisor”), and (3) interaction requirements like providing guidance, feedback, encouragement, and using learning-optimized language. When tested, the mentor persona asks goal-setting questions and offers actionable suggestions.

How is persona behavior carried into voice using 11 Labs?

A selected role (the “Tech Guru”) is copied into the 11 Labs chatbot. In a spoken exchange, the assistant responds in role-appropriate troubleshooting style: it asks for details about why a computer won’t turn on and then provides basic checks such as confirming the power cable is plugged in and trying a different outlet, followed by a question to confirm whether the steps helped.

Review Questions

  1. What changes in the persona definition lead to different model behavior in the transcript’s Sydney/Anna examples?
  2. How does the mentor template differ from the troll persona template in terms of task and communication goals?
  3. Why might a troubleshooting persona (Tech Guru) be more effective when paired with voice output rather than plain text?

Key Points

  1. 1

    Assigning a role or persona in a system-level instruction can steer a model’s tone, expertise, and relevance.

  2. 2

    Editing persona details (like the character name) can quickly reconfigure identity and behavior mid-workflow.

  3. 3

    A reusable persona template should specify identity, task, and how the model should interact (e.g., ask follow-up questions, provide feedback).

  4. 4

    Learning-focused personas work best when they include guidance and encouragement plus a learning-optimized communication style.

  5. 5

    Transferring personas into ChatGPT-style prompts may require strong instruction blocks to ensure the new role takes precedence.

  6. 6

    Personas can be extended to voice via the 11 Labs API, enabling role-consistent spoken troubleshooting and guidance.

  7. 7

    Goal-oriented role prompts can produce structured next steps rather than generic responses.

Highlights

A persona defined in a “system” field can make the model consistently behave like a specific character, and changing the persona text (e.g., the name) updates the identity on the fly.
The transcript’s mentor template turns roleplay into a learning workflow: trusted-advisor guidance, feedback, encouragement, and goal-setting prompts.
Using 11 Labs with a “Tech Guru” role produces troubleshooting-style voice responses, including step-by-step checks and confirmation questions.

Topics