
9 of the Best Bing (GPT 4) Prompts

AI Explained · 5 min read

Based on AI Explained's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use role and pacing constraints (e.g., “only reply as…,” “ask one question at a time,” and “wait for my answers”) to turn chat output into an interactive practice session.

Briefing

Bing Chat can be turned into a high-performance “persona” and research assistant by using prompts that enforce role, structure, and examples, often producing results that feel tailored, more accurate, and more usable than generic queries. The most practical payoff comes from prompts that make the model behave like a tool: it reads a job description, asks interview questions one by one, and waits for answers, turning practice into an interactive session rather than a one-shot output.

A standout example starts with a job-specific interview-coach prompt. Even when the user doesn’t explicitly name the job, the phrase “the position detailed on this page” cues Bing to read the linked posting and then respond only as the interviewer. The prompt also constrains behavior: ask questions one at a time, don’t dump the whole conversation, and avoid explanations. The result is a realistic interview flow that adapts to the role and the posting’s details, including domain-specific questions (such as the benefits and challenges of implementing Robotics and AI in finance or supply chain processes). The transcript frames this as a substitute for paid coaching, especially when paired with follow-ups like grading answers or using a pasted CV to generate targeted fit arguments.
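
As a concrete illustration, here is a minimal Python sketch of that prompt pattern. The wording is reconstructed from this summary, not quoted verbatim from the video:

    # Hedged reconstruction of the interview-coach prompt described above;
    # the exact wording in the video may differ, but these are the
    # constraints the transcript highlights.
    interview_prompt = (
        "I want you to act as an interviewer for the position detailed on "
        "this page. Only reply as the interviewer. Ask me the questions one "
        "at a time, and wait for my answers before continuing. Do not write "
        "all the conversation at once, and do not write explanations."
    )
    print(interview_prompt)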

Naming and creativity prompts also improve when they demand evidence and a style guide rather than letting the model freestyle. When asked for YouTube channel names at the intersection of AI and politics, the initial suggestions can come out bland. But adding a research-and-iteration step—“research how best to name things” and then apply an “Igor naming guide” emphasizing evocative, non-generic meaning—yields sharper options like “Spark Ethos,” “Paradox Influence,” and “Polaris,” with the model justifying why each name fits the intended vibe.
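
A sketch of that two-step naming prompt, with wording assumed from the summary rather than quoted from the video:

    # Hedged sketch of the research-then-name pattern: first ask for naming
    # research, then apply the Igor naming guide's criteria. Phrasing is
    # illustrative.
    naming_prompt = (
        "First, research how best to name things. Then, applying the Igor "
        "naming guide (names should be evocative, descriptive, and "
        "emotional, but not literal or generic), suggest names for a "
        "YouTube channel at the intersection of AI and politics, and "
        "explain why each name fits."
    )
    print(naming_prompt)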

Role-play prompts can be surprisingly effective too, but the transcript notes a workaround: if Bing refuses a role, removing the role-play framing can still get the same underlying task done. A time-travel guide request produces immersive, in-character recommendations (including real historical stops like the first Viking raid on Lindisfarne in 793 and meeting King Alfred the Great). For education, the same “adventure” approach works with etymology: tracing word origins back through earlier languages, then continuing with new word pairs using a one-shot setup.

Accuracy and reliability improve through prompting techniques like few-shot examples. A classic logic error (the “when I was six, my sister was half my age” puzzle) gets corrected after the user provides a single worked example that uses “let’s think step by step.” Formatting constraints further reduce output variability: specifying an output template (publication date, abstract/conclusion summary, main author, and citations) turns Bing into a structured research assistant, enabling faster comparison across peer-reviewed studies (with the caveat that users must still fact-check).

The transcript also highlights prompt “style control,” including rewriting in the style of Carl Sagan to upgrade bland prose, then feeding the output into Midjourney for image generation. Overall, the core insight is that prompt engineering isn’t just about asking better questions—it’s about forcing the model into a repeatable workflow: role enforcement, structured outputs, evidence-based iteration, and example-driven reasoning.

Cornell Notes

The transcript argues that Bing Chat becomes dramatically more useful when prompts constrain behavior and add structure. A job-interview coach prompt can read a posting, ask questions one at a time, and wait for answers, turning practice into an interactive session. Creativity improves when prompts require research and a naming guide, producing less generic YouTube channel names. Accuracy can jump with few-shot prompting: providing one correct example (plus “let’s think step by step”) helps Bing correct a logic error it previously got wrong. Finally, specifying output formats and even writing styles (e.g., Carl Sagan) makes results more actionable and easier to reuse, including for image generation in Midjourney.

How does the interview-coach prompt force Bing to behave like a real interviewer rather than a generic Q&A bot?

It combines role and output constraints: “only reply as the interviewer,” “do not write all the conversation at once,” “ask me the questions and wait for my answers,” and “do not write explanations.” It also relies on context by referencing “the position detailed on this page,” so Bing reads the job posting and tailors questions to it, including domain-specific topics like Robotics and AI in finance or supply chain processes.

Why do YouTube channel naming prompts improve when they include research and a naming guide?

Free-form naming can be bland. The transcript shows an upgraded prompt that asks Bing to research best practices for naming and then apply the “Igor naming guide,” which emphasizes evocative, descriptive, and emotional (but not literal or generic) names. That guidance leads to sharper options such as “Spark Ethos,” “Paradox Influence,” and “Polaris,” with explanations tied to meaning like guidance and leadership.

What is few-shot prompting, and how does it fix a logic problem Bing gets wrong?

Few-shot prompting means providing one or more worked examples of the desired reasoning pattern before the real question (a single example is strictly “one-shot,” but the idea is the same). The transcript uses a sister-age puzzle that Bing initially answers incorrectly. After the user supplies a correct example (using “let’s think step by step” and ending with “does this make sense”), Bing is asked the original question again without being given the correct final answer. The model then “thinks it through” and produces the correct result, illustrating how one example can steer reasoning.
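
A minimal sketch of that few-shot setup; the numbers are placeholders, since this summary doesn’t reproduce the transcript’s exact figures:

    import textwrap

    # Hedged sketch of the few-shot setup: one worked example that uses
    # "let's think step by step" and ends with "does this make sense",
    # followed by the target question with no final answer supplied.
    # The ages are placeholders, not the transcript's exact figures.
    few_shot_prompt = textwrap.dedent("""\
        Q: When I was 6, my sister was half my age. Now I am 70.
           How old is my sister?
        A: Let's think step by step. At age 6, half my age was 3, so my
           sister is 3 years younger than me. Now I am 70, so she is
           70 - 3 = 67. Does this make sense?

        Q: When I was 6, my sister was half my age. Now I am 40.
           How old is my sister?
        A: Let's think step by step.
        """)
    print(few_shot_prompt)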

How can users get around Bing’s reluctance to role-play while still getting immersive outputs?

The transcript describes a tactic: if Bing denies a role-play request (like acting as an entertaining etymologist), clearing the role-play framing and asking for the underlying task directly can work. In the etymology example, Bing still performs the same “trace word origins back in time” challenge once the request is framed as a direct task rather than explicit role-play.
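
A sketch of the reframed request; the example word is a placeholder, since the summary doesn’t record which word the video used:

    # Hedged sketch: the same etymology challenge phrased as a direct task
    # rather than explicit role-play, per the workaround described above.
    # "serendipity" is a placeholder word, not taken from the video.
    etymology_prompt = (
        "Trace the word 'serendipity' back through the earlier languages "
        "and forms it came from, one step at a time. Then give me a new "
        "word and we will continue the challenge."
    )
    print(etymology_prompt)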

What prompting approach turns Bing into a structured research assistant?

Specifying an exact output format. The transcript gives a template requesting: (1) date of publication, (2) summary of the abstract and conclusion, (3) main author, and (4) citations on peer-reviewed papers about caffeine consumption and cognitive performance. This structured instruction reduces randomness and makes it easier to compare studies, though users are still responsible for fact-checking.
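
Reconstructed as a prompt sketch, with the four fields taken from the summary above and the surrounding wording assumed:

    # Hedged sketch of the output-template prompt; the field list follows
    # the transcript summary, the exact phrasing may differ.
    research_prompt = (
        "Find peer-reviewed papers on caffeine consumption and cognitive "
        "performance. For each paper, give me:\n"
        "1. The date of publication\n"
        "2. A summary of the abstract and the conclusion\n"
        "3. The main author\n"
        "4. Citations"
    )
    print(research_prompt)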

How does style prompting connect to image generation workflows?

The transcript shows rewriting text in the style of Carl Sagan to upgrade bland prose, then using that output as input for Midjourney. It demonstrates that style control can be used not only for better writing but also as a bridge into visual generation, producing prompts like a man crossing a road in New York at night, pixel art, retro video game animation, or cartoon superhero disguises.
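
A sketch of that two-step workflow; the sample sentence is a placeholder drawn from the image descriptions above:

    # Hedged sketch: step 1 rewrites bland prose in a named author's style;
    # step 2 reuses the rewritten text as a Midjourney image prompt (pasted
    # into the Midjourney interface by hand, not sent via an API here).
    style_prompt = (
        "Rewrite the following in the style of Carl Sagan: "
        "'A man crosses a road in New York at night.'"
    )
    print(style_prompt)
    # The rewritten output can then be given to Midjourney, optionally with
    # tags like "pixel art" or "retro video game animation".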

Review Questions

  1. Which specific constraints in the interview-coach prompt prevent Bing from dumping a full conversation at once?
  2. Give one example of how few-shot prompting changes Bing’s behavior compared with a single direct question.
  3. What output-format instruction would you use if you wanted Bing to return research summaries in a consistent, citation-ready layout?

Key Points

  1. Use role and pacing constraints (e.g., “only reply as…,” “ask one question at a time,” and “wait for my answers”) to turn chat output into an interactive practice session.
  2. Make context explicit by referencing “the position detailed on this page” so Bing can tailor responses to a specific job description or document.
  3. Reduce bland creativity by requiring research plus a style guide (e.g., an “Igor naming guide”) rather than asking for names directly.
  4. Improve reasoning accuracy with few-shot prompting by providing one correct example and a step-by-step pattern.
  5. If Bing refuses role-play, reframe the request as the underlying task to keep the same outcome without the rejected framing.
  6. Specify output templates (date, abstract/conclusion summary, author, citations) to get structured results suitable for research workflows.
  7. Control writing and downstream generation by prompting for a named author’s style (e.g., Carl Sagan) and then reusing the output for tools like Midjourney.

Highlights

A job-interview prompt can read a posting and run a realistic interview loop—question by question, with Bing waiting for answers.
Naming gets sharper when prompts require research and apply a concrete naming guide, producing evocative, non-generic options like “Polaris.”
Few-shot prompting can correct a logic error: one correct “let’s think step by step” example can steer Bing to the right answer later.
Structured formatting prompts turn Bing into a research organizer, returning publication dates, summaries, authors, and citations in a consistent layout.
Style prompting (e.g., Carl Sagan) can upgrade prose and feed directly into Midjourney image-generation prompts.

Topics

  • Prompt Engineering
  • Interview Practice
  • Naming Strategies
  • Few-Shot Reasoning
  • Structured Research Outputs
