9 of the Best Bing (GPT-4) Prompts
Based on AI Explained's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Bing chat can be turned into a high-performance “persona” and research assistant by using prompts that enforce role, structure, and examples—often producing results that feel tailored, more accurate, and more usable than generic queries. The most practical payoff comes from prompts that make the model behave like a tool: it reads a job description, asks interview questions one by one, and waits for answers—turning practice into an interactive session rather than a one-shot output.
A standout example starts with a job-specific interview coach prompt. Even when the user doesn’t explicitly name the job, the instruction “detailed on this page” cues Bing to interpret the linked posting, then respond only as the interviewer. The prompt also constrains behavior: ask questions one at a time, don’t dump the whole conversation, and avoid explanations. The result is a realistic interview flow that adapts to the role and the posting’s details, including domain-specific questions (such as benefits and challenges of implementing Robotics and AI in finance or supply chain processes). The transcript frames this as a substitute for paid coaching—especially when paired with follow-ups like grading answers or using a pasted CV to generate targeted fit arguments.
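A prompt along these lines combines the role, context, and pacing constraints described above. The exact wording here is a reconstruction for illustration, not a quote from the video:

```text
I want you to act as an interviewer for the position detailed on this page.
Only reply as the interviewer. Ask me the interview questions one at a time,
as an interviewer would, and wait for my answers before asking the next one.
Do not write the whole conversation at once, and do not add explanations.
```

Follow-ups such as asking Bing to grade the last answer, or pasting a CV and asking why the candidate fits the role, extend the same session.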
Naming and creativity prompts also improve when they demand evidence and a style guide rather than letting the model freestyle. When asked for YouTube channel names at the intersection of AI and politics, the initial suggestions can come out bland. But adding a research-and-iteration step—“research how best to name things” and then apply an “Igor naming guide” emphasizing evocative, non-generic meaning—yields sharper options like “Spark Ethos,” “Paradox Influence,” and “Polaris,” with the model justifying why each name fits the intended vibe.
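The two-step naming flow might be phrased like this (hypothetical wording, sketching the research-then-iterate structure the transcript describes):

```text
Step 1: Research how best to name things, then suggest YouTube channel
names at the intersection of AI and politics.

Step 2: Now apply the Igor naming guide to your suggestions: avoid generic
or merely descriptive names, and favor evocative names whose meaning fits
the channel. For each name, explain why it fits.
```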
Role-play prompts can be surprisingly effective too, but the transcript notes a workaround: if Bing refuses a role, removing the role-play framing can still get the same underlying task done. A time-travel guide request produces immersive, in-character recommendations (including real historical stops like the first Viking raid on Lindisfarne in 793 and meeting King Alfred the Great). For education, the same “adventure” approach works with etymology: tracing word origins back through earlier languages, then continuing with new word pairs using a one-shot setup.
Accuracy and reliability improve through prompting techniques like few-shot examples. A classic logic puzzle (the user was six when their sister was half their age; how old is the sister now?) gets answered correctly after the user supplies a single worked example and adds "let's think step by step." Formatting constraints further reduce chaos: specifying an output template (publication date, abstract/conclusion summary, main author, and citations) turns Bing into a structured research assistant, enabling faster comparison across peer-reviewed studies (with the caveat that users must still fact-check).
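The few-shot fix amounts to prompt construction: prepend one worked example in the desired format, then pose the real question with the same step-by-step cue. A minimal Python sketch, where the exemplar's numbers are illustrative rather than taken from the video:

```python
# One worked example showing the reasoning pattern we want the model to copy.
EXEMPLAR = (
    "Q: When I was 4, my brother was half my age. I am now 30. "
    "How old is my brother?\n"
    "A: Let's think step by step. At age 4, half my age was 2, so my "
    "brother is 2 years younger than me. 30 - 2 = 28. The answer is 28.\n"
)

# The actual question, ending with the same cue so the model continues
# in the exemplar's step-by-step style.
QUESTION = (
    "Q: When I was 6, my sister was half my age. I am now 70. "
    "How old is my sister?\n"
    "A: Let's think step by step."
)

few_shot_prompt = EXEMPLAR + "\n" + QUESTION
print(few_shot_prompt)
```

The same pattern generalizes: any task where the model's first attempt has the wrong shape or logic can be steered with one correct exemplar placed before the real question.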
The transcript also highlights prompt “style control,” including rewriting in the style of Carl Sagan to upgrade bland prose, then feeding the output into Midjourney for image generation. Overall, the core insight is that prompt engineering isn’t just about asking better questions—it’s about forcing the model into a repeatable workflow: role enforcement, structured outputs, evidence-based iteration, and example-driven reasoning.
Cornell Notes
The transcript argues that Bing chat becomes dramatically more useful when prompts constrain behavior and add structure. A job-interview coach prompt can read a posting, ask questions one at a time, and wait for answers—turning practice into an interactive session. Creativity improves when prompts require research and a naming guide, producing less generic YouTube channel names. Accuracy can jump with few-shot prompting: providing one correct example (plus “let’s think step by step”) helps Bing correct a logic error it previously got wrong. Finally, specifying output formats and even writing styles (e.g., Carl Sagan) makes results more actionable and easier to reuse, including for image generation in Midjourney.
- How does the interview-coach prompt force Bing to behave like a real interviewer rather than a generic Q&A bot?
- Why do YouTube channel naming prompts improve when they include research and a naming guide?
- What is few-shot prompting, and how does it fix a logic problem Bing gets wrong?
- How can users get around Bing’s reluctance to role-play while still getting immersive outputs?
- What prompting approach turns Bing into a structured research assistant?
- How does style prompting connect to image generation workflows?
Review Questions
- Which specific constraints in the interview-coach prompt prevent Bing from dumping a full conversation at once?
- Give one example of how few-shot prompting changes Bing’s behavior compared with a single direct question.
- What output-format instruction would you use if you wanted Bing to return research summaries in a consistent, citation-ready layout?
Key Points
1. Use role and pacing constraints (e.g., “only reply as…,” “ask one question at a time,” and “wait for my answers”) to turn chat output into an interactive practice session.
2. Make context explicit by referencing “the position detailed on this page” so Bing can tailor responses to a specific job description or document.
3. Reduce bland creativity by requiring research plus a style guide (e.g., an “Igor naming guide”) rather than asking for names directly.
4. Improve reasoning accuracy with few-shot prompting by providing one correct example and a step-by-step pattern.
5. If Bing refuses role-play, reframe the request as the underlying task to keep the same outcome without the rejected framing.
6. Specify output templates (date, abstract/conclusion summary, author, citations) to get structured results suitable for research workflows.
7. Control writing and downstream generation by prompting for a named author’s style (e.g., Carl Sagan) and then reusing the output for tools like Midjourney.
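The output-template idea in point 6 can be written as a reusable prompt. The wording below is a sketch, not a quote from the video, and `<topic>` is a placeholder to fill in:

```text
Find recent peer-reviewed studies on <topic>. For each study, reply using
exactly this format:

Publication date:
Summary of abstract and conclusion:
Main author:
Citations:
```

Because Bing can still misattribute details, each entry should be fact-checked against the cited source before reuse.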