8 New Ways to Use Bing's Upgraded 8 [now 20] Message Limit (ft. pdfs, quizzes, tables, scenarios...)
Based on AI Explained's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Microsoft’s Bing Chat has raised its per-conversation message limit to eight back-and-forth exchanges (up from six), and the practical impact is that longer, multi-step prompts can stay “in conversation” instead of being cut off early. That extra room turns chat into a more usable workspace for tasks that require iterative refinement, such as quizzes, document synthesis, roleplay, and structured outputs, rather than one-shot answers.
One of the most direct productivity wins is turning prompts into interactive practice. Bing can generate a multiple-choice quiz and, crucially, keep the flow going by producing a new question after each answer. The transcript shows a prompt that asks for answers and explanations and then immediately requests another question after each response—preventing the chat from ending with a generic “do you want another question?” prompt that would waste message turns. The example quiz walks through Transformers basics (including a question about the pre-trained model behind 2018 state-of-the-art NLP results) and then moves into architecture differences between encoder-only and encoder-decoder Transformers.
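The quiz-flow pattern can be sketched in code. This is a minimal illustration, not Bing's actual interface: `send_message` is a hypothetical stand-in for whatever chat client is in use. The point is that the "give the answer, then immediately ask the next question" instruction lives in the opening prompt, while the loop tracks the turn budget so the quiz never overruns the limit:

```python
# Sketch of the quiz-loop prompt pattern: the opening prompt bakes in
# "immediately ask another question" so no turns are wasted on
# "do you want another question?" round-trips.

QUIZ_PROMPT = (
    "Quiz me with multiple-choice questions about Transformers. "
    "After I answer, give the correct answer with a short explanation, "
    "then immediately ask the next question. Do not ask whether I want "
    "another one."
)

def run_quiz(send_message, answers, budget=8):
    """Drive the quiz while staying inside the per-conversation budget.

    `send_message` is a hypothetical callable for the chat interface;
    `answers` are the user's replies to successive questions.
    """
    turns_used = 1
    send_message(QUIZ_PROMPT)       # turn 1: set up the self-continuing quiz
    for answer in answers:
        if turns_used >= budget:    # stop before exceeding the message limit
            break
        send_message(answer)        # each reply yields feedback + a new question
        turns_used += 1
    return turns_used
```

With an eight-message budget, one setup turn leaves seven answer turns, which is exactly why folding the continuation instruction into the first prompt matters.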
The same eight-turn buffer also enables richer “what-if” exploration. Bing can generate counterfactual scenarios—such as how Sauron might have defeated the Fellowship in The Lord of the Rings—while staying coherent across multiple follow-ups. Instead of treating counterfactuals as a single creative paragraph, the conversation can branch into additional “why didn’t X happen?” questions and alternate tactics.
For work-oriented research, the transcript highlights a PDF-based workflow: Bing can read and synthesize insights across multiple documents when given links, then continue the discussion with follow-up questions about implications. The example focuses on combining academic papers (including references to “gbc5” and data limitations) and then asking for further insights, suggesting potential directions like using self-play of large language models to expand data beyond human-authored datasets.
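The multi-document step can be approximated with a small helper that bundles all links into a single synthesis prompt, spending one turn rather than one per PDF. A minimal sketch with placeholder URLs (the transcript's actual papers are not reproduced here):

```python
# Hedged sketch: batch several document links into one synthesis request
# so the whole reading step costs a single message turn.
# The URLs below are placeholders, not the papers from the transcript.

def synthesis_prompt(links, question):
    """Build one prompt that asks for cross-document synthesis."""
    numbered = "\n".join(f"{i}. {url}" for i, url in enumerate(links, start=1))
    return (
        "Read the following documents and synthesize their shared insights:\n"
        f"{numbered}\n"
        f"Then answer: {question}"
    )

prompt = synthesis_prompt(
    ["https://example.com/paper-a.pdf", "https://example.com/paper-b.pdf"],
    "What do these papers imply about data limitations for large language models?",
)
```

The remaining turns are then free for implication-focused follow-ups, such as the self-play direction mentioned above.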
Roleplay becomes more immersive with the longer limit. In “moments before disaster,” the user places themselves just before the 1755 Lisbon earthquake and asks for advice; Bing responds in-character with time-specific warnings and practical guidance, including uncertainty about which buildings will survive. Another roleplay test debates historical philosophy by adopting Socrates’ style—pressing for definitions and drilling into moral claims—so the dialogue behaves like a structured Socratic questioning session.
Structured outputs also benefit. Bing can generate comparison tables and then extend them across additional columns within the conversation budget—illustrated by comparing the Mona Lisa and Colgate toothpaste, then adding a polar-bear encounter as a new comparison dimension.
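As a rough illustration of why tables extend cheaply within a conversation, each follow-up only appends a column rather than rebuilding the table. A dict-of-rows sketch (the new column's values are placeholders, not transcript content):

```python
# Minimal sketch of the expandable-comparison-table idea: keep the table
# as rows of dicts so a new comparison dimension (column) can be added
# in a later turn without touching the existing cells.

def add_column(rows, name, values):
    """Extend every row with a new comparison dimension."""
    for row, value in zip(rows, values):
        row[name] = value
    return rows

table = [
    {"item": "Mona Lisa", "category": "artwork"},
    {"item": "Colgate toothpaste", "category": "consumer product"},
]

# A later turn adds the polar-bear-encounter dimension from the transcript;
# the cell values here are placeholders.
add_column(table, "polar-bear encounter", ["placeholder A", "placeholder B"])
```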
Finally, the transcript shows “current events commentary” by having historical figures react to modern deals—Napoleon weighing an OpenAI–Microsoft agreement as strategic market influence, while Mahatma Gandhi frames it as a threat to dignity and freedom through AI. Even lighter uses appear, like summarizing movies or Brexit news in emojis, but the throughline is consistent: eight turns make multi-step prompts and iterative refinement far more practical than earlier limits.
Cornell Notes
Bing Chat’s raised eight-message limit enables longer, multi-step interactions that don’t collapse into a single answer. The transcript demonstrates interactive quizzes that keep generating new questions after each response, counterfactual “what-if” scenarios that sustain follow-up questions, and PDF-based synthesis where Bing can combine insights across multiple documents and continue with implications. It also supports immersive roleplay (disaster survival guidance and Socrates-style debate) and structured outputs like expandable comparison tables. The practical takeaway is that iterative workflows—learning, research, and creative exploration—fit better within a single conversation window.
- How does the transcript turn Bing into an interactive quiz partner instead of a one-and-done answer?
- What makes counterfactuals more useful with an eight-turn limit?
- What workflow does the transcript use to extract research insights from PDFs?
- How does “moments before disaster” demonstrate roleplay that stays actionable?
- What structured task is shown to benefit from expanding within the conversation limit?
Review Questions
- What specific prompt instruction prevents the quiz from ending with a “do you want another question?” prompt?
- In the PDF workflow example, how does Bing handle multiple documents—summarize one at a time or integrate across them?
- What two different kinds of roleplay are demonstrated, and what makes each feel distinct (disaster guidance vs. Socrates-style debate)?
Key Points
1. Bing Chat’s eight-message per-conversation limit makes multi-step prompts more practical by reducing early cutoffs.
2. Interactive quizzes work best when prompts explicitly request a new question after each answer, keeping the flow continuous.
3. Counterfactual “what-if” prompts become more valuable when follow-up questions can stay within the same conversation window.
4. Bing can synthesize insights across multiple linked PDFs and continue the discussion with implication-focused follow-ups.
5. Roleplay scenarios can remain actionable when the prompt anchors the user in time and asks for practical advice in-character.
6. Structured outputs like comparison tables can be extended with additional columns within the same conversation budget.
7. Historical-figure reactions to current events can be generated as contrasting viewpoints, such as Napoleon’s strategic framing versus Gandhi’s concerns about dignity and freedom.