The AI wars: Google vs Bing (ChatGPT)

sentdex · 5 min read

Based on sentdex's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

ChatGPT’s RLHF-based answer experience triggered a measurable shift toward direct responses, with rapid adoption that threatens link-based search behavior.

Briefing

The competitive center of gravity in online information is shifting from search engines that return links to “answer engines” that generate direct responses—and that shift puts Google’s core business at risk while giving Microsoft and OpenAI a fast path to leadership. The turning point is ChatGPT’s rapid adoption after OpenAI paired a large language model with reinforcement learning from human feedback (RLHF) and released a public “research preview” experience in late 2022. Within two months, ChatGPT reached nearly 600 million visitors, including 100 million unique users, making it the fastest-growing consumer application ever. That scale matters because it changes how people satisfy information needs: fewer clicks, less browsing, and more reliance on a single generated response.

Google built its dominance on speed and relevance—especially response time—and on the ad and data flywheel powered by search. Google holds roughly 85% of search traffic, yet its share is stagnant or slowly declining even though it remains the default in many browsers and on many phones. Bing is the only major search rival gaining traction, and the transcript frames ChatGPT as the accelerant: when users get useful answers instantly, they may not need to search at all. The stakes are framed starkly: Google’s valuation and revenue model are tightly tied to search ads and search-derived data, so losing the “answer” moment could mean more than losing users; it could mean losing the underlying platform.

Microsoft’s advantage comes from both capital and distribution. Microsoft invested over $1 billion into OpenAI in mid-2019 and had earlier acquired GitHub for $7.5 billion, setting up GitHub Copilot, an application built on GPT-style code prediction that works quickly and effectively. By 2021, GitHub Copilot had become a “life-changing” tool for programmers, reinforcing the idea that large language models can be scaled into real products with low latency. Then, in 2023, Microsoft doubled down: it announced another $10 billion investment into OpenAI, raising its stake to 49% and structuring profit participation so that it can recoup its investment before taking a larger share. The transcript argues this is a win for Microsoft and Bing because ChatGPT already demonstrated the hardest part: turning a powerful model into a reliable, widely used service.

Google’s response is portrayed as late and fragile. Google outlined its commitment to large language models through Bard, powered by LaMDA, but the transcript criticizes the messaging as thin and points to a high-profile Bard ad about James Webb Space Telescope discoveries that produced a confident but factually incorrect answer. That kind of error is treated as a credibility problem: answer engines must be trusted, and large language models can sound right while being wrong.

The near-term outcome is framed as a race over trust, inference speed, and product limitations. ChatGPT’s restrictions and refusal patterns are cited as a weakness, while competing “chat + search” experiences like you.com and open-assistant.io are mentioned as possible challengers. Even so, the transcript’s bottom line is that the search era—type query, click links, sift through ads—is nearing its end, and Google’s biggest risk is being forced to play catch-up in a market where users increasingly want answers, not results.

Cornell Notes

The shift from link-based search to answer-based AI is accelerating, with ChatGPT acting as the catalyst. After OpenAI’s RLHF-enhanced large language model reached massive adoption, Microsoft and OpenAI gained momentum through investments, Azure support, and distribution via products like GitHub Copilot. Google’s challenge is existential because its ad and data engine depends on search behavior, yet its share is stagnant and it faces credibility risks when its own answer-style system (Bard) makes confident factual mistakes. The next phase of competition is expected to hinge on trust, fast inference, and how well answer engines handle limitations and refusals. Alternative “chat + search” tools may also carve out space, but the core battle is over who owns the moment when users want direct answers.

Why does ChatGPT’s user growth matter for search companies, beyond just being a new app?

The transcript links ChatGPT’s adoption to a behavioral change: people increasingly want direct answers instead of clicking through multiple websites. After the late-2022 RLHF-based release, ChatGPT reached nearly 600 million visitors in two months, including 100 million unique visitors. That scale suggests the “answer engine” experience can replace parts of the search journey—reducing clicks, browsing time, and exposure to link-based ad inventory that powers Google’s revenue model.

What advantages does Microsoft have in turning large language models into products?

Microsoft’s edge is portrayed as a combination of funding and deployment. It invested over $1 billion into OpenAI (with Azure compute for training) and acquired GitHub for $7.5 billion. That set up GitHub Copilot, a GPT-style code assistant released in 2021, which is described as fast and effective—key traits for usability. The transcript treats speed and reliability at scale as the hard engineering barrier that Microsoft/OpenAI cleared with ChatGPT.

Why is Google’s risk framed as higher than Bing’s?

Google’s business is tightly tied to search: it dominates traffic (about 85%) and monetizes through search ads and data. The transcript argues that if users shift to answer engines, Google can’t easily “maintain” its position by changing UI alone, because the total number of searches may not grow faster than population. Bing, by contrast, can potentially gain share even if users still search—so it has less to lose in the short term.

How does the transcript connect model behavior to credibility problems?

Large language models can generate fluent, confident text that may still be wrong. The transcript cites Google’s Bard ad about James Webb Space Telescope discoveries as an example where the answer was factually incorrect, yet presented with confidence. That matters because answer engines must be trusted; repeated errors can damage user willingness to rely on generated responses instead of verifying via links.

What factors decide who wins the “answer engine” race?

The transcript highlights trust, inference speed, and limitations. It notes that ChatGPT has restrictions and refusal patterns that can reduce enjoyment and usefulness, and it suggests a more “unrestricted” experience could win purely on usability. It also points to latency as a silent killer—fast inference improves user experience, not just technical performance.

What role do competitors like you.com and open-assistant.io play in the outlook?

They’re presented as alternative “chat + search” approaches. you.com is described as letting users chat with an AI while mixing in search results and up/down-voting answers to improve them over time. open-assistant.io is framed as aiming for an open-source, crowdsourced, potentially locally runnable ChatGPT-like system. These options indicate the market may diversify beyond a single winner, even if the main battle is Google versus Microsoft/OpenAI.

Review Questions

  1. What specific user-behavior shift does the transcript claim is happening when people move from search to answer engines?
  2. Which capabilities (capital, distribution, speed, trust) does the transcript treat as decisive for Microsoft/OpenAI versus Google?
  3. How do factual errors and refusal/restriction behaviors influence user trust in generated-answer systems?

Key Points

  1. ChatGPT’s RLHF-based answer experience triggered a measurable shift toward direct responses, with rapid adoption that threatens link-based search behavior.

  2. Google’s dominance depends on search-driven ads and data, so losing the “answer moment” carries higher downside than simply losing some queries.

  3. Microsoft’s advantage is framed as both financial backing for OpenAI and practical product scaling through tools like GitHub Copilot.

  4. Inference speed and reliability are treated as core usability requirements for answer engines, not just model quality.

  5. Credibility is a make-or-break issue because large language models can produce confident but incorrect facts, as illustrated by the Bard ad example.

  6. The next competitive phase is expected to hinge on trust, latency, and how restrictive or user-friendly the systems feel.

  7. Alternative “chat + search” and open-source approaches (you.com, open-assistant.io) suggest the market may evolve beyond a single winner.

Highlights

ChatGPT’s late-2022 RLHF release quickly became a mass-market “answer engine,” reaching nearly 600 million visitors in two months—fast enough to change how people satisfy information needs.
Microsoft’s strategy blends OpenAI investment with distribution and scaling experience, including GitHub Copilot’s fast, effective GPT-style code assistance.
Google’s risk is tied to search’s ad/data flywheel; the transcript argues answer engines could bypass the click-and-browse path that fuels that model.
Bard’s confidently wrong James Webb Space Telescope ad is used as a credibility warning about answer-style systems.
The race ahead is framed as trust + speed + user experience, including whether systems avoid restrictive refusal patterns.

Topics

  • AI Answer Engines
  • Google vs Bing
  • OpenAI Investments
  • ChatGPT Adoption
  • Bard Credibility

Mentioned

  • RLHF