This AI Chatbot Searches The Web For You!
Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Internet-connected AI can improve time-sensitive answers, but only if it truly searches the web rather than relying on older stored data.
Briefing
Internet-connected chatbots are turning into practical research tools—but early versions still stumble on accuracy, relevance, and working links. The clearest takeaway from the comparisons is that “chat” quality depends less on the interface and more on whether the system truly searches the web, how fresh its underlying data is, and how reliably it returns sources.
Bing Chat is framed as the big news because it behaves like ChatGPT while being tied to live internet access. That matters because ChatGPT’s knowledge cutoff leaves it stuck in older pricing and product facts, while a web-connected system can (in theory) pull current information. The transcript then tests “YouChat,” a Bing Chat clone presented as an AI assistant that can answer questions, summarize text, and provide sources. In practice, YouChat often returns outdated or mismatched results: when asked for the least expensive brand-new car in the U.S. “as of 2023,” it initially responds with a 2021 Nissan Versa and then repeats similar answers while failing to deliver truly relevant 2023-specific sourcing. The MSRP and year details drift, and at least one source link appears to be missing or incorrect.
ChatGPT is also tested with the same car prompt. Unable to browse, it falls back on pre-cutoff knowledge and gives an answer tied to 2021, naming the Mitsubishi Mirage as the cheapest option in that older timeframe. That creates a direct contrast: ChatGPT can be coherent, but it can't reliably report current prices or availability without browsing.
Perplexity AI is introduced as a middle ground: less of a conversational chatbot and more of an AI-powered search engine that accesses the internet. On the car question, Perplexity produces results that align more closely with the web, including a Nissan-owned source. Even when the “cheapest” model differs across systems, Perplexity’s citations are treated as more trustworthy than YouChat’s, which appears to rely too heavily on its older database rather than live search.
The transcript then shifts from cars to identity and shopping-style queries to stress-test reliability. When asked “who is MattVidPro AI,” ChatGPT denies knowledge and asks for context, while YouChat and Perplexity generate different descriptions—Perplexity provides subscriber and view counts and references a Twitter handle, but the counts are later challenged as potentially inaccurate. For eBay listing links, both web-connected systems show weaknesses: YouChat returns broken links, and Perplexity returns irrelevant or duplicated results (including a clearly wrong category unrelated to laptops). Finally, keyboard recommendations show mixed consistency across systems, with Perplexity producing more current-feeling answers and citations, including a “best under $25” conclusion that the transcript treats as more credible.
Overall, the comparisons land on a pragmatic verdict: Perplexity’s search-and-citation behavior is often more dependable, YouChat’s interface is fast and friendly but can be undermined by stale data and link problems, and ChatGPT’s lack of browsing keeps it from staying current. The promise of web-connected AI is real, but the transcript repeatedly shows that “connected” doesn’t automatically mean “correct.”
Cornell Notes
Web-connected AI assistants can act like research tools, but accuracy hinges on real internet access, freshness of data, and whether sources and links actually work. YouChat (a Bing Chat clone) often returns outdated or mismatched results—such as using 2021 car pricing when asked for 2023—plus broken or missing source links. ChatGPT, which can’t browse, stays limited by its knowledge cutoff and therefore can’t reliably update prices or availability. Perplexity AI behaves more like an AI search engine and tends to provide more credible, web-sourced answers with citations, though it can still return irrelevant or duplicated shopping links. The practical lesson: treat citations and links as part of the output quality, not an afterthought.
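The lesson about treating links as part of output quality can be partly automated before a human review. A minimal sketch, assuming the citations arrive as a plain list of URL strings (the helper name and sample links are illustrative, not from the video), that flags malformed or missing source links before anyone clicks them:

```python
from urllib.parse import urlparse

def is_well_formed(url: str) -> bool:
    """Reject citations that cannot even be parsed as http(s) URLs."""
    parts = urlparse(url)
    return parts.scheme in ("http", "https") and bool(parts.netloc)

# Illustrative citation list: one plausible source, one garbled link,
# and one missing link (the failure modes described in the briefing).
citations = [
    "https://www.nissanusa.com/vehicles/cars/versa.html",
    "htp://broken-link",
    "",
]

usable = [u for u in citations if is_well_formed(u)]
```

A well-formed URL can still be dead or point at an irrelevant page, so a follow-up HTTP request and a human skim of the landing page are still needed; this filter only catches the "missing or broken link" failures the transcript describes.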
- Why does “internet access” matter when comparing ChatGPT, YouChat, and Perplexity AI?
- What happened in the “cheapest brand new car in the US as of 2023” test across the three systems?
- How did the “who is MattVidPro AI” prompt reveal differences in knowledge and citation behavior?
- What went wrong with the eBay “top three listings for laptop” link test?
- How did the keyboard recommendation tests differ in usefulness and consistency?
Review Questions
- When a chatbot provides a “source,” what specific failure modes in the transcript suggest that the source may still be unreliable?
- How do knowledge cutoff limitations show up differently in the car prompt versus the “who is MattVidPro AI” prompt?
- Which system performed best on time-sensitive product questions, and what evidence from the transcript supports that conclusion?
Key Points
1. Internet-connected AI can improve time-sensitive answers, but only if it truly searches the web rather than relying on older stored data.
2. YouChat’s fast, chat-like interface doesn’t guarantee current accuracy; the car test shows 2023 queries can still produce 2021-based results and questionable MSRP sourcing.
3. ChatGPT’s inability to browse keeps it anchored to its knowledge cutoff, which can make it unsuitable for current pricing and availability.
4. Perplexity AI often earns trust through web citations, and its car results include a more credible, domain-specific source (Nissan’s site).
5. Working links matter: broken or non-opening links reduce practical usefulness even when the answer text sounds plausible.
6. Shopping-style queries (like eBay listings) are a weak spot for these systems; irrelevant, duplicated, or wrong-category results can appear even with web access.
7. Credibility improves when answers combine up-to-date recommendations with verifiable citations, not just confident wording.