
Google #Bard for #Research and Comparing it with #ChatGPT

Research With Fawad · 4 min read

Based on Research With Fawad's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Bard access may require joining a waitlist and receiving an email, depending on country eligibility.

Briefing

Google’s experimental Bard is positioned as a research assistant that can draft literature reviews and generate reference lists, but it comes with limitations and inconsistent citation formatting. Access is granted through a waitlist, confirmed by email. Once inside, Bard presents itself as a “creative and helpful collaborator,” explicitly noting it may not always get things right and that user feedback helps improve it.

For research tasks, the transcript demonstrates a practical workflow: a user prompts Bard to research a topic—specifically, how servant leadership can lead to project success—and requests (1) a brief literature review, (2) references embedded in the text, and (3) full references collected at the end. Bard produces a structured response explaining that servant leadership emphasizes the needs of others, which can foster a positive, supportive work environment and, in turn, contribute to project success. It also outputs a set of references at the end of the response.

The key difference emerges when the user tries to verify and refine those citations. The transcript describes checking Bard’s references by searching for them in Google Scholar and finding that some citation details are slightly off (for example, a title mismatch). It also shows that Bard can generate multiple drafts for the same prompt, and those drafts differ in whether references appear inside the paragraph. One draft includes in-text references; another places references only at the end; a third again fails to embed citations within the paragraph. This variability matters for researchers who need properly formatted in-text citations for academic writing.

To benchmark Bard against ChatGPT, the same query is run through ChatGPT. The transcript reports that ChatGPT’s output includes references and appears more consistently usable, with at least one response described as “correct” after verification. The comparison isn’t framed as a universal winner; instead, it highlights that both tools can help generate literature-review drafts, but citation accuracy and formatting require human checking.

Overall, the transcript’s takeaway is a workflow mindset: use Bard (and similar AI tools) as an assistant for drafting and organizing research content, then validate citations through tools like Google Scholar and adjust formatting as needed. The closing reminder stresses that these systems should not replace reading and expertise; they function best as collaborators that reduce drafting effort while researchers retain responsibility for accuracy and academic standards.

Cornell Notes

Bard, an experimental Google AI tool, can help generate brief literature reviews for research topics like servant leadership and project success. After access is granted through a waitlist email, Bard produces a narrative explanation and can include references, but citation placement and formatting can vary across drafts. The transcript shows users verifying references in Google Scholar and adjusting outputs—sometimes requesting paragraph-form text with in-text citations and sometimes selecting a draft where citations appear correctly inside the paragraph. When the same prompt is tested in ChatGPT, the results are described as more detailed and at least one response as correctly formatted after checking. In all cases, citation accuracy still depends on human verification.

How does a researcher get access to Bard, and what does Bard communicate about its reliability?

Access is described as country-dependent: opening bard.google.com may not work in places like Pakistan unless access is granted via a waitlist. After requesting access, an email provides entry. Bard then presents itself as a “creative and helpful collaborator,” explicitly warning it has limitations and won’t always get things right, while user feedback is framed as a way to improve results.

What prompt structure is used to turn Bard into a literature-review drafting tool?

The transcript uses a research scenario (servant leadership → project success) and asks for a brief literature review explaining why servant leadership can lead to project success. It also requests references in the text and a full reference list at the end, producing both narrative content and citation material.
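In practice, the prompt bundles three requests into one. Reconstructed from the transcript’s description (the exact wording is an assumption, not a quote), it looks roughly like:

“Research how servant leadership can lead to project success. Write a brief literature review, include references in the text, and provide the full references at the end.”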

Why is reference verification necessary even when Bard provides citations?

The transcript describes checking Bard’s references in Google Scholar and finding issues such as slight problems with titles. It also shows that references may not appear in the paragraph in some drafts, meaning the output may not meet academic citation expectations without selecting the right draft or reformatting.
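The verification step can be partly scripted. The sketch below is not from the transcript; it is a minimal illustration that assumes the third-party scholarly package for querying Google Scholar, uses difflib to flag the kind of “slightly off” title the transcript describes, and checks placeholder titles rather than real citations:

```python
# Minimal sketch: compare AI-generated reference titles against the top
# Google Scholar hit. Assumes the third-party "scholarly" package
# (pip install scholarly); the titles below are placeholders, not
# citations from the transcript.
from difflib import SequenceMatcher

from scholarly import scholarly

ai_generated_titles = [
    "Servant leadership and project success: a placeholder title",
]

for claimed_title in ai_generated_titles:
    try:
        # Take the top Google Scholar result for the claimed title.
        hit = next(scholarly.search_pubs(claimed_title))
    except StopIteration:
        print(f"NOT FOUND: {claimed_title}")
        continue
    found_title = hit["bib"].get("title", "")
    # A ratio near 1.0 means the titles match; lower values suggest a
    # mismatch that needs manual checking.
    similarity = SequenceMatcher(
        None, claimed_title.lower(), found_title.lower()
    ).ratio()
    status = "OK" if similarity > 0.9 else "CHECK MANUALLY"
    print(f"{status} ({similarity:.2f}): {claimed_title!r} -> {found_title!r}")
```

This only automates the lookup; judging whether a near-match is the intended source, and fixing authors, years, and formatting, still falls to the researcher.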

How do Bard’s multiple drafts affect citation placement?

Different drafts for the same query can place references differently. One draft is described as better because references appear in the text (in-paragraph citations). Other drafts omit in-text citations, leaving references only at the end. This draft-to-draft variation changes how directly usable the output is for writing.

How does ChatGPT’s output compare in the transcript’s side-by-side test?

The same query is copied into ChatGPT. The transcript reports that ChatGPT returns a more detailed response and includes references. At least one ChatGPT reference set is described as correct after verification, while other references still require checking in Google Scholar—reinforcing that both tools need human validation.

Review Questions

  1. When using Bard for a literature review, what two citation requirements are requested, and how can Bard fail to meet them?
  2. What steps in the transcript are used to validate AI-generated references, and why?
  3. How does the draft selection process change the usability of Bard’s output for academic writing?

Key Points

  1. Bard access may require joining a waitlist and receiving an email, depending on country eligibility.
  2. Bard can draft literature-review-style text and generate reference lists, but it warns it may not always be correct.
  3. Citation accuracy and formatting are inconsistent across Bard drafts, including whether references appear inside the paragraph.
  4. Researchers should verify AI-provided references in Google Scholar and correct mismatches (e.g., title differences).
  5. Selecting the right Bard draft (or re-prompting) can improve in-text citation placement.
  6. ChatGPT can also produce literature-review drafts with references, but its outputs still require checking for correctness.
  7. AI tools should assist with drafting and organization, not replace reading, expertise, and citation responsibility.

Highlights

Bard’s drafts can differ: some include in-text references, while others only provide a reference list at the end.
Verifying citations in Google Scholar is treated as essential because reference details can be slightly wrong or inconsistently formatted.
A side-by-side test suggests ChatGPT may produce more detailed, usable citation outputs, but still needs reference checking.
Bard is explicitly framed as limited and not always correct, making human validation part of the workflow.