
Exploring the Possibilities of Science-Based AI Models

MattVidPro · 5 min read

Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Notion AI is positioned as an embedded productivity assistant inside Notion pages, focused on drafting and editing structured work like blog posts, agendas, press releases, and job descriptions.

Briefing

Two science- and productivity-focused AI tools are getting attention for different reasons: Notion AI aims to turn everyday writing and planning tasks inside Notion into faster drafts, while Meta’s Galactica is built to act like a research-oriented interface to scientific knowledge—complete with citations, but also prone to confident errors.

Notion AI (currently in an “Alpha” phase with a waitlist) is positioned as an AI layer inside Notion pages rather than a standalone chatbot. The pitch is tightly productivity-driven: it can help users write first drafts for blog posts, YouTube descriptions, and other text-heavy outputs; brainstorm ideas; outline meeting agendas; generate social media posts; draft press releases; and even produce job descriptions from basic requirements. It also targets common workflow friction—summarization, spelling and grammar fixes, and translation—while offering lighter “creative” utilities like generating haikus and pros-and-cons lists. A key theme is specialization: the transcript repeatedly contrasts Notion AI with OpenAI’s GPT-3, arguing that GPT-3 is a general text model, whereas Notion AI is streamlined around structured, business and writing tasks.

The second tool, Galactica, comes from Meta AI and is described as a 120-billion-parameter text model trained on humanity’s scientific knowledge. Instead of focusing on general productivity writing, it’s framed as a way to generate research materials such as literature reviews, Wikipedia-style articles, and lecture notes, plus answer questions. A live demo shows the model producing “lecture notes” from a prompt about lemons, but the output is flagged as unverified and the content appears largely off-topic—an early sign that the system can sound authoritative while missing the actual intent of the query.

Further tests highlight both strengths and limits. When asked about DALL-E 2, the model produces a coherent, high-level explanation of how image generation works, and it appears to know the term. A “product review of Coca-Cola” request mostly returns encyclopedia-like history rather than a true review, suggesting the model’s strongest behavior is structured informational writing. The transcript also emphasizes a built-in warning: outputs may be unreliable because language models can hallucinate. Still, Galactica is presented as more than just free-form text generation—its design includes large context windows and citation support, with claims of massive scale (hundreds of millions of in-context citations and tens of millions of unique references) intended to help users discover related papers and ground answers.

Taken together, the two systems point to a broader shift: AI assistance is moving from generic chat toward embedded, task-specific workflows (Notion) and toward research-style knowledge synthesis with citations (Galactica). The tradeoff is consistent across both: speed and convenience come with the need for human verification, especially when prompts are unusual or when the model confidently fills gaps.

Cornell Notes

Notion AI and Meta’s Galactica are presented as two different approaches to “science-based” productivity. Notion AI is an AI assistant embedded in Notion pages, focused on drafting and editing work like blog posts, agendas, press releases, job descriptions, summaries, and translations. Galactica is a 120B-parameter text model trained on scientific knowledge, designed to generate research-style outputs such as lecture notes, literature reviews, and Wiki-like articles, with citation support. Demos show both capability and risk: Galactica can produce plausible, structured answers while still being wrong or off-topic, and it includes warnings that outputs may be unreliable. The practical takeaway is that these tools can accelerate writing and research, but they require careful human checking.

How does Notion AI differ from a general text model like GPT-3 in the transcript’s framing?

Notion AI is described as being “fine-tuned” for productivity tasks inside Notion pages—writing first drafts for specific business outputs (blog posts, YouTube descriptions, press releases, job descriptions), outlining agendas, generating social posts, and providing editing utilities like summarization, spelling/grammar fixes, and translation. GPT-3 is treated as a general text model that could do similar things, but Notion AI is positioned as more streamlined for structured work and writing workflows.

What kinds of tasks does Notion AI handle in the examples shown?

The transcript lists blog post assistance (including generating a full first draft), brainstorming ideas, meeting agenda outlines, social media post drafting, press releases, job descriptions from requirements, sales emails intended to cut through “psychological noise,” haiku/poem generation, pros-and-cons lists, and outlining to reduce the blank-page problem. It also mentions summarization, spelling and grammar correction, and translation.

What is Galactica designed to do, and what makes it different from a typical chatbot?

Galactica is described as a Meta AI 120-billion-parameter text model trained on humanity’s scientific knowledge. It’s presented as an interface for producing research outputs—literature reviews, Wiki-style articles, lecture notes—and for answering questions. The transcript also highlights citation-related capabilities (large context and many unique references) intended to help users discover related papers, not just generate text.

What went wrong in the Galactica “lecture notes about lemons” demo?

The output is explicitly labeled as “not verified by a human,” and the content appears largely mismatched to the prompt. The generated “lecture notes” end up discussing Paul Davies, mind-body philosophy, and dualism—plausible-sounding academic material, but not actually connected to lemons. This illustrates how the model can be confident and structured while still failing the user’s intent.

How does the transcript characterize Galactica’s reliability and the risk of hallucination?

A warning in the demo notes that outputs may be unreliable because language models can hallucinate text. The transcript uses “hallucination” as a key concept: the model can invent ideas or details that sound credible. Even when it seems knowledgeable (e.g., about DALL-E 2), the system can still be wrong or off-target, so human verification is necessary.

What does the “product review of Coca-Cola” test suggest about Galactica’s strengths?

Instead of producing a true review, Galactica returns encyclopedia-like information: history and basic facts (e.g., widespread consumption, global distribution, and corporate background). The transcript interprets this as the model being more suited to educational, scientific, or reference-style writing than to subjective product review formats.

Review Questions

  1. Which specific Notion AI writing and planning tasks are listed, and which ones are framed as business-focused?
  2. What evidence from the Galactica demos supports both its usefulness (structured research outputs) and its failure modes (off-topic or unverified content)?
  3. How do citation support and large context windows relate to the transcript’s claims about Galactica’s research value—and why doesn’t that eliminate hallucination risk?

Key Points

  1. Notion AI is positioned as an embedded productivity assistant inside Notion pages, focused on drafting and editing structured work like blog posts, agendas, press releases, and job descriptions.

  2. The transcript repeatedly contrasts Notion AI’s task specialization with GPT-3’s general-purpose text generation.

  3. Notion AI includes utilities beyond drafting, including summarization, spelling/grammar fixes, and translation.

  4. Galactica is described as a Meta AI 120-billion-parameter model trained on scientific knowledge, aimed at generating research-style outputs such as lecture notes and Wiki-like articles.

  5. Galactica demos show that outputs can be confidently structured yet wrong or off-topic, especially when prompts are unusual.

  6. A built-in warning emphasizes that language models can hallucinate, so human verification remains essential.

  7. Galactica’s design claims citation and reference-scale features intended to support research discovery, even though reliability is not guaranteed.

Highlights

  • Notion AI is framed as a productivity layer inside Notion—turning prompts into first drafts for business writing like press releases and job descriptions.
  • Galactica can generate lecture-note style content, but a “lemons” prompt produced philosophy-focused material unrelated to the topic, underscoring the need for verification.
  • Galactica’s outputs include a clear reliability warning: language models can hallucinate, even when the text sounds academic and well-structured.
  • The transcript suggests Galactica is strongest at reference-style scientific writing (e.g., encyclopedia-like history) rather than subjective formats like product reviews.

Topics

  • Notion AI
  • Galactica
  • Productivity Writing
  • Scientific Knowledge Models
  • Hallucination Risk