Exploring the Possibilities of Science-Based AI Models
Based on MattVidPro's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Two science-and-productivity focused AI tools are getting attention for different reasons: Notion AI is aimed at turning everyday writing and planning tasks inside Notion into faster drafts, while Meta’s Galactica is built to act like a research-oriented interface to scientific knowledge—complete with citations, but also prone to confident errors.
Notion AI (currently in an “Alpha” phase with a waitlist) is positioned as an AI layer inside Notion pages rather than a standalone chatbot. The pitch is tightly productivity-driven: it can help users write first drafts for blog posts, YouTube descriptions, and other text-heavy outputs; brainstorm ideas; outline meeting agendas; generate social media posts; draft press releases; and even produce job descriptions from basic requirements. It also targets common workflow friction—summarization, spelling and grammar fixes, and translation—while offering lighter “creative” utilities like generating haikus and pros-and-cons lists. A key theme is specialization: the transcript repeatedly contrasts Notion AI with OpenAI’s GPT-3, arguing that GPT-3 is a general text model, whereas Notion AI is streamlined around structured, business and writing tasks.
The second tool, Galactica, comes from Meta AI and is described as a 120-billion-parameter text model trained on humanity's scientific knowledge. Instead of focusing on general productivity writing, it is framed as a way to generate research materials such as literature reviews, Wikipedia-style articles, and lecture notes, as well as to answer questions. A live demo shows the model producing "lecture notes" from a prompt about lemons, but the output is flagged as unverified and the content appears largely off-topic: an early sign that the system can sound authoritative while missing the actual intent of the query.
Further tests highlight both strengths and limits. When asked about DALL·E 2 (rendered as "doll E2" in the transcript), the model produces a coherent, high-level explanation of how image generation works, and it appears to know the term. A "product review of Coca-Cola" request mostly returns encyclopedia-like history rather than a true review, suggesting the model's strongest behavior is structured informational writing. The transcript also emphasizes a built-in warning: outputs may be unreliable because language models can hallucinate. Still, Galactica is presented as more than free-form text generation: its design includes large context windows and citation support, with claims of massive scale (hundreds of millions of context tokens and tens of millions of unique references) intended to help users discover related papers and ground answers.
Taken together, the two systems point to a broader shift: AI assistance is moving from generic chat toward embedded, task-specific workflows (Notion) and toward research-style knowledge synthesis with citations (Galactica). The tradeoff is consistent across both: speed and convenience come with the need for human verification, especially when prompts are unusual or when the model confidently fills gaps.
Cornell Notes
Notion AI and Meta’s Galactica are presented as two different approaches to “science-based” productivity. Notion AI is an AI assistant embedded in Notion pages, focused on drafting and editing work like blog posts, agendas, press releases, job descriptions, summaries, and translations. Galactica is a 120B-parameter text model trained on scientific knowledge, designed to generate research-style outputs such as lecture notes, literature reviews, and Wiki-like articles, with citation support. Demos show both capability and risk: Galactica can produce plausible, structured answers while still being wrong or off-topic, and it includes warnings that outputs may be unreliable. The practical takeaway is that these tools can accelerate writing and research, but they require careful human checking.
- How does Notion AI differ from a general text model like GPT-3 in the transcript's framing?
- What kinds of tasks does Notion AI handle in the examples shown?
- What is Galactica designed to do, and what makes it different from a typical chatbot?
- What went wrong in the Galactica "lecture notes about lemons" demo?
- How does the transcript characterize Galactica's reliability and the risk of hallucination?
- What does the "product review of Coca-Cola" test suggest about Galactica's strengths?
Review Questions
- Which specific Notion AI writing and planning tasks are listed, and which ones are framed as business-focused?
- What evidence from the Galactica demos supports both its usefulness (structured research outputs) and its failure modes (off-topic or unverified content)?
- How do citation support and large context windows relate to the transcript’s claims about Galactica’s research value—and why doesn’t that eliminate hallucination risk?
Key Points
1. Notion AI is positioned as an embedded productivity assistant inside Notion pages, focused on drafting and editing structured work like blog posts, agendas, press releases, and job descriptions.
2. The transcript repeatedly contrasts Notion AI's task specialization with GPT-3's general-purpose text generation.
3. Notion AI offers utilities beyond drafting, including summarization, spelling/grammar fixes, and translation.
4. Galactica is described as a Meta AI 120-billion-parameter model trained on scientific knowledge, aimed at generating research-style outputs such as lecture notes and Wiki-like articles.
5. Galactica demos show that outputs can be confidently structured yet wrong or off-topic, especially when prompts are unusual.
6. A built-in warning emphasizes that language models can hallucinate, so human verification remains essential.
7. Galactica's design claims citation and reference-scale features intended to support research discovery, even though reliability is not guaranteed.