Critical Thinking in the Age of AI: Practical Tips for Academics
Based on Paperpal Official's video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing.
Treat AI as an assistant, not a replacement for the critical-thinking pipeline that turns information into defensible judgment.
Briefing
AI’s biggest risk to academic critical thinking isn’t that it produces answers—it’s that it can quietly replace the human steps that turn information into defensible judgment. The practical takeaway is to treat AI as a research assistant that accelerates work you still must verify, rather than a shortcut that hands over reasoning.
Samuel Leslie frames critical thinking as a sequence: gather information, evaluate evidence, question biases, and then reason to a final judgment. That process matters because academia isn’t just about what conclusions people reach; it’s about how they get there. In the age of AI, the danger emerges when researchers outsource too much of that pipeline—especially when they ask for ready-made text without checking whether claims are accurate, sourced, or aligned with their own questions.
Audience polling during the session mirrors that concern. Most attendees use AI for editing and rewriting, while the most common challenge is trusting AI outputs—often due to hallucinations or errors. Staying original and avoiding AI-detector flagging also surfaced, reinforcing the session’s central theme: speed is not the same as rigor.
Leslie then contrasts “outsourcing” with “assisted” use through concrete prompting examples. For gathering information, asking AI to “write an introduction paragraph” can yield fluent but unverifiable content. A better approach is to request key findings from recent studies and require citations, then restrict citations to peer-reviewed research and follow every link. He emphasizes that links and DOIs can be fabricated; one example included a DOI that did not exist, underscoring the need to click through rather than trust formatting.
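The "click through every DOI" step can be partly automated. A minimal sketch (the helper names are hypothetical, not from any library) checks whether a DOI actually resolves through the public doi.org resolver; a fabricated DOI typically returns a 404 and fails the check. This only confirms the DOI exists, not that it points to the paper the AI claimed, so reading the landing page is still required.

```python
# Sketch: verify that a DOI resolves via the doi.org resolver.
# doi_url and doi_resolves are hypothetical helper names.
import urllib.request


def doi_url(doi: str) -> str:
    """Build the canonical resolver URL for a DOI string."""
    return "https://doi.org/" + doi.strip()


def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the DOI resolves (final HTTP status < 400)."""
    req = urllib.request.Request(
        doi_url(doi),
        method="HEAD",
        headers={"User-Agent": "doi-check"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        # Unresolvable DOI, network error, or HTTP error (e.g. 404)
        return False
```

A passing check is necessary but not sufficient: the resolved page must still match the title, authors, and claims the AI attributed to it.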
When evaluating evidence, the session warns against leading prompts like “name studies that show AI improves peer review,” which can bake in a conclusion. Instead, prompts should ask for studies that include links/DOIs and explicitly request differing viewpoints (pro and cautionary). The goal is to force evaluation of the evidence landscape rather than accept a single narrative.
Bias is treated as a multi-layer problem: human biases (confirmation, prestige, group echo chambers) and AI biases inherited from training data. AI can also create an illusion of confidence through polished summaries. Leslie introduces “alignment faking” from research on large language models—where outputs may conform to what the system expects or what a user seems to want—making skepticism essential.
The session’s “last mile” is reasoning independently. AI can help collect sources, summarize, and even surface opposing perspectives, but connecting the dots, defending claims, and deciding what to believe remains a human responsibility.
For writing, the session draws a bright line: never jump straight from an idea to a full manuscript drafted by AI, because that bypasses literature review, critique, and argument-building. Instead, AI should support each stage—literature search, citation organization, project design, and drafting—while the researcher retains control of critique and voice.
A final framework is offered: never stop learning, evaluate with skepticism, and reflect on whether AI strengthened understanding or drowned out the researcher’s own voice. In Q&A, strategies for originality include recording personal voice notes, transcribing handwritten or spoken notes, and reading rewritten text aloud to check whether it still sounds like the author. On AI detection, the guidance is blunt: bypassing detectors shouldn’t be the objective; transparency and evidence of real work matter more than scores, which can produce false positives.
Cornell Notes
Critical thinking in academia can be broken into four steps—gather information, evaluate evidence, question biases, and reason to a judgment. AI can help with the first three steps by accelerating literature discovery, summarizing findings, and surfacing counterarguments, but it can also introduce hallucinations, fabricated DOIs, and bias-shaped “confidence.” The session’s practical method is to prompt AI for cited, peer-reviewed sources and then verify every link and claim. The non-outsourcable part is independent reasoning: connecting the dots, defending claims, and deciding what to trust. For writing, AI should assist drafting and organization, not replace the research and critique that produce a defensible argument.
What are the stages of critical thinking in an academic research-to-writing workflow, and why does the process matter as much as the conclusion?
How should researchers use AI for “gathering information” without letting hallucinations slip into their writing?
What’s the difference between prompts that undermine evaluation and prompts that improve it?
Why does bias remain a central concern even when AI provides citations and confident summaries?
What does “reasoning independently” mean in practice when using AI for research and writing?
How can researchers preserve originality and voice when AI is used for drafting or rewriting?
Review Questions
- List the four stages of critical thinking described in the session and give one example of how AI could assist each stage without replacing human judgment.
- Explain how a leading prompt can bias evidence evaluation, and rewrite it into a prompt that requests differing viewpoints with verifiable sources.
- What is “alignment faking,” and how would it change the way you verify AI-generated claims?
Key Points
1. Treat AI as an assistant, not a replacement for the critical-thinking pipeline that turns information into defensible judgment.
2. Use AI to gather structured, cited information (preferably peer-reviewed), then verify every link and DOI by clicking through.
3. Avoid leading prompts that assume the conclusion; request studies with differing viewpoints and check what the evidence actually says.
4. Question bias at every layer—human biases, AI-inherited biases, and the model's tendency to sound confident even when wrong.
5. Keep independent reasoning as a human responsibility: connect evidence to claims, defend arguments, and decide what to trust.
6. Do not jump from an idea directly to a full manuscript drafted by AI; instead, use AI to support literature search, organization, and drafting while you critique and shape the argument.
7. For originality, start from personal notes/voice, then use AI to refine; read rewritten text aloud to confirm it still sounds like you.