
Academic Writing with ChatGPT (Part 1 - Beginner)

E-Research Skills · 6 min read

Based on the E-Research Skills video on YouTube. If you like this content, support the original creators by watching, liking, and subscribing to their channel.

TL;DR

ChatGPT can support academic writing by generating ideas, outlines, and drafts, but students must rewrite and edit to avoid generic copy and to maintain ownership.

Briefing

Academic writing with ChatGPT is less about copying finished text and more about steering the model through careful prompting, structured workflows, and controlled “memory” so outputs match a specific thesis or literature review plan. The core message: use ChatGPT to generate ideas, outlines, and drafts as starting material, then edit heavily, verify facts, and rework the result into the student’s own voice, because wholesale copy-paste triggers plagiarism/AI-detection risk and produces generic, hard-to-defend writing.

The session begins with practical guidance on which ChatGPT versions matter for academic work. ChatGPT 3.5 is positioned as a beginner-friendly option with a knowledge cutoff around September 2021, while ChatGPT 4 extends coverage to roughly March 2023, and GPT-4o reaches about October 2023. Multimodal capability is highlighted as a major upgrade: text plus image, audio, and video-related features (with “Sora AI” mentioned as upcoming). The instructor also warns that free tiers may be slower during peak usage and that message limits can change over time.

A second pillar is prompt engineering—turning a vague request into a detailed task. The guidance emphasizes giving ChatGPT enough context (goal, audience, role/persona, tone, format, and examples) so it can produce more accurate and usable academic output. Role-play is encouraged (e.g., “act as a professor” or “act as a reviewer”), along with specifying output structure such as tables, bullet points, or email formats. The session also recommends batching tasks efficiently: instead of one request per message, combine multiple sub-tasks in a single prompt (e.g., analyze, summarize, categorize, then draft), while noting that splitting into multiple prompts can help if results degrade.

To keep outputs consistent across a long writing process, the session introduces “memory” as a way to store reusable preferences—like a thesis structure template. A demonstration shows saving a thesis outline into memory (e.g., title page, abstract, acknowledgements, table of contents, introduction, literature review, methodology, results, discussion, conclusion) and then reusing it later so ChatGPT can generate section-by-section drafts aligned to that plan. The instructor cautions that saving incorrect or overly broad instructions can “corrupt” future outputs, so memory should be set deliberately and updated when needed.

For literature reviews, the session outlines an 11-step approach built around keywords and synthesis rather than copying: brainstorming keywords, forming a research problem statement, creating an outline, optionally defining key terms, then writing a critical review that is coherent, well-structured, and properly referenced. “Coherence” is treated as a skill: paragraphs must link logically, not just summarize separate papers. Critical review is framed as identifying both supporting and contradicting findings and using citations to prove claims. The session repeatedly stresses verification: rely on multiple sources/tools, use citation tools or plugins for accurate referencing, and never trust AI-generated citations blindly.

Finally, the session provides a hands-on workflow for building a research proposal and literature review around an example topic (virtual reality and mathematics education for elementary learners). It demonstrates generating keywords, proposing research gaps, drafting research objectives and research questions, and sketching hypotheses and study design elements (e.g., experimental vs traditional groups, pre-test/post-test, and statistical comparisons). The overarching takeaway is that ChatGPT becomes useful when treated like a research assistant that needs direction, constraints, and human editing—not a copy-and-paste author.

Cornell Notes

The session frames academic writing with ChatGPT as a controlled process: generate ideas and structure, then edit, verify, and cite properly. It distinguishes ChatGPT 3.5, 4, and GPT-4o by knowledge cutoffs and emphasizes multimodal capabilities for richer academic assistance. Prompt engineering is presented as the main lever—requests should include goal, role/persona, tone, format, and examples to produce more accurate outputs. “Memory” is introduced to store reusable thesis structure and writing preferences so later drafts stay consistent. For literature reviews, an 11-step workflow is recommended, with special focus on coherence and critical review (supporting and contradicting evidence) backed by citations.

Why does the session discourage copy-pasting AI-generated academic text, even if it’s paraphrased?

Copy-pasting (or paraphrasing without ownership) tends to produce generic wording that many students will generate, making it hard to defend academically and increasing AI-detection/plagiarism risk. The guidance suggests using AI for structure, ideas, and partial drafting, then rewriting in the student’s own voice. A safer approach described is to copy only a small portion (e.g., one or two sentences), edit it, and add proper citations by locating the original sources. It also warns that AI tools can produce incorrect information or fake citations, so verification is required.

How should a user design prompts to get better academic outputs?

Prompts should include: (1) the task (summarize, outline, draft, translate, extract), (2) the goal (what the output is for), (3) relevant background/context, (4) the role/persona ChatGPT should adopt (e.g., professor, reviewer), (5) tone and audience (academic, friendly, professional), and (6) the required format (table, bullet points, email structure). The session also recommends specifying constraints like word limits and requesting multiple steps in one message (analyze → summarize → categorize → draft), while splitting into separate prompts if quality drops.
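The six prompt elements above can be sketched as a small helper that assembles them into one explicit request. This is an illustrative analogue, not a tool shown in the session; the function name and example values are made up.

```python
# Illustrative sketch: combining the session's six prompt elements
# (task, goal, context, role, tone, format) plus optional constraints
# into a single structured prompt string.
def build_prompt(task, goal, context, role, tone, fmt, constraints=""):
    """Assemble prompt elements into one explicit, detailed request."""
    parts = [
        f"Act as {role}.",
        f"Task: {task}",
        f"Goal: {goal}",
        f"Context: {context}",
        f"Tone/audience: {tone}",
        f"Output format: {fmt}",
    ]
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

prompt = build_prompt(
    task="Summarize these findings, then outline a literature-review section",
    goal="A draft section for a master's thesis",
    context="Topic: virtual reality in elementary mathematics education",
    role="a professor reviewing graduate writing",
    tone="formal academic, written for thesis examiners",
    fmt="bullet points, followed by a short paragraph",
    constraints="max 300 words overall",
)
print(prompt)
```

Batching works the same way: the `task` field can chain several sub-tasks (analyze, summarize, categorize, draft) in one message, and the request can be split back into separate prompts if quality drops.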

What is “memory” in this workflow, and how is it used for thesis writing?

Memory is used to store reusable instructions—such as a thesis/dissertation section structure—so ChatGPT can generate later sections consistently. The demonstration shows saving a thesis outline (title page, abstract, acknowledgements, table of contents, introduction, literature review, methodology, results, discussion, conclusion) into memory and then asking ChatGPT to produce section details based on that stored template. The session cautions that saving the wrong instructions can lead to inconsistent or “corrupted” outputs, so memory should be set carefully and updated when needed.
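The stored outline behaves like a reusable template. A rough Python analogue of the idea (the section list comes from the session; the helper function is hypothetical, for illustration only):

```python
# Illustrative analogue of the "memory" workflow: a saved thesis outline
# reused to generate consistent section-by-section drafting prompts.
THESIS_OUTLINE = [
    "Title page", "Abstract", "Acknowledgements", "Table of contents",
    "Introduction", "Literature review", "Methodology",
    "Results", "Discussion", "Conclusion",
]

def section_prompt(section, topic):
    """Build a drafting request tied to the stored outline."""
    return (f"Using my saved thesis structure, draft the '{section}' section "
            f"for a thesis on {topic}. Keep it consistent with the outline: "
            + ", ".join(THESIS_OUTLINE) + ".")

msg = section_prompt("Literature review", "VR in elementary mathematics")
print(msg)
```

The caution about “corrupted” memory maps directly onto this sketch: if the stored outline is wrong, every downstream section prompt inherits the error, which is why memory should be set deliberately and updated when the plan changes.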

What does the session mean by “coherence” and “critical review” in a literature review?

Coherence means paragraphs must connect logically—paper A’s idea should link to paper B’s idea with a clear reasoning thread, not just a sequence of summaries. Critical review means going beyond describing studies: it should include supporting and contradicting perspectives, explain why findings differ, and justify claims with citations. The session emphasizes that supervisors often reject drafts that sound “good” but lack evidence of analytical judgment, so critical review must be explicit and referenced.

How does the session suggest building a research proposal from scratch using ChatGPT?

The workflow starts with keywords, then moves to a research topic, then a research problem statement (often framed as a gap), followed by research objectives and research questions. The example uses virtual reality and mathematics education to generate keywords, propose gaps (e.g., limited guidance or lack of integration for elementary contexts), and then draft objectives/questions. It further sketches hypotheses and study design elements such as experimental vs traditional groups and pre-test/post-test comparisons, with data analysis guidance (e.g., comparing post-test scores using appropriate statistics).
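The post-test comparison described above is typically an independent-samples t-test. A minimal sketch using SciPy (the scores below are made-up illustrative data, not results from the session, which mentions SPSS for this step):

```python
# Hedged sketch: comparing post-test scores of an experimental (VR) group
# against a traditional-instruction group with an independent-samples t-test.
# All scores are invented for illustration.
from scipy import stats

vr_posttest = [78, 85, 92, 88, 74, 90, 81, 86]           # experimental group
traditional_posttest = [70, 75, 80, 72, 68, 77, 74, 71]  # control group

t_stat, p_value = stats.ttest_ind(vr_posttest, traditional_posttest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value below 0.05 would support the hypothesis that the VR group
# outperformed the traditional group on the post-test.
if p_value < 0.05:
    print("Reject the null hypothesis: group means differ.")
```

The same comparison can be run in SPSS via Analyze → Compare Means → Independent-Samples T Test; the point is that the AI drafts the design, while the researcher runs and verifies the statistics.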

What verification steps are recommended when using AI for academic writing?

The session recommends not relying on a single AI tool, cross-checking information across multiple tools/sources, and using citation/search plugins for references. It also warns that AI can generate incorrect information or fake citations, so citations must be verified by finding the original papers. For AI-detection concerns, it advises testing with tools first rather than panicking, and still submitting through the required institutional system (e.g., Turnitin) for an accurate view.

Review Questions

  1. What specific elements should be included in a prompt (task, goal, role, tone, format, constraints) to improve academic writing outputs?
  2. How would you use “memory” to keep a thesis outline consistent across multiple ChatGPT sessions, and what risks come from saving incorrect memory instructions?
  3. In an 11-step literature review workflow, where do coherence and critical review fit, and what evidence should support each?

Key Points

  1. ChatGPT can support academic writing by generating ideas, outlines, and drafts, but students must rewrite and edit to avoid generic copy and to maintain ownership.

  2. Use ChatGPT 3.5, 4, and GPT-4o with awareness of knowledge cutoffs (around Sep 2021, Mar 2023, and Oct 2023 respectively) and expect different capabilities.

  3. Prompt engineering improves results when requests specify goal, background, role/persona, tone, output format, and constraints like word limits.

  4. Store reusable thesis preferences (like section structure) using ChatGPT “memory,” but set it carefully because incorrect memory can distort later outputs.

  5. Literature reviews should prioritize coherence (logical paragraph links) and critical review (supporting and contradicting evidence) backed by verified citations.

  6. Never trust AI-generated citations or facts blindly; verify with original sources and cross-check across multiple tools.

  7. When building a research proposal, move systematically from keywords → topic → problem statement (gap) → objectives/questions → hypotheses → study design and analysis plan.

Highlights

  • The session treats academic writing as a workflow: prompt for structure and ideas, then rewrite, verify, and cite—never copy-paste finished text.
  • “Memory” can lock in a thesis/dissertation structure so later drafts stay consistent, but saving the wrong instructions can derail outputs.
  • Coherence and critical review are framed as the real differentiators in literature reviews: paragraphs must link logically, and claims must be supported with evidence that includes both agreement and disagreement.
  • Prompt engineering is presented as the key skill: role/persona, tone, format, and constraints determine whether outputs become usable academic material.
  • Research proposal building is demonstrated end-to-end: keywords and gaps lead to objectives/questions, then to hypotheses and an experimental vs traditional study design with pre-test/post-test logic.

Mentioned

  • AI
  • GPT
  • SPSS
  • AI detection
  • Turnitin