
How to Write Research Papers with ChatGPT: Step-by-Step Guide (2023)

5 min read

Based on SABIYA'S PROGRAMMING SCHOOL's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use ChatGPT to rewrite long abstracts into journal-specific word counts (including 150-word targets), then manually verify meaning and missing details.

Briefing

ChatGPT can speed up research-paper writing by turning long draft sections into journal-ready text—especially when word limits and structured sections (abstract, title, results discussion, and outlines) are the bottleneck. The workflow described starts with logging into ChatGPT (free access is presented as sufficient) and pasting an existing draft abstract that exceeds common limits. After requesting a rewrite to a specific word count (examples include 250–300 words for some journals and 150 words for others), ChatGPT compresses the abstract to the target length within seconds. That speed, however, comes with a non-negotiable step: the rewritten content must be read and verified to ensure meaning stays intact and no valuable details are lost.
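Part of that verification step can be automated: before a human reads the rewrite for meaning, a quick word-count check confirms it actually meets the journal's limit. A minimal sketch (the 150-word target is the example limit from the transcript; the sample abstract text is invented):

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words, the way most journal limits are measured."""
    return len(text.split())

def within_limit(text: str, limit: int = 150) -> bool:
    """True if the rewritten abstract fits the journal's word limit."""
    return word_count(text) <= limit

# Invented example fragment standing in for a ChatGPT-rewritten abstract.
rewritten = "We propose a deep learning model for plant leaf disease classification."
print(word_count(rewritten))    # 11
print(within_limit(rewritten))  # True
```

This only checks length; confirming that meaning survived the compression still requires a human read.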

Beyond abstract compression, ChatGPT is used to generate multiple candidate titles for a paper based on the topic and model approach. Starting from a draft title, it can produce several alternatives, including more creative options. The process is iterative: pick a strong baseline title, then ask for additional creative variants until a final choice fits the paper’s focus.

Results writing is treated as another major use case. After running experiments (the transcript references applying deep learning and obtaining results), the user pastes the results into ChatGPT and asks for a detailed “results discussion” section. The generated discussion is expected to include concrete evaluation details such as classification performance, validation accuracy, model accuracy, comparative analysis versus existing methods, hardware and software used, training time, number of epochs, validation metrics, number of classes, and other performance parameters. Even with this automation, the content still requires human verification before being inserted into the final paper.

The transcript also warns that ChatGPT can produce unreliable academic references. When prompted to suggest related papers, it may return titles that do not appear in Google Scholar, and the transcript claims many such suggestions are fabricated. The practical takeaway is to treat any “related paper” or citation candidate as untrusted until it is checked for existence and authorship in reliable databases. If a paper cannot be verified, it should not be cited or used in a literature review.
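The existence check can be scripted. The transcript checks titles in Google Scholar by hand; Scholar has no public API, but the free Crossref REST API can serve the same gatekeeping purpose. The sketch below is an assumption-laden illustration, not part of the transcript's workflow: it builds the query URL and applies a crude title-overlap test, and the actual network request is left as an uncalled function so the snippet runs offline.

```python
import json
import urllib.parse
import urllib.request

CROSSREF = "https://api.crossref.org/works"

def crossref_url(title: str, rows: int = 3) -> str:
    """Build a Crossref title-search URL for a candidate citation."""
    query = urllib.parse.urlencode({"query.title": title, "rows": rows})
    return f"{CROSSREF}?{query}"

def looks_real(candidate: str, returned_titles: list) -> bool:
    """Crude check: candidate and some returned title overlap (case-insensitive)."""
    c = candidate.lower()
    return any(c in t.lower() or t.lower() in c for t in returned_titles)

def verify_title(title: str) -> bool:
    """Query Crossref for a ChatGPT-suggested title (performs a network request)."""
    with urllib.request.urlopen(crossref_url(title)) as resp:
        items = json.load(resp)["message"]["items"]
    titles = [t for item in items for t in item.get("title", [])]
    return looks_real(title, titles)

# verify_title("Plant leaf disease detection using deep learning")  # run when online
print(crossref_url("Plant leaf disease detection"))
```

A negative result here is not proof of fabrication (Crossref does not index everything), but a suggested paper that cannot be found anywhere should not be cited.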

For structure, ChatGPT can generate a paper outline with standard research-paper headings—introduction (background, motivation, problem statement, objectives), literature review (existing methods, techniques, advantages, gaps), methodology (data used and steps), results and analysis (visualize and compare), and discussion. Finally, it can help with datasets by providing links, but those links must be validated because incorrect or fake URLs may be returned. The transcript concludes with encouragement: ChatGPT can reduce thesis-start confusion by offering topic ideas, future directions, and even potential applications such as mobile tools for plant-disease image analysis—yet success still depends on the researcher’s own knowledge and critical checking.

Cornell Notes

ChatGPT can accelerate research-paper writing by rewriting long sections into journal-compliant formats, generating multiple title options, drafting detailed results discussions, and producing standard paper outlines. The workflow emphasizes speed for tasks like compressing an abstract to 150 words (or other journal limits) and expanding results into discussion text that includes metrics such as validation accuracy, epochs, and performance parameters. Despite strong drafting assistance, every output must be verified to prevent meaning changes, missing details, or fabricated references. Related-paper suggestions and dataset links should be checked in Google Scholar and by testing URLs, since incorrect or fake items can appear. The overall value is time savings—paired with human critical review.

How does the transcript recommend handling abstract word limits when using ChatGPT?

It describes copying an existing abstract that exceeds common journal limits (e.g., 300 words) and asking ChatGPT to rewrite it to a specific target such as 150 words. After pasting the text and asking it to shorten the abstract to 150 words, ChatGPT converts it within seconds. The key requirement is manual verification afterward: the rewritten abstract must be read to confirm valuable information remains and meaning hasn’t shifted.

What steps are suggested for using ChatGPT to write a results discussion section?

The workflow is: run experiments first, then paste the results into ChatGPT and request a detailed “results discussion.” The expected output should include classification and validation accuracy, model accuracy, comparative analysis, and practical details like hardware/software used, training time, number of epochs, number of classes, and performance parameters. Even after ChatGPT drafts the section, the transcript stresses reading and verifying the content before inserting it into the paper.
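The "paste results, ask for a discussion" step can be made repeatable by generating the prompt from a dictionary of experiment details. The field names below mirror the details the transcript expects in the discussion (validation accuracy, epochs, classes, hardware, training time); the values and the prompt wording are invented placeholders, not the transcript's exact prompt:

```python
def build_discussion_prompt(results: dict) -> str:
    """Turn experiment details into a results-discussion request for ChatGPT."""
    lines = [f"- {key}: {value}" for key, value in results.items()]
    return (
        "Write a detailed results discussion for a research paper, "
        "including comparative analysis with existing methods, "
        "based on these experiment details:\n" + "\n".join(lines)
    )

# Invented example values; replace with your real run.
experiment = {
    "validation accuracy": "97.2%",
    "epochs": 50,
    "number of classes": 4,
    "hardware": "NVIDIA RTX 3060",
    "training time": "42 minutes",
}
prompt = build_discussion_prompt(experiment)
print(prompt)
```

Keeping the metrics in one structure also makes it easier to cross-check the generated discussion against the actual numbers during the verification pass.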

Why does the transcript caution against citing ChatGPT-suggested related papers?

When asked for “suggest papers related to” a topic, the transcript claims ChatGPT may return titles that do not exist in Google Scholar. It describes checking results in Google Scholar and finding mismatches or missing papers. The guidance is to verify any citation candidate before referencing it in a literature review or bibliography.

How should dataset links provided by ChatGPT be treated?

Dataset links must be validated. The transcript describes asking ChatGPT to provide the link to a plant leaf dataset and then checking the multiple links it returns. Some links are described as incorrect or fake, while at least one is confirmed as working and actually provides the dataset. The takeaway: test links before relying on them for experiments.
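A first-pass link check can also be scripted before a human confirms the page actually hosts the dataset. The sketch below treats any 2xx/3xx status as "worth a manual look" and everything else as broken; the URL in the comment is a placeholder rather than a real dataset link, and the network call is left uncalled so the snippet runs offline:

```python
import urllib.error
import urllib.request

def status_ok(status: int) -> bool:
    """Treat 2xx/3xx responses as candidates worth opening by hand."""
    return 200 <= status < 400

def check_link(url: str, timeout: float = 10.0) -> bool:
    """HEAD-request a ChatGPT-suggested dataset URL; False on any error (network call)."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return status_ok(resp.status)
    except (urllib.error.URLError, ValueError):
        return False

# check_link("https://example.com/plant-leaf-dataset")  # placeholder URL; run when online
print(status_ok(200), status_ok(404))  # True False
```

A reachable URL is still only half the check: the page may exist yet host the wrong dataset, so opening the confirmed links by hand remains necessary.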

What paper structure does ChatGPT provide for outlines, according to the transcript?

It lists typical research-paper headings: Introduction (background, motivation, problem statement, research objectives), Literature Review (existing methods/techniques, advantages, and gaps), Methodology (data used and step-by-step method), Results and Analysis (visualize results, compare with existing methods), and Discussion/Conclusion-style closing. The outline is presented as detailed enough to guide what to write under each heading.

What future directions and application ideas does the transcript suggest ChatGPT can generate?

It gives examples of extending work beyond prediction—such as interpretability and visualizations of features and reasons, investigating the potential of deep learning models for disease detection, and building a mobile application to capture and analyze apple leaf images for identification and recommendation. It also mentions possible robotics applications and broader high-level deployment ideas.

Review Questions

  1. When compressing an abstract to a journal’s word limit, what verification step does the transcript insist on before pasting the text into the final paper?
  2. What specific signs in the transcript indicate that ChatGPT-generated citations or dataset links may be unreliable?
  3. Which research-paper sections does the transcript say ChatGPT can help draft, and what kinds of details should appear in each (e.g., metrics in results discussion)?

Key Points

  1. Use ChatGPT to rewrite long abstracts into journal-specific word counts (including 150-word targets), then manually verify meaning and missing details.

  2. Generate multiple title options from a draft topic, then iterate until a title matches the paper’s focus and tone.

  3. Paste experiment results into ChatGPT and request a detailed results discussion that includes metrics (validation accuracy, epochs, classes) and practical experiment details (hardware/software, training time).

  4. Treat related-paper suggestions as untrusted until verified in Google Scholar; avoid citing papers that cannot be found or authenticated.

  5. Validate dataset links returned by ChatGPT by checking whether they actually provide the claimed dataset before using them in experiments.

  6. Use ChatGPT-generated outlines to structure writing across introduction, literature review, methodology, results/analysis, and discussion—but still fill in and verify content with domain knowledge.

  7. ChatGPT can help brainstorm future directions and applications (including mobile tools), but the researcher must critically assess feasibility and correctness.

Highlights

Abstract compression to strict limits (like 150 words) can happen in seconds, but the rewritten text must be read to ensure meaning and key details survive.
Results discussion can be drafted from raw experiment outputs, with expected inclusion of validation accuracy, epochs, hardware/software, and comparative analysis—yet human verification remains essential.
Citation and dataset suggestions require verification: the transcript reports fabricated or non-existent papers and incorrect links when checked in Google Scholar or by testing URLs.
A standard paper outline can be generated on demand, mapping each section (introduction, literature review, methodology, results/analysis, discussion) to concrete subtopics to write.

Topics

  • ChatGPT Research Papers
  • Abstract Word Count
  • Results Discussion
  • Title Generation
  • Literature Review Verification