How To Write An A+ Essay Using AI in 3 Simple Steps
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A+ essays become far more achievable when writers treat a grading rubric as a checklist—and then use a large language model to repeatedly score and improve drafts against that rubric. Instead of writing and hoping the essay “feels” strong, the method centers on feeding the rubric into ChatGPT in a usable format, marking the draft, and then iterating on the exact weak spots the rubric flags. The payoff is practical: faster editing, more targeted revisions, and fewer missed requirements that cost points.
The process starts with understanding what rubrics actually demand. Rubrics spell out expectations for introductions, conclusions, main points, organization, citation/works cited, and even mechanics like sentence structure and punctuation. Many students write essays without revisiting those criteria during refinement. The approach fixes that by turning the rubric into something a large language model can reliably process.
Rubrics often arrive as tables with columns, which don’t work well inside ChatGPT. The transcript recommends converting the rubric into a simple list format so the model can read every requirement without losing detail. One option is ChatGPT’s “Advanced Data Analysis” feature to upload a PDF and reformat it, but that can truncate or shorten rubric sentences—an outcome the method treats as risky because rubric wording often carries the real scoring power. The preferred workaround is manual: copy and paste the rubric into a text editor (like Notepad), restructure it into a list, and preserve the full wording and grade equivalents. Once done, that list can be reused for future essays.
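For anyone who wants to script the conversion instead of doing it by hand, the flattening step can be sketched as below. The criteria and wording here are hypothetical placeholders, not the rubric from the video; the point is that every cell’s full text and grade equivalent survive into the list.

```python
# Sketch: flatten a table-style rubric into a plain list that preserves
# full wording. Each row is (criterion, requirement, grade equivalent);
# these example rows are invented for illustration.

def rubric_table_to_list(rows):
    """Turn rubric table rows into one plain-text list line per requirement."""
    lines = []
    for criterion, requirement, grade in rows:
        lines.append(f"- {criterion}: {requirement} (grade equivalent: {grade})")
    return "\n".join(lines)

rubric_rows = [
    ("Introduction", "States a clear, arguable thesis", "A"),
    ("Citations", "Every claim is cited and a Works Cited page is included", "A"),
]

print(rubric_table_to_list(rubric_rows))
```

The resulting text can be saved once and reused for every future essay that shares the rubric.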
With the rubric list ready, the next step is to paste it into ChatGPT and instruct it to “read this rubric” and then “mark this essay using the rubric.” The workflow is easier when split into steps—first load the rubric, then paste the essay—so the model can score accurately. In the example walkthrough, ChatGPT assigns scores for sections like the introduction, conclusion, and main points, and also provides specific feedback when the draft falls short. One highlighted issue: the essay acknowledges an opposing view but doesn’t provide a sufficiently comprehensive refutation, which the rubric treats as a main-point weakness.
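The two-step workflow above can also be scripted. The sketch below only builds the prompt sequence; the role/content message format follows the common chat-API convention and is an assumption, while the prompt wording mirrors the briefing (“read this rubric”, “mark this essay using the rubric”).

```python
# Sketch: the stepwise prompt sequence as chat messages — rubric first,
# essay second — so the model loads the criteria before scoring.
# The dict-based message format is an assumption, not the video's setup.

def build_marking_messages(rubric_list: str, essay: str):
    return [
        {"role": "user", "content": f"Read this rubric:\n{rubric_list}"},
        {"role": "user", "content": f"Mark this essay using the rubric:\n{essay}"},
    ]

messages = build_marking_messages(
    "- Introduction: states a clear, arguable thesis (A)",
    "My draft essay text...",
)
```

Splitting the rubric and essay into separate messages keeps either one from being truncated or skimmed when both are long.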
After scoring, the method goes beyond critique by generating revision material aligned to the rubric. Prompts can ask for sentence starters to refute evidence already mentioned, or example sentences that strengthen rebuttals. The model can then suggest multiple ways to bolster the essay—such as questioning research methodology or addressing incomplete evidence—turning revision from a vague “make it better” task into concrete, rubric-aligned edits.
The same loop applies to weaker drafts: ChatGPT flags missing citations and the absence of a Works Cited page, then recommends other AI tools (the video mentions “Jenny AI” and “site”) for finding references. Finally, the workflow becomes iterative: ask for improvements to the introduction, then the conclusion, then other sections until the model’s rubric-based evaluation reaches an A+.
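The iterate-until-A+ loop can be summarized in a few lines. In this sketch, `mark_essay` is a hypothetical stand-in for a ChatGPT marking call that returns a grade and the weakest section, and `revise_section` stands in for folding the model’s suggested rewrite back into the draft; neither is a real API.

```python
# Sketch: revise the weakest rubric section each round until the
# rubric-based grade reaches A+ (or a round limit is hit).
# `mark_essay` and `revise_section` are hypothetical stand-ins for
# model calls, passed in so the loop itself stays testable.

def iterate_until_a_plus(essay, mark_essay, revise_section, max_rounds=10):
    grade = None
    for _ in range(max_rounds):
        grade, weakest_section = mark_essay(essay)
        if grade == "A+":
            return essay, grade
        essay = revise_section(essay, weakest_section)
    return essay, grade
```

The round limit matters in practice: if the draft plateaus below A+, the remaining gaps usually need human judgment rather than another automated pass.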
In short, the core insight is that AI helps most when it’s used as a rubric-driven grader and revision partner—so every edit maps directly to the criteria that determine the grade.
Cornell Notes
The key to earning A+ grades with AI is to use a rubric as a strict checklist and run every draft through a large language model against that rubric. Rubrics must be converted into a list format so ChatGPT can read all requirements without truncation, and the full wording matters because it carries scoring weight. After ChatGPT marks the essay, it provides targeted feedback—such as whether the opposing view is acknowledged but not properly refuted, or whether sources and a Works Cited page are missing. The workflow then iterates: ask for improved introduction or conclusion text, generate rebuttal sentence starters, and keep revising until the rubric-based score reaches A+.
Why does the rubric matter more than “writing a strong essay” in general terms?
What’s the best way to get a rubric into ChatGPT so it doesn’t lose scoring detail?
How does the rubric-driven workflow work after the rubric is formatted?
What kinds of revision prompts turn rubric feedback into usable writing?
How are missing citations and Works Cited handled in the workflow?
How does the method reach an A+ rather than stopping at one round of feedback?
Review Questions
- How would you convert a table-style rubric into a format that a large language model can score without losing important wording?
- In a rubric-based critique, what’s the difference between acknowledging an opposing view and providing a comprehensive refutation, and how would you prompt ChatGPT to fix that?
- If an essay scores poorly for citations, what steps does the workflow recommend taking next to meet the rubric’s Works Cited requirement?
Key Points
1. Treat the rubric as the primary checklist and revisit it during refinement, not just after writing.
2. Convert rubric tables into a list format so ChatGPT can read every requirement without truncation.
3. Prefer manual rubric formatting when automated extraction shortens or paraphrases rubric sentences.
4. Run a stepwise prompt: load the rubric first, then paste the essay, then ask ChatGPT to mark using the rubric.
5. Use rubric-specific feedback to generate concrete revision text, such as rebuttal sentence starters and example sentences.
6. Iterate section-by-section (introduction, conclusion, main points) until the rubric-based evaluation reaches A+.
7. When citations are missing, use AI-assisted reference finding to add sources and a Works Cited page that match rubric expectations.