AI Feedback That’s So Good, It Feels Like Cheating (It’s Not)
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.
Briefing
An AI writing assistant called **Thesa 5.0** is positioning itself as an “academic toolkit” for thesis and research writing—giving structured, field-aware feedback plus a built-in research workflow for finding relevant literature, journals, and conferences. The core value is not just critique; it’s a checklist-style assessment that maps a manuscript’s claims, purpose, evidence, and analysis to what reviewers typically expect, helping writers tighten arguments before submission.
After logging in, users upload a manuscript for a **pre-submission assessment**. The system supports multiple document types—scientific papers, theses/essays, grant proposals, reports, and bibliographies—and it adjusts feedback based on the selected category and study field (the transcript uses chemistry as an example). Uploads are capped at **10 MB**, so a full thesis may not fit; the workflow currently favors chapters, drafts, or sections rather than entire dissertations.
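Thesa's API is not public, so the names below (`MAX_UPLOAD_BYTES`, `check_upload`) are illustrative assumptions; this is only a minimal sketch of the pre-upload decision the transcript describes—fit under the 10 MB cap, or split the document into chapters or sections first.

```python
# Hypothetical pre-upload check based on the 10 MB cap mentioned in the video.
MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # 10 MB

def check_upload(file_bytes: bytes) -> str:
    """Suggest whether to upload whole or split, based on the size cap."""
    size = len(file_bytes)
    if size <= MAX_UPLOAD_BYTES:
        return "ok: upload the whole document"
    return (f"too large ({size / 1_048_576:.1f} MB): "
            "split into chapters or sections and assess them separately")

print(check_upload(b"x" * 1024))  # a small draft fits under the cap
```

In practice this means a writer working on a full dissertation would run each chapter through the assessment separately rather than the entire file.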
Once the upload finishes, the interface presents feedback in a side panel with expandable sections and dropdowns such as **general feedback**, **what works well**, **what can be improved**, and an **overall assessment**. The feedback is framed as detailed and specific enough to resemble a supervisor’s comments—down to pointing out missing elements reviewers look for. For instance, the system evaluates whether the paper clearly states its **purpose**, whether it **summarizes key findings**, and whether it **evaluates advantages and limitations**. In the example provided, the assessment flags that limitations and areas for improvement were not sufficiently addressed, and it notes that alternative interpretations or counterarguments weren’t extensively considered.
The assistant also runs argument-quality checks tied to thesis structure. In the transcript, the “thesis statement” section includes tests such as whether the thesis statement can be challenged (marked green when the statement is supported by presented facts and figures) and whether the essay supports the thesis statement (marked green when alignment is strong). Another feedback area targets **evidence quality**, including cases where evidence is considered missing or where claims are supported but analysis is thin—summarized as issues like weak analysis or insufficient interpretation.
Beyond critique, Thesa 5.0 adds a research layer. In a **PDF view**, it surfaces **resources and collections**, including similar publications and recommended reading. It also provides journal and conference suggestions with metrics like **match factor** and **impact factor** (example values shown include match factor **86%**, impact factor **5.53**, and another listed impact factor **31.8**). A generated section highlights the **research question**, plus **research opportunities** that suggest logical next steps based on gaps detected in the submission. For open-access papers, the tool offers quick scanning via an **abstract digest** with keywords and main claims, and it supports downloading or sharing links for citation workflows.
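The transcript shows journal suggestions annotated with a match factor and an impact factor, with the highest-match venues listed first. How the tool computes these is not disclosed, so the entries and the ranking-by-match logic below are purely illustrative (the 86%, 5.53, and 31.8 values echo the examples shown in the video, but the journal names are placeholders):

```python
# Hypothetical journal-suggestion entries; only the metric names
# (match factor, impact factor) come from the video.
journals = [
    {"name": "Journal A", "match_factor": 0.86, "impact_factor": 5.53},
    {"name": "Journal B", "match_factor": 0.72, "impact_factor": 31.8},
]

# Rank by topical fit first, as the tool's suggestion list appears to do.
ranked = sorted(journals, key=lambda j: j["match_factor"], reverse=True)
for j in ranked:
    print(f'{j["name"]}: match {j["match_factor"]:.0%}, IF {j["impact_factor"]}')
```

The point of the two metrics is that they pull in different directions: a high-impact journal can still be a poor topical fit, so sorting by match factor before considering impact factor mirrors how the suggestions are presented.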
Overall, the transcript frames Thesa 5.0 as a “researcher-made” system that combines reviewer-style argument checking with literature discovery and publication planning—aimed at improving acceptance odds and reducing the back-and-forth that happens when supervisors or reviewers want clearer purpose, stronger evidence, and more complete analysis before peer review.
Cornell Notes
Thesa 5.0 is an AI pre-submission assessment tool for academic writing that combines reviewer-style feedback with research discovery. Users upload a scientific paper, thesis/essay, grant proposal, or bibliography (with a current 10 MB upload limit), then select document type, study field, and draft stage to get field-aware critique. Feedback is organized into sections like purpose, evidence, thesis statement checks, and overall assessment, including flags for missing limitations, weak analysis, and insufficient consideration of alternatives. The platform also recommends related publications, journals, and conferences using metrics such as match factor and impact factor, and it generates a research question plus research opportunities to guide next steps. This matters because it targets the specific argument components reviewers often look for before peer review.
- What does Thesa 5.0 do after a user uploads an academic document, and what inputs shape the feedback?
- How does the feedback evaluate argument quality—especially purpose, thesis alignment, and evidence?
- What limitations does the transcript mention about uploading full theses?
- What research-planning features go beyond writing feedback?
- How does the platform help with publication targeting (journals and conferences)?
Review Questions
- Which specific feedback categories (e.g., purpose, evidence, thesis statement checks) are used to diagnose weaknesses in an academic argument?
- How do upload constraints (like the 10 MB limit) change what parts of a thesis a writer should submit for assessment?
- What publication-planning metrics and discovery tools does Thesa 5.0 provide to support journal and conference selection?
Key Points
1. Thesa 5.0 provides structured pre-submission feedback for academic documents, including purpose, evidence, thesis alignment, and overall assessment.
2. Document type and study field selections tailor feedback to the kind of writing and subject area.
3. A current 10 MB upload limit means full theses may not upload; chapters or smaller sections are the practical approach.
4. Feedback can flag missing limitations, insufficient analysis, and lack of engagement with alternative interpretations or counterarguments.
5. The platform adds research discovery features: recommended publications, journal suggestions with match factor and impact factor, and conference recommendations.
6. Generated outputs include a research question and “research opportunities” that suggest next steps based on gaps detected in the submission.
7. Open-access paper support includes an abstract digest (keywords and main claims) plus download/share options for citation workflows.