
AI Feedback That’s So Good, It Feels Like Cheating (It’s Not)

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creator by watching, liking, and subscribing.

TL;DR

Thesa 5.0 provides structured pre-submission feedback for academic documents, including purpose, evidence, thesis alignment, and overall assessment.

Briefing

An AI writing assistant called **Thesa 5.0** is positioning itself as an “academic toolkit” for thesis and research writing—giving structured, field-aware feedback plus a built-in research workflow for finding relevant literature, journals, and conferences. The core value is not just critique; it’s a checklist-style assessment that maps a manuscript’s claims, purpose, evidence, and analysis to what reviewers typically expect, helping writers tighten arguments before submission.

After logging in, users upload a manuscript for a **pre-submission assessment**. The system supports multiple document types—scientific papers, thesis/essay, grant proposals, reports, and bibliographies—and it adjusts feedback based on the selected category and study field (the transcript uses chemistry as an example). Uploading a full thesis is limited by a **10 MB cap**, so the workflow currently favors chapters, drafts, or sections rather than entire dissertations.

Once the upload finishes, the interface presents feedback in a side panel with expandable sections and dropdowns such as **general feedback**, **what works well**, **what can be improved**, and an **overall assessment**. The feedback is framed as detailed and specific enough to resemble a supervisor’s comments—down to pointing out missing elements reviewers look for. For instance, the system evaluates whether the paper clearly states its **purpose**, whether it **summarizes key findings**, and whether it **evaluates advantages and limitations**. In the example provided, the assessment flags that limitations and areas for improvement were not sufficiently addressed, and it notes that alternative interpretations or counterarguments weren’t extensively considered.

The assistant also runs argument-quality checks tied to thesis structure. In the transcript, the “thesis statement” section includes tests such as whether the thesis statement can be challenged (marked green when the statement is supported by presented facts and figures) and whether the essay supports the thesis statement (marked green when alignment is strong). Another feedback area targets **evidence quality**, including cases where evidence is considered missing or where claims are supported but analysis is thin—summarized as issues like weak analysis or insufficient interpretation.

Beyond critique, Thesa 5.0 adds a research layer. In a **PDF view**, it surfaces **resources and collections**, including similar publications and recommended reading. It also provides journal and conference suggestions with metrics like **match factor** and **impact factor** (example values shown include match factor **86%**, impact factor **5.53**, and another listed impact factor **31.8**). A generated section highlights the **research question**, plus **research opportunities** that suggest logical next steps based on gaps detected in the submission. For open-access papers, the tool offers quick scanning via an **abstract digest** with keywords and main claims, and it supports downloading or sharing links for citation workflows.

Overall, the transcript frames Thesa 5.0 as a “researcher-made” system that combines reviewer-style argument checking with literature discovery and publication planning—aimed at improving acceptance odds and reducing the back-and-forth that happens when supervisors or reviewers want clearer purpose, stronger evidence, and more complete analysis before peer review.

Cornell Notes

Thesa 5.0 is an AI pre-submission assessment tool for academic writing that combines reviewer-style feedback with research discovery. Users upload a scientific paper, thesis/essay, grant proposal, or bibliography (with a current 10 MB upload limit), then select document type, study field, and draft stage to get field-aware critique. Feedback is organized into sections like purpose, evidence, thesis statement checks, and overall assessment, including flags for missing limitations, weak analysis, and insufficient consideration of alternatives. The platform also recommends related publications, journals, and conferences using metrics such as match factor and impact factor, and it generates a research question plus research opportunities to guide next steps. This matters because it targets the specific argument components reviewers often look for before peer review.

What does Thesa 5.0 do after a user uploads an academic document, and what inputs shape the feedback?

After login, users upload a manuscript for a pre-submission assessment. The system asks whether the user is the author, then collects the document type (e.g., scientific paper, thesis/essay, grant proposal, report, annotated bibliography) and the study field (the transcript uses chemistry). It also includes a draft-stage selector (submitted, outline, early draft, advanced draft, final draft), with the note that draft-stage input may not strongly affect responses yet. Once submitted, the tool runs an AI processing step and returns structured feedback in expandable sections.

How does the feedback evaluate argument quality—especially purpose, thesis alignment, and evidence?

Feedback is presented with dropdown sections such as general feedback, what works well, what can be improved, and overall assessment. The transcript’s example highlights checks for whether the paper clearly states its purpose and whether it summarizes key findings. It also flags missing components like evaluating advantages and limitations and not sufficiently addressing alternative interpretations or counterarguments. For thesis/essay work, it includes thesis statement checks (e.g., whether the thesis can be challenged and whether the essay supports the thesis statement), and evidence checks that can mark alignment as strong or flag missing evidence.

What limitations does the transcript mention about uploading full theses?

The tool currently enforces a **10 MB** upload limit. The transcript notes that a full thesis was around **41–42 MB**, so the system couldn't accept the entire document at once. The workaround is uploading a chapter or smaller sections rather than the full thesis.

What research-planning features go beyond writing feedback?

In addition to critique, Thesa 5.0 provides research workflow tools: recommended resources and similar publications, journal suggestions with metrics like **match factor** and **impact factor**, and conference recommendations. It also generates a research question and a “research opportunities” section that proposes logical next steps based on detected gaps. For open-access papers, it offers an abstract digest with keywords and main claims to help decide whether to read or discard.

How does the platform help with publication targeting (journals and conferences)?

The transcript describes a journal tab that lists potential journals and includes a **match factor** (example shown: **86%**) alongside **impact factor** (example shown: **5.53** and another listed value **31.8**). A conferences tab similarly lists relevant conferences. The intent is to help writers find appropriate venues for submission and presentation by matching the submission’s topic to similar work.

Review Questions

  1. Which specific feedback categories (e.g., purpose, evidence, thesis statement checks) are used to diagnose weaknesses in an academic argument?
  2. How do upload constraints (like the 10 MB limit) change what parts of a thesis a writer should submit for assessment?
  3. What publication-planning metrics and discovery tools does Thesa 5.0 provide to support journal and conference selection?

Key Points

  1. Thesa 5.0 provides structured pre-submission feedback for academic documents, including purpose, evidence, thesis alignment, and overall assessment.
  2. Document type and study field selections are used to tailor feedback to the kind of writing and subject area.
  3. A current 10 MB upload limit means full theses may not upload; chapters or smaller sections are the practical approach.
  4. Feedback can flag missing limitations, insufficient analysis, and lack of engagement with alternative interpretations or counterarguments.
  5. The platform adds research discovery features: recommended publications, journal suggestions with match factor and impact factor, and conference recommendations.
  6. Generated outputs include a research question and “research opportunities” that suggest next steps based on gaps detected in the submission.
  7. Open-access paper support includes an abstract digest (keywords and main claims) plus download/share options for citation workflows.

Highlights

Thesa 5.0’s feedback is organized like a reviewer checklist, flagging missing limitations and insufficient consideration of alternatives—not just grammar or clarity.
The tool combines critique with publication planning by listing journals and conferences alongside metrics such as match factor and impact factor.
A research workflow layer surfaces related resources and open-access paper digests to help decide what to read next.
