
The MOST INTELLIGENT AI feedback tool for academic writing!

5 min read

Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Upload a document and answer author/document/field questions so theify can apply criteria suited to the writing context.

Briefing

An AI academic writing coach called theify is positioning itself as a “supervisor-style” feedback system rather than a tool that writes papers for users. After uploading a document, it generates structured, section-by-section critique—covering thesis statements, rationales, evidence, and interpretation—along with concrete recommendations, suggested topics for revision, and even leads on future research and publication venues. The pitch matters because academic writing often stalls at the same bottlenecks: clarifying the central argument, passing the “so what” and “how and why” tests, strengthening evidence, and connecting claims back to existing literature.

The workflow starts with uploading a document and answering setup questions such as whether the user is the author, what type of document it is, whether it has been submitted, and the field of study. Those inputs appear to tailor the criteria the system applies. For long works such as a PhD thesis, the tool reportedly enforces a word limit, so the thesis may need to be split into multiple documents. Once the analysis runs, feedback populates across writing components, including a dedicated thesis-statement section that identifies the central argument and scores it. In the example provided, the thesis statement received an "excellent" assessment, with the system explaining why, while also treating critique as more than a rubber stamp.

A key selling point is the tone of the feedback: it’s framed as candid and objective rather than “yes-man” style. The creator contrasts it with ChatGPT, describing a tendency for some AI writing assistants to flatter the user. In the theify output, the critique includes specific gaps. In the evidence and quality-of-evidence sections, the system reportedly marked parts as only partially met, citing issues such as reliance on a limited set of key studies, insufficient engagement with opposing evidence or perspectives, and weaker analysis—such as restating claims instead of offering deeper interpretation.

Beyond diagnosis, the tool provides an actionable improvement layer. Each section includes recommendations, and a feedback summary consolidates what works well and what needs strengthening. The example recommendations include adding more detail on methodological limitations and tightening the link between findings and existing literature. It also generates “suggested topics,” pointing to what could be added to the discussion section—an especially challenging task when a paper is already accepted.

Theify's usefulness extends beyond revision. It offers "opportunities" for future research, with examples tied to English medium instruction (EMI), such as investigating how participation in EMI programs affects graduates' career trajectories and job prospects across local and international markets. It also surfaces related resources: publications and journals, plus a suitability-style match percentage for where the work might fit. The user notes that even if some journal matches don't pan out, the list reduces the time spent hunting for appropriate outlets.

Finally, the tool is tested on a quickly generated grant-style proposal (produced via ChatGPT). That draft fares worse, particularly on rationale criteria like the “so what” test and the “how and why” test, reinforcing the message that theify is meant to evaluate and guide improvement rather than simply validate output. The overall takeaway is that theify aims to deliver ongoing, structured academic feedback—potentially functioning like continuous supervision—while also helping users plan next steps for research and publication.

Cornell Notes

Theify is presented as an AI academic writing coach that gives structured, section-by-section feedback after a user uploads a document. It scores and critiques core components such as the thesis statement and rationale, then flags weaknesses in evidence, interpretation, and engagement with opposing perspectives. Unlike “yes-man” style assistants, it provides candid improvement recommendations, including methodological limitations and stronger links between findings and existing literature. The tool also generates suggested topics for revisions, future research opportunities, and lists of publications and journals with suitability-style match percentages. That combination is positioned as useful both for early drafts and for polishing work that is already close to acceptance.

How does theify tailor its feedback after a document is uploaded?

It begins with an upload and a short set of questions, including whether the user is the author, what type of document it is, whether it has been submitted, and the field of study. Those inputs are used to apply different criteria and expectations. After analysis, the system populates feedback section by section, including a thesis-statement area that identifies the central argument and assigns a score with an explanation.

What does the “so what” and “how and why” testing refer to in the feedback?

In the rationale portion, the tool checks whether the writing does more than describe a problem—it must justify why the study is needed. The “so what” test is treated as central to the rationale, and the “how and why” test evaluates whether the argument explains both the problem and the necessity of the work. In the example, a published article’s thesis rationale passes well, while a quickly generated proposal fails multiple rationale tests.

What kinds of weaknesses does theify flag in evidence and analysis?

The system can mark evidence and quality as only partially met. In the example, it criticized insufficient engagement with opposing evidence or perspectives that could challenge the thesis, overreliance on a limited set of key studies, and weak analysis characterized by restating claims rather than offering deeper interpretation. It also notes where credibility would improve by engaging counterpoints.

What does the tool provide besides critique—how does it help users revise?

Each section includes recommendations, and there’s also a consolidated feedback summary listing what works well and what needs improvement. Suggested improvements in the example include adding a more detailed discussion of methodological limitations and strengthening the connection between findings and existing literature. It also generates “suggested topics” for what could be added to the discussion section.

How does theify support next steps like future research and publication planning?

It generates “opportunities” for future research, with examples such as investigating how English medium instruction (EMI) participation influences graduates’ career trajectories and job prospects in local and international markets. It also provides related resources: publications and journals. For publication planning, it lists journals with a suitability-style match percentage (e.g., a reported 92% match with a slightly lower figure elsewhere), helping users shortlist venues to explore and submit to.

Review Questions

  1. What setup inputs does theify ask for, and how might those inputs change the criteria applied to feedback?
  2. Which specific rationale tests (e.g., “so what,” “how and why”) did the example proposal fail, and what does that imply about the quality of its argument?
  3. List three categories of critique theify provides (e.g., thesis statement, evidence, interpretation) and describe one concrete weakness mentioned for each in the examples.

Key Points

  1. Upload a document and answer author/document/field questions so theify can apply criteria suited to the writing context.

  2. Expect section-by-section scoring and critique, including thesis statement and rationale checks tied to "so what" and "how and why" expectations.

  3. Use the recommendations and feedback summary to revise specific weaknesses, such as methodological limitations and weaker links between findings and literature.

  4. Leverage generated "suggested topics" and "opportunities" to plan additions to the discussion and future research directions.

  5. Use the publications and journals lists, along with match percentages, to shortlist where a manuscript might fit and reduce time spent scouting outlets.

  6. For long works like a PhD thesis, plan to split the thesis into multiple documents to meet word limits.

Highlights

Theify is framed as an academic writing coach that delivers structured, candid feedback rather than writing on the user’s behalf.
Feedback targets core academic criteria—thesis rationale, evidence quality, and interpretation—flagging issues like missing opposing perspectives and shallow analysis.
Beyond critique, it generates revision ideas, future research opportunities, and journal shortlists with suitability-style match percentages.
A quickly generated proposal performed poorly on rationale tests, reinforcing that the tool evaluates quality rather than simply validating drafts.

Topics

Mentioned

  • EMI