
These Unknown AI Tools Give PhD Students an Unfair Advantage (Before Everyone Else Finds Them)

Andy Stapleton · 5 min read

Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Five early-stage AI tools target specific PhD pain points: Sitely verifies pasted references and returns statuses like “verified” or “needs review,” plus an authenticity score; academate simulates committee-style Q&A for defense rehearsal; Thesisit.ai drafts thesis components; libright.app handles manuscript editing and submission prep; and Ascent compares and synthesizes literature.

Briefing

A new wave of early-stage AI tools is aiming to give PhD students an edge not by repeating the same “chat with papers” playbook, but by tackling specific pain points—verifying sources, rehearsing defenses, drafting thesis components, and streamlining academic writing and review. The common thread is practical workflow help, paired with the expectation that researchers will test, critique, and improve these tools while they’re still rough around the edges.

Sitely is positioned as a source-finding and reference-verification assistant built to reduce one of large language models’ biggest headaches: hallucinated citations. Instead of only generating search results, it offers a “verify references” workflow where users paste literature content and receive an authenticity status such as “verified” or “needs review.” It can also check multiple references in a batch, though formatting quirks show up when references span multiple lines—requiring manual cleanup before the tool can properly validate them. Once references are in the right format, Sitely can cross-reference and produce an authenticity score, helping researchers quickly separate real citations from partial or nonexistent ones.
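The manual cleanup mentioned above is a common pre-processing step for any batch reference checker. A minimal sketch (in Python, not Sitely's actual code) of re-joining references that have wrapped across lines, assuming each new entry starts with a marker like `[1]` or `1.`:

```python
import re

def merge_wrapped_references(raw: str) -> list[str]:
    """Join reference entries that wrap across multiple lines.

    Assumes each reference begins with a marker like "[1]" or "1." —
    lines without that marker are treated as continuations and
    appended to the previous entry.
    """
    marker = re.compile(r"^\s*(\[\d+\]|\d+\.)\s+")
    refs: list[str] = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        if marker.match(line) or not refs:
            refs.append(line)          # start of a new reference
        else:
            refs[-1] += " " + line     # continuation of previous one
    return refs

sample = """[1] Smith, J. (2021). Large language models and
citation hallucination. Journal of AI, 12(3).
[2] Doe, A. (2020). Reference verification at scale."""

print(merge_wrapped_references(sample))
```

With each reference restored to a single line, a verifier can then look each one up and assign a status such as “verified” or “needs review.”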

For oral presentations and committee Q&A, academate focuses on rehearsal. Users upload a thesis or paper (the creator's thesis upload failed despite being within the stated size limits, but a paper worked) and then run a timed question-and-answer practice session. The system generates committee-style prompts, records the user's responses, and provides feedback at the end. The goal is to reduce anxiety by letting students practice the hardest questions in a low-stakes setting.

Thesisit.ai shifts toward thesis production support, offering modules like topic generation, drafting an introduction, fetching and annotating literature, and drafting sections such as theory and data. The tool also supports inserting references by uploading materials, and it warns users about credit limits—though the transcript notes it may not stop entirely when credits run out. Still, there’s a clear caution: the tool is framed as assistance for structure and writer’s block, not a substitute for critical thinking, and the transcript raises the ethical concern that a “single button” thesis generator could enable cheating.

For writing workflows, libright.app emphasizes end-to-end editing and submission prep. After uploading a manuscript, it can reformat for journal submission, generate reporting-guidelines checklists, and support peer review. It also integrates with Word, offering actions like shorten, paraphrase, polish, and cover-letter generation, plus peer-review and guideline tools directly inside the document.

Finally, Ascent targets literature synthesis and comparison. It supports asking research questions, uploading papers, and selecting fields explicitly. Features include “compare papers,” citation generation, and a “paper reader” experience that pulls relevant open-access papers (sourced from Semantic Scholar in the transcript). A synthesis step analyzes and synthesizes multiple papers in the background, aiming to produce a consolidated view—though the transcript notes the synthesis status can take time and may appear stalled during early testing.

Across all tools, the message is straightforward: these platforms are still early, may be imperfect, and often gate usage behind credits or payments. But early adoption—testing workflows, reporting bugs, and sharing feedback—is presented as the fastest path to building better research software before the mainstream ecosystem catches up.

Cornell Notes

Early-stage AI tools for academia are being pitched as targeted upgrades over generic chatbots—especially for verifying citations, rehearsing defenses, drafting thesis sections, and improving writing workflows. Sitely focuses on reducing hallucinated references by verifying whether pasted literature actually exists and returning statuses like “verified” or “needs review,” along with authenticity scores. academate offers committee-style Q&A practice by recording responses and generating feedback. Thesisit.ai provides modular thesis support (topic ideas, introductions, literature fetching/annotation, and drafting sections), while libright.app and Ascent streamline manuscript preparation and multi-paper synthesis. The practical takeaway: test these tools now, report issues, and treat them as assistance that still requires human judgment.

How does Sitely try to prevent hallucinated citations from derailing research?

Sitely includes a “verify references” workflow where users paste literature content and receive an authenticity outcome such as “verified” or “needs review.” It can process multiple references, then cross-reference and generate an authenticity score. The transcript also flags a practical limitation: if references are split across multiple lines, the tool may format them incorrectly, forcing manual selection and cleanup before verification works reliably.

What does academate add for PhD students who fear sounding unprepared in committee Q&A?

academate is designed for rehearsal. Users upload a thesis or paper (a thesis upload failed due to size guidelines in the transcript, but a paper upload worked) and then start a practice session that simulates a committee-style question-and-answer flow. The system records responses, shows prompts, and produces a report with feedback at the end—aimed at reducing anxiety by practicing under realistic pressure.

What kinds of thesis tasks does Thesisit.ai claim to support, and what ethical concern is raised?

Thesisit.ai is presented as a suite of thesis modules: topic generation, drafting an introduction, fetching and annotating literature, and drafting sections like theory and data. It can also insert references by letting users upload materials. The transcript raises a concern that a sufficiently capable “instant thesis” workflow could enable cheating if students rely on it to produce full theses without original thinking.

How does libright.app position itself compared with typical AI writing assistants?

libright.app emphasizes academic workflow integration. After uploading a manuscript, it can reformat for journal submission, generate reporting-guidelines checklists, and support peer review. It also offers a Word-based experience where users can shorten, paraphrase, and polish selected text, plus generate cover letters and peer-review/reporting-guideline actions within the document.

What distinctive capabilities does Ascent highlight for literature work?

Ascent supports research-question input plus paper upload, and it lets users select fields explicitly rather than relying only on inferred topics. It includes “compare papers,” citation generation, and a paper reader experience that pulls relevant open-access papers from Semantic Scholar. A synthesis feature analyzes and synthesizes multiple papers in the background (the transcript notes it can take 40–60 seconds, and early testing may look incomplete if it’s still running).
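The paper-pulling step presumably works against something like the public Semantic Scholar Graph API, which does exist; whether Ascent actually calls it this way is an assumption. A hedged sketch of building such a search request, filtering fields down to what an open-access reader would need:

```python
from urllib.parse import urlencode

# Public Semantic Scholar Graph API search endpoint (a real service;
# using it here is an illustration, not Ascent's actual integration).
BASE = "https://api.semanticscholar.org/graph/v1/paper/search"

def build_search_url(query: str, limit: int = 10) -> str:
    """Build a paper-search URL requesting open-access PDF links."""
    params = {
        "query": query,
        "limit": limit,
        "fields": "title,year,abstract,openAccessPdf",
    }
    return f"{BASE}?{urlencode(params)}"

url = build_search_url("citation hallucination in LLMs", limit=5)
print(url)
# A real client would then GET this URL and keep only results whose
# "openAccessPdf" field is non-null before running synthesis.
```

Requesting `openAccessPdf` up front mirrors the behavior described in the transcript, where only relevant open-access papers are pulled into the reader.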

Review Questions

  1. Which specific workflow in Sitely is meant to validate whether a citation exists, and what output statuses does it provide?
  2. What rehearsal loop does academate create for committee-style questioning, and how does it deliver feedback?
  3. Across Thesisit.ai, libright.app, and Ascent, which tasks are most directly about drafting, which are about editing/submission prep, and which are about multi-paper synthesis?

Key Points

  1. Sitely targets citation reliability by verifying pasted references and returning statuses like “verified” or “needs review,” plus an authenticity score.
  2. Reference verification can be sensitive to formatting; multi-line references may require manual cleanup before checks work smoothly.
  3. academate provides committee-style Q&A rehearsal by recording responses and generating a feedback report after sessions.
  4. Thesisit.ai offers modular thesis drafting support (topic, introduction, literature annotation, theory/data sections) but raises concerns about over-reliance and academic integrity.
  5. libright.app focuses on manuscript-to-submission workflows, including journal reformatting, reporting-guidelines checklists, and peer-review assistance inside Word.
  6. Ascent emphasizes literature comparison and synthesis, including “compare papers” and background synthesis across multiple open-access papers from Semantic Scholar.
  7. Early adoption and feedback are framed as essential because these tools are still early-stage and may be imperfect or credit-gated.

Highlights

Sitely’s “verify references” workflow is built to confirm whether citations actually exist, aiming to cut through the hallucination problem common to large language models.
academate turns thesis or paper content into a simulated committee Q&A rehearsal, recording responses and producing a feedback report.
Thesisit.ai bundles thesis components—from topic generation to drafting theory and data—while explicitly raising the risk of turning assistance into cheating.
libright.app’s Word integration targets practical submission tasks like journal reformatting and peer-review support, not just generic rewriting.
Ascent combines paper comparison and multi-paper synthesis, pulling open-access papers from Semantic Scholar and attempting to consolidate them into a single view.
