
The Ultimate AI Toolkit for 2026 PhD Success — Free Tools + Pro Options

Dr Rizwana Mustafa·
5 min read

Based on Dr Rizwana Mustafa's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use a four-part prompt structure—role, task, constraints, and formatting—to get more reliable academic writing outputs.

Briefing

AI tools can accelerate every stage of a PhD writing workflow—from turning rough ideas into structured drafts to interpreting figures and polishing language—but success depends on how researchers prompt the systems and how they verify outputs. The central takeaway is that large language models and related AI services work best when instructions are precise, role-based, and constrained (length, format, and content requirements), and when researchers treat AI as support for structuring and drafting rather than a source of final truth.

A key example is prompt design. Instead of sending vague requests, the workflow described uses a four-part structure: first, define the AI’s role (e.g., “act as a chemistry teacher”); second, assign the task (what the AI should do, such as explaining chemical bonding); third, specify constraints like word count, bullet vs. paragraph format, and other limits; and fourth, add any additional formatting requirements for the output. The guidance stresses that clearer, more detailed prompts generally produce more accurate and usable responses. It also draws a distinction between tools that rely on free-form prompting for brainstorming and information gathering versus tools with built-in templates that may not respond well to the same prompting approach.
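The four-part structure described above can be sketched as a simple prompt builder. This is a minimal illustration, not a tool from the video; the role, task, constraint, and formatting strings are example wording only.

```python
def build_prompt(role: str, task: str, constraints: str, formatting: str) -> str:
    """Assemble a four-part academic prompt: role, task, constraints, formatting."""
    return "\n".join([
        f"Role: {role}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Formatting: {formatting}",
    ])

# Example in the spirit of the video's chemistry-teacher prompt:
prompt = build_prompt(
    role="Act as a chemistry teacher.",
    task="Explain chemical bonding to a first-year undergraduate.",
    constraints="Keep the answer under 100 words.",
    formatting="Use bullet points, one concept per bullet.",
)
print(prompt)
```

Keeping the four parts as separate fields makes it easy to tighten one element (say, the word limit) without rewriting the whole prompt.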

The transcript also frames AI as a productivity engine rather than an authority. Researchers are urged to use AI for assistance—especially for organizing ideas, overcoming writing blocks, and improving efficiency—while cross-checking anything that could be wrong. Ethical use is treated as a non-negotiable part of the workflow: human review must happen at each step, and claims should not be presented without verification to avoid misinformation.

From there, the guidance shifts into a step-by-step toolkit for research writing. For literature review and keyword discovery, recommended free options include R Discovery and Semantic Scholar, both described as offering large paper databases and AI-integrated summaries, with features like notifications and mobile access. For paid literature review, SciSpace and Consensus are suggested, particularly their "deep research" sections, which generate comprehensive, downloadable documents with clickable links.

For moving from research to structure, Google AI Studio is positioned as the preferred option for building a professional outline: provide the topic and specify target word counts per section or heading. After outlining, the transcript points to tools for expansion, writing, and editing, naming DeepSeek and NotebookLM as professional options that can do substantial work with references. On paid writing and expansion tools, the guidance becomes more cautious: it recommends Jenni AI, SciSpace, and "Visa" as paid choices, but warns that scite.ai's outline expansion and reference accuracy have declined, citing issues like incorrect DOI links and mismatches between cited information and the source.

Finally, the transcript addresses plagiarism and long-term usage. It suggests limited-access or credit-based approaches for trying tools (e.g., “H by bari ste.ai”), but recommends “safe right” for longer-term use. It also notes that discount codes and yearly plans may be available via links in the description, and encourages researchers to screenshot slides and explore related content for more detailed walkthroughs.

Cornell Notes

The workflow emphasizes that AI can speed up PhD research writing, but only when researchers prompt carefully and verify everything. A four-part prompt structure—role, task, constraints, and formatting—helps large language models produce more usable outputs, especially for academic writing. AI should be used for structuring ideas and drafting, not for unverified advice; human checks are required to prevent misinformation. For literature review, free keyword and paper discovery options include Semantic Scholar and R Discovery, while paid deep literature review options include SciSpace and Consensus. For outlining and drafting, Google AI Studio is recommended for outline building, with additional tools suggested for expansion and editing, alongside warnings about reference accuracy in some paid tools.

Why does prompt structure matter for academic outputs, and what four-part format is recommended?

Prompt quality is treated as the difference between generic text and a draft that matches academic needs. The recommended structure starts with a role (e.g., “act as a chemistry teacher”), then assigns the task (what the AI should do, such as explain chemical bonding), followed by constraints (e.g., a word limit such as “under 100 words,” or bullet vs. paragraph format), and finally formatting requirements (how the output should be organized, including section/heading structure). The guidance also notes that this approach is best for tools used for brainstorming and information gathering, not necessarily for tools with built-in templates.

How should researchers use AI without over-trusting it?

AI is positioned as an assistant for structuring ideas, drafting, and reducing writing blocks—not as an authority. Because misinformation is possible, researchers must cross-check AI outputs before presenting them to others. Ethical use is framed as a step-by-step workflow where human input is required at each stage, and no step should proceed without verification.

What tools are suggested for literature review and keyword discovery, and what features are highlighted?

For free literature review and keyword discovery, Semantic Scholar and R Discovery are recommended, described as having large paper databases and AI-integrated summaries for detailed insights. R Discovery is also described as offering a mobile app with notifications and email updates about new literature in a researcher’s field. For paid deep literature review, SciSpace and Consensus are recommended, especially their “deep research” sections, which produce comprehensive, downloadable documents with clickable links.

How does the outline-building step fit into the AI workflow?

Outline building is treated as a critical bridge between research and writing. Google AI Studio is recommended for creating a professional outline by providing the research topic and specifying target word counts by section or heading. The tool is then used to generate a structured outline that allocates how much information each section should contain, based on the researcher’s instructions.
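An outline request of this kind can be sketched as below. The topic, section names, and word counts are placeholders for illustration, not values from the video, and the resulting text would be pasted into Google AI Studio (or any chat-based model) as a prompt.

```python
# Hypothetical per-section word targets for a paper outline.
sections = {
    "Introduction": 500,
    "Literature Review": 1500,
    "Methodology": 1200,
    "Results and Discussion": 2000,
    "Conclusion": 400,
}

topic = "Photocatalytic degradation of organic pollutants"

# Build the outline-request prompt: topic first, then one word
# target per section, so the model knows how deep to go in each part.
lines = [f"Create a professional outline for a paper on: {topic}."]
lines.append("Target word counts per section:")
for name, words in sections.items():
    lines.append(f"- {name}: about {words} words")
outline_prompt = "\n".join(lines)
print(outline_prompt)
```

Specifying a target per heading, rather than one total count, is what lets the tool allocate how much information each section should contain.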

Which tools are recommended for expansion and editing, and what accuracy concerns are raised?

For expansion, writing, and editing, the transcript names DeepSeek and NotebookLM as professional tools that can do substantial work with references. For paid writing and expansion, it mentions Jenni AI, SciSpace, and “Visa,” but it warns that scite.ai’s outline expansion and reference accuracy have declined—citing incorrect DOI links and mismatches between the information returned and the cited sources. The takeaway is to be critical and verify references before submission.

What guidance is given about plagiarism risk and long-term tool use?

AI plagiarism risk is flagged as something researchers must manage, including through human review of AI-generated text. For trying tools, limited access or daily free credits are suggested (example mentioned: “H by bari ste.ai”). For long-term use, “safe right” is recommended, and the transcript notes that discount codes and yearly plans may be available via links in the description.

Review Questions

  1. What four elements should be included in an academic prompt to improve the quality of outputs, and how do constraints like word count affect results?
  2. Which steps in the workflow require mandatory human verification, and why is that emphasized?
  3. How do the recommended tools differ across literature review, outline building, and drafting/editing, and what reference-accuracy warning is given about scite.ai?

Key Points

  1. Use a four-part prompt structure—role, task, constraints, and formatting—to get more reliable academic writing outputs.

  2. Treat AI as support for structuring and drafting, not as a source of final truth; cross-check claims before using them in submissions.

  3. Build a step-by-step research workflow that specifies where AI can help and where human review is required at every stage.

  4. For literature review, start with free keyword and paper discovery tools like Semantic Scholar and R Discovery, then consider paid deep review tools like SciSpace and Consensus.

  5. Use Google AI Studio to generate outlines by providing section/heading word targets and required content details.

  6. Be skeptical about reference accuracy in some tools; verify DOI links and ensure cited information matches the source before submission.

  7. Manage plagiarism risk with human review and careful handling of AI-generated text, especially when producing final drafts.

Highlights

Prompting works best when instructions are role-based and constrained—word count, bullet vs. paragraph format, and explicit output structure.
AI should assist with idea organization and drafting, but every step needs human verification to prevent misinformation.
Semantic Scholar and R Discovery are positioned as strong free starting points for literature review and keyword discovery.
Google AI Studio is recommended specifically for outline building, with section-wise word targets to control depth.
scite.ai is flagged for declining reference accuracy, including incorrect DOI links and mismatched cited information.
