
Complete AI Guide for Researchers | How to use AI ethically and responsibly | Dr. Mushtaq Bilal

SciSpace · 6 min read

Based on SciSpace's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use AI to outsource research labor (searching, organizing, summarizing) while keeping intellectual responsibility and verification with the researcher.

Briefing

Researchers used to find literature by walking through physical library catalogs—index cards in drawers, then shelves, then journals and books—an approach that demanded time and specialized training. Over roughly the last two decades, that workflow has shifted toward digital search tools like Google Scholar and PubMed. The key distinction is that these tools outsource “labor” (finding and organizing information) while leaving “thinking” with the researcher: the goal is to generate a research idea first, then use digital databases to locate supporting evidence.

That labor-versus-thinking split becomes the ethical framework for using generative AI. The first rule is to outsource only labor to AI, not cognition. AI systems can assist reasoning, but they don’t “think” independently; they can’t replace the researcher’s responsibility for the direction, judgment, and validity of the work. The second rule is to use AI for structure rather than content. Generative models produce text by prediction, and predicted text tends to be familiar rather than novel. More importantly, AI-generated content can introduce hallucinations—confidently fabricated claims or citations—and the burden of correctness still lands on the person submitting the work.

The third rule is to treat AI as a research assistant, not a research supervisor. A useful mental model is to imagine each AI app as a capable undergraduate helper: it can summarize, draft, and organize tasks, but it still requires guidance and verification. That means not copying and pasting AI output into a manuscript without reading and checking sources, because the author’s name is on the final product.

The fourth, and most urgent, rule is not to over-rely on AI and to keep common sense front and center. Several real-world examples illustrate what happens when people stop verifying. The lawyer Steven Schwartz used ChatGPT for legal research; it fabricated case citations, which he then submitted in court, leading to sanctions and a $5,000 fine. In academia, Professor Jared Mumm reportedly treated ChatGPT as if it were a plagiarism detector, running student work through it and failing an entire class—despite the fact that AI detectors cannot reliably confirm whether text was AI-generated. Another scandal involved researchers publishing an AI-generated image with explicit anatomy, triggering widespread backlash and media coverage.

After laying out those guardrails, the session shifts to practical use of SciSpace (SciSpace agent) for researchers. The standout capability is “semantic search”: unlike Google Scholar or PubMed keyword matching, SciSpace can interpret the meaning of a research question and then search across academic sources. In demonstrations, it builds a research plan, retrieves and ranks relevant papers (including from Google Scholar, PubMed, and preprint archives), extracts insights from abstracts, and provides narrative answers with real citations that can be checked. Features like “deep search” add clarifying questions to tailor the literature review, while “high quality review” synthesizes top papers into structured summaries and tables.

SciSpace also supports workflows that typically consume hours: adding columns such as methods or limitations to a paper table, saving notes to an AI notebook, chatting with uploaded PDFs, explaining tables and math, and even generating podcasts for commuting. The overall message is clear: AI can accelerate literature discovery and organization, but ethical research still depends on human judgment, source verification, and original thinking.

Cornell Notes

The session frames ethical AI use for researchers around a labor-versus-thinking distinction: use AI to handle information work (searching, organizing, summarizing) while keeping intellectual responsibility with the researcher. It warns against generating “content” with AI because models predict text and can produce hallucinations, including fake citations; AI output must be checked before publication. AI should function as a research assistant—guided, verified, and never blindly copied into manuscripts. SciSpace is presented as a practical tool for semantic search and literature synthesis, producing narrative answers with real paper citations, plus table-based reviews, deep searches with clarifying questions, and note-taking features. The emphasis throughout is to avoid overreliance and to use common sense, supported by real cases where failures led to sanctions, academic harm, and public scandals.

Why does the “labor vs. thinking” distinction matter for ethical AI use in research?

The distinction separates tasks that can be outsourced from tasks that require human judgment. Digital tools and AI can outsource labor such as locating relevant papers, extracting summaries, and organizing results. But the researcher must still supply the research idea, decide what counts as evidence, and verify claims. The talk uses library catalogs as an analogy: researchers used catalogs to find materials, but they still had to think and interpret. Applied to AI, the same principle means AI can help with searching and drafting, while the researcher remains responsible for direction, correctness, and originality.

What’s wrong with using AI to generate the “content” of a paper, and why is structure safer?

Generative models are predictive and trained on human-written text, so they tend to produce predictable writing rather than new insights. More critically, AI can hallucinate—fabricating details or citations. In academic publishing, editors expect a specific research-paper structure and originality; AI-generated content can introduce errors that the author cannot deflect as “ChatGPT’s mistake.” Using AI for structure (outlines, formatting, organizing thoughts) is framed as acceptable because it supports the researcher’s own writing and verification rather than replacing the evidence-based contribution.

How should researchers treat AI—as a supervisor or an assistant?

AI should be treated as a research assistant. The talk proposes a mental model: each AI app resembles a smart, eager undergraduate helper. It can summarize and draft tasks, but it cannot think independently and still needs guidance. A key behavior is verification: even if an assistant summarizes papers, the researcher should not copy and paste summaries into a manuscript without reading and confirming sources, since the author’s name carries responsibility.

Why is “don’t over-rely” paired with “use common sense,” and what real incidents support that?

Overreliance leads people to skip verification and trust AI outputs blindly. The talk cites Steven Schwartz, who used ChatGPT to research a legal brief; it generated fake case citations, which he submitted in court, resulting in sanctions and a $5,000 fine. It also cites Professor Jared Mumm, who allegedly used ChatGPT as if it were a plagiarism checker and failed students; the talk stresses that AI detectors can’t reliably confirm whether text was AI-generated. A third example describes a scandal involving AI-generated explicit imagery in a published paper, showing how unchecked outputs can trigger major professional and public consequences.

What does “semantic search” mean in the context of SciSpace, and how is it different from Google Scholar or PubMed?

Semantic search means the system understands the meaning of a question rather than only matching keywords. Google Scholar and PubMed largely highlight related terms (e.g., “social media” and “mental health”) but don’t fully interpret the intent behind a query. SciSpace is presented as combining a generative model with academic databases so it can interpret the question, search relevant literature, and synthesize results into narrative answers with citations to published papers.
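The distinction can be illustrated with a conceptual sketch (this is not SciSpace’s actual implementation, and the queries, documents, and embedding vectors below are invented for illustration): keyword search scores literal word overlap, while semantic search compares embedding vectors that place related concepts near each other, so a paper can rank highly even when it shares no words with the query.

```python
import math

def keyword_match(query: str, doc: str) -> float:
    """Fraction of query words that literally appear in the document."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical toy "embeddings": hand-made vectors that place
# related concepts close together in the same direction.
query_text = "social media use and adolescent wellbeing"
query_vec = [0.9, 0.8, 0.1]
papers = {
    "instagram effects on teen mental health": [0.85, 0.82, 0.15],
    "graph databases for logistics": [0.10, 0.05, 0.95],
}

for title, vec in papers.items():
    print(f"{title!r}: keyword={keyword_match(query_text, title):.2f}, "
          f"semantic={cosine(query_vec, vec):.2f}")
```

The first paper shares no literal words with the query (keyword score 0.0) yet is nearly identical in meaning (cosine similarity close to 1.0), which is exactly the gap semantic search is meant to close; real systems obtain the vectors from a trained language model rather than by hand.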

What workflow features does SciSpace offer beyond answering a question?

SciSpace is presented as supporting multiple research stages: (1) narrative answers with cited sources, (2) “deep search” that asks clarifying questions to tailor a review, (3) “high quality review” that synthesizes top papers into structured summaries and tables, (4) table customization via added columns like methods and limitations, (5) saving notes to an AI notebook (“save to notebook”), (6) “chat with PDF” for interacting with uploaded papers, and (7) generating a podcast version for listening during commuting. It also mentions an integration with a university library via a Chrome extension and LibKey to fetch PDFs when available.

Review Questions

  1. How does the labor-versus-thinking framework change what you should verify when using AI for literature review?
  2. What are hallucinations, and why do they create ethical and professional risk even when AI output looks persuasive?
  3. In what ways does semantic search improve on keyword-based search for forming and refining research questions?

Key Points

  1. Use AI to outsource research labor (searching, organizing, summarizing) while keeping intellectual responsibility and verification with the researcher.

  2. Avoid generating full “content” with AI for submission; use AI for structure and formatting while ensuring claims are evidence-based.

  3. Treat AI as a research assistant that needs guidance and checking, not as a supervisor that can replace reading and judgment.

  4. Don’t over-rely on AI or AI detectors; common sense and source verification are essential, supported by real cases of fabricated citations and misuse of detectors.

  5. Prefer tools that support semantic search and provide traceable citations to published papers so claims can be checked.

  6. Use table-based literature review features (e.g., methods/limitations columns) and note-taking workflows to reduce time spent on manual synthesis.

  7. Integrate AI-assisted workflows with institutional access (e.g., library/PDF retrieval) to streamline reading without copying unverified summaries into manuscripts.

Highlights

The ethical core is outsourcing labor, not thinking: AI can help find and organize evidence, but the researcher must still generate the idea and verify the output.
AI-generated “content” is risky because predictive text can hallucinate, including fake citations—responsibility remains with the author.
SciSpace is positioned as enabling semantic search that interprets question meaning and returns narrative syntheses tied to real, checkable references.
Deep search and high-quality review features aim to tailor and synthesize literature into structured outputs, including customizable tables and citations.
Common sense is treated as non-negotiable, with examples ranging from fake legal citations to misuse of AI detectors in grading.
