Complete AI Guide for Researchers | How to use AI ethically and responsibly | Dr. Mushtaq Bilal
Based on SciSpace's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Use AI to outsource research labor (searching, organizing, summarizing) while keeping intellectual responsibility and verification with the researcher.
Briefing
Researchers used to find literature by walking through physical library catalogs—index cards in drawers, then shelves, then journals and books—an approach that demanded time and specialized training. Over roughly the last two decades, that workflow has shifted toward digital search tools like Google Scholar and PubMed. The key distinction is that these tools outsource “labor” (finding and organizing information) while leaving “thinking” with the researcher: the goal is to generate a research idea first, then use digital databases to locate supporting evidence.
That labor-versus-thinking split becomes the ethical framework for using generative AI. The first rule is to outsource only labor to AI, not cognition. AI systems can assist reasoning, but they don’t “think” independently; they can’t replace the researcher’s responsibility for the direction, judgment, and validity of the work. The second rule is to use AI for structure rather than content. Generative models produce text by prediction, and predicted text tends to be familiar rather than novel. More importantly, AI-generated content can introduce hallucinations—confidently fabricated claims or citations—and the burden of correctness still lands on the person submitting the work.
The third rule is to treat AI as a research assistant, not a research supervisor. A useful mental model is to imagine each AI app as a capable undergraduate helper: it can summarize, draft, and organize, but it still requires guidance and verification. That means never copying and pasting AI output into a manuscript without reading it and checking its sources, because the author’s name is on the final product.
The fourth, and most urgent, rule is not to over-rely on AI and to keep common sense front and center. Several real-world examples illustrate what happens when people stop verifying. Attorney Steven Schwartz used ChatGPT for legal research, and the model fabricated case citations that he then submitted in a court filing, leading to sanctions and a $5,000 fine. In academia, Professor Jared Mumm reportedly treated ChatGPT as if it were a plagiarism detector, running student work through it and failing an entire class, even though AI detectors cannot reliably confirm whether text was AI-generated. Another scandal involved researchers publishing an AI-generated figure with grossly distorted anatomy in a peer-reviewed journal, triggering widespread backlash and media coverage.
After laying out those guardrails, the session shifts to practical use of SciSpace (the SciSpace agent) for researchers. The standout capability is “semantic search”: unlike the keyword matching of Google Scholar or PubMed, SciSpace interprets the meaning of a research question and then searches across academic sources. In demonstrations, it builds a research plan, retrieves and ranks relevant papers (drawing on Google Scholar, PubMed, and preprint archives), extracts insights from abstracts, and provides narrative answers with real citations that can be checked. Features like “deep search” add clarifying questions to tailor the literature review, while “high quality review” synthesizes top papers into structured summaries and tables.
SciSpace also supports workflows that typically consume hours: adding columns such as methods or limitations to a paper table, saving notes to an AI notebook, chatting with uploaded PDFs, explaining tables and equations, and even generating podcasts to listen to while commuting. The overall message is clear: AI can accelerate literature discovery and organization, but ethical research still depends on human judgment, source verification, and original thinking.
Cornell Notes
The session frames ethical AI use for researchers around a labor-versus-thinking distinction: use AI to handle information work (searching, organizing, summarizing) while keeping intellectual responsibility with the researcher. It warns against generating “content” with AI because models predict text and can produce hallucinations, including fake citations; AI output must be checked before publication. AI should function as a research assistant—guided, verified, and never blindly copied into manuscripts. SciSpace is presented as a practical tool for semantic search and literature synthesis, producing narrative answers with real paper citations, plus table-based reviews, deep searches with clarifying questions, and note-taking features. The emphasis throughout is to avoid overreliance and to use common sense, supported by real cases where failures led to sanctions, academic harm, and public scandals.
Why does the “labor vs. thinking” distinction matter for ethical AI use in research?
What’s wrong with using AI to generate the “content” of a paper, and why is structure safer?
How should researchers treat AI—as a supervisor or an assistant?
Why is “don’t over-rely” paired with “use common sense,” and what real incidents support that?
What does “semantic search” mean in the context of SciSpace, and how is it different from Google Scholar or PubMed?
What workflow features does SciSpace offer beyond answering a question?
Review Questions
- How does the labor-versus-thinking framework change what you should verify when using AI for literature review?
- What are hallucinations, and why do they create ethical and professional risk even when AI output looks persuasive?
- In what ways does semantic search improve on keyword-based search for forming and refining research questions?
Key Points
1. Use AI to outsource research labor (searching, organizing, summarizing) while keeping intellectual responsibility and verification with the researcher.
2. Avoid generating full “content” with AI for submission; use AI for structure and formatting while ensuring claims are evidence-based.
3. Treat AI as a research assistant that needs guidance and checking, not as a supervisor that can replace reading and judgment.
4. Don’t over-rely on AI or AI detectors; common sense and source verification are essential, supported by real cases of fabricated citations and misuse of detectors.
5. Prefer tools that support semantic search and provide traceable citations to published papers so claims can be checked.
6. Use table-based literature review features (e.g., methods/limitations columns) and note-taking workflows to reduce time spent on manual synthesis.
7. Integrate AI-assisted workflows with institutional access (e.g., library/PDF retrieval) to streamline reading without copying unverified summaries into manuscripts.