DeepSeek: Research Topic Selection to Systematic Literature Review || Ethical use of DeepSeek

5 min read

Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

Use DeepSeek to generate candidate research topics and titles from a starting question and keywords, then validate relevance against real literature.

Briefing

A practical workflow for using DeepSeek’s free v3 version to jump-start a systematic literature review (SLR), without outsourcing the academic thinking, centers on two moves: turning a research question into a usable search strategy, then structuring the thesis-style output (research gap, objectives, methodology, expected outcomes). The core value is speed with guardrails: generate candidate topics and titles, produce Boolean-style search queries, and draft an outline that can later be validated and expanded through real database searches.

The process starts with topic selection in a specific research area—here, “image classification and detection” for “MIA detection” (as referenced in the transcript), including constraints like efficiency and interpretability. DeepSeek is prompted with a question and keywords, then returns multiple candidate research topics (the transcript mentions 10 topics) and a suggested direction that accounts for current trends such as transfer learning, GNNs, and federated learning. One example topic that emerges is an “enhancing interpretability and efficiency” approach for MIA detection using a hybrid CNN-attention framework designed for “wearable devices,” with both software and hardware feasibility considered.

Next comes the SLR mechanics: the workflow emphasizes building a search query that can be reused across databases like Scopus, Web of Science, and Dimensions. DeepSeek generates Boolean search strings (including AND/OR logic and optional specificity such as document type filters and related constraints). The transcript demonstrates using Scopus as a test case, where the query returns hundreds of documents (e.g., 326), indicating the query is broad enough to retrieve relevant literature. The key point is not the exact count, but the iterative loop: adjust the query, export results, and then proceed with bibliometric and screening steps.
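The Boolean string itself is just structured text, so it can also be assembled programmatically once DeepSeek has suggested keyword groups. The sketch below is illustrative: the `TITLE-ABS-KEY` field code follows Scopus convention, but the keyword groups and the exclusion term are assumptions, not taken from the transcript.

```python
# Sketch: build a Scopus-style Boolean search string from keyword groups.
# Keywords within a group are OR'd; groups are AND'd together; optional
# exclusion terms are appended with AND NOT. The example keywords below
# are placeholders, not the video's actual query.

def build_query(groups, exclude=None):
    """Combine keyword groups into a single Scopus-style search string."""
    parts = ["(" + " OR ".join(f'"{kw}"' for kw in group) + ")" for group in groups]
    query = " AND ".join(parts)
    if exclude:
        query += " AND NOT (" + " OR ".join(f'"{kw}"' for kw in exclude) + ")"
    return f"TITLE-ABS-KEY({query})"

groups = [
    ["image classification", "image detection"],
    ["CNN", "attention mechanism"],
    ["interpretability", "efficiency"],
]
print(build_query(groups, exclude=["survey"]))
```

Keeping the query as generated text like this makes the iterative loop cheap: adjust a keyword group, regenerate, and re-run the search in Scopus or Web of Science.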

From there, the workflow shifts to SLR-to-proposal translation. DeepSeek can help draft an SLR-ready outline and a table-style synthesis plan: research question, methodology, limitations, databases/datasets used in core papers, and a structured comparison across references. The transcript also suggests a “50 references” check as a sanity test for feasibility—whether a systematic review can be completed with a manageable corpus—while acknowledging that real SLRs may require more careful subject-area and time-window decisions.

Finally, the ethical-use section stresses that AI-generated content must be proofread, properly cited, and declared in acknowledgements if required. The guidance warns against blindly copying AI output into a thesis proposal; instead, users should understand the outline, brainstorm, validate claims via manual database work, and ensure citations follow academic standards to avoid retraction or ethical issues. The overall takeaway is a repeatable pipeline: generate candidates → craft search queries → run SLR searches → build a defensible proposal structure → verify and cite everything responsibly.

Cornell Notes

DeepSeek’s free v3 can be used as a structured assistant for research planning: it helps convert a starting research question into candidate topics, a thesis-style title, and—most importantly—Boolean search queries suitable for SLRs. The workflow then moves from query generation to execution in databases like Scopus (and potentially Web of Science/Dimensions), where results are exported and screened. After literature retrieval, the same outline logic supports building an SLR-to-proposal bridge: research gap, 3–5 objectives, methodology, limitations, expected outcomes, and a table format for comparing core papers. The transcript repeatedly emphasizes ethical use: AI output must be proofread, citations must be accurate, and any tool use should be declared per institutional rules.

How does the workflow turn a vague research idea into an SLR-ready search strategy?

It starts by prompting DeepSeek with a research question plus keywords, then using the returned “relevant keywords” to form Boolean logic (AND/OR, and sometimes NOT) for a search query. The transcript notes optional specificity such as document type filters and related constraints. That query is then tested in a database (Scopus in the demonstration), exported, and iteratively refined based on whether the results are relevant and manageable.

Why test the search query in Scopus (or another database) before committing to an SLR?

Because query quality determines whether the literature set is usable. In the transcript’s example, the Scopus query produced hundreds of documents (326), suggesting the query is broad enough to retrieve relevant work. Testing early helps catch overly narrow or irrelevant queries and prevents wasting time on an SLR plan built on a poor search string.

What does “50 references” mean in the proposed SLR workflow?

It functions as a feasibility sanity check: after a topic and query are chosen, running an SLR-style collection with around 50 seminal references can indicate whether the topic is tractable and whether enough core papers exist to support a systematic review. The transcript also cautions that real SLRs may require more careful selection (subject area, time window, and screening), so 50 is a planning heuristic rather than a strict rule.
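As a rough sketch of that sanity check, one could filter an exported result list by a time window and count what survives. The year window, the 50-reference threshold, and the fake export data below are illustrative assumptions, not values from the transcript.

```python
# Sketch: feasibility check on an exported result set. A list of
# (title, year) tuples stands in for a real Scopus export; the
# 2018-2024 window and the 50-reference threshold are assumptions.

def feasible(records, year_min=2018, year_max=2024, threshold=50):
    """Return (is_feasible, count) after filtering records to the year window."""
    in_window = [r for r in records if year_min <= r[1] <= year_max]
    return len(in_window) >= threshold, len(in_window)

# Fake export: 100 records with years cycling across 2015-2024.
records = [(f"Paper {i}", 2015 + i % 10) for i in range(100)]
ok, count = feasible(records)
print(ok, count)
```

If the filtered count falls well short of the threshold, that is a signal to widen the query or the time window before committing to the review.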

How does the workflow connect SLR outputs to a research proposal structure?

After literature retrieval, the outline is reorganized into proposal components: research gap, objectives (kept to a number that can realistically be achieved), methodology, limitations, expected outcomes, and future scope. The transcript suggests arranging this into a table format so each objective can be defended with corresponding methods, datasets, and findings from the core papers.
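One way to keep that table defensible is to treat it as a simple record structure, one row per core paper. The column names below follow the fields the transcript suggests (research question, methodology, limitations, datasets); the paper entry is a placeholder, not a real reference.

```python
# Sketch of the table-style synthesis plan: one row per core paper,
# exported as CSV for screening in a spreadsheet. All entries here
# are placeholders, not actual literature.
import csv, io

COLUMNS = ["reference", "research_question", "methodology", "limitations", "datasets"]

rows = [
    {"reference": "Paper A (placeholder)",
     "research_question": "How to improve detection efficiency?",
     "methodology": "Hybrid CNN-attention model",
     "limitations": "Small dataset",
     "datasets": "Public benchmark X (placeholder)"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Keeping every objective mapped to rows in a table like this makes it straightforward to show, during proposal review, which core papers support each objective.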

What ethical safeguards are recommended when using AI-generated research content?

The transcript stresses three safeguards: (1) proofread and verify AI output rather than copying blindly; (2) cite sources correctly—AI-generated references must be checked and formatted according to academic standards; and (3) declare tool usage in acknowledgements if required by the institution. It also warns that unverified AI content can lead to ethical problems or even retraction risk.

What is the practical role of “objectives” in the proposal-building step?

Objectives act as the backbone for a defensible plan. The transcript recommends selecting 3–5 objectives (not too many) so each can be achieved by the end of the degree. It also highlights that having clear objectives makes it easier to justify the research gap, explain the chosen methodology, and map expected outcomes to specific parts of the literature review.

Review Questions

  1. When building an SLR search query, what kinds of constraints (e.g., Boolean operators, filters) should be tested early in a database like Scopus?
  2. How can a research gap and objectives be structured so they remain defensible during proposal review?
  3. What steps should be taken to ensure AI-assisted writing remains ethically compliant and academically reliable?

Key Points

  1. Use DeepSeek to generate candidate research topics and titles from a starting question and keywords, then validate relevance against real literature.

  2. Convert the research question into a Boolean-style SLR search query (AND/OR logic, optional filters) and test it in a database before committing.

  3. Export and screen retrieved results; refine the query iteratively based on relevance and manageability of the document set.

  4. Translate SLR planning into proposal structure by drafting research gap, 3–5 achievable objectives, methodology, limitations, expected outcomes, and future scope.

  5. Use a table-style comparison plan to map each objective to core papers’ methods, datasets, and constraints.

  6. Treat “50 references” as a feasibility check for whether the topic is reviewable, not as a substitute for full SLR rigor.

  7. Follow ethical safeguards: proofread, verify claims, cite sources correctly, and declare AI/tool usage when required.

Highlights

  • The workflow’s centerpiece is turning a research question into a reusable Boolean search query, then validating it by running the query in Scopus and exporting results.
  • A practical “SLR feasibility” sanity check uses about 50 seminal references to see whether the topic can support a systematic review.
  • DeepSeek can help draft an SLR-to-proposal outline (research gap, objectives, methodology, expected outcomes) so the proposal has a clear logic chain.
  • Ethical use requires verification, accurate citation, and acknowledgement/declaration of tool use to reduce retraction and compliance risk.

Mentioned

  • SLR
  • MIA
  • GNN
  • CNN
  • AI
  • DL
  • SVM