DeepSeek: Research Topic Selection to Systematic Literature Review || Ethical use of DeepSeek
Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
A practical workflow for using DeepSeek's free v3 to jump-start a systematic literature review (SLR), without outsourcing the academic thinking, centers on turning a research question into a usable search strategy and then structuring the thesis-style output (research gap, objectives, methodology, expected outcomes). The core value is speed with guardrails: generate candidate topics and titles, produce Boolean-style search queries, and draft an outline that can later be validated and expanded through real database searches.
The process starts with topic selection in a specific research area, here "image classification and detection" for "MIA detection" (as referenced in the transcript), including constraints like efficiency and interpretability. DeepSeek is prompted with a question and keywords, then returns multiple candidate research topics (the transcript mentions 10) and a suggested direction that accounts for current trends such as transfer learning, graph neural networks (GNNs), and federated learning. One example topic that emerges is an "enhancing interpretability and efficiency" approach to MIA detection using a hybrid CNN-attention framework designed for "wearable devices," with both software and hardware feasibility considered.
Next come the SLR mechanics: the workflow emphasizes building a search query that can be reused across databases such as Scopus, Web of Science, and Dimensions. DeepSeek generates Boolean search strings (AND/OR logic plus optional specificity such as document-type filters and related constraints). The transcript demonstrates using Scopus as a test case, where the query returns hundreds of documents (e.g., 326), indicating it is broad enough to retrieve relevant literature. The key point is not the exact count but the iterative loop: adjust the query, export the results, then proceed with bibliometric and screening steps.
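The query-building step can be prototyped in a few lines: OR together synonyms within each concept, then AND the concepts. A minimal Python sketch, where the keyword groups are illustrative placeholders (not the exact query from the video):

```python
# Sketch: assemble a Boolean search string reusable across Scopus,
# Web of Science, etc. Keyword groups below are illustrative assumptions.
concept_groups = [
    ["image classification", "image detection"],
    ["deep learning", "CNN", "attention"],
    ["interpretability", "efficiency"],
]

def build_query(groups):
    """OR synonyms within a group, AND the groups together."""
    clauses = []
    for group in groups:
        quoted = [f'"{term}"' for term in group]
        clauses.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(clauses)

query = build_query(concept_groups)
print(query)
```

Pasting the resulting string into a database's advanced search is the quickest way to see whether the query is too broad or too narrow before committing to an SLR.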
From there, the workflow shifts to SLR-to-proposal translation. DeepSeek can help draft an SLR-ready outline and a table-style synthesis plan: research question, methodology, limitations, databases/datasets used in core papers, and a structured comparison across references. The transcript also suggests a “50 references” check as a sanity test for feasibility—whether a systematic review can be completed with a manageable corpus—while acknowledging that real SLRs may require more careful subject-area and time-window decisions.
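The table-style synthesis plan can be prototyped as a simple CSV, one row per core paper, with columns matching the fields listed above. A minimal sketch; the two paper entries are invented placeholders, not papers from the video:

```python
import csv
import io

# Columns mirror the synthesis plan: question, method, data, limits.
FIELDS = ["reference", "research_question", "methodology",
          "datasets", "limitations"]

# Hypothetical placeholder entries for illustration only.
papers = [
    {"reference": "Paper A (hypothetical)",
     "research_question": "Can attention improve interpretability?",
     "methodology": "hybrid CNN-attention",
     "datasets": "public image benchmark",
     "limitations": "small evaluation set"},
    {"reference": "Paper B (hypothetical)",
     "research_question": "How efficient is on-device inference?",
     "methodology": "pruned CNN",
     "datasets": "wearable sensor data",
     "limitations": "single hardware target"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(papers)
print(buf.getvalue())
```

Keeping the comparison in a structured file (rather than prose) makes it easy to spot gaps, e.g., a methodology no core paper covers.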
Finally, the ethical-use section stresses that AI-generated content must be proofread, properly cited, and declared in acknowledgements if required. The guidance warns against blindly copying AI output into a thesis proposal; instead, users should understand the outline, brainstorm, validate claims via manual database work, and ensure citations follow academic standards to avoid retraction or ethical issues. The overall takeaway is a repeatable pipeline: generate candidates → craft search queries → run SLR searches → build a defensible proposal structure → verify and cite everything responsibly.
Cornell Notes
DeepSeek’s free v3 can be used as a structured assistant for research planning: it helps convert a starting research question into candidate topics, a thesis-style title, and—most importantly—Boolean search queries suitable for SLRs. The workflow then moves from query generation to execution in databases like Scopus (and potentially Web of Science/Dimensions), where results are exported and screened. After literature retrieval, the same outline logic supports building an SLR-to-proposal bridge: research gap, 3–5 objectives, methodology, limitations, expected outcomes, and a table format for comparing core papers. The transcript repeatedly emphasizes ethical use: AI output must be proofread, citations must be accurate, and any tool use should be declared per institutional rules.
How does the workflow turn a vague research idea into an SLR-ready search strategy?
Why test the search query in Scopus (or another database) before committing to an SLR?
What does “50 references” mean in the proposed SLR workflow?
How does the workflow connect SLR outputs to a research proposal structure?
What ethical safeguards are recommended when using AI-generated research content?
What is the practical role of “objectives” in the proposal-building step?
Review Questions
- When building an SLR search query, what kinds of constraints (e.g., Boolean operators, filters) should be tested early in a database like Scopus?
- How can a research gap and objectives be structured so they remain defensible during proposal review?
- What steps should be taken to ensure AI-assisted writing remains ethically compliant and academically reliable?
Key Points
1. Use DeepSeek to generate candidate research topics and titles from a starting question and keywords, then validate relevance against real literature.
2. Convert the research question into a Boolean-style SLR search query (AND/OR logic, optional filters) and test it in a database before committing.
3. Export and screen retrieved results; refine the query iteratively based on relevance and manageability of the document set.
4. Translate SLR planning into proposal structure by drafting the research gap, 3–5 achievable objectives, methodology, limitations, expected outcomes, and future scope.
5. Use a table-style comparison plan to map each objective to core papers' methods, datasets, and constraints.
6. Treat "50 references" as a feasibility check for whether the topic is reviewable, not as a substitute for full SLR rigor.
7. Follow ethical safeguards: proofread, verify claims, cite sources correctly, and declare AI/tool usage when required.
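The export-screen-refine loop and the "50 references" feasibility check can be sketched together. A minimal Python example; the records, keywords, and year cutoff below are made-up stand-ins for a real database export:

```python
# Sketch of the screening loop: keep exported records whose titles
# match the topic keywords, then compare corpus size to the ~50-reference
# feasibility heuristic mentioned in the transcript.
records = [  # hypothetical stand-ins for a Scopus CSV export
    {"title": "Attention-based image classification on wearables", "year": 2023},
    {"title": "Survey of federated learning", "year": 2022},
    {"title": "Efficient CNN inference", "year": 2021},
]

KEYWORDS = ("image classification", "cnn", "attention")

def screen(recs, keywords, since=2020):
    """Keep records in the time window whose title hits any keyword."""
    return [r for r in recs
            if r["year"] >= since
            and any(k in r["title"].lower() for k in keywords)]

kept = screen(records, KEYWORDS)
print(f"{len(kept)} records kept; target corpus is roughly 50 references")
```

In practice the loop runs the other way too: if screening leaves far more than ~50 papers, tighten the query or the time window; if far fewer, broaden it.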