Top 10+ Scopus and SCI Journals of Computer Science || CS Journal Selectors || Hindi || 2023
Based on eSupport for Research's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Briefing
Computer science researchers looking to submit papers quickly are urged to avoid “pay-and-publish” traps by using a structured journal-selection workflow—then cross-checking indexing, relevance, and review timelines before final submission. The core message is practical: don’t rely on claims like “fast publication” or “no need to check,” because some journals may accept papers (including special issues) yet fail to deliver proper indexing or end up outside major databases. Instead, build a tentative title/abstract from the research area, search for similar articles, and use journal-selection tools to shortlist venues that match both topic and publication speed.
The process starts with refining the submission metadata. A researcher is advised to create a title and abstract (even tentatively) based on the known research area, then use those keywords to find journals that already publish similar work. The workflow emphasizes relevance checking through “similar articles” results: if the same or closely related topics appear in a journal’s recent publications, that journal is more likely to be a fit. This matters because submitting to an unsuitable journal often leads to desk rejection, wasting time—especially when the goal is faster turnaround.
Two main selection routes are described. First is a journal-selection search that uses the researcher’s title/abstract and subject area (example: Computer Science, with attention to interdisciplinary overlap such as Biomedical + Computer Science). The results include journal names, relevance ordering, and practical signals like review/decision time indicators. The second route uses a journal-selection platform’s subject-area filtering (again using Computer Science as the example) to narrow down results quickly, then sorting by factors such as turnaround time and indexing status.
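The two routes above boil down to filter-then-rank logic, which can be sketched as a short script. Everything here is an illustrative assumption, not a real tool's API: the journal records, field names (`subjects`, `keywords`, `decision_weeks`), and the scoring rule are made up to show the idea of matching on keywords, filtering by subject area, and sorting by turnaround time.

```python
# Hypothetical sketch of the two shortlisting routes described above:
# keyword matching on title/abstract terms, then subject-area filtering
# and sorting by decision time. All journal data and field names are
# invented for illustration.

def keyword_score(paper_keywords, journal_keywords):
    """Fraction of the paper's keywords that the journal also covers."""
    overlap = set(paper_keywords) & set(journal_keywords)
    return len(overlap) / len(paper_keywords)

def shortlist(journals, paper_keywords, subject, max_decision_weeks):
    """Filter by subject area and speed, then rank by topical relevance."""
    candidates = [
        j for j in journals
        if subject in j["subjects"] and j["decision_weeks"] <= max_decision_weeks
    ]
    # Most relevant first; ties broken by faster decision time.
    return sorted(
        candidates,
        key=lambda j: (-keyword_score(paper_keywords, j["keywords"]),
                       j["decision_weeks"]),
    )

journals = [
    {"name": "Journal A", "subjects": ["Computer Science"],
     "keywords": ["deep learning", "vision"], "decision_weeks": 8},
    {"name": "Journal B", "subjects": ["Computer Science", "Biomedical"],
     "keywords": ["deep learning", "medical imaging"], "decision_weeks": 6},
]

ranked = shortlist(journals, ["deep learning", "medical imaging"],
                   "Computer Science", max_decision_weeks=10)
print([j["name"] for j in ranked])  # ['Journal B', 'Journal A']
```

An interdisciplinary paper (Biomedical + Computer Science, as in the transcript's example) naturally surfaces here: Journal B ranks first because it covers both fields and matches more of the paper's keywords.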
A key decision point is verifying indexing and journal legitimacy. The guidance includes checking whether a journal is indexed in major services (the transcript mentions Scopus and Web of Science as benchmarks) and reviewing the journal’s official website for details such as indexing claims, subject scope, and ranking signals. The workflow also recommends gauging the journal’s track record with similar submissions, using signals such as decision-time windows and competition indicators (e.g., how crowded the journal’s pipeline of recent papers appears).
After shortlisting, the transcript stresses cross-verification before submission. Even if a journal appears in a recommendation list, researchers should confirm on external sources such as Google Scholar and the journal’s own site. The final step is to validate the journal’s current indexing status and performance, then submit only after confirming that the journal matches the paper’s topic, has credible indexing, and offers a realistic review timeline. The overall aim is to reduce the risk of rejection and avoid journals that promise speed or special-issue publication but fail on indexing and credibility.
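The pre-submission cross-verification step can likewise be sketched as a simple checklist function. This is a minimal sketch under stated assumptions: the field names, the idea of recording external verification as booleans, and the thresholds are all hypothetical, standing in for manual checks on the journal's site, Scopus/Web of Science lists, and Google Scholar.

```python
# Hypothetical pre-submission checklist mirroring the steps above:
# confirm credible indexing, topical match, and a realistic review
# timeline before submitting. Field names and checks are illustrative
# assumptions, not a real service's schema.

TRUSTED_INDEXES = {"Scopus", "Web of Science"}

def ready_to_submit(journal):
    """Return (ok, reasons): ok is True only when every check passes."""
    reasons = []
    if not TRUSTED_INDEXES & set(journal.get("claimed_indexes", [])):
        reasons.append("no claimed indexing in a major database")
    if not journal.get("indexing_verified", False):
        reasons.append("indexing claim not cross-checked externally")
    if not journal.get("topic_match", False):
        reasons.append("no similar articles found in recent issues")
    if journal.get("decision_weeks", 0) <= 0:
        reasons.append("no credible review-timeline information")
    return (len(reasons) == 0, reasons)

candidate = {
    "name": "Journal B",
    "claimed_indexes": ["Scopus"],
    "indexing_verified": True,   # e.g., confirmed against an external source list
    "topic_match": True,         # similar articles appear in recent issues
    "decision_weeks": 6,
}
ok, reasons = ready_to_submit(candidate)
print(ok, reasons)  # True []
```

The point of returning reasons rather than a bare boolean echoes the transcript's warning: a journal that fails any one check (for example, "fast publication" claims with no verifiable indexing) should be dropped, and it helps to know which check failed.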
Cornell Notes
The transcript lays out a step-by-step method for choosing Scopus/SCI-relevant computer science journals without falling for “pay-and-publish” traps. It recommends starting with a tentative title and abstract, then searching for similar articles to judge topical fit and reduce desk rejection risk. Two selection routes are used: title/abstract-based search and subject-area filtering (e.g., Computer Science), both with attention to turnaround/decision-time indicators. Before submitting, researchers should verify indexing claims (Scopus/Web of Science benchmarks), check the journal’s official site, and cross-check via Google Scholar. The goal is faster publication with credible indexing and better match quality.
- Why does the transcript emphasize checking indexing and not trusting “fast publication” claims?
- How does creating a tentative title and abstract help with journal selection?
- What are the two main journal-shortlisting approaches described?
- What specific checks are recommended before final submission?
- How does the transcript suggest handling interdisciplinary work (e.g., Biomedical + Computer Science)?
- What is the transcript’s practical strategy for avoiding desk rejection and wasted time?
Review Questions
- What steps in the workflow are meant to reduce the risk of desk rejection, and how do they work?
- Which verification checks should be done even after a journal appears in a recommendation list?
- How does the transcript recommend using title/abstract and subject filters differently to build a shortlist?
Key Points
1. Start with a tentative title and abstract based on the research area so journal search can match on keywords and scope.
2. Shortlist journals by checking “similar articles” to confirm topical fit and reduce desk rejection risk.
3. Use both title/abstract-based search and subject-area filtering (e.g., Computer Science) to narrow options quickly.
4. Verify indexing claims against major benchmarks (Scopus/Web of Science) and confirm details on the journal’s official website.
5. Cross-check shortlisted journals on external sources such as Google Scholar before submitting.
6. Use turnaround/decision-time indicators and competition signals to choose venues aligned with faster review expectations.
7. Avoid relying on marketing promises about speed or special issues; credibility depends on indexing and match quality.