Best journal for research paper? | The EASY way to decide
Based on Andy Stapleton's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.
Build a broad journal wish list using reference patterns plus input from co-authors and colleagues, then rank it by impact factor for a career-aligned strategy.
Briefing
Choosing the right journal for a research paper is less about finding a “perfect” outlet and more about building a practical submission strategy that matches how journals actually decide what to publish—and how careers are measured. The core approach is to start with a broad list of plausible journals, then narrow it using impact factor, fit with the journal’s scope, indexing, and realistic submission logistics. The payoff is higher visibility, better career metrics (like the h-index), and fewer avoidable rejections caused by mismatched expectations or formatting requirements.
The process begins with assembling a “wish list” of journals. That list should include both the high-impact targets people dream about and the more realistic options that frequently publish similar work. Colleagues and co-authors can add valuable perspective, since different researchers often know different journals’ preferences. Once the list is built, the next step is to rank journals by impact factor—an imperfect metric, but one that still drives the system researchers must operate in. Even if impact factor feels overinflated, it affects how institutions evaluate output, so it remains a key sorting tool.
After ranking, the strategy shifts from numbers to compatibility. For each top choice, the researcher should check the journal’s submission guidelines and scope (often via the “about” or “submissions” pages) to confirm the journal actually accepts the type of study and results being produced. This pre-check can prevent time-consuming formatting and submission errors. Another gate is indexing: if the journal isn’t indexed for the relevant field, the work may not be discoverable in the ways that improve citation-based metrics like the h-index.
Rejection risk is treated as part of the game. One tactic mentioned is deliberately targeting journals with rejection rates as high as 80%, letting the work “bounce” through successive rounds of peer review until it lands in a stronger venue than expected. That approach is best suited to research that isn’t urgently time-sensitive; for fast-moving topics, repeated delays can hurt impact. Importantly, submissions should be handled ethically: a paper should go to one journal at a time, never to multiple places simultaneously.
The most distinctive advice is to be “cheeky,” not delusional: submit to the highest journal the work could plausibly fit, even if it’s a stretch. If rejected, step down the ladder to the next tier of impact factor and repeat until the paper finds its natural home. The goal is to avoid wasting time on journals that would never accept the specific kind of results produced.
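The ladder strategy described above can be sketched as a simple loop: rank the wish list by impact factor, submit to the top remaining option, and step down after each rejection. The sketch below is purely illustrative; the journal names, impact factors, and helper functions are hypothetical, not part of the original advice.

```python
# Illustrative sketch of the "cheeky ladder" submission strategy.
# Journal names and impact factors are hypothetical examples.

def ladder_order(wish_list):
    """Sort the wish list from highest to lowest impact factor."""
    return sorted(wish_list, key=lambda j: j["impact_factor"], reverse=True)

def next_target(wish_list, rejected):
    """Return the highest-impact journal not yet tried, or None if exhausted."""
    for journal in ladder_order(wish_list):
        if journal["name"] not in rejected:
            return journal
    return None

wish_list = [
    {"name": "Journal B", "impact_factor": 4.1},
    {"name": "Journal A", "impact_factor": 12.3},
    {"name": "Journal C", "impact_factor": 1.8},
]

rejected = set()
target = next_target(wish_list, rejected)  # start at the top of the ladder
rejected.add(target["name"])               # assume a rejection arrives
target = next_target(wish_list, rejected)  # step down to the next tier
```

Note that only one submission is "in flight" at any time, matching the one-journal-at-a-time ethic: a journal is added to `rejected` only after its decision comes back.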
Finally, the strategy includes a human element: check whether a supervisor or senior author has a personal relationship with an editor at the target journal. The claim isn’t that weak science gets in—it’s that a warm connection can help the manuscript clear the initial “editor gate” and reach peer review. Overall, the method blends homework, metric awareness, indexing checks, ethical submission discipline, and calculated ambition to maximize both visibility and career returns.
Cornell Notes
A practical journal-selection strategy balances ambition with fit. First, build a large list of potential journals from references, co-author input, and realistic targets, then rank them by impact factor (even if imperfect) because career systems still use it. Next, verify scope and submission requirements on each journal’s site to avoid mismatches and wasted formatting time, and confirm the journal is indexed in the relevant field so citations and h-index gains are possible. Submissions should be one-at-a-time for ethics, and rejection should be treated as a normal step in a ladder strategy. The “cheeky” move is to submit to the highest plausible journal, then step down after rejection until the paper lands in the right venue.
How should a researcher start building a list of journals before ranking anything?
Why rank journals by impact factor if it’s disliked?
What checks prevent wasted time after choosing target journals?
What does “cheeky” submission mean, and how is it different from delusional?
How should rejection rates and timing be handled?
What role can personal relationships play in getting a paper to peer review?
Review Questions
- What specific pre-submission checks should be done on a journal’s site to reduce formatting and scope mismatches?
- How does the “cheeky” ladder strategy change journal choice after a rejection?
- Why does indexing matter for citation metrics like the h-index, and how can a researcher verify indexing before submitting?
Key Points
1. Build a broad journal wish list using reference patterns plus input from co-authors and colleagues, then rank it by impact factor for a career-aligned strategy.
2. Confirm each target journal’s scope and submission requirements before formatting to avoid sending work that the journal won’t consider.
3. Check that the journal is indexed in the relevant field so the paper can be discovered and contribute to citation metrics like the h-index.
4. Submit to one journal at a time to maintain ethical standards; don’t run parallel submissions.
5. Use a ladder approach: be “cheeky” by targeting the highest plausible journal first, then step down by impact factor after rejection.
6. Consider rejection-rate tactics only when timing allows; repeated cycles can harm impact for fast-moving research.
7. Ask whether supervisors or senior authors have editor connections at target journals to improve the odds of reaching peer review.