[open-science] No apparent relationship between rejection rate & journal impact factor
ross.mounce at gmail.com
Fri Jan 15 22:57:59 UTC 2016
It's been a long time since I've posted to these mailing lists...
I just thought I'd send along this excellent blog post published late last
year. With data from 570 different journals, it appears to demonstrate that
rejection rate (the percentage of papers submitted to a journal but NOT
accepted for publication there) has no apparent correlation with journal
impact factor.
Why is this significant?
Well, a lot of people seem to think that 'selectivity' is good for
research: that by rejecting lots of perfectly valid papers submitted to a
journal, a publisher somehow ensures increased 'quality' (citations?) of
the papers that _are_ eventually accepted for publication.
The fact is, high rejection rates often indicate that a lot of good
research papers are being rejected just to satisfy an unjustified fetish
for arbitrary and crude pre-publication filtering. This is important
evidence for advocates of the 'publish first, filter post-publication'
philosophy, as put into practice by journals such as *F1000Research* and
*Research Ideas and Outcomes*.
Rejecting perfectly good/sound research causes delays in the dissemination
of knowledge - rejected manuscripts have to be reformatted, resubmitted and
re-reviewed elsewhere at great cost. Most rejected manuscripts get
published somewhere else anyway. So why bother rejecting them in the first
place?
Please show your friends the graph if they haven't already seen it. I think
data like this could change a lot of people's minds...
Happy 2016 everyone,
Ross Mounce, PhD
Software Sustainability Institute Fellow 2016
Dept. of Plant Sciences, University of Cambridge