[open-science] No apparent relationship between rejection rate & journal impact factor

Thomas Krichel krichel at openlib.org
Sat Jan 16 09:58:33 UTC 2016


 
  Ross Mounce writes

> It's been a long time since I've posted to these mailing lists...

  The return of the prodigal son!

> With data from 570 different journals, it appears to demonstrate that
> rejection rate (the percentage of papers submitted, but NOT accepted for
> publication at a journal) has no apparent correlation with journal impact
> factor.
> 
> Why is this significant?
> 
> Well, a lot of people seem to think that 'selectivity' is good for
> research. That somehow by rejecting lots of perfectly valid papers
> submitted to a journal, it somehow ensures increased 'quality' (citations?)
> of the papers that _are_ eventually accepted for publication at a journal.
> 
> The fact is, high rejection rates often indicate that a lot of good
> research papers are being rejected just to satisfy an unjustified fetish
> for arbitrary and crude pre-publication filtering. This is important
> evidence for advocates of the 'publish first, filter post-publication'
> philosophy; as put into practice by journals such as *F1000Research*
> and *Research Ideas and Outcomes*.
> 
> Rejecting perfectly good/sound research causes delays in the dissemination
> of knowledge - rejected manuscripts have to be reformatted, resubmitted and
> re-reviewed elsewhere at great cost. Most rejected manuscripts get
> published somewhere else anyway. So why bother rejecting them in the first
> place?

  I do agree that the practice of peer review is a relic from times
  when dissemination costs were much higher. But the data that you
  show can't demonstrate that peer review is useless at producing
  quality journals, and hence that it fails to highlight quality
  research.
  
  Let me put my argument in a crude fashion (as one would expect from
  me). Authors produce good and lousy papers. They submit the good
  papers to the good journals and the lousy papers to the lousy
  journals. With similar peer review effort, as reflected in similar
  rejection rates, the good journals will contain the good papers and
  the lousy journals the lousy ones. Only the papers rejected by the
  lousiest journal will be filtered out completely.
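
  The sorting argument can be put as a toy simulation. The numbers
  below are purely illustrative assumptions, not drawn from the
  570-journal dataset: papers self-sort by quality into journal tiers,
  and every journal rejects the same fraction of its submissions.

```python
import random

random.seed(0)

# Assumption of the toy model: every journal rejects the same
# fraction of what it receives, regardless of its tier.
REJECTION_RATE = 0.5

def simulate_journal(tier, n_submissions=1000):
    # Authors self-sort: papers submitted to a tier cluster around
    # that tier's quality level (hypothetical quality scale).
    qualities = [random.gauss(tier, 1.0) for _ in range(n_submissions)]
    # Each journal accepts its best (1 - REJECTION_RATE) share.
    qualities.sort(reverse=True)
    accepted = qualities[: int(n_submissions * (1 - REJECTION_RATE))]
    mean_quality = sum(accepted) / len(accepted)
    return REJECTION_RATE, mean_quality

# Lousy, middling, and good journals: identical rejection rates,
# yet mean accepted quality tracks the tier.
for tier in (1, 5, 10):
    rate, quality = simulate_journal(tier)
    print(f"tier {tier}: rejection rate {rate:.2f}, "
          f"mean accepted quality {quality:.2f}")
```

  In this sketch the rejection rate carries no information about the
  quality of what a journal publishes; self-sorting by authors does
  all the work, which is the crude point above.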

  And note: rejection rates can't really be objectively observed by
  outsiders. Thus journals have an incentive to inflate their reported
  rates to give authors a feel-good factor for having got into that
  journal.

  Scholarly publishing is all a big hoo-ha. The fact that it's so
  expensive comes from libraries' willy-nilly spending on
  subscriptions. Trying to influence authors or publishers is
  not a promising agenda for reform.
  
-- 

  Cheers,

  Thomas Krichel                  http://openlib.org/home/krichel
                                              skype:thomaskrichel
