Does peer review guarantee quality?

Linkov, writing in the J R Soc Med, argues that scientific journals are ‘faith based’ and asks whether there is a science behind peer review. In particular, he questions why journals ‘have survived with little change over centuries, whereas almost all other 300-year-old scientific technologies have died out’. Why is this?

His argument goes along the lines of:

‘We would argue that the primary reason that journals have not changed is that they are ‘faith based’: we believe in them, we dare not question them’

So how good is peer review?

In 2002, Tom Jefferson and colleagues undertook a systematic review of the effects of editorial peer review. Its conclusion, drawn from nine included studies, is pretty blunt:

‘Editorial peer review, although widely used, is largely untested and its effects are uncertain.’

Five years later, in 2007, a Cochrane review on the same subject, which included 28 studies, reported:

‘no clear-cut evidence of effect of the well-researched practice of reviewer and/or author concealment on the outcome of the quality assessment process (9 studies). Checklists and other standardisation media have some evidence to support their use (2 studies). There is no evidence that referees’ training has any effect on the quality of the outcome (1 study). Different methods of communicating with reviewers and means of dissemination do not appear to have an effect on quality (3 studies).’

By 2012 (spot the recurring five-year lag, after which you can get away with revisiting the same subject), Larson and colleagues’ systematic review included 37 studies, without adding much to our previous understanding of the actual benefits of peer review.

Richard Smith, former editor of the BMJ, is in line with the evidence on peer review when he states that ‘Peer review would not get onto the market because we have no convincing evidence of its benefits but a lot of evidence of its flaws’.

It gets worse for grant peer review:

‘There is little empirical evidence on the effects of grant giving peer review. No studies assessing the impact of peer review on the quality of funded research are presently available’

Turning to submission rates, Alison McCook, in The Scientist, reports that journals are inundated, with submissions increasing by 10 to 15% each year.

And as for quality, McCook points out that research presented at the 2005 Peer Review Congress, held in Chicago in September, suggested that reviewers were less likely to reject a paper if it cited their own work.

Go on, convince me: you wouldn’t be more likely to say yes if a paper cited your work.

Given the evidence tells us peer review does not guarantee quality, I’m with Smith on this one: the sooner we can let the ‘real’ peer review of post-publication peer review get to work, the better.