

PLOS BLOGS Speaking of Medicine and Health

How can we improve peer review? The impact of reporting guidelines

There’s a lot of nay-saying about peer review out there – it’s messy, inadequate, time-consuming, boring, and nobody knows what it’s supposed to do anyway. But despite that, peer review is widely regarded as indispensable – including by bodies such as the UK Government’s Select Committee, which concluded:

“Peer review in scholarly publishing, in one form or another, is crucial to the reputation and reliability of scientific research… However, despite the many criticisms and the little solid evidence on its efficacy, editorial peer review is considered by many as important and not something that can be dispensed with”.

So I was pleased to see a recent study reported in BMJ which takes a rigorous approach to evaluating the impact of one particular component of peer review. This study aimed to find out whether the use of reporting guidelines in peer review can improve the quality of published papers.

The researchers used this definition of a reporting guideline:

“statements that provide advice on how to report research methods and findings… they specify a minimum set of items required for a clear and transparent account of what was done and what was found in a research study, reflecting in particular issues that might have introduced bias into the research”

The study used a randomized design, in which 92 papers under evaluation at the journal Medicina Clinica were randomized to intervention or control arms. For “intervention” papers, the authors received an additional, reporting-guideline-driven evaluation from a senior statistician (on top of the regular peer reviews). For “control” papers, authors received just the regular reviews. An interesting feature of the trial design was that all papers received the additional, reporting-guideline-led review, but this was only sent to authors in the intervention (and not the control) arm; randomization was actually done after the reporting-guideline review was completed. By doing this, the investigators were able to collect detailed baseline data on study quality, and to ensure the person doing the guideline-led review could not be biased with respect to group assignment. The instrument used to rate study quality was the scale developed by Goodman and colleagues.

What did the study show? Well, it’s a little hard to tell, and the evidence does not look desperately strong. Four papers in the “control” arm had to go through the additional reporting-guideline-led review, because the editors were worried about protocol deviations in the studies those papers reported; these 4 papers therefore crossed over from one arm to the other. The investigators then analysed their data both by intention to treat (counting the 4 crossover papers in their randomized group) and as-treated (counting them in the reporting-guideline-led review group). The intention-to-treat analysis gives an effect estimate for improvement of 0.25 (95% CI -0.05 to 0.54) on the Goodman scale (comparing intervention vs control arm). The “as-treated” comparison looks better, at 0.33 (95% CI 0.03 to 0.63) – so if you are keen on per-protocol analyses you might conclude from this that using reporting guidelines improves study quality (a bit) more than not using them… A cynic might say the study is just a bit underpowered, and that if there is an effect, it’s fairly small. There does also seem to be evidence that papers in both study arms improved during peer review, although this wasn’t an objective of the project.
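The difference between the two analyses comes down entirely to how the 4 crossover papers are grouped. A minimal sketch of that bookkeeping, using entirely invented quality-improvement scores (these are NOT data from the Medicina Clinica trial, just hypothetical numbers to show the mechanics):

```python
# Hypothetical illustration of intention-to-treat vs as-treated analysis
# when some papers cross over between arms. All values are invented.

def mean(xs):
    return sum(xs) / len(xs)

# Each record: (randomized_arm, received_arm, quality_improvement)
# "intervention" = the reporting-guideline-led extra review was sent to authors.
papers = [
    ("intervention", "intervention", 0.6),
    ("intervention", "intervention", 0.4),
    ("control",      "control",      0.1),
    ("control",      "control",      0.2),
    # A crossover: randomized to control, but the editors requested the
    # guideline-led review anyway, so in practice it got the intervention.
    ("control",      "intervention", 0.5),
]

def effect(papers, key):
    """Difference in mean improvement, intervention minus control.
    key=0 groups by randomized arm (intention to treat);
    key=1 groups by the arm actually received (as-treated)."""
    interv = [p[2] for p in papers if p[key] == "intervention"]
    control = [p[2] for p in papers if p[key] == "control"]
    return mean(interv) - mean(control)

itt = effect(papers, 0)         # crossover counted with its randomized (control) arm
as_treated = effect(papers, 1)  # crossover counted with the intervention arm
print(round(itt, 3), round(as_treated, 3))
```

With these made-up numbers the as-treated estimate comes out larger than the intention-to-treat one, mirroring the pattern in the trial: moving papers that received the guideline-led review into the intervention column shifts the comparison in its favour, which is exactly why ITT is usually considered the more conservative analysis.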

So what can you conclude? Firstly, that it is tough to deliver properly designed studies such as this, which aim to investigate concretely the benefit (or otherwise) conferred by specific facets of peer review. Secondly, that the study did not definitively answer its question about the benefit provided by the use of reporting guidelines, but it will help others design similar studies in the future. And finally, that editors and journals still operate on a huge set of “reasonability” assumptions about what works and what doesn’t, while we lack strong evidence informing much of what we do.

My competing interests are declared here. In addition, since that page was updated I have received reimbursement for local travel expenses to contribute to seminars organised by the EQUATOR group (an initiative aimed at promoting the quality and transparency of health research – and which collects together reporting guidelines). I have also contributed to the development of a number of reporting guidelines, some of which also had the involvement of some of the authors of the BMJ paper discussed in this blog.

  1. It is disappointing to see the debate narrowed to a yes/no: either no peer review at all, or every peer review is good.

    Peer review should not be a free-for-all – no rules for good reviewing, no responsibility for ditching a good paper, no evaluation of the real benefit of review (is it really OK if peer review is only of marginal benefit?), and an unspoken acceptance that reviewers do their job with minimum effort.

    The answer is probably to quantify and improve peer review. As a minimum, reviewers must be objective and declare their own competing interests (reviewers killing rival groups’ papers is a fact of life). I have seen reviews where it was very doubtful that the reviewer had actually read the paper (the review was favourable, by the way). Such things must go.
