PLoS Journals – measuring impact where it matters
Re-posted from the PLoS blog, by Mark Patterson, Director of Publishing, PLoS.
In 2009, in this online world, how do most scientists and medics find the articles they need to read? For the content published by PLoS (and no doubt by many other publishers), the answer is one of the now-ubiquitous search engines, whether one that searches only the scientific literature or, more likely, one that searches the entire web. Given that readers tend to navigate directly to the articles that are relevant to them, regardless of the journal in which they were published, why do researchers and their paymasters remain wedded to assessing individual articles with a metric (the impact factor) that attempts to measure the average citations to a whole journal? We’d argue that it’s primarily because there has been no strong alternative. But alternatives are now beginning to emerge.
A few months ago, PLoS initiated a program to provide a series of metrics on the individual articles published in all the PLoS Journals. You can see some examples here, here, here and here. There are two complementary benefits to the new approach.
First, we are focusing on articles rather than journals. The dominant paradigm for judging the worth of an article is to rely on the name and the impact factor of the journal in which the work is published. But it’s well known that there is a strong skew in the distribution of citations within a journal – typically, around 80% of the citations accrue to 20% of the articles. So the impact factor is a very poor predictor of how many citations any individual article will obtain, and in any case, journal editors and peer reviewers don’t always make the right decision. Indicators at the article level circumvent these limitations, allowing articles to be judged on their own scientific merits.
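The skew argument can be made concrete with a toy calculation. The citation counts below are invented purely for illustration, not drawn from any real journal, but they show why a journal-wide average says little about any one article.

```python
# Toy illustration (invented numbers): in a skewed distribution, the
# journal-wide mean -- which is what the impact factor approximates --
# is a poor guide to the typical article.
citations = [0, 1, 1, 2, 2, 3, 4, 5, 30, 52]  # ten articles, two "hits"

mean = sum(citations) / len(citations)            # impact-factor-style average
typical = sorted(citations)[len(citations) // 2]  # a middle-of-the-pack article

top_20_percent = sorted(citations, reverse=True)[: len(citations) // 5]
share = sum(top_20_percent) / sum(citations)      # share accruing to the top 20%

print(mean)     # 10.0 -- yet 8 of the 10 articles received 5 citations or fewer
print(typical)  # 3
print(share)    # 0.82 -- roughly the 80/20 skew described above
```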
Second, we are not confining article-level metrics to a single indicator. As summarized by Michael Jensen, and discussed by many others including recently over at the Scholarly Kitchen, there’s a lot more to scientific impact than citations in the selection of journals covered by the Web of Science – the proprietary source of data that provides the impact factor calculation. Citations can be counted more broadly, along with web usage, blog and media coverage, social bookmarks, expert/community comments and ratings, and so on. Our own efforts are so far confined to citations (as measured by Scopus and PubMed Central), social bookmarks (as made by users of Connotea and CiteULike), and blog coverage (as recorded by Bloglines, Postgenomic and Nature Blogs), and these metrics will be improved and expanded over the coming months. The good news is that many of these indicators can be collated automatically, using openly available web tools that constantly update information on the article itself.
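The automatic collation step might look something like the sketch below. Every source name, function, and count here is a hypothetical stand-in; a real implementation would query each service's own public API for a given article identifier.

```python
# Hedged sketch: collating article-level indicators from several sources.
# All source names and counts are hypothetical stand-ins; in practice each
# fetcher would call a live service (citation index, bookmarking site, blog
# aggregator) with the article's DOI.

def collate_metrics(doi, fetchers):
    """Build one record of indicator counts for an article from many sources."""
    return {source: fetch(doi) for source, fetch in fetchers.items()}

# Stub fetchers returning invented numbers, in place of live API calls.
fetchers = {
    "scopus_citations":    lambda doi: 12,
    "pmc_citations":       lambda doi: 9,
    "citeulike_bookmarks": lambda doi: 31,
    "blog_posts":          lambda doi: 4,
}

record = collate_metrics("10.1371/example.doi", fetchers)
print(record)
# {'scopus_citations': 12, 'pmc_citations': 9, 'citeulike_bookmarks': 31, 'blog_posts': 4}
```

Because each source is just a function keyed by name, new indicators (usage, ratings, media coverage) can be added to the record without changing the collation logic.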
The presentation of a comprehensive array of this data is an enticing prospect. When an article has been published, we have tended to regard that as the end of the story (barring corrections or the occasional retraction). But if, as frequently happens, a very good article has been published in a specialist journal after being rejected from a highly selective one, it would be great to indicate to a user that this article is actually looking pretty significant, and show how its influence develops over the months and years.
Rather than basing judgments on the importance of research on the opinions of two or three reviewers and editors, article-level metrics will attempt to capture the actions and opinions of entire communities of readers to give a rich and sophisticated picture of research impact that will be helpful to authors and readers alike. Readers may then frame that picture in the context of their particular field and their own work.
To realize the vision for article-level metrics, there are still some significant hurdles to clear: it won’t be enough simply to provide indicators without some context or guidance on how to interpret them; some indicators (particularly citations) take months to build up, limiting their value as early indicators of impact; and standards will need to be developed so that the indicators are reliable and as free as possible from gaming and manipulation.
A clear editorial selection process will always have a place before publication in a scholarly journal. But a reduction in the reliance on the impact factor for so many aspects of research assessment could be massively liberating. PLoS Medicine, to cite an example close to home, has recently restated its mission – focusing on the diseases and risk factors that have the most profound impacts on global health. By carefully selecting articles that are likely to have the biggest influence on global health and using innovative and diverse approaches to assess and indicate that influence, PLoS Medicine will be a greater force, regardless of how many citations an average article accrues.
Looking towards other modes of publishing, PLoS ONE is predicated on the notion that judgments about impact and relevance can be left almost entirely to the period after publication. By peer-reviewing submissions purely for scientific rigor, ethical conduct and proper reporting before publication, articles can be assessed and published rapidly. Once articles have joined the published literature, the impact and relevance of the article can then be determined on the basis of the activity of the research community as a whole. Article-level metrics and indicators, along with other post-publication features, are part and parcel of the PLoS ONE approach, and could help readers to filter and sort literature after it is published. Ultimately, the aim of adding value to articles after publication is to improve the whole process of scientific communication and accelerate research progress itself. You can read more about article-level metrics in the context of PLoS ONE, and a talk is also available online from Pete Binfield (Managing Editor of PLoS ONE).
Article-level metrics and indicators will become powerful additions to the tools for the assessment and filtering of research outputs, and we look forward to working with the research community, publishers, funders and institutions to develop and hone these ideas. As for the impact factor, the 2008 numbers were released last month. But rather than updating the PLoS Journal sites with the new numbers, we’ve decided to stop promoting journal impact factors on our sites altogether. It’s time to move on, and focus efforts on more sophisticated, flexible and meaningful measures.