Sometimes it’s the little things that count. Which is why I have started asking journals to publish their citation distributions alongside their impact factors and applauded the recent decision of the EMBO Journal to do just that. I am happy to do so again on this brand new website devoted to responsible metrics.
The Responsible Metrics website emerged as an off-shoot from the deliberations of the UK Independent Review of the Role of Metrics in Research Assessment and Management, which was coordinated by HEFCE in 2014-15. Our report, The Metric Tide, is published today and will, I hope, serve as an informed and principled resource for universities and research institutions in the UK — and around the world — as they seek to harness new ways to evaluate their research and their researchers.
Many of those new ways involve assembling tables of numbers to serve as indicators of research performance – often imperfectly, sometimes dangerously. As the report recognises, the rising tide of metrics has seemed unstoppable. Their ill-effects, particularly the perverse incentives associated with journal impact factors, are well understood in the research community. But what is equally well understood is that there are no easy fixes. Indeed the reaction among scholars is often one of hopeless resignation.
In the face of such despair I often think of the movie Touching the Void, which recounts the near-impossible descent of Siula Grande in Peru by mountaineer Joe Simpson after he had suffered a broken leg, fallen into a crevasse and been abandoned in horrendous weather – presumed dead – by his climbing partner Simon Yates. Over three agonising days Joe eventually crawled and scraped his way back to base-camp, driven on by setting himself infinitesimal targets – a few steps to this rock, a few more to that stream.
And so it is likely to be with the struggle to wean ourselves from our addiction to the insidious impact factor metric. Our report is one step on a journey that started some time ago, and has been marked in recent years by the launch of the San Francisco Declaration on Research Assessment and the Leiden Manifesto.
The publication of citation distributions by journals alongside their impact factors (which I don’t suppose we will ever get them to give up) is another small but valuable step. It is simply a matter of being open and transparent about the data that are used in assessments – one of the recommendations of our report. The more people are aware that journal citation distributions are highly skewed, that the mean of the citations embodied in the impact factor is the wrong way to characterise such a distribution, and that there is extensive overlap in the distributions of journals judged to have very different impact factors, the more hope we have of resolving to adopt modes of research evaluation that are rigorous, fair and focused on the research rather than where it is published.
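The point about skewed distributions is easy to demonstrate with synthetic numbers. The sketch below uses a log-normal draw purely as an illustrative stand-in for a journal's citation counts (the figures are hypothetical, not taken from any real journal); it shows how a handful of highly cited papers can pull the mean — the quantity an impact factor averages — well above what a typical article receives.

```python
import random

random.seed(42)

# Hypothetical citation counts for 1000 papers in one "journal".
# A log-normal draw mimics the heavy right skew of real citation data;
# the parameters are illustrative, not fitted to any actual journal.
citations = [int(random.lognormvariate(1.0, 1.2)) for _ in range(1000)]

mean = sum(citations) / len(citations)           # what an impact factor averages
median = sorted(citations)[len(citations) // 2]  # a more robust "typical" value

print(f"mean = {mean:.1f}, median = {median}")
# In a right-skewed distribution the mean sits well above the median,
# so an average like the impact factor overstates the citation count
# of the typical article in the journal.
```

The gap between the two summaries is the crux of the argument: publishing the full distribution lets readers see that most papers sit far below the headline average.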
The EMBO Journal is the first this year to commit to publishing citation distributions, but I am also pleased to see that PeerJ announced yesterday that it is to adopt a similar practice. Following the publication of my blog post I have also heard encouraging noises from at least two other publishers, who are now having internal discussions about providing citation distribution data. I very much hope they – and others – will follow the example set by EMBO J. and PeerJ. It is, in my view, a responsible thing to do.
Posted by @Stephen_Curry, professor of structural biology at Imperial College, and a member of the steering group for the metrics review. He has written extensively about metrics and impact factors on his Reciprocal Space blog.