Responsible Metrics

A forum for the responsible use of metrics in higher education & research


9th July 2015

Skewering the impact factor

Sometimes it’s the little things that count. Which is why I have started asking journals to publish their citation distributions alongside their impact factors and applauded the recent decision of the EMBO Journal to do just that. I am happy to do so again on this brand new website devoted to responsible metrics.

The Responsible Metrics website emerged as an off-shoot from the deliberations of the UK Independent Review of the Role of Metrics in Research Assessment and Management, which was coordinated by HEFCE in 2014-15. Our report, The Metric Tide, is published today and will, I hope, serve as an informed and principled resource for universities and research institutions in the UK — and around the world — as they seek to harness new ways to evaluate their research and their researchers.

Many of those new ways involve assembling tables of numbers to serve as indicators of research performance – often imperfectly, sometimes dangerously. As the report recognises, the rising tide of metrics has seemed unstoppable. Their ill-effects, particularly the perverse incentives associated with journal impact factors, are well understood in the research community. But it is equally well understood that there are no easy fixes. Indeed, the reaction among scholars is often one of hopeless resignation.

In the face of such despair I often think of the movie Touching the Void, which recounts the near-impossible descent of the Siula Grande in Peru by mountaineer Joe Simpson after he had suffered a broken leg, fallen into a crevasse and been abandoned in horrendous weather – presumed dead – by his climbing partner Simon Yates. Over three agonising days Joe eventually crawled and scraped his way back to base-camp, driving himself on by setting infinitesimal targets – a few steps to this rock, a few more to that stream.

And so it is likely to be with the struggle to wean ourselves from our addiction to the insidious impact factor metric. Our report is one step on a journey that started some time ago, and has been marked in recent years by the launch of the San Francisco Declaration on Research Assessment and the Leiden Manifesto.

The publication of citation distributions by journals alongside their impact factors (which I don’t suppose we will ever get them to give up) is another small but valuable step. It is simply a matter of being open and transparent about the data that are used in assessments – one of the recommendations of our report. The more people are aware that journal citation distributions are highly skewed, that the mean of the citations embodied in the impact factor is the wrong way to characterise such a distribution, and that there is extensive overlap in the distributions of journals judged to have very different impact factors, the more hope we have of resolving to adopt modes of research evaluation that are rigorous, fair and focused on the research rather than where it is published.

The EMBO Journal is the first this year to commit to publishing citation distributions, but I am also pleased to see that PeerJ announced yesterday that it is to adopt a similar practice. Following the publication of my blog post I have also heard encouraging noises from at least two other publishers, who are now having internal discussions about providing citation distribution data. I very much hope they – and others – will follow the example set by EMBO J. and PeerJ. It is, in my view, a responsible thing to do.

Posted by @Stephen_Curry, professor of structural biology at Imperial College, and a member of the steering group for the metrics review. He has written extensively about metrics and impact factors on his Reciprocal Space blog.


Comments

  1. Jim Woodgett says

    9th July 2015 at 20:58

    Two movie references in two articles!

Aside from repeated reminders about the problems with over-reliance on metrics (especially flawed ones), perhaps the best way to deal with the ever-encroaching issue of quantifying the unquantifiable is to exemplify elements of science that simply cannot be reduced to a number. Assessments have their purpose, but if they are, ultimately, inaccurate as predictors of future performance, they undermine their very utility and actually reduce fiscal accountability. It was certainly good to see that the HEFCE report carefully parsed the pros and cons of metrics and concluded that they cannot be relied upon as the sole means to measure scientific impact. I’m sure there was active debate among the working group.

    Reply
  2. Susan Fitzpatrick says

    10th August 2015 at 19:39

Wonderful to be having this discussion. But I am not so sure I am willing to accept that academic researchers are at the mercy of the system, because they ARE the system. They are the faculty, chairs and deans. They are the reviewers. They are the committees. They are both the standard bearers and the standards. They are the individuals who introduce their colleagues by the number of publications and grant dollars rather than the importance of their contributions. Peer review should be the gold standard, but not if the peers have abdicated their responsibility to be the arbiters of “good science.” They have yielded away the power of criticism. There was a time when junk was loudly called out as junk. Now – adapting from the public relations adage that all ink is good ink – all science is GREAT science. A return to core values could go a long way…

    Reply
