All impact metrics are wrong, but (with more data) some are useful.

A couple of years ago I wrote about some of the limitations of relying on Altmetrics as an indicator of a paper’s impact, because the service doesn’t pick up all online mentions.

Yes, impact metrics are flawed; experts have been pointing this out for years. And I’m not singling out Altmetrics here; there are several different impact metrics used by different journals for the same goal, e.g. PlumX, Dimensions, CrossRef Event Data.

Despite their flaws, we’re all still using them to demonstrate how our work is reaching global audiences. I used them recently in a promotion application and a major grant application.

But I’m now questioning whether I will keep using them, because they are deeply flawed and are consistently misused and misinterpreted. They are purely a measure of quantity without any context: the number of shares or mentions, with no indication of how or why the work is being shared.

This is problematic for a few reasons.

Limitations of using Altmetrics in impact analysis

The number of published papers using Altmetrics ‘attention scores’ as a data source to measure impact is rising. According to Google Scholar, there are over 28,000 papers mentioning Altmetrics and impact.

This latest analysis published in PeerJ finds a positive correlation between citation rates and Altmetric scores for papers published in ecology & conservation journals over a 10-year period (2005-2015). The implication is that the more a paper gets tweeted, blogged about, or talked about in popular online media, the more it will be cited.

This seems like common sense. The more exposure a paper gets online, compared to traditional exposure via journal alerts sent to a limited pool of subscribers, the more people will be aware of it and potentially cite it. This is why we do scicomm. (Although, hopefully, people read a paper first and decide on its quality and relevance before citing it.)
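As an aside on how a relationship like this might be tested: below is a minimal sketch in Python, not the published study’s code. The dataset, file name, and column names are hypothetical, and the PeerJ authors may have used a different correlation method entirely.

```python
# Minimal illustrative sketch (not the PeerJ study's actual analysis):
# test whether Altmetric scores and citation counts are correlated.
# The file name and column names below are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

papers = pd.read_csv("ecology_papers_2005_2015.csv")  # hypothetical dataset

# Spearman rank correlation is a common choice for skewed count data
# like shares and citations; the published study may differ.
rho, p_value = spearmanr(papers["altmetric_score"], papers["citations"])
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```

Even a strong rank correlation here would only show that attention and citations rise together, not that one causes the other, which is exactly the "quantity without context" problem described above.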