For a long time, the number of citations an article receives has been the gold standard for measuring its impact, with the mean number of citations across all articles measuring the impact of the journal. With the arrival of blogs, Twitter, Facebook and electronic newspapers it has become practical to measure the impact of an article in society at large. Publishers, journals, and scientists recognize the need to improve the ways in which the outputs of scientific research are evaluated.
To explore new ways impact can be measured, BioMed Central, publisher of Parasites & Vectors, along with Cell Press, the Royal Society of Chemistry, Cambridge University Press and the Nature Publishing Group, now measures attention from online sources via Altmetric.com, while the alternative, ImpactStory, has been adopted by PeerJ. The Altmetric (alternative metric) impact is visualized with a score and a donut, with colours representing different kinds of media sources. Most articles will have a score of less than one (typically tweeted by one person), while very few articles will have a score greater than 30.
Altmetric statistics are mostly based on social-media attention and may not be directly comparable to citation counts, which rest on the assumption that cited articles have influenced the authors citing them. For example, the article with the highest overall Altmetric score is “Pathology in the Hundred Acre Wood: a neurodevelopmental perspective on A.A. Milne”, which discusses neurodevelopmental and psychosocial problems of Winnie-the-Pooh, Christopher Robin and their friends. Like the number of citations, this metric does not measure whether the response is positive or negative, or whether the article is simply funny. Most likely, the greatest chance for an article in Parasites & Vectors to achieve a high Altmetric score would be one titled “Bugs in the Hundred Acre Wood”, which would be irrelevant for science but amusing for the general public.
So what is Altmetric, and why is it interesting? In a Twitter conversation with Euan Adie, founder of Altmetric.com, he explained that the top 5% of scores is what is interesting from an academic perspective. Currently Parasites & Vectors has very few articles in this range, suggesting either that the community is generally not communicating its results via Twitter, Facebook and news outlets, or that the general public is not interested in parasites and their vectors. My guess is that the former is true.
I asked Euan Adie to give a brief introduction to alternative metrics:
1. Why do you think we need alternative metrics? Are they meant to take the place of or complement citation based metrics?
I think we need to be able to use new indicators of impact on top of citations. It seems pretty clear that citations can’t capture, say, public engagement, or uptake by practitioners, or, much more controversially, quality.
Whether or not Altmetrics can (or should if we’re talking about quality – I don’t think they should) is still in some cases an open question. But I’d argue that we’re selling researchers short unless we allow them to say what kind of impact they want their work to have and then support them with whatever independent evidence of it we can find.
Altmetrics are definitely complementary to citations – they’re looking at something different. Citations will always be useful.
2. Do you think Altmetrics will change the understanding of impact within academia and in society?
Yes, I think it’ll help steer people away from thinking about impact solely in terms of journal impact factor and citation counts.
Plainly there are all sorts of ways that a piece of research can contribute to society beyond influencing other researchers.
I also think it’s clear that researchers can create all sorts of non-traditional outputs (software and datasets being two obvious ones) that aren’t well served by citations.
It’s those two things that are driving altmetrics initiatives, I think: getting credit for different types of impact and for different types of output.
3. Can Altmetrics be used at the journal level, and do you think it is a good idea?
I’m actually on the fence about this one. I think that altmetrics works at the moment because typically you can dive into the data and put it in context pretty easily. Altmetric has an attention score and it’s usually pretty easy to work out by looking at the data why the article is getting attention and from whom.
At a journal level you’d lose a lot by aggregating. You’d have to make sure that people knew about things like the relative contribution of the top 10% of papers (which are perhaps doing an order of magnitude better than the remaining 90% and pulling the journal average up) but the more complicated the metrics the less people will use them as intended.
OTOH I can see the value in being able to see how good at, say, distribution a journal is. If that’s one of the reasons you’re paying for your article to be published in a journal (directly or indirectly) then why shouldn’t you be able to check up on how good a job they’re doing compared to their competitors?
It seems Altmetrics is becoming an important part of the game for scientists and should be adopted by researchers in the field. The San Francisco Declaration on Research Assessment (DORA), initiated by the American Society for Cell Biology (ASCB) together with a group of editors and publishers of scholarly journals, is currently working to improve the ways in which the outputs of scientific research are evaluated, and Altmetrics – social-media attention – seems to be one of the upcoming alternatives.