Times have changed: dealing with dodgy science in the internet age

Dodgy science, dodgy scientists and dodgy humans are nothing new. And dodgy scientific papers have been published since the dawn of scientific publishing. In 1667 an article on ‘snakestones’, a pseudoscientific medical cure with absolutely no basis in fact, appeared in one of the first issues of the oldest continuously published scientific journal, Philosophical Transactions of the Royal Society (now Phil Trans A, one of the most prestigious modern scientific journals).

Since then, disreputable papers have made regular appearances in reputable journals. And there are different scales of disreputable. The paper claiming that octopuses originated from outer space was clearly far-fetched, while the scholars who recently argued there was a ‘moral panic’ over free-ranging cats simply highlighted how interdisciplinary research is often challenged by opposing methodological approaches (note: I agree with most ecologists that free-ranging cats are not good for wild animals, including insects).

Many scientists, myself included, agree that some of the recent global ‘insect apocalypse’ claims are based on questionable science. Don’t get me wrong: insects, along with all of nature, are suffering from human activities, and we urgently need to change how we engage with our environments. But the popular media hype that all insect species around the world are in decline is simply not backed up by evidence.

Most scientists are understandably upset when papers like these are published in their own discipline. All scientists have suffered unreasonable rejections of perfectly reasonable papers. We have all struggled to get new ideas published in peer-reviewed journals, simply because a reviewer or editor didn’t like them, or because reviewers didn’t know how to review them. So it’s a struggle for many scientists to find valid reasons why these flawed papers were deemed suitable for publication in good journals.

Why does it matter?

Sure, as individuals, we all make mistakes sometimes. But at the discipline level, science is a community endeavour focused on rigour, precision and accuracy. Knowledge self-corrects over time and is built on consensus from reasoned assessment of data. This is often why some scientists simply shrug when a dodgy paper slips through the cracks. In response, we simply write replies, reproduce the science, or do more rigorous testing, right?

But times have changed. These valid scientific responses can take many years and are often published after the audience has moved on to new issues. It’s worrying when a single dodgy paper gets global airtime as ‘evidence’ of a ‘fact’ it can’t prove. Anecdotes and subjective opinions are pieces of information, not the body of evidence. Pieces of information can be questioned and disputed, but the collective body of evidence provides deeper understanding of trends, principles and known unknowns, all of which affect the ‘facts’ we have at hand.

The modern peer review system is a risk management strategy for scholarly research. Ideally, it should stop dodgy papers from being published. Unfortunately, it doesn’t always deliver the intended outcome. Why? Sometimes editors or reviewers don’t read papers thoroughly, skip the methods sections, or let a dodgy paper confirm their own biases.

Sometimes editors purposefully choose to publish controversial opinions, in a misguided attempt to ignite scientific discussion. This kind of ‘both-sidesism’ is akin to the false balance plaguing journalism, where controversial views that go against the grain of the current evidence are presented as being on an equal footing with the broader consensus.

This is a problem for the reputation of scientific literature. We, as scientists, advocate that scientific evidence is more reliable than opinion and anecdote. We encourage the public to trust peer-reviewed scientific journals. We trust these reputable journals too, because they publish rigorous evidence that has more credence than a newspaper article or a breakfast radio show. Dodgy papers in reputable journals erode that trust and cast doubt on peer-reviewed science. (And no, open access publishing and scrapping peer review for preprints are not the answers to this conundrum.)

This is much more of a problem today than it was back in 1667, or even 50 years ago. Because of social media and the rapid global dissemination of information, a dodgy paper published today can quickly reach millions more people than it would have 50 years ago. The implications of this are huge.

What to do?

There is no simple answer. Dodgy papers will always be published. Popular media will always publish sensational news reports about those dodgy papers. And some people will always believe those sensational news reports.

Scientists are not the police of the world, and it’s not solely up to us to address the shortcomings in public understanding of the scientific process or in general critical analysis skills. Nor is it up to us to fact-check the work of the journalists who interview us.

But I think there are two things that Science, the discipline, should give greater attention to in the digital age: peer review and science communication.

Peer review is a community service. The idea of community service is that someone contributes their time and skills voluntarily because they care about the cause they are contributing to. We’re all busy, yet ethical community service demands our time. Whether we say yes or no to an invitation to review or handle a paper, we need the skills to distinguish dodgy science from papers that just aren’t on our wavelength. We don’t teach enough of these skills in undergrad or postgrad training.

Science communication by scientists is needed now more than ever. Scicomm is not just about scientists promoting their own research and results. There are major gaps in public understanding of the scientific process, the role of uncertainty, and the influence of complexity on results. We need to talk more about these issues and engage public audiences of all ages in conversations about complexity, and talk less about ‘facts’ and the results of individual studies. We need to acknowledge the value of alternative publication formats, like blogs and social media. We also need to train scientists in communication skills: not just media skills that teach us how to front a camera, but also an understanding of how journalism and the media system work.

© Manu Saunders 2019

13 thoughts on “Times have changed: dealing with dodgy science in the internet age”

  1. naturebackin June 3, 2019 / 4:37 PM

    It is a sad problem that science is regarded as on a par with opinion, and dodgy science doesn’t help, as you say. Nor does the perception that much science is in the service of big business. It is difficult, to put it mildly, to counter conspiracy theorists, but the whole project of academic publishing needs rethinking. Publish or perish has gone crazy, with inferior (including plagiarised) postgraduate work turning up even in accredited journals. Science somehow has to fight to restore respect, which can’t be expected to simply come with the territory, and fight to maintain integrity despite the onslaught rampant in social media and elsewhere. There is a lot of value that we the public never get to know or understand, so I agree that science needs to be better communicated, and without arrogance. I think it is great when scientists make the time to step aside from their research to discuss these issues.

  2. cinnabarreflections June 4, 2019 / 2:25 AM

    A disturbing example of how dodgy science can have serious impacts on society is the persistence of anti-vaccination proponents, whose foundation rests on the debunked work and subsequent propaganda of Andrew Wakefield (I am unsure how his co-authors let his ideas go to print, not to mention the editors at The Lancet, a premier medical journal, but as you point out, we all make mistakes).

    The peer review system is (to my knowledge) the best guard we have against dodgy science such as his, but as you state, we are all busy, and individual reviewers approach the task differently, devote different amounts of time to any one manuscript, and have different standards with respect to their duty to contribute to the process (I subscribed to the idea that I should serve as reviewer for a minimum of 2 submitted manuscripts for every paper I published myself). Consequently, it is far from a perfect system. For example, it is inherently conservative, which may contribute to the difficulty of publishing truly revolutionary ideas that could result in paradigm shifts. A reviewer who has based his/her career on a particular world-view is not likely to gladly recommend work that could undo the validity of their entire career (yes, that is a bit extreme, but potentially a problem IMO)! Add to this the proliferation of journals, some of which may have relatively lax standards, not to mention predatory journals with no standards at all.

    With respect to scicomm, the concept of probability often gets lost, which may be why media and the public at large tend to exaggerate any report. It is human nature to want definitive answers to questions, and science doesn’t always provide that. In addition, humans are easily convinced by anything that fits with our own value system, whether supported by fact or not, as is disturbingly evidenced by current world politics. Nevertheless, it is important to try. Thanks for raising these issues.

    • Manu Saunders June 4, 2019 / 9:16 AM

      Great comment. Yes, there are lots of examples of this from medical sciences, including the anti-vax myths. I think ecologists have been a bit complacent about this issue, perhaps because the potential for public misunderstanding & media hype has been lower in ecological disciplines (until recently).

  3. SlightConfusion July 4, 2019 / 1:36 AM

    “We need to talk more about these issues and engage public audiences of all ages in conversations about complexity, and talk less about ‘facts’ and results of individual studies.”

    I agree with you, but I think this sentence could be strongly misinterpreted in our world of ‘alternative facts’ and media inaccuracy. I’m concerned about how we can promote thinking in nuance and grey instead of black and white – especially in the environmental science sphere – when so many studies and papers claim absolute confidence in x or y apocalyptic statement, and media outlets validate those claims because “Hey, it was in a scientific journal, it has to be right.” I mean, blogs have such a stigma precisely BECAUSE they don’t have any real requirement to be scientifically accurate before they’re published. Some are written accurately – like this one, thankfully ^_^ – but so many aren’t. Heck, it’s the way McPherson and Bendell got to where they are now.

    • Manu Saunders July 4, 2019 / 9:11 AM

      Thanks for the comment. To clarify, in ecology there are very few generalised global ‘facts’ – every study has a context and the ‘x/y’ statements from individual studies are only relevant to the particular context of the study design, e.g. the taxa, study system and environmental conditions under which data were collected. This context often doesn’t come out in some media coverage, where some headlines will proclaim that the finding of an individual study is a global fact. This is what I’m referring to when I say we need to spend more effort on communicating ‘how science builds knowledge over time’ not just soundbite facts.
