Academia may be unique among careers in its lack of standardised processes or training for so many of the common activities that are essential to being an academic. Instead, new researchers have to bumble blindfolded through the dark room of early career researcherhood to work out how to actually do the academic parts of their job. Sometimes we're lucky to have a supervisor, colleague, or mentor who might guide us to a door (but it may not always be the right door).
Publishing and peer review are part of this bumbling process. Publishing our research in the peer-reviewed literature is a key part of our job description: it's how we share knowledge with the discipline and beyond.
It’s increasingly common to see universities publishing press releases about newly published papers from academics. This practice emerged a few decades ago and originally seemed to be associated with health and medical research (educated guess, not sure there are any data on this).
But it has since spread more widely to many other disciplines. Ecology journals are now doing it; some ask you to submit a mandatory media summary with your manuscript ‘just in case’ (most authors will never get a media request). Some of the Big Famous journals operate on a strict authoritarian embargo system, to ensure the author doesn’t exercise their right to talk to people about their own research.
A couple of years ago I wrote about some of the limitations of relying on Altmetrics as an indicator of a paper’s impact, because it doesn’t pick up all online mentions.
Yes, impact metrics are flawed; experts have been pointing this out for years. And I'm not singling out Altmetrics here; a few different impact metrics are used by different journals for the same goal, e.g. PlumX, Dimensions, and CrossRef Event Data.
Despite their flaws, we’re all still using them to demonstrate how our work is reaching global audiences. I used them recently in a promotion application and a major grant application.
But I'm now questioning whether I will keep using them, because they are deeply flawed and are consistently misused and misinterpreted. They are purely a measure of quantity without any context: the number of shares or mentions, with no indication of how or why the work is being shared.
This is problematic for a few reasons.
Cheers to everyone who has read and shared my blog posts over the years. I’ve had some great discussions here and made some really worthwhile connections because of this little blog. Most importantly, it’s kept me inspired and connected through the highs and lows of academia. Here’s to many more blog posts, discussions, and connections to come!
I'm so happy that my current second most visited post is 'On the importance of observations to Ecology', an ode to natural history notes and a reminder that ecological science will stagnate without observing the natural interactions occurring around us. It sums up many of the reasons why I started blogging in the first place. (It was pipped at the post by one of my insectageddon articles.)
Some more on why I love blogging:
The buzz on (ecology) blogging
On 7 years of ecology blogging
Our paper on why ecology blogs are so valuable to the academic community: Bringing ecology blogging into the scientific fold: measuring reach and impact of science community blogs. For anyone who still needs convincing that academic blogs are not a waste of time, this paper is an excellent piece of evidence; co-authors are from Don't Forget the Roundabouts, Scientist Sees Squirrel, Dynamic Ecology, Jeff Ollerton's Biodiversity Blog, and Small Pond Science.
© Manu Saunders 2019
The ‘tyranny of the sound bite’ has plagued politicians and celebrities for decades. Pithy one-liners, taken out of context, can be extremely damaging to a person’s reputation.
In science communication, Sexy Science soundbites, condensing complex ecological problems into simple data points or the efforts of single researchers, can damage public understanding of science.
We’ve seen this with Insect Armageddon and the recent ‘3 billion lost birds’ story. Ecology is the science of nuances, and any claim of global patterns or precise data points must be interpreted with context.
Much of the problem with these soundbite disasters lies with the science communication around the story, not necessarily the science itself.
What is a species? This apparently simple question is one of the best ways to get scientists arguing.
A recent article by Henry Taylor, a philosopher at the University of Birmingham, asks this question from a philosophical perspective. The article itself is okay. But there is zero chance of biologists adopting its recommendation, 'to scrap the idea of a species', any time soon (see also this older article at the same platform, on the same subject, from a biologist's perspective).
What I found interesting is how different audiences interpreted the article in the comments and on social media. I saw a mix of reactions (based on my network and a few searches; obviously not indicative of everyone): some scientists condemned the article vocally and aggressively, while others who didn't appear to be scientists (based on their Twitter bios) shared the article in agreement and support.
‘What is a species?’ is a classic philosophical question, not a scientific one. Philosophical questions are a valuable tool for life. They are conceptual, not factual; they are rarely ‘solved’ (in the scientific sense); and they need to be addressed with complex thinking, not just facts or empirical research. You don’t have to agree with this approach, it’s just how Philosophy differs from Science. Disciplines are defined by different methodologies, standards, systems, and norms.
Dodgy science, dodgy scientists and dodgy humans are not a new thing. And dodgy scientific papers have been published since the dawn of scientific publishing. In 1667 an article on ‘snakestones’, a pseudoscience medical cure with absolutely no basis in truth, appeared in one of the first issues of the oldest known scientific journal, Philosophical Transactions of the Royal Society (now Phil Trans A, one of the most prestigious modern scientific journals).
Since then, disreputable papers have made regular appearances in reputable journals. And there are different scales of disreputable. The paper claiming that octopuses originated from outer space was clearly far-fetched, while the scholars who recently argued there was a 'moral panic' over free-ranging cats simply highlighted how interdisciplinary research is often challenged by opposing methodological approaches (note: I agree with most ecologists that free-ranging cats are not good for wild animals, including insects).
The number of authors included on research papers in many disciplines has increased over time. This editorial in Journal of Applied Ecology is the latest analysis of this trend, finding that published single-author research papers in that journal have declined since 1966 (two years after the journal started publishing). N.B. the authors only quantify research papers (i.e. papers presenting data; they don't specify whether reviews/meta-analyses are included…see below). Applied ecology should be a multidisciplinary field, so this trend is a good thing.
The editorial is excellent, and you should read it – the discussion of underlying causes of this trend is mostly reasons why we should encourage more multi-author papers.
But…there will always be a place for single-author papers in research, especially for early career researchers.
Leading on from the ‘buzz’ of our recent paper on science community blogging, here is a nice Q&A from my university’s media team (thank you UNE!) about how I started blogging and why I love it. If you’re thinking about blogging, but not sure where it will take you, I hope this gives you some insight!
Read the full story here: The buzz on blogging