By Emma Weitkamp
How do you judge the quality of the science
you consume online? In an increasingly diverse media landscape, quality
assessment becomes an important challenge for science communication, from
both practical and research perspectives.
Online, we encounter science via a
patchwork of very different platforms and voices. It is this nexus between
platform – Spotify, Reddit, blogs and newspaper feeds – and voices – climate
activists, science sceptics, lobby groups and scientists – that creates the
challenge of assessing quality. We cannot easily apply the same criteria to a
YouTube video produced by an interested citizen as we do to an in-depth blog
post written by a scientist. Nor should we.
Yet debates around misinformation, science
denial and infodemics
raise questions about the challenges
digital media pose to society, particularly given users' lack of critical
engagement when it comes to assessing the quality of the material they consume. If readers and viewers are not engaging
critically, then it becomes even more important that the creators of online science
content are offered support to produce quality content.
We sought to address this gap through a
Delphi study exploring with science communication academics how they conceptualise quality indicators for digital science content. From this study, we identified
five pillars through which scholars felt digital science communication could be
assessed.
Context, however, is key. Digital science content, like any other content, needs to match the setting – what you expect to see on Instagram is not the same as what you expect to see on the website of a national newspaper. Nor should we necessarily apply the same quality judgements or expectations to different science communication actors – whether that be a university press officer, a policy maker or a social media influencer.
Likewise, the technical affordances of
digital platforms clearly influence the way in which content is produced and
consumed and, of course, audience expectations of what they will find there. This
technical context also plays an important role in shaping how we might judge
science communication quality.
Many of the issues raised in our study
point to the need for more critical audiences, and there is clearly a role for
education that supports the development of these critical skills. But there is
also a role for science communicators, who can facilitate this
critical engagement. Our findings will not offer a panacea for tackling
misinformation, but they do offer steps in the right direction.
We see a need for society to promote
science communication quality, and this includes government and social media
companies. Social media companies in particular need to understand their role
in society and the responsibilities this brings, and there does appear to be
some appetite for greater regulation of these industries, and for discussion
of what this might look like.
At a professional level, societies and
associations, scientific institutions and other bodies can support quality
assurance of digital science content through provision of training and
guidelines. We encourage such organisations to debate questions of quality and
consider how they can foster quality science communication amongst their
members, but also to work with similar professional bodies, such as those in
science journalism and science public relations, to raise standards.
Finally, at the individual level we see a
role for professional communicators, but also science communication scholars,
scientists and those science enthusiasts who contribute much to science
discourses online. Here is where the content, presentation and process pillars
really come into their own, as they suggest tangible approaches that
practitioners can adopt to both produce quality content and help audiences
assess its quality. Crucially, they move beyond standards such as ‘accuracy’,
to consider how clearly the motivations or goals of the communicator (and their
sources) are presented, as well as an indication of the reliability of the
evidence presented. This type of transparency is essential if audiences are to
be able to judge the quality of what they encounter.
Our study also highlights gaps that science
communication scholars could address. Relatively few respondents chose to
comment on less traditional digital media formats, such as Instagram posts or
communications from non-governmental organisations. We do not know why this
was, but speculate that it may have to do with familiarity. There simply are
fewer studies of science communication on some of these platforms. Yet when I
ask my undergraduate students where they get their news and information, it is
an unusual student that suggests they read the newspaper (online or off) or
watch the news on TV. They are far more likely to mention TikTok, Snapchat or the latest
platform I haven’t even heard of. Legacy media are by no means dead, but we,
as scholars of science communication, need to move out of our comfort zones and
tackle these less researched places (though for me Truth Social may be a step
too far!).
---
Emma Weitkamp is Professor of Science Communication
and co-director of the Science Communication Unit at the University of the West of
England, where she teaches on the MSc in Science Communication and provides training for science communication professionals. Her research interests explore narrative in
science communication, considering both arts and media practice and the actors
involved in science communication. She participates in the COALESCE project to develop a European competence centre
in science communication.