“Explainable AI” in Science and Technology Communication?

By Justin C. Cheung and Shirley S. Ho


Image source: ChatGPT

Artificial intelligence (AI) is everywhere these days, from ChatGPT and Apple Intelligence to digital twin systems and even autonomous drones. In academia, another term is gaining considerable traction: explainable AI (XAI).

XAI refers to how well algorithmic decisions (or behaviors) and the underlying algorithmic model can be understood by lay end-users. It helps people comprehend how AI behaves, which, as research has shown, can greatly improve trust and, in turn, behavioral acceptance. In risk-laden AI applications such as autonomous passenger drones, XAI is all the more important because of its potential to mitigate unwarranted concerns.

At the core of our study, we examined how perceived explainability of autonomous drone algorithms affects trust. We delineated trust along three dimensions: performance (how well the AI operates), purpose (how well the AI's objectives align with our goals), and process (how appropriately the AI makes its decisions).

This approach allowed us to make precise observations of the trust trajectories leading to AI acceptance. We also found that people can form perceptions of explainability before a technology is even introduced: our model suggests that attention to news media led people to feel they understood the AI better.

So, what does this mean for the field of science and technology communication?

For starters, our study firmly establishes the importance of XAI in shaping people's perceptions of AI applications. Whether through news media reports, public service announcements, or direct interactions with AI systems, effectively explaining the algorithmic decision-making process will be essential to achieving the cognitive and affective outcomes associated with AI acceptance.

To date, XAI is not a legal requirement in many jurisdictions, so the responsibility for presenting XAI information falls largely on science and technology communicators. We must be aware of this expectation gap and offer the lay public appropriate explanations of the algorithms. But how?

There is great potential to study the effectiveness of different methods for delivering XAI information. For example, experimental studies could investigate variations in modality (e.g., audio, text, or anthropomorphic agents), explanation style (e.g., concise vs. detailed language), and content (e.g., what, how, or why explanations). For qualitative researchers, important questions include how much explanation should be provided, when it should be presented, what types of explanations individuals prefer, and in which situations.

Answering these questions will better equip us to respond to the fast-growing demand for AI in modern societies.


---
Justin C. Cheung is a PhD student at the Wee Kim Wee School of Communication and Information, Nanyang Technological University, Singapore. His research interests are in science communication and argumentation theory.

Shirley S. Ho is the Associate Vice President for Humanities, Social Sciences & Research Communication at Nanyang Technological University (NTU), Singapore. She is concurrently President's Chair Professor in Communication Studies in the Wee Kim Wee School of Communication and Information at NTU. Her research focuses on cross-cultural public opinion dynamics related to science and technology with potential health or environmental impacts.