By Justin C. Cheung and Shirley S. Ho.

Image source: ChatGPT

Artificial intelligence (AI) is everywhere nowadays, from ChatGPT and Apple Intelligence to digital twin systems and even autonomous drones. In academia, another term is gaining considerable traction: Explainable AI (XAI). XAI refers to how well algorithmic decisions (or behaviors) and the underlying algorithmic model can be understood by lay end-users. It helps people comprehend how AI behaves, which, as research has shown, can greatly improve trust and thereby behavioral acceptance. In risk-laden AI applications such as autonomous passenger drones, XAI is all the more important because it has great potential to mitigate unwarranted concerns.

At the core of our study, we examined the effects of perceived explainability of autonomous drone algorithms on trust. We delineated trust along three dimensions, namely performance (how well the AI operates), purpose (how well the AI’s obj...
By Rod Abhari and Emőke-Ágnes Horvát.

Whether science is seen as “self-correcting” or “broken” depends in part on how the public understands retractions.

Scientific retractions are increasingly used to correct the scientific record. Last year, over 10,000 academic articles were retracted, marking an all-time high. But while scientists may see retractions as an assurance of scientific integrity, when a scientific topic has been the subject of political controversy the public may see them as evidence of incompetence or even corruption. Our recent article, available online in Public Understanding of Science, examined social media posts about the most discussed retracted COVID-19 articles in order to better understand the relationship between scientific retractions and the politicization of science.

When Retractions Failed

In May 2020, The Lancet published a study concluding that hydroxychloroquine, a drug promoted by then-President Donald Trump, was ineffective a...