By Yuqi Zhu and Jianxun Chu. Image generated by DALL·E and author

Have you ever found yourself saying (or at least wanting to say) "thank you" to an AI assistant like ChatGPT? If so, have you ever been taken aback and wondered why you feel the urge to do so toward something that, technically speaking, is lifeless? After all, there is no essential difference between AI and other tools in life, like a car, a laptop, or a vacuum cleaner. Yet the urge seems to come out of nowhere: after AI gives a brilliant answer that fixes the bug that has been bothering you, or offers genuine words when you needed someone to talk to. Is this a common thing or just a personal quirk, and does it reflect something deeper? Based on our study recently published in Public Understanding of Science, you're definitely not alone.

Screenshot from Her (2013): a conversation between Theodore (protagonist) and Samantha (AI assistant)

A public debate on gratitude toward AI

Early in the rise of generative AI, ...
By Justin C. Cheung and Shirley S. Ho. Image source: ChatGPT

Artificial intelligence (AI) is everywhere nowadays, from ChatGPT and Apple Intelligence to digital twin systems and even autonomous drones. In academia, another term is gaining considerable traction: Explainable AI (XAI). XAI refers to how well algorithmic decisions (or behaviors) and the underlying algorithmic model can be understood by lay end-users. It helps people comprehend how AI behaves, which, as research has shown, can greatly improve trust and thereby behavioral acceptance. In risk-laden AI applications such as autonomous passenger drones, XAI is all the more important because it has great potential to mitigate unwarranted concerns.

At the core of our study, we examined the effects of perceived explainability in autonomous drone algorithms on trust. We delineated trust along three dimensions, namely performance (how well the AI operates), purpose (how well the AI's obj...