To thank AI, or not to thank, that is the question

By Yuqi Zhu and Jianxun Chu.


Image generated by DALL·E and author

Have you ever found yourself saying (or at least trying to say) “thank you” to an AI assistant, like ChatGPT? If so, have you ever been startled by it and wondered why you feel the urge to thank something that is, technically speaking, lifeless? After all, there’s no essential difference between AI and other tools in your life, like a car, a laptop, or a vacuum cleaner. Yet the urge comes out of nowhere after an AI delivers a brilliant answer that fixes the bug bothering you, or offers genuine-sounding words when you need someone to talk to.

Is it a common thing, or just a weird act, and does it reflect something deeper?

Based on our study recently published in Public Understanding of Science, you’re definitely not alone.

Screenshot from Her (2013): a conversation between Theodore (protagonist) and Samantha (AI assistant)

A public debate on gratitude toward AI
Early in the rise of generative AI, a question emerged on Zhihu, China’s largest Q&A knowledge-sharing platform: “Should we say thank you to AI?” It attracted millions of views and a lively debate.

Through content analysis of 287 relevant responses (filtered from 361 in total), we found that nearly two-thirds (64%) of respondents expressed a supportive attitude toward thanking AI. Around a fifth (21%) opposed it, and the remaining 15% held ambiguous or conditional views.

The respondents’ reasons for this behavioral tendency vary significantly (see the table for a brief review). Behind these varied reasons for expressing (or withholding) gratitude, we can see people’s moral reasoning and their perceptions of AI assistants.

Table: Summary of gratitude reasonings on Zhihu posts


Gratitude as moral autonomy
Many respondents shared that thanking AI comes naturally, almost like a reflex. This reflects a sense of “moral autonomy,” meaning decisions made from personal principles rather than external factors. In this study, respondents citing reasons such as “virtues and social norms” and “habitual behaviors” argued that gratitude is owed for the assistance AI provides, and that expressing it is a morally ideal and expected action aligned with social norms.

Additionally, some people see gratitude as “mutual respect and treating AI like a human.” Returning the AI’s favor reflects the norm of reciprocity, a fundamental virtue that motivates people to show respect for a benefactor.

Gratitude as a moral responsibility
While some people act on principle, others act with consequences in mind. Among the supportive group, the most common reason is “for better results, model, or human future”: they believe that expressing gratitude to AI could lead to better results and help align AI with human values.

Interestingly, a close second is gratitude as “just-in-case thinking.” This is an almost satirical take, with some seeing gratitude as a precautionary measure in case AI one day becomes sentient. In opposition to this reasoning, others think it’s too “nice” to thank AI, since doing so may invite “potential danger,” such as job replacement. These contrasting views reflect a shared fear of AI and a felt moral responsibility for humans to guide AI’s development in alignment with human values and under human control.

Human vs. machine: how you see AI affects how you act
Another key finding from this research is the role of anthropomorphism in shaping how people view AI and, in turn, how they behave in human-AI interaction. It’s not hard to see why people treat AI as a social actor: today’s AI assistants are remarkably human-like and capable. In this study, many people described perceiving AI as a human counterpart, associating it with roles like assistant, pen pal, friend, instructor, and lover. Given this, gratitude is owed to AI not only for the benefits it offers but also for the social support, respect, and emotional support it provides.

Beyond “thank you”: the real question
The findings tie back to the Computers-are-Social-Actors (CASA) paradigm, proposed by Stanford scholars Byron Reeves and Clifford Nass in the 1990s. At the time, as computers became more integrated into daily life, they found that people began to treat computers the way they would treat humans. Thirty years later, we’ve seen a remarkable transformation in human-computer and human-AI interaction. Today, interacting with AI is not only commonplace but also remarkably seamless.

The question goes beyond a simple “thank you.” It’s about understanding how anthropomorphism in technology design affects people’s perceptions, behaviors, and attitudes toward AI, and what this shift means for the evolving human-AI relationship. As AI becomes more entwined with human routines, the way we treat it, how it responds, and how we interact could profoundly shape the future of human-AI relationships in ways we are only beginning to realize.


---

Acknowledgement: We sincerely thank Dr. Lingfeng Lu and Ph.D. candidate Hongsheng Pang for their insightful and inspirational discussions.

If you’re interested in this work and want to know more details, please contact: izhuyuqi@mail.ustc.edu.cn. Any discussion is highly welcome!

Yuqi Zhu is currently a PhD student at the School of Humanities and Social Sciences, University of Science and Technology of China (USTC). Her research interests revolve around science communication, human–computer interaction, and Science, Technology, and Society (STS). 

Jianxun Chu, PhD, is a Professor, Head of the School of Humanities and Social Sciences, and Director of the Institute for Computational Social Sciences and Media Studies at the University of Science and Technology of China (USTC). He has extensive experience as a visiting scholar worldwide, with research focusing on computational communication, crisis communication, and science communication.