Should You Trust Your Voice Assistant? It’s Complicated, but No
DOI: https://doi.org/10.3384/ecp208015
Abstract
The widespread use of voice-assisted applications powered by artificial intelligence raises questions about the dynamics of trust in and reliance on these systems. While users often rely on these applications for help, instances where users face unforeseen risks and heightened challenges have sparked conversations about the importance of fostering trustworthy artificial intelligence. In this paper, we argue that the prevailing narrative of trust and trustworthiness in relation to artificial intelligence, particularly voice assistants, is misconstrued and fundamentally misplaced. Drawing on insights from philosophy and artificial intelligence literature, we contend that artificial intelligence systems do not meet the criteria for participating in a relationship of trust with human users. Instead, a narrative of reliance is more appropriate. However, we investigate the matter further to explore why the trust/trustworthiness narrative persists, focusing on the unique social dynamics of interactions with voice assistants. We identify factors such as diverse modalities and complexity, social aspects of voice assistants, and issues of uncertainty, assertiveness, and transparency as contributors to the trust narrative. By disentangling these factors, we shed light on the complexities of human-computer interactions and offer insights into the implications for our relationship with artificial intelligence. We advocate for a nuanced understanding of trust and reliance in artificial intelligence systems and provide suggestions for addressing the challenges posed by the dominance of the trust/trustworthiness narrative.
Published
2024-06-14
License
Copyright (c) 2024 Filippos Stamatiou, Xenofon Karakonstantis
This work is licensed under a Creative Commons Attribution 4.0 International License.