Abstract
Trust certification through so-called trust seals is a common strategy to help users ascertain the trustworthiness of a system. In this study, we examined trust seals for AI systems from two perspectives: (1) in a pre-registered online study, we asked whether trust seals can increase user trust in AI systems, and (2) in a qualitative investigation, we explored what participants expect from such AI seals of trust. Our results provide mixed support for the use of AI seals. While trust seals generally did not affect participants’ trust, trust in the AI system increased when participants trusted the seal-issuing institution. Moreover, although participants understood verification seals the least, they desired verifications of the AI system the most.
DOI
10.30658/hmc.8.7
Author ORCID Identifier
Magdalena Wischnewski: 0000-0001-6377-0940
Nicole Krämer: 0000-0001-7535-870X
Christian Janiesch: 0000-0002-8050-123X
Emmanuel Müller: 0000-0002-5409-6875
Theodor Schnitzler: 0000-0001-7575-1229
Carina Newen: 0000-0001-8721-6856
Recommended Citation
Wischnewski, M., Krämer, N., Janiesch, C., Müller, E., Schnitzler, T., & Newen, C. (2024). In seal we trust? Investigating the effect of certifications on perceived trustworthiness of AI systems. Human-Machine Communication, 8, 141–162. https://doi.org/10.30658/hmc.8.7
Included in
Communication Technology and New Media Commons, Human Factors Psychology Commons, Other Communication Commons, Social Psychology Commons