Abstract

This study explores AI development practices to understand how trustworthiness is built into AI systems and how this generates trust in AI. Using a multi-sited ethnographic methodology, we analyze observations, interviews, and documentation from AI developers working on trustworthy AI. Our analysis identifies two key practices: transformation and revelation. Through transformational AI development practices, trustworthiness is (re)constituted to greater or lesser degrees. Through revelation practices, AI developers communicatively engage with others to generate trust. This focus on developers adds to the prevailing user-centric perspective and shows the role nontechnical development practices play in shaping trust and trustworthy AI before it is implemented. Because policy guidelines lack clarity on nontechnical aspects, we argue that further attention to communication can benefit AI practice and policy.

DOI

10.30658/hmc.10.5

Author ORCID Identifier

Tessa Bruijnem: 0000-0002-2872-4006

Anouk Mols: 0000-0003-0355-9849

Jason Pridmore: 0000-0001-9159-8623
