Abstract

The increasing prevalence of communicative generative AI, such as ChatGPT, highlights its transformative potential for science communication while raising critical questions about users’ trust in these systems when they convey science-related information. As perceptual hybrids, these agents challenge traditional notions of trustworthiness, and it remains unclear whom or what users regard as the object of trust. This qualitative interview study integrates dimensions of human-machine and epistemic trustworthiness within a hybrid framework, complemented by a descriptive source orientation model. It shows that trustworthiness assessments can extend beyond a chatbot’s interface, emphasizing the perceived salience of its underlying infrastructure, developers, and organizations. By exploring the multifaceted nature of trustworthiness, the study offers a theoretical and empirical contribution to understanding how diverse layers shape users’ trustworthiness perceptions, particularly in the context of science-related information seeking.

DOI

10.30658/hmc.11.11

Author ORCID Identifier

Evelyn Jonas: 0009-0006-1942-4622

Esther Greussing: 0000-0001-8655-5119

Monika Taddicken: 0000-0001-6505-3005
