Abstract

The increasing prevalence of communicative Generative AI, such as ChatGPT, highlights its transformative potential for science communication while raising critical questions about users’ trust in these systems when they convey science-related information. As perceptual hybrids, these agents challenge traditional notions of trustworthiness, and it remains unclear whom or what users refer to as the object of trust. This qualitative interview study integrates dimensions of human-machine and epistemic trustworthiness within a hybrid framework, complemented by a descriptive source orientation model. It shows that trustworthiness assessments can extend beyond a chatbot’s interface, emphasizing the perceived salience of its underlying infrastructure, developers, and organizations. By exploring the multifaceted nature of trustworthiness, the study offers a theoretical and empirical contribution to understanding how diverse layers shape users’ trustworthiness perceptions, particularly in the context of science-related information seeking.

DOI

10.30658/hmc.11.11

Author ORCID Identifier

Evelyn Jonas: 0009-0006-1942-4622

Esther Greussing: 0000-0001-8655-5119

Monika Taddicken: 0000-0001-6505-3005

