Abstract

This article proposes the notion of Artificial Sociality to describe communicative AI technologies that create the impression of social behavior. Existing tools that activate Artificial Sociality include, among others, Large Language Models (LLMs) such as ChatGPT, voice assistants, virtual influencers, socialbots, and companion chatbots such as Replika. The article highlights three key issues that are likely to shape present and future debates about these technologies, as well as design practices and regulation efforts: the modelling of human sociality that underpins them, the problem of deception, and the issue of control on the part of users. The article concludes by discussing the ethical, social, and cultural implications that are likely to inform future applications of, and regulation efforts for, these technologies.

DOI

10.30658/hmc.7.5

Author ORCID Identifier

Simone Natale: 0000-0003-1962-2398

Iliana Depounti: 0000-0003-1854-3065

