This article proposes the notion of Artificial Sociality to describe communicative AI technologies that create the impression of social behavior. Existing tools that activate Artificial Sociality include, among others, Large Language Models (LLMs) such as ChatGPT, voice assistants, virtual influencers, socialbots, and companion chatbots such as Replika. The article highlights three key issues that are likely to shape present and future debates about these technologies, as well as design practices and regulation efforts: the modelling of human sociality that underpins them, the problem of deception, and the issue of control on the part of users. The article concludes by discussing the ethical, social and cultural implications of these issues for future applications and regulation of such technologies.
Author ORCID Identifier

Simone Natale: 0000-0003-1962-2398

Iliana Depounti: 0000-0003-1854-3065