Abstract

This review paper provides a conceptualization of AI-assisted content moderation with varying degrees of autonomy and summarizes experimental evidence on how different levels of automation in content moderation, and the associated losses of autonomy, affect individuals and groups. Our results show that current research predominantly focuses on individual-level effects, necessitating a shift toward understanding the impact on groups. The study highlights gaps in exploring different levels of AI-assisted moderation interventions, as well as misalignments among conceptualizations that make it difficult to compare research results. The discussion underscores the prevailing emphasis on harmful content removal and advocates for investigating more constructive moderation techniques, emphasizing the potential of AI in fostering normative, higher-level outcomes.

DOI

10.30658/hmc.9.10

Author ORCID Identifier

Zehui Yu: 0009-0003-1728-7829

Lukas Otto: 0000-0002-4374-6924

Dennis Assenmacher: 0000-0001-9219-1956

Claudia Wagner: 0000-0002-0640-8221
