ABSTRACT:
As artificial intelligence (AI) becomes increasingly integral to society, humans’ tendency to forgo their own judgment in favor of algorithmic advice is eliciting substantial concern. Prior research suggests that such overreliance is driven by informational influences (confidence in the AI’s superior judgment) or by a desire to reduce attentional load. We propose a new mechanism: normative pressure, stemming from the legitimacy afforded to algorithms within social or work-related structures. Using a setup inspired by social conformity research, we conducted four studies involving 1,445 crowd-workers performing straightforward image-classification tasks. Substantial percentages of participants followed erroneous AI recommendations on these tasks, despite being able to perform them perfectly without support. This overreliance was partially mediated by normative pressure, measured as discomfort at disagreeing with the AI. Conformity decreased when participants perceived their decisions’ real-life impact as high (versus low). Our findings highlight the risks inherent in human-AI collaboration and the difficulty of ensuring that humans-in-the-loop maintain independent judgment.
Key words and phrases: Artificial Intelligence, AI, Human-AI Interaction, Algorithmic Advice, AI Overreliance, Algorithmic Authority, Human-in-the-loop, AI Trust, Social Conformity, Generative AI