ABSTRACT:
Many news stories that users see on social media are recommended by artificial intelligence (AI), but most users are unaware of this. Policymakers have argued that increasing transparency about the role of AI may reduce users’ sharing of fake news. We use social influence theory to argue that such explicit labeling of news stories may have the opposite effect of what policymakers intend, by triggering users to rely on fast System 1 cognition rather than more effortful, rational System 2 cognition. We conducted three experiments in the United States using two different forms of AI. The results of Study 1 show that labeling stories as recommended by an AI agent had effects similar to labeling them as recommended by a human expert: it encouraged users to rely on fast cognition, which made them more likely to share fake news (as well as true news). Study 2 examined the effects of a more machine-like, algorithmic depiction of AI and found that, once again, AI labels made users more likely to share fake news; Study 3 replicated this finding in the post-generative-AI era. Our research contributes to theory by showing that labeling social media stories as recommended by AI serves as a signal of positive social influence that triggers fast rather than rational cognition and thereby exacerbates, rather than mitigates, the spread of fake news.
Key words and phrases: social media, online recommenders, AI recommendations, fake news, fast cognition, social influence, dual process cognition, signaling, System 1 cognition