Journal of Management Information Systems

Volume 38, Number 4, 2021, pp. 893–897

Special Issue: Fake News on the Internet

Dennis, Alan R., Galletta, Dennis F., and Webster, Jane

Alan R. Dennis is Professor of Information Systems and holds the John T. Chambers Chair of Internet Systems in the Kelley School of Business at Indiana University. Dr. Dennis has written more than 150 research papers and has won numerous awards for his theoretical and applied research. His research focuses on four main themes: team collaboration; fake news on social media; cybersecurity; and artificial intelligence. He also has written four books (two on data communications and networking, and two on systems analysis and design). His research has been reported in the popular press almost 1,000 times, in outlets including the Wall Street Journal, Forbes, USA Today, The Atlantic, CBS, Fox Business Network, PBS, Canada’s CBC and CTV, the UK’s Daily Mail and the Telegraph, Australia’s ABC, France’s Le Figaro, South Africa’s Sowetan Live, Chile’s El Mercurio, China Daily, India’s Hindustan Times, and Indonesia’s Tribune News. Dr. Dennis is a Fellow and Past President of the Association for Information Systems (AIS).

Dennis F. Galletta is Professor at the University of Pittsburgh, where he serves as Doctoral Director for the Business School. He has published 56 journal papers, with 19 in journals on the Financial Times FT50 list: Information Systems Research, Journal of Management Information Systems, MIS Quarterly, and Management Science. He has also published 54 refereed conference papers and four books. He is a senior editor of MIS Quarterly, an Editorial Board member at the Journal of Management Information Systems, and has been a member of several other journal boards. He has won a number of prestigious awards for his research and teaching and has chaired several major conferences. Dr. Galletta is a Past President, Fellow, and LEO lifetime achievement awardee of the Association for Information Systems. He was the founding co-Editor-in-Chief of AIS Transactions on Human-Computer Interaction and established the framework for Special Interest Groups in AIS.

Jane Webster holds the E. Marie Shantz Chair Emerita at the Smith School of Business, Queen’s University, Canada. Her research investigates the impacts of technologies in the support of distributed work, organizational communication, employee recruitment and selection, employee monitoring, training and learning, and human-computer interaction. Her special focus is the study of ways to encourage more environmentally sustainable behaviors in organizations. One way she has done so is by applying gamification to the issue, which has allowed her to come full circle to her Ph.D. thesis at New York University on computer playfulness in the workplace. Dr. Webster has published over 100 research papers in journals such as the Academy of Management Journal, European Journal of Information Systems, Information Systems Research, Journal of Business Research, Journal of Organizational Behavior, MIS Quarterly, and Organization Science. Dr. Webster is an AIS Fellow. She has served as a senior editor at MIS Quarterly and as Program Chair for the International Conference on Information Systems.

The online generation and dissemination of false information (e.g., through Facebook, Twitter, Snapchat, and other Internet media), commonly referred to as “fake news,” has garnered immense public attention following the 2016 Brexit referendum, three US elections, the 2019 Indian lynchings, and the 2019 rise in polio cases in Pakistan. Fake news undermines public life across the globe, especially in countries where journalistic practices and institutions are weak [3]. Some fake news is created to spread ideological messages or to create mischief, whereas other fake news is created for profit, as with the Macedonian teenagers who created fake news sites during the 2016 US election to drive advertising revenue [22].

Research shows that fake news spreads “significantly farther, faster, deeper, and more broadly” than true news [24:1146] and has had major societal impacts [15]. All signs indicate that the problem will get worse as political activists, scammers, alternative news media, and hostile governments become more sophisticated in their production and targeting of fake news.

Fake news and other types of false information are also a matter of concern for business and management research and practice [2,9,10,11,12,19]. Businesses have long engaged in deceptive communications such as greenwashing, astroturfing, false advertising, and other types of false messages [4,5,14], but false content presented as news raises a novel range of issues for individuals, organizations, and societies [1,18].

The widespread adoption and use of information and communication technologies, particularly social and digital media, play a key role in the current wave of fake news and false information sweeping the globe [1,7,13]. We believe that the IS discipline can contribute significantly to the discourse, as it already has in related areas such as cyberdeviance [23,24] and deception [e.g., 5]. Our field can draw on its intellectual core of theories and empirical findings on the design, use, and impacts of IT artifacts at different levels of analysis. A nascent body of IS research on this topic is emerging [6,8,16–18,20,21]. Related areas such as review manipulation [e.g., 11] and social behaviors in online social networks [e.g., 10,12,21] can also provide valuable lessons that apply to online fake news and false information more generally. Yet there is a dearth of evidence on many aspects of the phenomenon, and many issues remain open to debate.

We received 80 submissions, which went through three rounds of review and revision. The submissions spanned a diverse set of experimental, qualitative, econometric, and analytical methods, and examined fake news around the globe. The set of accepted papers is similarly diverse in methods and focus. We would like to say that, collectively, the articles offer several viable solutions to the problem of fake news; however, this is not the case in all instances. Some of the articles show how interventions designed to reduce fake news actually have the opposite effect and instead increase its spread. Other articles take a longer-term perspective, measuring emotions evoked by headlines or inserting emotions into them, allowing us to examine some of the roots of fake news behaviors for future study. Another approach was to prime readers to think more objectively and fairly (which helped only liberal-leaning participants) and to measure readers’ perceptions of what their peers want. Still others examined various antecedents of sharing or suppression behaviors. Taken together, this set of articles allows our field to take a significant step forward, but it also shows how challenging the fake news phenomenon is to solve. Clearly, more research is needed before we will be able to reduce the effects of fake news.

Ka Chung Ng, Jie Tang, and Dongwon Lee applied two platform interventions to combat fake news in a large-scale archival study of Sina Weibo, the largest social network in China. One intervention targets the content: a flag applied to a single fake news post. The other targets the person posting the fake news, imposing a restriction on forwarding further items. After reducing their large sample, obtained over a 12-day period, their study of 1,014 matched pairs of truthful and fake news posts found that a flag on the content enabled fake news to disseminate in a centralized manner, encouraging influential users to forward the item. A forwarding restriction, on the other hand, kept the dissemination of fake news more dispersed, with a shorter survival time.

Ofir Turel and Babajide Osatuyi took the point of view that users post not only because of their own interests but also because of their perceptions of the interests of others. In an experiment involving 408 Facebook-using students, they administered a personal objectivity prime: a questionnaire that asked participants to reflect on their own objectivity, fairness, and rationality in making judgments, thereby increasing the availability and salience of such assessments. Their model also took into account participants’ credibility bias and political orientation, as well as their perceptions of their peers’ political orientation, to ultimately predict sharing bias. Interestingly, the objectivity prime had an impact only on liberal-leaning participants. In addition, the consistency of fake news with people’s political orientation increased both credibility bias and sharing bias, and credibility bias increased sharing bias. Finally, perceived alignment between a user’s and their peers’ political orientation reduced the effect of credibility bias on sharing bias.

Bingjie Deng and Michael Chau focused on the previously neglected area of emotions expressed in fake news headlines by testing headlines expressing anger or sadness. Their research involved two pretests for instrument development and two main studies of US participants (N = 335 and 633, respectively) recruited from Amazon’s Mechanical Turk. The authors hypothesized that embedding angry or sad phrases in headlines would affect the perceived effort of the author and the believability of the headlines, and in turn affect four ultimate behaviors: reading, commenting, sharing, and liking. They found that anger expressed in a headline lowered both the perceived effort of the author and the believability of the headline; sadness, however, did not have the same effect. Finally, believability did affect the ultimate behaviors, as hypothesized.

Kelvin King, Bin Wang, Diego Escobari, and Tamer Oraby use a combination of analytical modeling and Twitter data to examine the effects of actively combatting fake news by providing correct information to dispel specific fake news stories. They first develop a theoretical model of the diffusion of both falsehoods and correction messages on Twitter and of their mutual relationship. They then use Twitter data from Hurricane Harvey in 2017 and Hurricane Florence in 2018 to examine the bidirectional relationships between the diffusion of falsehoods and that of their correction messages. The results show that correction messages do not reduce the spread of fake news but instead increase it: intervening to correct fake news backfires and extends its reach. These results suggest that leaving falsehoods to run their course may be the most effective response.
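To make the idea of mutually dependent diffusion concrete, a minimal coupled-diffusion sketch, written in our own illustrative notation rather than as the authors’ actual model, could take the form

\[
\frac{dF}{dt} = \beta_F\, F\left(1 - \frac{F + C}{N}\right) + \gamma\, C, \qquad
\frac{dC}{dt} = \beta_C\, C\left(1 - \frac{F + C}{N}\right) + \delta\, F,
\]

where \(F(t)\) and \(C(t)\) are the cumulative numbers of users who have shared the falsehood and the correction, \(N\) is the susceptible population, \(\beta_F\) and \(\beta_C\) are within-cascade diffusion rates, and \(\gamma\) and \(\delta\) capture the cross-effects between the two cascades. In a sketch of this kind, the backfire finding would correspond to an estimated \(\gamma > 0\): exposure to corrections feeds, rather than dampens, the spread of the falsehood.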

Ada Wang, Min-Seok Pang, and Paul Pavlou use data from Weibo in China, combined with surveys of Chinese and American social media users, to examine the effects of identity verification on the spread of fake news. Many social media platforms attempt to reduce the relative anonymity of those posting news stories by verifying users’ identities, with the idea that after disclosing their identity, users will be less likely to deliberately create and share fake news. The results suggest that identity verification without a publicly viewable verification badge reduces users’ propensity to post fake news. However, if users receive a verification badge, identity verification has no effect. Moreover, if identity verification is voluntary rather than mandatory, users who seek an identity verification badge post more fake news after they receive it. Thus, identity verification can backfire and increase the spread of fake news.

Rather than focusing on interventions to reduce fake news, Christy Galletta Horner, Dennis Galletta, Jennifer Crawford, and Abhijeet Shirsat theorize the relations between emotions and the sharing of fake news. In the context of U.S. elections, they conduct a mixed-methods study to investigate the process by which individuals experience discrete emotional reactions to political fake news headlines and how these emotions contribute to the perpetuation of fake news. Using an emotion inventory, they find that strong, activating emotions lead either to the further spread of fake news through actions such as sharing, or to its suppression through publicly or privately refuting the post. In contrast, deactivating emotions lead to inaction, where readers are more likely to ignore or disengage from the spread of false news. They also find that different headlines appear to stimulate different emotions, depending on the nature of the story, and that conservatives reacted with strikingly different patterns than liberals. They synthesize their findings into a process model to help drive future research to mitigate fake news.

Jordana George, Natalie Gerhart, and Russell Torres also help set directions for the future by inductively deriving a research model for the investigation of fake news from their analysis of the multidisciplinary fake news literature. They synthesize the literature and then develop a research framework and related propositions to help spark future research. They highlight key research themes for IS researchers and propose potential theoretical perspectives and research questions for each. We expect that this paper will become required reading for future IS research on fake news.

We would like to thank the hundreds of reviewers, the five individuals who served as Associate Editors (Hailiang Chen, Atanu Lahiri, Kai Larsen, Mingfeng Lin, and Antino Kim), and Indrani Karmakar, who served as Review Coordinator. Without them, this Special Issue would not have been possible.

Disclosure statement

No potential conflict of interest was reported by the author(s).

References

1. Allcott, H. and Gentzkow, M. Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31, 2 (May 2017), 211–236.

2. Aral, S. Truth, disrupted. Harvard Business Review, (July 2018), 3–11.

3. Bradshaw, S. and Howard, P.N. Challenging truth and trust: A global inventory of organized social media manipulation. The Computational Propaganda Project, 2018.

4. Dunlap, R.E. and McCright, A.M. Organized climate change denial. In J.S. Dryzek, R.B. Norgaard, and D. Schlosberg, eds., The Oxford Handbook of Climate Change and Society. Oxford University Press, 2011.

5. George, J.F., Gupta, M., Giordano, G., Mills, A.M., Tennant, V.M., and Lewis, C.C. The effects of communication media and culture on deception detection accuracy. MIS Quarterly, 42, 2 (February 2018), 551–575.

6. Gimpel, H., Heger, S., Olenberger, C., and Utz, L. The effectiveness of social norms in fighting fake news on social media. Journal of Management Information Systems, 38, 1 (2021), 196–221.

7. Humprecht, E. Where “fake news” flourishes: A comparison across four Western democracies. Information, Communication & Society, (May 2018), 1–16.

8. Kim, A. and Dennis, A.R. Says who? The effects of presentation format and source rating on fake news in social media. MIS Quarterly, 43, 3 (September 2019).

9. Knight, E. and Tsoukas, H. When fiction trumps truth: What “post-truth” and “alternative facts” mean for management studies. Organization Studies, 40, 2 (February 2019), 183–197.

10. Kuem, J., Ray, S., Siponen, M., and Kim, S.S. What leads to prosocial behaviors on social networking services: A tripartite model. Journal of Management Information Systems, 34, 1 (January 2017), 40–70.

11. Kumar, N., Venugopal, D., Qiu, L., and Kumar, S. Detecting review manipulation on online platforms with hierarchical supervised learning. Journal of Management Information Systems, 35, 1 (January 2018), 350–380.

12. Kwon, H.E., Oh, W., and Kim, T. Platform structures, homing preferences, and homophilous propensities in online social networks. Journal of Management Information Systems, 34, 3 (July 2017), 768–802.

13. Lazer, D.M.J., Baum, M.A., Benkler, Y., et al. The science of fake news. Science, 359, 6380 (March 2018), 1094–1096.

14. Lyon, T.P. and Montgomery, A.W. The means and end of greenwash. Organization & Environment, 28, 2 (June 2015), 223–249.

15. Ingram, M. Most Americans say they have lost trust in the media. Columbia Journalism Review, 2018. https://www.cjr.org/the_media_today/trust-in-media-down.php.

16. Moravec, P., Kim, A., and Dennis, A.R. Behind the stars: The effects of news source ratings on fake news in social media. Journal of Management Information Systems, (in press).

17. Moravec, P., Kim, A., and Dennis, A.R. Flagging fake news: System 1 vs. System 2. In ICIS 2018 Proceedings. Association for Information Systems, San Francisco, CA, US, 2018.

18. Moravec, P., Minas, R.A., and Dennis, A.R. Fake news on social media: People believe what they want to believe when it makes no sense at all. MIS Quarterly, (in press).

19. Murphy, M. Study: Fake news hits the workplace. Leadership IQ, 2017. https://www.leadershipiq.com/blogs/leadershipiq/study-fake-news-hits-the-workplace.

20. Murungi, D., Purao, S., and Yates, D.J. Beyond facts: A new spin on fake news in the age of social media. In AMCIS 2018 Proceedings. Association for Information Systems, New Orleans, LA, US, 2018.

21. Pan, Z., Lu, Y., Wang, B., and Chau, P.Y.K. Who do you think you are? Common and differential effects of social self-identity on social media usage. Journal of Management Information Systems, 34, 1 (January 2017), 71–101.

22. Subramanian, S. Inside the Macedonian fake-news complex. Wired, 2017. https://www.wired.com/2017/02/veles-macedonia-fake-news/.

23. Venkatraman, S., Cheung, C.M.K., Lee, Z.W.Y., Davis, F.D., and Venkatesh, V. The “Darth” side of technology use: An inductively derived typology of cyberdeviance. Journal of Management Information Systems, 35, 4 (October 2018), 1060–1091.

24. Vosoughi, S., Roy, D., and Aral, S. The spread of true and false news online. Science, 359, 6380 (March 2018), 1146–1151.