Journal of Management Information Systems

Volume 33 Number 2 2016 pp. 327-331

Special Issue: Information Systems for Deception Detection

Nunamaker, Jay F., Burgoon, Judee K., and Giboney, Justin Scott

JAY F. NUNAMAKER, JR. ([email protected], corresponding author) is Regents and Soldwedel Professor of MIS, Computer Science and Communication at the University of Arizona. He is director of the Center for the Management of Information and the Center for Border Security and Immigration. He received his Ph.D. in operations research and systems engineering from Case Institute of Technology. He obtained his professional engineer’s license in 1965. He specializes in the fields of system analysis and design, collaboration technology, and deception detection. He has been inducted into the Design Science Hall of Fame and received the LEO Award for Lifetime Achievement from the Association for Information Systems. He has published over 368 journal articles, book chapters, books, and refereed proceedings papers. He has also cofounded five spin-off companies based on his research.

JUDEE K. BURGOON ([email protected]) is a professor of communication, family studies, and human development at the University of Arizona, where she is director of research for the Center for the Management of Information, and site director for the Center for Identification Technology Research, a National Science Foundation Industry/University Cooperative Research Center. She has authored or edited 14 books and monographs and over 300 articles, chapters, and reviews related to nonverbal and verbal communication, interpersonal deception, and computer-mediated communication. Her current program of research centers on developing tools and methods for automated detection of deception and has been funded by the National Science Foundation, Department of Defense, and Department of Homeland Security, among others. She has received numerous awards and has been identified as the most prolific female scholar in the field of communication in the twentieth century.

JUSTIN S. GIBONEY ([email protected]) is an assistant professor in information technology management and digital forensics at the University at Albany. He received his Ph.D. in management information systems from the University of Arizona. His research focuses on behavioral information security, deception detection, expert systems, and meta-analytic processes, with an emphasis on design science. He has published in International Journal of Human-Computer Studies, Communications of the Association for Information Systems, Decision Support Systems (DSS), and other journals.

Security organizations are developing systems to screen human behavior and communication. Such systems are gaining importance as human behavior in interactions with information technology grows more complex. Screening systems search human communication for signs of abnormal behavior—deceit, malicious intent, fraud, or other threats. As sociotechnical systems, screening technologies encompass human behavior, human communication, and computational algorithms. They are studied as information-processing tools, with the focus largely on the computational capability to capture human behavior, which places this work squarely in the information systems (IS) research domain [6]. The research presented in this special issue examines deceptive practices and malicious intent, and the interaction among technology, information, people, and organizations [1]. Many of these technologies are in their early stages of development and require further advances and theoretical understanding. This special issue extends the theoretical, design, and process knowledge regarding systems intended to detect deception, fraud, malicious intent, and insider threat.

Because of the damage that lies can inflict, especially in fraud scandals such as those at Enron and WorldCom, law enforcement agencies are looking for ways to detect deception in communication. As there is no single telltale sign of deception [4], many scholars are working to leverage computational power to detect deceit and malicious intent. This special issue covers two means of detecting deception—linguistics and oculometrics—and two ways of identifying malicious insiders—character analysis and community participation.

Linguistic analysis is based on the idea that our language choices are not always deliberate or conscious, but that they reflect our emotional and cognitive states [2]. Humans typically cannot perceive these states from word choice alone, but scholars are designing computer tools that detect deception by uncovering emotional and cognitive states through analysis of communicators’ linguistic choices. These tools are being tested not only experimentally in universities but also in the field, in screening environments such as international border crossings.
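
To make the approach concrete, the following minimal sketch counts emotion- and cognition-related words in a message against small illustrative word lists; production tools rely on validated lexicons and far richer feature sets than this toy example.

```python
# Minimal sketch of dictionary-based linguistic cue counting. The word lists
# are illustrative placeholders, not a validated lexicon such as LIWC.
from collections import Counter

CUE_LEXICON = {
    "affect": {"afraid", "happy", "worried", "angry", "glad"},
    "cognition": {"think", "because", "know", "believe", "guess"},
}

def count_cues(message: str) -> dict:
    words = message.lower().split()
    counts = Counter()
    for word in words:
        for category, lexicon in CUE_LEXICON.items():
            if word.strip(".,!?") in lexicon:
                counts[category] += 1
    # Normalize by message length so long and short messages are comparable.
    total = max(len(words), 1)
    return {cat: counts[cat] / total for cat in CUE_LEXICON}

print(count_cues("I think I was worried because I know nothing about it."))
```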

Oculometric analysis is based on the idea that people’s emotional states are reflected physiologically. The pupils are controlled by the autonomic nervous system and respond to fight-or-flight arousal [3] as well as to attention-orienting behaviors [5]. Oculometrics are measured using eye trackers, which use infrared light reflected from the eyes to detect eye movement and pupil diameter.
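
As a simple illustration of the measurement step (not any particular tracker’s API), the sketch below converts a stream of pupil-diameter samples into baseline-corrected dilation scores; real pipelines also remove blinks and filter noise.

```python
# Minimal sketch: baseline-corrected pupil dilation from eye-tracker samples.
# Assumes `samples` holds pupil diameters (in mm) at a fixed sampling rate.
def pupil_dilation(samples: list[float], baseline_n: int = 30) -> list[float]:
    """Return dilation of each sample relative to the resting baseline."""
    baseline = sum(samples[:baseline_n]) / baseline_n
    return [s - baseline for s in samples]

# Hypothetical readings: diameter rises after a probing question is asked.
readings = [3.1] * 30 + [3.4, 3.6, 3.5]
print(pupil_dilation(readings)[-3:])  # approximately [0.3, 0.5, 0.4]
```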

The first article in this special issue, “More Than Meets the Eye: How Oculometric Behaviors Evolve Over the Course of Automated Deception Detection Interactions,” by Jeffrey G. Proudfoot, Jeffrey L. Jenkins, Judee K. Burgoon, and Jay F. Nunamaker, Jr., explains how deception detection systems can employ two oculometric behaviors—pupil dilation and eye-gaze fixation—in conjunction with visual stimuli to distinguish between deceivers and truth-tellers. In a mock-crime experiment, participants—randomly assigned as either smugglers carrying contraband or normal passengers carrying ordinary luggage—proceeded through a mock security checkpoint with their luggage. At the checkpoint, an automated deception detection system asked participants questions about their luggage and showed them images, some of which depicted contraband items. The study reveals that deceivers (smugglers) showed higher initial pupil dilation than truth-tellers (normal passengers) and that deceivers’ pupil dilation decreased more rapidly than truth-tellers’, especially when deceivers were not shown relevant contraband stimuli. The article also shows that deceivers increasingly looked toward a neutral area of the screen—one that never displayed contraband stimuli—more than truth-tellers did, and that this shift was accelerated by the display of relevant contraband stimuli.
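
One way to operationalize the declining-dilation finding, sketched below under the assumption that per-question dilation averages are available, is to fit a linear trend per participant and compare slopes across groups; the numbers here are fabricated for illustration.

```python
# Sketch: compare how quickly pupil dilation decays over an interaction by
# fitting a linear trend per participant; a more negative slope means a
# faster decline. The data are fabricated, not the study's measurements.
import numpy as np

def dilation_slope(dilation_by_question: list[float]) -> float:
    x = np.arange(len(dilation_by_question))
    slope, _intercept = np.polyfit(x, dilation_by_question, deg=1)
    return slope

deceiver = [0.42, 0.35, 0.28, 0.20, 0.15]   # hypothetical mm over 5 questions
truthful = [0.20, 0.19, 0.18, 0.18, 0.17]
print(dilation_slope(deceiver), dilation_slope(truthful))  # deceiver decays faster
```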

The second article, “An Empirical Validation of Malicious Insider Characteristics,” by Nan (Peter) Liang, David P. Biros, and Andy Luse, identifies and validates characteristics of malicious insiders. The authors analyze 133 documents about known malicious insiders to identify keywords indicative of characteristics thought to mark malicious insiders, such as personality disorders, ethical lapses, and disgruntlement. Using a subsample of the documents, the authors built keyword dictionaries, which they then applied to the remaining documents. The keywords were counted and categorized as symptoms of particular characteristics. If the keyword count for a reported individual met a threshold, the individual was classified as having that characteristic, following the thresholding approach clinical psychology recommends for making diagnoses. The authors then compared characteristic rates in their sample of malicious insiders with published rates for the general population to test for statistical differences. They found that malicious insiders are more likely to have an antisocial personality disorder, an avoidant personality disorder, a disruptive personality disorder, a substance abuse disorder, and disgruntlement. The authors also noted that malicious insiders are often described as dedicated to family or work, agreeable, professional, and high academic performers.
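
The classification rule the authors describe can be illustrated with a short sketch; the keyword list and threshold below are hypothetical stand-ins, not the paper’s validated dictionaries.

```python
# Sketch of the threshold rule: flag an individual for a characteristic when
# enough indicative keywords appear in documents about them. The keyword set
# and threshold are illustrative placeholders.
DISGRUNTLEMENT_TERMS = {"unfair", "passed over", "resent", "grievance"}
THRESHOLD = 2  # hypothetical minimum keyword count for a positive classification

def has_characteristic(document: str, terms: set[str], threshold: int = THRESHOLD) -> bool:
    text = document.lower()
    hits = sum(text.count(term) for term in terms)  # phrase-aware substring counts
    return hits >= threshold

doc = "He filed a grievance, saying it was unfair he was passed over twice."
print(has_characteristic(doc, DISGRUNTLEMENT_TERMS))  # True: 3 keyword hits
```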

The third article, “Computer-Mediated Deception: Strategies Revealed by Language-Action Cues in Spontaneous Communication,” by Shuyuan Mary Ho, Jeffrey T. Hancock, Cheryl Booth, and Xiuwen Liu, studies linguistic cues related to deception in spontaneous online communication. The authors designed an experimental game in which participants were randomly paired and randomly assigned as interviewers or interviewees, with interviewees at times randomly assigned to deceive. This setup allowed the authors to study both deception success and the linguistic features of deception. They found that deceivers use more cognition- and affect-related words, use fewer words overall, and have longer response times than their truthful counterparts. Linguistic features were analyzed using a decision tree, a support vector machine, and a logistic regression. The results of the three algorithms demonstrate the feasibility of developing contextual analysis to protect business communications from deception.
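
A minimal sketch of this kind of comparison, using scikit-learn with synthetic stand-in features rather than the authors’ data, might look as follows.

```python
# Sketch: compare the three classifier families named above on a
# linguistic-feature matrix. Features and labels are synthetic stand-ins
# for the paper's language-action cues.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))      # e.g., word count, affect ratio, response latency
y = rng.integers(0, 2, size=200)   # 1 = deceptive turn, 0 = truthful turn

for model in (DecisionTreeClassifier(), SVC(), LogisticRegression()):
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(type(model).__name__, scores.mean().round(3))
```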

The fourth article, “Detecting Fraudulent Behavior on Crowdfunding Platforms: The Role of Linguistic and Content-Based Cues in Static and Dynamic Contexts,” by Michael Siering, Jascha-Alexander Koch, and Amit V. Deokar, investigates deception in crowdfunding projects. The authors examine 652 projects from a popular crowdfunding website, 326 of which were suspended for fraudulent behavior and 326 of which were not. They use project information, founder information, and linguistic analysis of the founders’ communications—drawing on cues such as complexity, diversity, and expressivity—to build classifiers that distinguish suspended from nonsuspended projects. Comparing various classifiers (support vector machines, artificial neural networks, naive Bayes, k-nearest neighbor, decision trees, and ensemble methods) to see which achieves the highest accuracy rate, the authors find that fraud in crowdfunding projects can be detected with accuracy approaching 80 percent.
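
The cues named here can be approximated crudely from raw text; the sketch below computes complexity (words per sentence) and diversity (type-token ratio) and is only a toy version of the paper’s feature set (expressivity, for example, would additionally require part-of-speech tagging).

```python
# Sketch: naive versions of two linguistic cues used in fraud detection.
# Complexity = mean words per sentence; diversity = type-token ratio.
import re

def linguistic_cues(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "complexity": len(words) / max(len(sentences), 1),  # words per sentence
        "diversity": len(set(words)) / max(len(words), 1),  # unique / total words
    }

print(linguistic_cues("This project is amazing. Truly amazing! Back it now."))
```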

The fifth article, “What Online Reviewer Behaviors Really Matter? Effects of Verbal and Nonverbal Behaviors on Detection of Fake Online Reviews,” by Dongsong Zhang, Lina Zhou, Juan Luo Kehoe, and Isil Yakut Kilic, studies the nonverbal features of deceptive online reviews. Deceptive online reviews can severely damage a business’s reputation, either by overselling a product or by portraying it negatively. Using 1,033 legitimate and 1,100 fake reviews from Yelp.com, the authors used the Natural Language Toolkit to count various verbal features, including the number of nouns and the content diversity, as well as nonverbal features, including the number of friends, the number of local photos taken, and the number of compliments the reviewer had received. Using the full feature set, the authors compared various classification methods and found that combining verbal and nonverbal features produces a better prediction model than either feature set alone.
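
For instance, a noun count can be obtained with a few lines of NLTK, as sketched below; the nonverbal metadata fields appended at the end are hypothetical examples of the reviewer features described above.

```python
# Sketch: count a verbal feature (nouns) with NLTK, then append nonverbal
# reviewer features to form a combined feature vector. Resource names may
# vary across NLTK versions; the metadata values are hypothetical.
import nltk
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

def noun_count(review: str) -> int:
    tags = nltk.pos_tag(nltk.word_tokenize(review))
    return sum(1 for _word, tag in tags if tag.startswith("NN"))

review = "The pizza was cold and the waiter ignored our table."
verbal = [noun_count(review)]
nonverbal = [12, 3, 0]  # e.g., friend count, local photos, compliments (hypothetical)
feature_vector = verbal + nonverbal  # combined features feed a classifier
print(feature_vector)
```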

The sixth article, “Examining Hacker Participation Length in Cybercriminal Internet-Relay-Chat Communities,” by Victor Benjamin, Bin Zhang, Jay F. Nunamaker, Jr., and Hsinchun Chen, examines techniques for identifying knowledge leaders within cybercriminal communities. The authors collected 463,000 messages from 6,300 users over eleven months from two large Internet Relay Chat channels frequented by hackers. They then calculated participation and knowledge metrics for the users, including number of messages, days of participation, hacking-related messages, hyperlinks shared, and keywords related to hacking. Using Cox proportional hazards modeling, the authors demonstrate that the number of unique members who contacted a user and the number of unique members a user contacted best predict sustained, long-term participation in the community. Such metrics can help establish whether a given member’s communications about a hack are credible.
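
A compact sketch of such a survival model, using the lifelines library on a toy data frame in place of the authors’ IRC data, follows; the column names and values are invented for illustration.

```python
# Sketch: Cox proportional hazards model of participation length, fit with
# lifelines. The DataFrame is a toy stand-in for the paper's IRC data; a
# small penalizer keeps the fit stable on this tiny illustrative sample.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_active": [30, 210, 90, 330, 15, 160, 120, 60],  # participation length
    "left_channel": [1, 0, 1, 0, 1, 1, 1, 1],             # 1 = departure observed
    "unique_contacted_by": [1, 9, 3, 14, 2, 5, 6, 1],     # distinct members who messaged user
})

cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="days_active", event_col="left_channel")
cph.print_summary()  # a hazard ratio below 1 implies longer participation
```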

The seventh article, “Untangling a Web of Lies: Exploring Automated Detection of Deception in Computer-Mediated Communication,” by Stephan Ludwig, Tom van Laer, Ko de Ruyter, and Mike Friedman, studies the linguistic features of e-mails from business partners engaged in deceitful practices. The authors obtained 9,000 e-mails exchanged between a Fortune 100 technology vendor and its channel partners, who e-mail the vendor to claim incentive rewards (ranging from US$100 to US$100,000); the vendor evaluates each claim to assess eligibility. The authors investigate word use (micro level), message development (macro level), and intertextual exchange cues (meta level). At the micro level, deceptive e-mails contained fewer pronouns, more positive adjectives, and less self-deprecating humor than their truthful counterparts. At the macro level, deceptive e-mails contained more excessively structured arguments than truthful e-mails. Finally, at the meta level, deceitful e-mails resembled the program manager’s e-mails more closely than truthful e-mails did.
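
To illustrate how a meta-level mimicry cue might be quantified (this is one way to realize the idea, not necessarily the authors’ measure), the sketch below scores a claim e-mail’s similarity to a program manager’s e-mail with TF-IDF cosine similarity.

```python
# Sketch: quantify how closely a claim e-mail mimics the program manager's
# wording via TF-IDF cosine similarity. The e-mail texts are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manager_emails = ["Please submit the incentive claim form with proof of sale."]
claim = "Please find the incentive claim form attached with proof of sale."

tfidf = TfidfVectorizer().fit(manager_emails + [claim])
vectors = tfidf.transform(manager_emails + [claim])
print(cosine_similarity(vectors[0], vectors[1])[0, 0])  # high value = close mimicry
```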

It is critical for deception detection research to have ground truth (something that is known to be correct). The term “ground truth” comes from the earth sciences, where on-site measurements are used to calibrate remote-sensing instruments. Ensuring ground truth is easy in laboratory experiments, but as we move into the field we face the problem of obtaining it: most of the time there is no direct way to measure ground truth in field settings. It is nevertheless imperative that researchers find ways to establish it, for example through red teaming or simulation.

Each of these articles identifies scientific techniques that law enforcement and businesses can use to develop systems that identify deception, fraud, and insider threat. Each elevates our understanding of the nature of deception and of malicious insiders. More important, they contribute to the IS literature by helping enhance systems that detect these activities and by advancing our knowledge of automated deception detection in particular and of a broad class of human behaviors in general.

References

1. Avison, D., and Elliot, S. Scoping the discipline of information systems. In J.L. King and K. Lyytinen (eds.), Information Systems: The State of the Field. West Sussex: Wiley, 2006, pp. 3–18.

2. Bradac, J.J.; Courtright, J.A.; and Bowers, J.W. Three language variables in communication research: Intensity, immediacy, and diversity. Human Communication Research, 5, 3 (1979), 257–269.

3. Cannon, W.B. The emergency function of the adrenal medulla in pain and the major emotions. American Journal of Physiology, 33, 2 (1914), 356–372.

4. DePaulo, B.M.; Lindsay, J.J.; Malone, B.E.; Muhlenbruck, L.; Charlton, K.; and Cooper, H. Cues to deception. Psychological Bulletin, 129, 1 (2003), 74–118.

5. Nunnally, J.C.; Knott, P.D.; Duchnowski, A.; and Parker, R. Pupillary response as a general measure of activation. Perception and Psychophysics, 2, 4 (1967), 149–155.

6. Orlikowski, W.J., and Iacono, C.S. Research commentary: Desperately seeking the “IT” in IT research–A call to theorizing the IT artifact. Information Systems Research, 12, 2 (2001), 121–134.