Organic Social Firewall: A Human-Computational Study of Trustworthy Communications


Trustworthy communications form the foundation of all collaboration. With the support of the Internet and computer-mediated communication (CMC), people are inventing new ways to communicate and must adopt new ways to interpret communicative intent. Identifying deceptive behavior is a challenge made even more difficult in the virtual world of CMC. In face-to-face communication, facial cues and body language often help us interpret another's intent. In the virtual world there are also subtle but noticeable cues for discerning communicative intent: dynamic metrics of people's virtual interactions can be observed, logged, collected, and triangulated over time and space to provide context for interpretation.

We seek to address the following questions: (1) By what reactive mechanisms do humans detect deception? (2) How do communicative cues and pragmatics translate into the realm of CMC? (3) How can we compute such social interactions from the language network of a specific individual (or group) over time, so as to discern subtle changes in an individual's disposition or social communicative intent?

Firewall technology filters network packets and controls access to information using rule-based policies that either allow authorized access or deny unauthorized access. A firewall is designed to protect the perimeter of the network, but how do we protect the interior from deceptive behavior within an online community? As activities and data storage move toward the cloud, the need for a new form of firewall has emerged.
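To make the analogy concrete, the rule-based allow/deny model described above can be sketched in a few lines. The rules, field names, and packets here are purely illustrative and not drawn from any real firewall product:

```python
# Minimal sketch of rule-based packet filtering: each rule matches on
# (source network, port) and carries an allow/deny action; the first
# matching rule wins, and anything unmatched falls to a default deny.
RULES = [
    {"source": "10.0.0.0/8", "port": 22, "action": "allow"},
    {"source": "any",        "port": 23, "action": "deny"},
]
DEFAULT_ACTION = "deny"  # deny anything no rule explicitly allows

def in_network(addr, cidr):
    """Crude CIDR check for illustration: compare whole prefix octets."""
    if cidr == "any":
        return True
    net, bits = cidr.split("/")
    octets = int(bits) // 8
    return addr.split(".")[:octets] == net.split(".")[:octets]

def filter_packet(packet, rules=RULES):
    """Return the action of the first matching rule, else the default."""
    for rule in rules:
        if in_network(packet["source"], rule["source"]) and \
           rule["port"] in ("any", packet["port"]):
            return rule["action"]
    return DEFAULT_ACTION
```

The social firewall concept replaces the packet and its header fields with a human message and its observable communicative features, while keeping this same filter-against-policy structure.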

The social firewall concept uses the analogy of a firewall to process humans' finite-state social interactions as formulated in language. In this work, we propose to identify computational mechanisms that infer human trustworthiness through real-time simulations of cyber threats and deceptions, presented as complex socio-technical problems in online game experimentation. Unlike a traditional in-house lab setting, we set up a live laboratory in the cloud to experiment on people's information behavior.

With a rich dataset collected and focused on communicative intent, this work will significantly extend current research in social psychology, pragmatics, and natural language processing. We hope to identify, from text and behavioral patterns, social intent that is not explicitly stated. We will build a theory of deceptive behavior in the identity-theft phenomenon, with the ability to discern behavioral anomalies in human communication in cyberspace, grounded in the social psychological theories of trust and attribution. We will create measures of communicative intent, building on past research drawn from social and cognitive psychology, in order to develop an inference model that can organically and dynamically assess trustworthiness in computer-mediated communication. We will extend this work to create a unique analysis of language networks that represent different patterns and models of deception.
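The inference idea above can be illustrated with a toy baseline-and-deviation model: build a per-person profile of simple language features from message history, then score how far a new message departs from it. The features and the scoring rule here are invented for illustration; the proposal's actual model of language networks and dyadic attribution is far richer:

```python
# Toy anomaly scoring over an individual's message history.
# Assumption: two crude features stand in for the real feature set.
from statistics import mean, pstdev

def features(message):
    """Per-message features: word count and first-person-pronoun rate."""
    words = message.lower().split()
    pronouns = sum(w in {"i", "me", "my", "mine"} for w in words)
    return [len(words), pronouns / max(len(words), 1)]

def anomaly_score(history, new_message):
    """z-score-style distance of a new message from the sender's baseline."""
    baseline = [features(m) for m in history]
    new = features(new_message)
    score = 0.0
    for i, value in enumerate(new):
        column = [row[i] for row in baseline]
        sd = pstdev(column) or 1.0  # avoid division by zero
        score += abs(value - mean(column)) / sd
    return score
```

A message consistent with the sender's baseline yields a low score, while one that departs sharply in length or pronoun use, a pattern some deception research associates with shifting communicative stance, yields a high one; a threshold on this score would play the role of the firewall's allow/deny rule.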

The proposed research bridges the intellectual gaps between information science, computational linguistics, information systems security, and psychology. It will lead to transformative impact in understanding the dynamics of trusting relationships through organic language networks and dyadic attribution mechanisms in the cloud. This research serves as a precursor to a socio-technical schema intended to support national security and data protection for the general populace while also protecting individuals' right to privacy. It will help the scientific community understand and enable trustworthy communication and collaborative information behavior among computer-mediated groups in a systematic way.

