A Human Information-Processing Analysis of Online Deception Detection - October 2016

Public Audience
Purpose: To highlight project progress. Information is presented at a general level that is accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.

PI(s):  Robert W. Proctor, Ninghui Li
Researchers: Jing Chen; Weining Yang; Aiping Xiong; Wanling Zou



  • Human Behavior - Predicting individual users’ judgments and decisions regarding possible online deception. Our research addresses this problem by examining user decisions with regard to phishing attacks. This work is grounded in the scientific literature on human decision-making processes.




  • We completed an online study that evaluated a method in which training to identify phishing webpages is embedded within a phishing warning. In the study, participants first made decisions about authentic and fraudulent webpages with the aid of a warning. They made similar decisions without the warning aid after a short distracting task and again a week later. Although participants’ performance was similar for all interfaces in the first phase, the training-embedded designs provided better protection than the current Chrome phishing warning on both subsequent tests. Our findings suggest that embedded training is a complementary strategy that can compensate for the lack of long-term benefit of the current phishing warning.

    In a second experiment, a phishing email identification task (with an automated phishing detection assistant) was used as a testbed to study human trust in automation in the cyber domain. Factors investigated included the influence of “description” (i.e., whether the user was informed about the actual reliability of the automated system) and “experience” (i.e., whether the user was provided feedback on their choices), in addition to the reliability level of the automated phishing detection system. Higher automation reliability increased both the overall quality of the decisions that users made and their self-reported trust. Participants underestimated the system’s reliability even when it was described to them. Description affected self-reported trust, and feedback affected perceived automation reliability.
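
    The link between an aid’s reliability and decision quality can be illustrated with a toy simulation. This is a hypothetical sketch, not the study’s actual task or model: it assumes emails are phishing half the time, an aid that is correct with a given probability, and a user who simply complies with the aid, so final accuracy tracks the aid’s reliability.

    ```python
    import random

    def simulate(reliability, n_trials=10000, seed=0):
        """Fraction of correct final decisions when a user follows an
        automated phishing detector that is correct with probability
        `reliability`. Hypothetical illustration only."""
        rng = random.Random(seed)
        correct = 0
        for _ in range(n_trials):
            is_phish = rng.random() < 0.5  # assume half the emails are phishing
            # The aid reports the true label with probability `reliability`
            aid_says_phish = is_phish if rng.random() < reliability else not is_phish
            decision = aid_says_phish      # user complies fully with the aid
            if decision == is_phish:
                correct += 1
        return correct / n_trials
    ```

    Under these assumptions, a 90%-reliable aid yields roughly 90% correct decisions, and lowering reliability lowers decision quality in step, which is the qualitative pattern the experiment observed.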