Reasoning about Accidental and Malicious Misuse via Formal Methods

PI(s), Co-PI(s), Researchers:

PI: Munindar Singh; Co-PIs: William Enck, Laurie Williams; Researchers: Vaibhav Garg, Hui Guo, Samin Yaseer Mahmud, Md Rayhanur Rahman

This project addresses the following Hard Problem (November 2012 version):

  • Policy

This project seeks to aid security analysts in identifying and protecting against accidental and malicious actions by users or software. It applies automated reasoning over unified representations of user expectations and software implementations to identify misuses that are sensitive to usage and machine context.


Samin Yaseer Mahmud, K. Virgil English, Seaver Thorn, William Enck, Adam Oest, and Muhammad Saad. 2022. Analysis of Payment Service Provider SDKs in Android. In Proceedings of the Annual Computer Security Applications Conference (ACSAC).



Our work studying the security of 14 payment service provider SDKs for Android was accepted for publication at the 2022 Annual Computer Security Applications Conference (ACSAC).

We designed a survey to evaluate a tool that helps analysts understand HIPAA breach reports, and we obtained IRB approval for the survey.

We evaluated the iRogue approach on a dataset (known as the snowball dataset) that is distinct from its training set. After curating ground truth for rogue apps, we found that iRogue achieves 77.27% recall, which is higher than the recall of the baseline methods.
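For context, recall measures the fraction of truly rogue apps that an approach successfully flags. A minimal sketch of the computation, using hypothetical labels rather than the actual iRogue dataset:

```python
# Recall = true positives / (true positives + false negatives).
# The labels below are illustrative only (1 = rogue app),
# not the actual iRogue ground truth or predictions.
ground_truth = [1, 1, 1, 0, 0, 1, 0, 1]
predicted    = [1, 0, 1, 0, 1, 1, 0, 1]

# Count rogue apps correctly flagged vs. missed.
true_pos = sum(1 for g, p in zip(ground_truth, predicted) if g == 1 and p == 1)
false_neg = sum(1 for g, p in zip(ground_truth, predicted) if g == 1 and p == 0)

recall = true_pos / (true_pos + false_neg)
print(f"recall = {recall:.2%}")  # 4 of 5 rogue apps found -> 80.00%
```

A high recall matters here because a missed rogue app (a false negative) leaves a threat undetected, whereas a false positive only costs analyst review time.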

We produced a curated dataset by annotating 4,000 app reviews for three categories of misbehavior by malfeasant app users. We began training advanced NLP models, such as XLNet and RoBERTa, on this dataset; these models can help identify misbehavior incidents reported in app reviews.




We involved one female graduate student in our research this quarter.