Biblio

Filters: Author is Mittal, Prateek
2020-04-03
Song, Liwei, Shokri, Reza, Mittal, Prateek.  2019.  Membership Inference Attacks Against Adversarially Robust Deep Learning Models. 2019 IEEE Security and Privacy Workshops (SPW). :50–56.
In recent years, the research community has increasingly focused on understanding the security and privacy challenges posed by deep learning models. However, the security domain and the privacy domain have typically been considered separately, and it is thus unclear whether the defense methods in one domain have any unexpected impact on the other. In this paper, we take a step towards enhancing our understanding of deep learning models when the two domains are combined. We do this by measuring the success of membership inference attacks against two state-of-the-art adversarial defense methods that mitigate evasion attacks: adversarial training and provable defense. On the one hand, membership inference attacks aim to infer an individual's participation in the target model's training dataset and are known to be correlated with the target model's overfitting. On the other hand, adversarial defense methods aim to enhance the robustness of target models by ensuring that model predictions are unchanged within a small neighborhood of each training sample. Intuitively, adversarial defenses may therefore rely more heavily on the training dataset and be more vulnerable to membership inference attacks. By performing empirical membership inference attacks on both adversarially robust models and the corresponding undefended models, we find that adversarial training is indeed more susceptible to membership inference attacks, and that the privacy leakage is directly correlated with model robustness. We also find that the provable defense does not increase the success of membership inference attacks; however, this comes at the cost of significantly reduced accuracy on benign data points, indicating that privacy, security, and prediction accuracy are not jointly achieved by these two approaches.
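The attack machinery can be sketched compactly. Below is a minimal confidence-thresholding membership inference baseline in Python; the function names, threshold value, and synthetic data are illustrative assumptions, and the paper's actual attacks additionally exploit the model's predictions on adversarially perturbed inputs rather than only benign confidences.

```python
import numpy as np

def infer_membership(softmax_outputs, threshold=0.9):
    """Flag a sample as a training-set member when the model's top softmax
    probability exceeds the threshold. Overfitted models tend to be more
    confident on their training data, which is the signal this attack uses."""
    return softmax_outputs.max(axis=1) >= threshold

def attack_accuracy(member_outputs, nonmember_outputs, threshold=0.9):
    """Attack accuracy over a balanced set of known members and non-members;
    0.5 corresponds to random guessing."""
    true_pos = infer_membership(member_outputs, threshold).mean()
    true_neg = 1.0 - infer_membership(nonmember_outputs, threshold).mean()
    return 0.5 * (true_pos + true_neg)

# Toy demo: simulate an overfitted 10-class model whose softmax outputs are
# more peaked (confident) on members than on non-members.
rng = np.random.default_rng(0)
members = rng.dirichlet(np.full(10, 0.1), size=1000)
nonmembers = rng.dirichlet(np.full(10, 0.5), size=1000)
print(attack_accuracy(members, nonmembers))  # noticeably above 0.5
```

Read through this framing, the paper's finding that adversarial training increases attack success corresponds to a wider member/non-member confidence gap for such thresholds to exploit.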
2019-12-30
Ahn, Surin, Gorlatova, Maria, Naghizadeh, Parinaz, Chiang, Mung, Mittal, Prateek.  2018.  Adaptive Fog-Based Output Security for Augmented Reality. Proceedings of the 2018 Morning Workshop on Virtual Reality and Augmented Reality Network. :1–6.
Augmented reality (AR) technologies are rapidly being adopted across multiple sectors, but little work has been done to ensure the security of such systems against potentially harmful or distracting visual output produced by malicious or bug-ridden applications. Past research has proposed to incorporate manually specified policies into AR devices to constrain their visual output. However, these policies can be cumbersome to specify and implement, and may not generalize well to complex and unpredictable environmental conditions. We propose a method for generating adaptive policies to secure visual output in AR systems using deep reinforcement learning. This approach utilizes a local fog computing node, which runs training simulations to automatically learn an appropriate policy for filtering potentially malicious or distracting content produced by an application. Through empirical evaluations, we show that these policies are able to intelligently displace AR content to reduce obstruction of real-world objects, while maintaining a favorable user experience.
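As a rough illustration of what such a learned policy optimizes, the sketch below scores a candidate placement of AR content by how much it obstructs detected real-world objects, with a small penalty for displacing content from its intended anchor. The box representation, weights, and function names are hypothetical rather than taken from the paper; a reinforcement learner on the fog node would be trained in simulation to choose displacements that maximize a reward of this general shape.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def placement_reward(ar_box, anchor_box, real_objects, move_penalty=0.1):
    """Hypothetical reward: heavily penalize overlap between the AR content
    and detected real-world objects (e.g., road signs, people), and lightly
    penalize displacement from the content's intended anchor so the user
    experience is preserved."""
    obstruction = sum(iou(ar_box, obj) for obj in real_objects)
    displacement = np.hypot(ar_box[0] - anchor_box[0],
                            ar_box[1] - anchor_box[1])
    return -obstruction - move_penalty * displacement
```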
2018-02-27
Song, Liwei, Mittal, Prateek.  2017.  POSTER: Inaudible Voice Commands. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2583–2585.
Voice assistants like Siri enable us to control IoT devices conveniently with voice commands; however, they also open new attack opportunities for adversaries. Previous work attacks voice assistants with obfuscated voice commands by leveraging the gap between speech recognition systems and human voice perception. The limitation is that these obfuscated commands are still audible and thus conspicuous to device owners. In this poster, we propose a novel mechanism that directly attacks the microphone used for sensing voice data with inaudible voice commands. We show that an adversary can exploit the microphone's non-linearity and play well-designed inaudible ultrasound signals that cause the microphone to record normal voice commands, and thus control the victim device inconspicuously. We demonstrate via end-to-end real-world experiments that our inaudible voice commands can attack an Android phone and an Amazon Echo device with high success rates at a range of 2–3 meters.
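To make the attack principle concrete, here is a minimal signal-level sketch of amplitude-modulating a voice command onto an ultrasonic carrier and recovering it through a toy quadratic microphone model. The sample rate, carrier frequency, and non-linearity coefficients are illustrative assumptions; a real attack must be tuned to the victim device's hardware.

```python
import numpy as np

FS = 192_000         # sample rate high enough to represent the carrier
CARRIER_HZ = 40_000  # carrier above the ~20 kHz limit of human hearing

def modulate(voice):
    """Amplitude-modulate a baseband voice signal (values in [-1, 1]) onto
    an ultrasonic carrier; the emitted sound is inaudible to humans."""
    t = np.arange(len(voice)) / FS
    return (1.0 + voice) * np.cos(2 * np.pi * CARRIER_HZ * t)

def microphone(x, a1=1.0, a2=0.1):
    """Toy non-linear microphone: y = a1*x + a2*x**2. The quadratic term
    demodulates the AM envelope, since (1 + m(t))**2 * cos(wt)**2 contains
    the baseband component (1 + 2*m(t) + m(t)**2) / 2."""
    return a1 * x + a2 * x ** 2

# One second of a 400 Hz tone standing in for a voice command.
t = np.arange(FS) / FS
voice = 0.5 * np.sin(2 * np.pi * 400 * t)
recorded = microphone(modulate(voice))

# Crude low-pass filter (moving average): the 40 kHz and 80 kHz components
# vanish, and what remains is dominated by the original voice signal.
baseband = np.convolve(recorded, np.ones(128) / 128, mode="same")
```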