Biblio

Filters: Author is Saxena, Nitesh
Shrestha, Babins, Mohamed, Manar, Saxena, Nitesh.  2019.  ZEMFA: Zero-Effort Multi-Factor Authentication based on Multi-Modal Gait Biometrics. 2019 17th International Conference on Privacy, Security and Trust (PST). :1–10.
In this paper, we consider the problem of transparently authenticating a user to a local terminal (e.g., a desktop computer) as she approaches the terminal. Given its appealing usability, such zero-effort authentication has already been deployed in the real world, where a computer terminal or a vehicle can be unlocked by the mere proximity of an authentication token (e.g., a smartphone). However, existing systems based on a single authentication factor contain a major security weakness: unauthorized physical access to the token, e.g., during lunchtime or upon theft, gives the attacker unfettered access to the terminal. We introduce ZEMFA, a zero-effort multi-factor authentication system based on multiple authentication tokens and multi-modal behavioral biometrics. Specifically, ZEMFA utilizes two types of authentication tokens, a smartphone and a smartwatch (or a bracelet), and two types of gait patterns captured by these tokens: mid/lower-body movements measured by the phone and wrist/arm movements captured by the watch. Since a user's walking or gait pattern is believed to be unique, only that user (and no impostor) would be able to gain access to the terminal, even when the impostor is given both authentication tokens. We present the design and implementation of ZEMFA. We demonstrate that ZEMFA offers a high degree of detection accuracy based on multi-sensor and multi-device fusion. We also show that ZEMFA can resist active attacks that attempt to mimic a user's walking pattern, especially when multiple devices are used.
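The multi-device fusion the abstract describes can be illustrated with a minimal score-level fusion sketch. This is not ZEMFA's actual classifier; the function names, weights, and threshold are assumptions chosen to show why an impostor who mimics only one modality still fails.

```python
# Hypothetical sketch of multi-device score-level fusion: gait-match
# scores from a phone (lower-body motion) and a watch (wrist motion)
# are combined before the final accept/reject decision.
# Weights and threshold are illustrative, not ZEMFA's.

def fuse_scores(phone_score, watch_score, w_phone=0.6, w_watch=0.4):
    """Weighted-average fusion of per-device gait-match scores in [0, 1]."""
    return w_phone * phone_score + w_watch * watch_score

def authenticate(phone_score, watch_score, threshold=0.7):
    """Accept only if the fused score clears the decision threshold."""
    return fuse_scores(phone_score, watch_score) >= threshold

# A genuine user tends to match on both devices; an impostor holding
# both stolen tokens still walks differently on at least one modality.
print(authenticate(0.9, 0.85))  # genuine walk: True
print(authenticate(0.9, 0.2))   # mimic matches phone but not watch: False
```

The design point mirrored here is that fusion raises the bar for active mimicry: an attacker must reproduce both the lower-body and the wrist/arm gait signature simultaneously.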
Liu, Jian, Wang, Chen, Chen, Yingying, Saxena, Nitesh.  2017.  VibWrite: Towards Finger-input Authentication on Ubiquitous Surfaces via Physical Vibration. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :73–87.

The goal of this work is to enable user authentication via finger inputs on ubiquitous surfaces leveraging low-cost physical vibration. We propose VibWrite, which extends finger-input authentication beyond touch screens to any solid surface for smart access systems (e.g., access to apartments, vehicles or smart appliances). It integrates passcode, behavioral and physiological characteristics, and surface dependency together to provide a low-cost, tangible and enhanced security solution. VibWrite builds upon a touch sensing technique with vibration signals that can operate on surfaces constructed from a broad range of materials. It is significantly different from traditional password-based approaches, which only authenticate the password itself rather than the legitimate user, and from behavioral biometrics-based solutions, which usually involve specific or expensive hardware (e.g., a touch screen or fingerprint reader), incur privacy concerns, and suffer from smudge attacks. VibWrite is based on new algorithms to discriminate fine-grained finger inputs and supports three independent passcode secrets, including PINs, lock patterns, and simple gestures, by extracting unique features in the frequency domain to capture both behavioral and physiological characteristics such as contact area and touch force. VibWrite is implemented using a single low-cost vibration motor and receiver pair that can be easily attached to any surface (e.g., a door panel, a desk or an appliance). Our extensive experiments demonstrate that VibWrite can authenticate users with high accuracy (e.g., over 95% within two trials) and a low false positive rate (e.g., less than 3%), and is robust to various types of attacks.
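The frequency-domain feature extraction mentioned in the abstract can be sketched in a few lines. This is not VibWrite's pipeline; the sample rate, band count, distance metric, and threshold are all illustrative assumptions, and the synthetic signals merely stand in for received vibration recordings.

```python
# Illustrative sketch: derive simple frequency-domain features from a
# received vibration signal and compare them to an enrolled template
# with a distance threshold. All parameters are assumptions.
import numpy as np

def spectral_features(signal, n_bands=8):
    """Mean FFT magnitude in n_bands equal-width bands (crude spectrum shape)."""
    mag = np.abs(np.fft.rfft(signal))
    bands = np.array_split(mag, n_bands)
    feats = np.array([b.mean() for b in bands])
    return feats / (feats.sum() + 1e-12)  # normalize so overall force scales out

def matches_template(signal, template, threshold=0.1):
    """Accept if the distance to the enrolled feature vector is small."""
    return np.linalg.norm(spectral_features(signal) - template) < threshold

# Enrollment: average features over several touches by the legitimate user.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000, endpoint=False)
touch = lambda: (np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
                 + 0.05 * rng.standard_normal(1000))
template = np.mean([spectral_features(touch()) for _ in range(5)], axis=0)
print(matches_template(touch(), template))                # same finger/surface
print(matches_template(np.sin(2 * np.pi * 300 * t), template))  # different contact
```

Normalizing the band energies is one plausible way to capture the spectrum's shape (which depends on contact area and surface) rather than its absolute amplitude.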

Mohamed, Manar, Shrestha, Babins, Saxena, Nitesh.  2016.  SMASheD: Sniffing and Manipulating Android Sensor Data. Proceedings of the Sixth ACM Conference on Data and Application Security and Privacy. :152–159.

The current Android sensor security model either allows only restrictive read access to sensitive sensors (e.g., an app can only read its own touch data) or requires special install-time permissions (e.g., to read the microphone, camera or GPS). Moreover, Android does not allow write access to any of the sensors. Sensing-based security applications therefore crucially rely upon the sanity of the Android sensor security model. In this paper, we show that such a model can be effectively circumvented. Specifically, we build SMASheD, a legitimate framework under the current Android ecosystem that can be used to stealthily sniff as well as manipulate many of Android's restricted sensors (even touch input). SMASheD exploits the Android Debug Bridge (ADB) functionality and enables a malicious app with only the INTERNET permission to read, and write to, multiple different sensor data files at will. SMASheD is the first framework, to our knowledge, that can sniff and manipulate protected sensors on unrooted Android devices, without user awareness, without a constant device-PC connection, and without the need to infect the PC. The primary contributions of this work are two-fold. First, we design and develop the SMASheD framework. Second, as an offensive implication of the SMASheD framework, we introduce a wide array of potentially devastating attacks. Our attacks against the touch sensor range from accurately logging touchscreen input (TouchLogger) to injecting touch events for accessing restricted sensors and resources, installing and granting special permissions to other malicious apps, accessing user accounts, and authenticating on behalf of the user, essentially doing almost whatever the device user can do (secretly). Our attacks against various physical sensors (motion, position and environmental) can subvert the functionality provided by numerous existing sensing-based security applications, including those used for (continuous) authentication and authorization.
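To make the "sensor data files" concrete: Android's touch input is backed by Linux input-event device files, whose records follow the standard `input_event` layout (timestamp, type, code, value). The sketch below parses a fabricated byte stream in that layout; the sample coordinates are invented for illustration, and this is a format illustration, not the SMASheD tool itself.

```python
# Illustrative parser for Linux /dev/input event records, the kind of
# raw sensor data the abstract describes reading and writing. On 64-bit
# systems each record is a 24-byte struct: sec, usec, type, code, value.
# The sample bytes below are fabricated for illustration.
import struct

EVENT_FORMAT = "llHHi"                 # sec, usec, type, code, value
EVENT_SIZE = struct.calcsize(EVENT_FORMAT)

EV_ABS, ABS_MT_POSITION_X, ABS_MT_POSITION_Y = 0x03, 0x35, 0x36

def parse_events(raw):
    """Yield (type, code, value) tuples from a raw event-file byte stream."""
    for off in range(0, len(raw) - EVENT_SIZE + 1, EVENT_SIZE):
        _sec, _usec, etype, code, value = struct.unpack_from(EVENT_FORMAT, raw, off)
        yield etype, code, value

# Fabricated stream: one touch at screen position (540, 960).
raw = (struct.pack(EVENT_FORMAT, 0, 0, EV_ABS, ABS_MT_POSITION_X, 540)
       + struct.pack(EVENT_FORMAT, 0, 0, EV_ABS, ABS_MT_POSITION_Y, 960))
print(list(parse_events(raw)))  # [(3, 53, 540), (3, 54, 960)]
```

Because the same byte layout is used for writing, an attacker with file-level access can inject synthetic touch events just as easily as it can log real ones, which is the dual read/write threat the abstract emphasizes.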

Shrestha, Prakash, Saxena, Nitesh.  2018.  Listening Watch: Wearable Two-Factor Authentication Using Speech Signals Resilient to Near-Far Attacks. Proceedings of the 11th ACM Conference on Security & Privacy in Wireless and Mobile Networks. :99–110.
Reducing the level of user effort involved in traditional two-factor authentication (TFA) constitutes an important research topic. A recent effort in this direction leverages ambient sounds to detect the proximity between the second-factor device (phone) and the login terminal (browser), and eliminates the need for the user to transfer PIN codes. This approach is highly usable, but is completely vulnerable to near-far attackers, i.e., those who are remotely located and can guess the victim's audio environment or make the phone create predictable sounds (e.g., ringers), and those who are in physical proximity to the user. In this paper, we propose Listening-Watch, a new TFA mechanism based on a wearable device (watch/bracelet) and active browser-generated random speech sounds. As the user attempts to log in, the browser plays a short random code encoded into speech, and the login succeeds if the watch's audio recording contains this code (decoded using speech recognition) and is similar enough to the browser's audio recording. The remote attacker, who has guessed the user's environment or created predictable phone/watch sounds, is defeated since authentication success relies upon the presence of the random code in the watch's recording. The proximity attacker is also defeated unless it is extremely close to the watch, since wearable microphones are usually designed to pick up only nearby sounds (e.g., voice commands). Furthermore, due to the use of a wearable second-factor device, Listening-Watch naturally enables two-factor security even when logging in from a mobile phone. Our contributions are three-fold. First, we introduce the idea of strong and low-effort TFA based on wearable devices, active speech sounds and speech recognition, giving rise to the Listening-Watch system that is secure against both remote and proximity attackers.
Second, we design and implement Listening-Watch for an Android smartwatch (and companion smartphone) and the Chrome browser, without the need for any browser plugins. Third, we evaluate Listening-Watch for authentication errors in both benign and adversarial settings. Our results show that Listening-Watch incurs minimal errors in both settings with appropriate thresholding and speaker volume levels.
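The two checks the abstract describes (random code present in the watch recording, and the two recordings similar enough) can be sketched with the heavy parts stubbed out. Speech synthesis and recognition are replaced by a plain string comparison, and audio similarity by a normalized correlation over toy sample vectors; every name and threshold here is an assumption for illustration.

```python
# Minimal sketch of the Listening-Watch decision logic, with speech
# recognition and real audio processing stubbed out. Illustrative only.
import secrets

def random_code(n_digits=4):
    """Browser-side: pick a short random code to encode into speech."""
    return "".join(str(secrets.randbelow(10)) for _ in range(n_digits))

def similarity(a, b):
    """Stand-in for audio similarity: normalized correlation at lag 0."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def verify_login(code, decoded_code, watch_audio, browser_audio, sim_threshold=0.8):
    """Accept only if the decoded code matches AND the recordings correlate."""
    return decoded_code == code and similarity(watch_audio, browser_audio) >= sim_threshold

code = "4271"
clean = [1.0, -0.5, 0.25, 0.8, -1.0]            # browser's own recording
noisy = [x + 0.05 for x in clean]               # watch hears the same playback
print(verify_login(code, "4271", noisy, clean))  # True: code present, audio matches
print(verify_login(code, "9999", noisy, clean))  # False: attacker lacks the code
```

The conjunction matters: the similarity check alone is what the guessable-ambient-sound schemes relied on, while the freshly generated code is what defeats the remote attacker.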
Mohamed, Manar, Saxena, Nitesh.  2016.  Gametrics: Towards Attack-resilient Behavioral Authentication with Simple Cognitive Games. Proceedings of the 32Nd Annual Conference on Computer Security Applications. :277–288.

Authenticating a user based on her unique behavioral biometric traits has been extensively researched over the past few years. The most researched behavioral biometrics techniques are based on keystroke and mouse dynamics. These schemes, however, have been shown to be vulnerable to human-based and robotic attacks that mimic the user's behavioral pattern to impersonate the user. In this paper, we aim to verify the user's identity through active, cognition-based user interaction in the authentication process. Such interaction promises two key advantages. First, it may enhance the security of the authentication process, as multiple rounds of active interaction serve as a mechanism to protect against several types of attacks, including zero-effort attacks, trained expert attackers, and automated attacks. Second, it may enhance the usability of the authentication process by actively engaging the user. We explore the cognitive authentication paradigm through very simple interactive challenges, called Dynamic Cognitive Games, which involve objects floating around within the images; the user's task is to match the objects with their respective target(s) and drag/drop them to the target location(s). Specifically, we introduce, build and study Gametrics ("Game-based biometrics"), an authentication mechanism based on the unique way the user solves such simple challenges, captured by multiple features related to her cognitive abilities and mouse dynamics. Based on a comprehensive data set collected in both online and lab settings, we show that Gametrics can identify users with high accuracy (false negative rates, FNR, as low as 0.02) while rejecting zero-effort attackers (false positive rates, FPR, as low as 0.02).
Moreover, Gametrics shows promising results in defending against expert attackers who try to learn and later mimic the user's pattern of solving the challenges (FPR for expert human attackers as low as 0.03). Furthermore, we argue that the proposed biometric is hard to replay or spoof by automated means, such as robots or malware.
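Feature extraction from a drag-and-drop interaction, in the spirit of the mouse-dynamics features the abstract mentions, can be sketched as below. The specific features (time, path length, speed, straightness) and their names are illustrative assumptions, not the Gametrics feature set.

```python
# Illustrative features from a timestamped cursor trace recorded while
# the user drags a game object to its target. Feature names and any
# downstream thresholds are assumptions, not Gametrics itself.
import math

def drag_features(trace):
    """trace: list of (t, x, y) samples for one drag-and-drop action."""
    total_time = trace[-1][0] - trace[0][0]
    path = sum(math.dist(trace[i][1:], trace[i + 1][1:])
               for i in range(len(trace) - 1))
    direct = math.dist(trace[0][1:], trace[-1][1:])
    return {
        "total_time": total_time,
        "path_length": path,
        "avg_speed": path / total_time if total_time else 0.0,
        "straightness": direct / path if path else 0.0,  # 1.0 = perfectly direct
    }

# A straight 100-px drag taking 0.5 s.
trace = [(0.0, 0, 0), (0.25, 50, 0), (0.5, 100, 0)]
f = drag_features(trace)
print(f)  # straightness 1.0, avg_speed 200.0 px/s
```

Feature vectors like this, collected over multiple challenge rounds, would then feed a per-user classifier; the claimed attack resistance comes from these traits being hard for a human or bot to reproduce consistently across rounds.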