LipPass: Lip Reading-based User Authentication on Smartphones Leveraging Acoustic Signals

Title: LipPass: Lip Reading-based User Authentication on Smartphones Leveraging Acoustic Signals
Publication Type: Conference Paper
Year of Publication: 2018
Authors: Lu, L., Yu, J., Chen, Y., Liu, H., Zhu, Y., Liu, Y., Li, M.
Conference Name: IEEE INFOCOM 2018 - IEEE Conference on Computer Communications
Date Published: April 2018
ISBN Number: 978-1-5386-4128-6
Keywords: Acoustic Fingerprints, Acoustic signal processing, acoustic signals, Acoustics, audio signal processing, authentication, authorisation, binary classifiers, binary tree-based authentication, biometric-based authentication, built-in audio devices, composability, data protection, deep learning-based method, Doppler effect, Doppler profiles, feature extraction, Human Behavior, learning (artificial intelligence), lip movement patterns, lip reading-based user authentication system, LipPass, Lips, message authentication, mobile computing, pattern classification, privacy protection, pubcrawl, replay attacks, Resiliency, smart phones, smartphones, spoofer detectors, support vector machine, Support vector machines

To prevent leakage of users' private data, more and more mobile devices employ biometric-based authentication approaches, such as fingerprint, face recognition, and voiceprint authentication, to enhance privacy protection. However, these approaches are vulnerable to replay attacks. Although state-of-the-art solutions utilize liveness verification to combat such attacks, existing approaches are sensitive to ambient conditions, such as ambient light and surrounding audible noise. Towards this end, we explore liveness verification for user authentication leveraging users' lip movements, which are robust to noisy environments. In this paper, we propose a lip reading-based user authentication system, LipPass, which extracts unique behavioral characteristics of users' speaking lips leveraging built-in audio devices on smartphones for user authentication. We first investigate Doppler profiles of acoustic signals caused by users' speaking lips, and find that different individuals exhibit unique lip movement patterns. To characterize the lip movements, we propose a deep learning-based method to extract efficient features from Doppler profiles, and employ Support Vector Machine and Support Vector Domain Description to construct binary classifiers and spoofer detectors for user identification and spoofer detection, respectively. Afterwards, we develop a binary tree-based authentication approach to accurately identify each individual leveraging these binary classifiers and spoofer detectors with respect to registered users. Through extensive experiments involving 48 volunteers in four real environments, LipPass achieves 90.21% accuracy in user identification and 93.1% accuracy in spoofer detection.
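The sensing principle behind the Doppler profiles described above can be illustrated with the standard reflection Doppler formula: a tone emitted by the phone's speaker and reflected off moving lips returns with a frequency shift proportional to the lips' velocity. The sketch below is only an illustration of that physics, not the authors' implementation; the 20 kHz carrier frequency and the lip velocity are assumed values for the example.

```python
def doppler_shift(f0_hz: float, v_mps: float, c: float = 343.0) -> float:
    """Frequency shift (Hz) of a tone reflected off a surface moving at
    velocity v (positive = toward the phone), for v much smaller than the
    speed of sound c. The factor 2 accounts for the round trip of the wave.
    """
    return 2.0 * v_mps * f0_hz / c

# Example (assumed values): a 20 kHz inaudible carrier and lips moving
# toward the phone at roughly 5 cm/s.
shift = doppler_shift(20_000.0, 0.05)
print(f"Doppler shift: {shift:.2f} Hz")  # a few Hz, small but measurable
```

Shifts of this magnitude are tiny relative to the carrier, which is why systems of this kind extract features from fine-grained spectral profiles around the carrier rather than from the raw waveform.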

Citation Key: lu_lippass:_2018