Biblio

Filters: Keyword is face recognition
2021-08-02
Jeste, Manasi, Gokhale, Paresh, Tare, Shrawani, Chougule, Yutika, Chaudhari, Archana.  2020.  Two-point security system for doors/lockers using Machine learning and Internet Of Things. 2020 Fourth International Conference on Inventive Systems and Control (ICISC). :740—744.
The objective of the proposed research is to develop an IoT-based security system with two-point authentication. Human face recognition and fingerprint verification are known methods for access authentication. Combining both technologies and integrating the system with IoT makes the security system more efficient and reliable. The online platform Google Firebase is used to store the database and retrieve it in real time. In this system, access to the fingerprint (touch sensor) from a mobile device is proposed using an Android app developed in Android Studio, together with the corresponding authentication. Access to the door or locker is granted only when both the face and the fingerprint are identified.
2021-07-08
Wahyudono, Bintang, Ogi, Dion.  2020.  Implementation of Two Factor Authentication based on RFID and Face Recognition using LBP Algorithm on Access Control System. 2020 International Conference on ICT for Smart Society (ICISS). CFP2013V-ART:1—6.
Studies on two-factor authentication based on RFID and face recognition have been carried out on a large scale. However, these studies did not discuss how to overcome the weaknesses of face recognition authentication in access control systems. In this study, two authentication factors, RFID and face recognition, were implemented using the LBP (Local Binary Pattern) algorithm to overcome the weaknesses of face recognition authentication in an access control system. Performance testing shows that the access control system achieves a 100% success rate for RFID authentication and 80% for face recognition authentication. The average time is 0.03 seconds for the RFID authentication process, 6.3885 seconds for the face recognition process and 0.1970 seconds for face recognition verification. The access control system still works properly after running for three days without being switched off. Security testing showed that the spoofing detection successfully rejected 100% of photo attacks.
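A minimal sketch of the face-recognition factor using OpenCV's LBPH recognizer (opencv-contrib-python), which is one common implementation of the LBP approach named in the abstract. The RFID check, the gallery images and the confidence threshold below are illustrative assumptions, not the authors' settings.

    # Second authentication factor: LBP-based face recognition tied to an RFID identity.
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    recognizer = cv2.face.LBPHFaceRecognizer_create()

    def crop_face(gray):
        boxes = detector.detectMultiScale(gray, 1.3, 5)
        if len(boxes) == 0:
            return None
        x, y, w, h = boxes[0]
        return cv2.resize(gray[y:y + h, x:x + w], (128, 128))

    def enroll(samples):                      # samples: list of (label, grayscale image)
        faces, labels = [], []
        for label, gray in samples:
            face = crop_face(gray)
            if face is not None:
                faces.append(face)
                labels.append(label)
        recognizer.train(faces, np.array(labels))

    def face_factor_ok(gray_frame, rfid_label, threshold=60.0):
        # Accept only if the detected face matches the identity bound to the RFID tag.
        face = crop_face(gray_frame)
        if face is None:
            return False
        label, distance = recognizer.predict(face)   # lower distance = better match
        return label == rfid_label and distance < threshold

In a door controller, the relay would only be driven when both the RFID lookup and face_factor_ok succeed.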
2021-07-07
Zhao, Qian, Wang, Shengjin.  2020.  Real-time Face Tracking in Surveillance Videos on Chips for Valuable Face Capturing. 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE). :281–284.
Face capturing is a task to capture and store the "best" face of each person passing by the monitor. To some extent it is similar to face tracking, but it uses a different criterion and requires a valuable (i.e., high-quality and recognizable) face selection procedure. Face capturing systems play a critical role in public security. When deployed on edge devices, such a system can reduce redundant storage in the data center and speed up retrieval of a particular person. However, high computational complexity and a high repetition rate caused by ID-switch errors are major challenges. In this paper, we propose a novel solution for constructing a real-time, low-repetition face capturing system on chips. First, we propose a two-stage association algorithm for memory-efficient and accurate face tracking. Second, we propose a fast and reliable face quality estimation algorithm for valuable face selection. Our pipeline runs at over 20 fps on a HiSilicon Hi3559A SoC with a single NNIE device for neural network inference, while achieving over 95% recall and a repetition rate below 0.4 in real-world surveillance videos.
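The paper's own face quality estimator is not reproduced here; the sketch below is a generic stand-in that scores a face crop by sharpness (variance of the Laplacian) and size, with assumed weights, and keeps the best-scoring crop per track ID — the "best face" bookkeeping an edge deployment would need.

    # Stand-in quality score for "best face" selection (not the authors' algorithm).
    import cv2
    import numpy as np

    def face_quality(gray_face, w_sharp=0.7, w_size=0.3, size_ref=160.0):
        sharpness = cv2.Laplacian(gray_face, cv2.CV_64F).var()     # blur proxy
        size_score = min(1.0, min(gray_face.shape[:2]) / size_ref) # larger faces score higher
        return w_sharp * min(1.0, sharpness / 500.0) + w_size * size_score

    best = {}   # track_id -> (score, face crop)
    def update_best(track_id, gray_face):
        score = face_quality(gray_face)
        if track_id not in best or score > best[track_id][0]:
            best[track_id] = (score, gray_face.copy())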
2021-05-13
Jaafar, Fehmi, Avellaneda, Florent, Alikacem, El-Hackemi.  2020.  Demystifying the Cyber Attribution: An Exploratory Study. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :35–40.
Current cyber attribution approaches propose to use a variety of datasets and analytical techniques to distill information that is useful for identifying cyber attackers. At the same time, practitioners and researchers in cyber attribution face several technical and regulatory challenges. In this paper, we describe the main challenges of cyber attribution and present the state of the art of the approaches used to address them. We then present an exploratory study that performs cyber attack attribution based on pattern recognition from real data. In our study, we use attack pattern discovery and identification based on the collection and analysis of real data.
2021-04-08
Sarma, M. S., Srinivas, Y., Abhiram, M., Ullala, L., Prasanthi, M. S., Rao, J. R..  2017.  Insider Threat Detection with Face Recognition and KNN User Classification. 2017 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). :39—44.
Information security in cloud storage is a key concern with regard to degree of trust and cloud penetration. The cloud user community needs to ascertain performance and security via QoS. Numerous models have been proposed [2][3][6][7] to deal with security concerns. Detection and prevention of insider threats also need to be tackled; since the attacker is aware of sensitive information, threats from cloud insiders are a grave concern. In this paper, we propose an authentication mechanism that verifies the facial features of the cloud user in addition to username and password, thereby acting as two-factor authentication. A new QoS is proposed that is capable of monitoring and detecting insider threats using machine learning techniques. The KNN classification algorithm is used to classify users into legitimate, possibly legitimate, possibly not legitimate and not legitimate groups, verifying image authenticity to conclude whether there is a possible insider threat. A threat detection model for insider threats is also proposed, which utilizes facial recognition and monitoring models. The security method put forth in [6][7] is refined to include threat detection QoS to earn a higher degree of trust from the cloud user community. As a recommendation, the threat detection module should be harnessed in private cloud deployments such as defense and pharmaceutical applications. Experiments were conducted using open-source machine learning libraries, and the results are reported in this paper.
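A minimal sketch of the KNN step with scikit-learn, classifying a login attempt into the four legitimacy groups named in the abstract. The feature choice (face-match score, login hour, recent failed attempts), the toy training data and k=3 are illustrative assumptions.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    CLASSES = ["legitimate", "possibly legitimate",
               "possibly not legitimate", "not legitimate"]

    # Each row: [face similarity score, login hour, recent failed attempts]
    X_train = np.array([[0.95, 10, 0], [0.90, 14, 1], [0.70, 23, 2],
                        [0.60, 3, 3], [0.30, 2, 5], [0.20, 4, 6]])
    y_train = np.array([0, 0, 1, 2, 3, 3])

    knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

    def classify_attempt(face_score, hour, failed):
        label = knn.predict([[face_score, hour, failed]])[0]
        return CLASSES[label]

    print(classify_attempt(0.85, 11, 0))   # likely "legitimate"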
2021-03-29
Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58—63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact on human-computer interaction. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect the seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset), which is relatively new, interesting and very challenging, as the main dataset for our study.

Singh, S., Nasoz, F..  2020.  Facial Expression Recognition with Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0324—0328.

Emotions are a powerful tool in communication, and one way humans show their emotions is through facial expressions. Facial expression recognition is a challenging and powerful task in social communication, since facial expressions are key to non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate FER classification on static images using CNNs, without requiring any pre-processing or feature extraction. The paper also illustrates techniques for improving accuracy in this area by using pre-processing, including face detection and illumination correction, and feature extraction of the most prominent parts of the face, such as the jaw, mouth, eyes, nose, and eyebrows. Furthermore, we review the literature, present our CNN architecture, and discuss the use of max-pooling and dropout, which eventually aided better performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared to 75.2% for state-of-the-art classification.
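A minimal Keras sketch of a CNN of the kind described (convolution, max-pooling, dropout) for 48x48 grayscale FER2013 images and seven classes; the exact layer sizes and hyperparameters are assumptions, not the authors' architecture.

    from tensorflow.keras import layers, models

    def build_fer_cnn(num_classes=7):
        model = models.Sequential([
            layers.Input(shape=(48, 48, 1)),
            layers.Conv2D(32, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Conv2D(128, 3, activation="relu", padding="same"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(256, activation="relu"),
            layers.Dropout(0.5),                       # dropout to curb overfitting
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # model = build_fer_cnn()
    # model.fit(x_train, y_train, validation_split=0.1, epochs=50, batch_size=64)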

Zhou, J., Zhang, X., Liu, Y., Lan, X..  2020.  Facial Expression Recognition Using Spatial-Temporal Semantic Graph Network. 2020 IEEE International Conference on Image Processing (ICIP). :1961—1965.

Motions of facial components convey significant information about facial expressions. Although remarkable advances have been made, the dynamics of facial topology have not been fully exploited. In this paper, a novel facial expression recognition (FER) algorithm called Spatial-Temporal Semantic Graph Network (STSGN) is proposed to automatically learn spatial and temporal patterns through end-to-end feature learning from the facial topology structure. The proposed algorithm not only has greater discriminative power to capture the dynamic patterns of facial expressions and stronger generalization capability to handle different variations, but also higher interpretability. Experimental evaluation on two popular datasets, CK+ and Oulu-CASIA, shows that our algorithm achieves more competitive results than other state-of-the-art methods.

Makovetskii, A., Kober, V., Voronin, A., Zhernov, D..  2020.  Facial recognition and 3D non-rigid registration. 2020 International Conference on Information Technology and Nanotechnology (ITNT). :1—4.

One of the most efficient tools for human face recognition is neural networks. However, recognition results can be spoiled by facial expressions and other deviations from the canonical face representation. In this paper, we propose a resampling method for human faces represented by 3D point clouds. The method is based on a non-rigid Iterative Closest Point (ICP) algorithm. To improve facial recognition performance, we use a combination of the proposed method and a convolutional neural network (CNN). Computer simulation results are provided to illustrate the performance of the proposed method.
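For orientation, here is a sketch of a plain rigid ICP iteration on 3D point clouds; the paper's method is non-rigid, which adds per-point deformation on top of this alignment step, and that part is not reproduced here.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(src, dst):
        """One rigid ICP iteration: match nearest points, solve the best rigid fit (Kabsch)."""
        _, idx = cKDTree(dst).query(src)       # nearest neighbour in dst for each src point
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # fix a possible reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        return src @ R.T + t

    def icp(src, dst, iters=30):
        for _ in range(iters):
            src = icp_step(src, dst)
        return src                              # src aligned onto dst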

Pranav, E., Kamal, S., Chandran, C. Satheesh, Supriya, M. H..  2020.  Facial Emotion Recognition Using Deep Convolutional Neural Network. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). :317—320.

The rapid growth of artificial intelligence has contributed a great deal to the technology world. As traditional algorithms failed to meet human needs in real time, machine learning and deep learning algorithms have achieved great success in applications such as classification systems, recommendation systems and pattern recognition. Emotion plays a vital role in determining the thoughts, behaviour and feelings of a human. An emotion recognition system can be built by utilizing the benefits of deep learning, and applications such as feedback analysis and face unlocking can be implemented with good accuracy. The main focus of this work is to create a Deep Convolutional Neural Network (DCNN) model that classifies five different human facial emotions. The model is trained, tested and validated using a manually collected image dataset.

John, A., MC, A., Ajayan, A. S., Sanoop, S., Kumar, V. R..  2020.  Real-Time Facial Emotion Recognition System With Improved Preprocessing and Feature Extraction. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :1328—1333.

Human emotion recognition plays a vital role in interpersonal communication and the human-machine interaction domain. Emotions are expressed through speech, hand gestures, the movements of other body parts and facial expressions. Facial emotions are one of the most important factors in human communication that help us understand what the other person is trying to communicate. People understand only one-third of a message verbally; two-thirds of it is conveyed through non-verbal means. Many face emotion recognition (FER) systems exist today, but they do not perform efficiently in real-life scenarios, even though many claim to be near-perfect systems that achieve their results under favourable and optimal conditions. The wide variety of expressions shown by people and the diversity of facial features across individuals make it difficult to build a system that is definitive in nature. Hence, developing a reliable system without the flaws shown by existing systems is a challenging task. This paper aims to build an enhanced system that can analyse the exact facial expression of a user at a particular time and output the corresponding emotion. Datasets such as JAFFE and FER2013 were used for performance analysis. Pre-processing methods such as facial landmark detection and HOG features were incorporated into a convolutional neural network (CNN), which achieved good accuracy compared with existing models.
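A small sketch of the HOG part of such a pre-processing stage, using scikit-image on a 48x48 face crop (the FER2013 image size). The cell/block parameters are common defaults rather than the paper's values, and the landmark step (e.g. a 68-point detector) is omitted.

    import numpy as np
    from skimage.feature import hog

    def hog_features(gray_face_48x48):
        # Histogram of Oriented Gradients descriptor for one face crop.
        return hog(gray_face_48x48,
                   orientations=9,
                   pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2),
                   block_norm="L2-Hys",
                   feature_vector=True)

    # With these settings a 48x48 crop yields a 900-dimensional descriptor.
    desc = hog_features(np.zeros((48, 48), dtype=np.float64))
    print(desc.shape)

Such descriptors can be concatenated with (or fed alongside) the raw pixels as extra input channels or features for the CNN.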

Oğuz, K., Korkmaz, İ, Korkmaz, B., Akkaya, G., Alıcı, C., Kılıç, E..  2020.  Effect of Age and Gender on Facial Emotion Recognition. 2020 Innovations in Intelligent Systems and Applications Conference (ASYU). :1—6.

New research fields and applications in human-computer interaction will emerge based on the recognition of emotions on faces. With this aim, our study evaluates features extracted from faces to recognize emotions. To increase the success rate of these features, we ran several tests to demonstrate how age and gender affect the results. Artificial neural networks were trained on the apparent regions of the face, such as the eyes, eyebrows, nose, mouth and jawline, and the networks were then tested with different age and gender groups. According to the results, emotion recognition performance is lower for the faces of older people. Age- and gender-based groups were then created manually, and we show that facial emotion recognition performance increased for networks trained on these particular groups.

Alamri, M., Mahmoodi, S..  2020.  Facial Profiles Recognition Using Comparative Facial Soft Biometrics. 2020 International Conference of the Biometrics Special Interest Group (BIOSIG). :1—4.

This study extends previous advances in soft biometrics and describes to what extent soft biometrics can be used for facial profile recognition. The purpose of this research is to explore human recognition based on facial profiles in a comparative setting using soft biometrics. Moreover, in this work we describe and use a ranking system to determine the recognition rate: the Elo rating system is employed to rank subjects using their face profiles in a comparative setting. The crucial features responsible for providing useful information describing facial profiles were identified using relative methods. Experiments on a subset of the XM2VTSDB database demonstrate a 96% recognition rate using 33 features over 50 subjects.
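A minimal Elo update of the kind used to rank subjects from pairwise comparisons of facial profiles; the K-factor and starting rating below are conventional values, not taken from the paper.

    def elo_update(r_a, r_b, score_a, k=32):
        """score_a = 1 if A is judged the better match, 0 if B, 0.5 for a tie."""
        expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
        r_a_new = r_a + k * (score_a - expected_a)
        r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
        return r_a_new, r_b_new

    ratings = {"subj_01": 1500.0, "subj_02": 1500.0}
    ratings["subj_01"], ratings["subj_02"] = elo_update(
        ratings["subj_01"], ratings["subj_02"], score_a=1.0)

After many pairwise comparisons, the probe is assigned to the gallery subject with the highest resulting rating.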

Ozdemir, M. A., Elagoz, B., Soy, A. Alaybeyoglu, Akan, A..  2020.  Deep Learning Based Facial Emotion Recognition System. 2020 Medical Technologies Congress (TIPTEKNO). :1—4.

This study aimed to recognize emotional state from facial images using a deep learning method. In the study, which was approved by the ethics committee, a custom dataset was created using videos of 20 male and 20 female participants simulating 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the obtained videos were divided into image frames, and face images were then segmented from the frames using the Haar library. The custom dataset obtained after image preprocessing contains more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained with this custom dataset. In the experiments with the proposed CNN architecture, the training loss was 0.0115, the training accuracy 99.62%, the validation loss 0.0109, and the validation accuracy 99.71%.

Jia, C., Li, C. L., Ying, Z..  2020.  Facial expression recognition based on the ensemble learning of CNNs. 2020 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). :1—5.

As a part of body language, a facial expression reflects the current emotional state of a person. Recognizing facial expressions can help us understand others and enhance communication. In this paper, we propose a facial expression recognition method based on convolutional neural network ensemble learning. Our model is composed of three sub-networks and uses an SVM classifier to integrate the outputs of the three networks into the final result. The expression recognition accuracy of the model on the FER2013 dataset reached 71.27%. The results show that the method has high test accuracy and short prediction time, and can realize real-time, high-performance facial expression recognition.
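A sketch of the ensemble step only: concatenate the per-class outputs of the three sub-networks and let an SVM make the final decision. The random arrays below are placeholders for the trained CNNs' outputs, and the kernel/C values are assumptions.

    import numpy as np
    from sklearn.svm import SVC

    def stack_outputs(p1, p2, p3):
        # p1, p2, p3: (n_samples, 7) softmax outputs of the three sub-networks
        return np.concatenate([p1, p2, p3], axis=1)     # (n_samples, 21) meta-features

    rng = np.random.default_rng(0)
    p1, p2, p3 = (rng.random((100, 7)) for _ in range(3))   # placeholder CNN outputs
    y = rng.integers(0, 7, size=100)                        # placeholder expression labels

    svm = SVC(kernel="rbf", C=1.0).fit(stack_outputs(p1, p2, p3), y)
    pred = svm.predict(stack_outputs(p1, p2, p3)[:5])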

Xu, X., Ruan, Z., Yang, L..  2020.  Facial Expression Recognition Based on Graph Neural Network. 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC). :211—214.

Facial expressions are one of the most powerful, natural and immediate means for human beings to present their emotions and intentions. In this paper, we present a novel method for fully automatic facial expression recognition. Facial landmarks are detected to characterize facial expressions, and a graph convolutional neural network is proposed for feature extraction and facial expression classification. The experiments were performed on three facial expression databases. The results show that the proposed FER method achieves good recognition accuracy of up to 95.85%.
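A minimal NumPy sketch of one graph-convolution layer over a facial-landmark graph, H' = ReLU(D^-1/2 (A + I) D^-1/2 H W); the landmark count, the chain adjacency and the feature sizes are illustrative, not the paper's graph definition.

    import numpy as np

    def gcn_layer(H, A, W):
        A_hat = A + np.eye(A.shape[0])                 # add self-loops
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
        H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
        return np.maximum(H_next, 0.0)                 # ReLU

    n_landmarks, in_dim, out_dim = 68, 2, 16
    rng = np.random.default_rng(0)
    H = rng.random((n_landmarks, in_dim))              # landmark coordinates as node features
    A = np.zeros((n_landmarks, n_landmarks))
    for i in range(n_landmarks - 1):                   # placeholder chain connectivity
        A[i, i + 1] = A[i + 1, i] = 1.0
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    print(gcn_layer(H, A, W).shape)                    # (68, 16)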

Gupta, S., Buduru, A. B., Kumaraguru, P..  2020.  imdpGAN: Generating Private and Specific Data with Generative Adversarial Networks. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :64–72.
Generative Adversarial Networks (GANs) and their variants have shown promising results in generating synthetic data. However, the issues with GANs are: (i) learning happens around the training samples and the model often ends up remembering them, consequently compromising the privacy of individual samples - this becomes a major concern when GANs are applied to training data that includes personally identifiable information; (ii) the randomness in generated data - there is no control over the specificity of the generated samples. To address these issues, we propose imdpGAN, an information-maximizing differentially private Generative Adversarial Network. It is an end-to-end framework that simultaneously achieves privacy protection and learns latent representations. With experiments on the MNIST dataset, we show that imdpGAN preserves the privacy of individual data points and learns latent codes to control the specificity of the generated samples. We perform binary classification on digit pairs to show the utility versus privacy trade-off. The classification accuracy decreases as we increase the privacy level in the framework. We also show experimentally that the training process of imdpGAN is stable but experiences a 10-fold increase in training time compared with other GAN frameworks. Finally, we extend the imdpGAN framework to the CelebA dataset to show how the privacy and learned representations can be used to control the specificity of the output.
2021-03-09
Muñoz, C. M. Blanco, Cruz, F. Gómez, Valero, J. S. Jimenez.  2020.  Software architecture for the application of facial recognition techniques through IoT devices. 2020 Congreso Internacional de Innovación y Tendencias en Ingeniería (CONIITI). :1–5.

Facial recognition is becoming increasingly important due to its wide range of applications, but it remains challenging when it faces large variations in the characteristics of the biometric data used in the process, especially regarding the transport of information over the internet in the Internet of Things context. Based on a systematic review and rigorous study supporting the extraction of the most relevant information on this topic [1], a software architecture proposal was generated that contains the basic security requirements necessary for handling the data involved in applying facial recognition techniques in an IoT environment. We conclude that the security and privacy of the information registered on IoT devices represent a challenge, and that it is a priority to guarantee that the data circulating on the network are accessible only to the user for whom they are intended.

2021-03-01
Sarathy, N., Alsawwaf, M., Chaczko, Z..  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advances in technology have made it easy to train models, or use existing ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and diagnosing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors, and the last stage uses weighted means to measure similarity. This research addresses the problem using an eXplainable Artificial Intelligence (XAI) approach. The findings, based on a small dataset, show that the approach offers encouraging results, and further investigation could have a significant impact on how face profiles are identified. The performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
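A sketch of the ratio-based matching idea: turn side-profile landmark distances into ratio features and compare two profiles with a weighted mean of per-feature similarities. The landmark names, the specific ratios and the weights are assumptions for illustration, not the paper's feature set.

    import numpy as np

    def dist(p, q):
        return float(np.linalg.norm(np.asarray(p) - np.asarray(q)))

    def profile_ratios(lm):
        # lm: dict of 2D profile landmarks, e.g. nose tip, chin, brow, ear, lips
        return np.array([
            dist(lm["nose_tip"], lm["chin"]) / dist(lm["brow"], lm["chin"]),
            dist(lm["nose_tip"], lm["ear"]) / dist(lm["brow"], lm["chin"]),
            dist(lm["lips"], lm["chin"]) / dist(lm["nose_tip"], lm["chin"]),
        ])

    def weighted_similarity(r1, r2, weights=(0.4, 0.3, 0.3)):
        per_feature = 1.0 / (1.0 + np.abs(r1 - r2))     # closer ratios -> higher similarity
        return float(np.average(per_feature, weights=weights))

Because ratios are scale-invariant, the comparison does not depend on the absolute size of the profile in the image.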
Hynes, E., Flynn, R., Lee, B., Murray, N..  2020.  An Evaluation of Lower Facial Micro Expressions as an Implicit QoE Metric for an Augmented Reality Procedure Assistance Application. 2020 31st Irish Signals and Systems Conference (ISSC). :1–6.
Augmented reality (AR) has been identified as a key technology to enhance worker utility in the context of increasing automation of repeatable procedures. AR can achieve this by assisting the user in performing complex and frequently changing procedures. Crucial to the success of procedure assistance AR applications is user acceptability, which can be measured by user quality of experience (QoE). An active research topic in QoE is the identification of implicit metrics that can be used to continuously infer user QoE during a multimedia experience. A user's QoE is linked to their affective state, and affective state is reflected in facial expressions. Emotions shown in micro facial expressions resemble those expressed in normal expressions but are distinguished from them by their brief duration. The novelty of this work lies in the evaluation of micro facial expressions as a continuous QoE metric by means of correlation analysis with the more traditional and accepted post-experience self-reporting. In this work, an optimal Rubik's Cube solver AR application was used as a proof of concept for complex procedure assistance and was compared with a paper-based procedure assistance control. QoE expressed by affect in normal and micro facial expressions was evaluated through correlation analysis with post-experience reports. The results show that the AR application yielded higher task success rates and shorter task durations. Micro facial expressions reflecting disgust correlated moderately with the questionnaire responses for instruction disinterest in the AR application.
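A minimal sketch of the correlation step: relate per-participant counts of disgust-related micro expressions to post-experience questionnaire scores with a rank correlation. The numbers below are placeholders, not the study's data, and Spearman's rho is an assumed choice of correlation measure.

    from scipy.stats import spearmanr

    micro_disgust_counts = [2, 5, 1, 7, 3, 6, 0, 4]       # per participant (placeholder)
    disinterest_scores   = [1, 4, 2, 5, 2, 4, 1, 3]       # questionnaire, 1-5 (placeholder)

    rho, p_value = spearmanr(micro_disgust_counts, disinterest_scores)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")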
2021-02-08
Chiang, M., Lau, S..  2011.  Automatic multiple faces tracking and detection using improved edge detector algorithm. 2011 7th International Conference on Information Technology in Asia. :1—5.

Automatic face tracking and detection has been one of the fastest-developing areas due to its wide range of applications, security and surveillance in particular. It remains a subject of great interest that has yet to be wholly explored across research areas, owing to various distinctive factors: varying ethnic groups, sizes, orientations, poses, occlusions and lighting conditions. The focus of this paper is to propose an improved algorithm that speeds up the face tracking and detection process, using a simple and efficient novel edge detector to reject non-face-like regions and hence reduce the false detection rate in automatic face tracking and detection for still images with multiple faces in a facial expression system. The combination of Haar face detection and the proposed novel edge detector achieves a correct rate of 95.9%, which is 6.1% higher than the primitive integration of the Haar and Canny edge detectors.
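A sketch of the general idea of pairing a Haar cascade with an edge-based check to reject non-face-like candidate regions, using OpenCV. The paper's own edge detector is not reproduced; the Canny parameters and the edge-density thresholds below are assumptions.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(gray, lo=0.02, hi=0.35):
        candidates = cascade.detectMultiScale(gray, 1.1, 4)
        faces = []
        for (x, y, w, h) in candidates:
            roi = gray[y:y + h, x:x + w]
            edges = cv2.Canny(roi, 100, 200)
            density = (edges > 0).mean()          # fraction of edge pixels in the region
            if lo < density < hi:                 # too few or too many edges -> reject
                faces.append((x, y, w, h))
        return faces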

Arunpandian, S., Dhenakaran, S. S..  2020.  DNA based Computing Encryption Scheme Blending Color and Gray Images. 2020 International Conference on Communication and Signal Processing (ICCSP). :0966–0970.

In this paper, a novel DNA-based computing method is proposed for the encryption of biometric color (face) and gray fingerprint images. In many present-day applications, gray and color images play a major role in authenticating the identity of an individual. The values of the aforementioned images are considered as two separate matrices. In the key generation process, two levels of mathematical operations are applied to the fingerprint image to generate the encryption key. To enhance the security of the biometric image, DNA computing is performed on the above matrices to generate a DNA sequence. The DNA sequences are then scrambled to add complexity to the biometric image. The results of blending the images and the DNA-computed image are shown in the experimental section. It is observed that the proposed substitution DNA computing algorithm shows good resistance against statistical and differential attacks.
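A toy sketch of the DNA-coding idea only: map image bytes to DNA bases (two bits per base) and scramble the sequence with a key-seeded permutation. The base mapping, the use of a seeded permutation and the example key are illustrative assumptions; the paper's key generation from the fingerprint image is not reproduced.

    import numpy as np

    BASES = np.array(list("ACGT"))

    def bytes_to_dna(data: bytes) -> np.ndarray:
        bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
        pairs = bits.reshape(-1, 2)
        return BASES[pairs[:, 0] * 2 + pairs[:, 1]]       # 00->A, 01->C, 10->G, 11->T

    def scramble(dna: np.ndarray, key_seed: int) -> np.ndarray:
        perm = np.random.default_rng(key_seed).permutation(dna.size)
        return dna[perm]

    cipher_dna = scramble(bytes_to_dna(b"\x12\x34\x56"), key_seed=2024)
    print("".join(cipher_dna))

Decryption would invert the permutation with the same key seed and map base pairs back to bits.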

2021-01-28
Romashchenko, V., Brutscheck, M., Chmielewski, I..  2020.  Organisation and Implementation of ResNet Face Recognition Architectures in the Environment of Zigbee-based Data Transmission Protocol. 2020 Fourth International Conference on Multimedia Computing, Networking and Applications (MCNA). :25—30.

This paper describes a realisation of a ResNet face recognition method over a Zigbee-based wireless protocol. The system uses a CC2530 Zigbee-based radio-frequency chip with a VC0706 camera connected to it. An Arduino Nano is used to organise data compression and the effective division of Zigbee packets. The proposed solution also simplifies data transmission within the strict bandwidth of the Zigbee protocol and provides reliable packet forwarding in case of frequency distortion. The investigation model uses a Raspberry Pi 3 with a connected Zigbee End Device (ZED) to receive the important images and accelerate the deep learning interfaces. The model is integrated into a smart security system based on Zigbee modules, a MySQL database and an Android application, and works in the background using daemon procedures. To protect data, all wireless connections are encrypted with the 128-bit Advanced Encryption Standard (AES-128) algorithm. Experimental results show the possibility of implementing complex systems under the restricted requirements of the available transmission protocols.
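A minimal sketch of the AES-128 protection step for image data before it is split into small Zigbee payloads, using PyCryptodome on the host side. The key handling, the EAX mode and the 80-byte chunk size are assumptions for illustration, not details from the paper (the CC2530 itself offers hardware AES).

    from Crypto.Cipher import AES
    from Crypto.Random import get_random_bytes

    key = get_random_bytes(16)                      # 128-bit key shared with the receiver

    def encrypt_image(jpeg_bytes: bytes):
        cipher = AES.new(key, AES.MODE_EAX)
        ciphertext, tag = cipher.encrypt_and_digest(jpeg_bytes)
        return cipher.nonce, tag, ciphertext

    def chunk(data: bytes, size: int = 80):         # fit within a small Zigbee payload
        return [data[i:i + size] for i in range(0, len(data), size)]

    nonce, tag, ct = encrypt_image(b"...jpeg bytes from the VC0706 camera...")
    packets = chunk(nonce + tag + ct)               # packets to hand to the radio layer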

2021-01-25
Rizki, R. P., Hamidi, E. A. Z., Kamelia, L., Sururie, R. W..  2020.  Image Processing Technique for Smart Home Security Based On the Principal Component Analysis (PCA) Methods. 2020 6th International Conference on Wireless and Telematics (ICWT). :1–4.
The smart home is one application of the pervasive computing branch of science. Smart homes fall into three categories, namely comfort, healthcare, and security. The security system is a very important part of smart home technology because the intensity of crime is increasing, especially in residential areas. The system detects the face with a webcam once the user enters the correct password. Face recognition is processed by a Raspberry Pi 3 with the Principal Component Analysis method using OpenCV and Python, with actuators as outputs in the form of a solenoid door lock and a buzzer. The test results show that the webcam performs face detection when the password input is successful; the buzzer actuator turns on when the database does not match the data captured by the webcam, and the solenoid door lock actuator operates when the database matches the test data captured by the webcam. The mean response time of face detection is 1.35 seconds.
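A minimal Eigenfaces-style sketch of the PCA matching step with scikit-learn: project flattened face crops onto principal components and match by nearest neighbour. The component count, the distance threshold and the crop size are assumptions, not the system's settings.

    import numpy as np
    from sklearn.decomposition import PCA

    def fit_pca(gallery_faces, n_components=50):
        # gallery_faces: (n_samples, h*w) flattened grayscale crops; n_components <= n_samples
        pca = PCA(n_components=n_components, whiten=True).fit(gallery_faces)
        return pca, pca.transform(gallery_faces)

    def match(pca, gallery_proj, gallery_labels, probe_face, threshold=15.0):
        probe = pca.transform(probe_face.reshape(1, -1))
        dists = np.linalg.norm(gallery_proj - probe, axis=1)
        best = int(np.argmin(dists))
        # Below threshold -> unlock the solenoid; otherwise -> sound the buzzer.
        return gallery_labels[best] if dists[best] < threshold else None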
2021-01-15
Brockschmidt, J., Shang, J., Wu, J..  2019.  On the Generality of Facial Forgery Detection. 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW). :43—47.
A variety of architectures have been designed or repurposed for the task of facial forgery detection. While many of these designs have seen great success, they largely fail to address challenges these models may face in practice. A major challenge is posed by generality, wherein models must be prepared to perform in a variety of domains. In this paper, we investigate the ability of state-of-the-art facial forgery detection architectures to generalize. We first propose two criteria for generality: reliably detecting multiple spoofing techniques and reliably detecting unseen spoofing techniques. We then devise experiments which measure how a given architecture performs against these criteria. Our analysis focuses on two state-of-the-art facial forgery detection architectures, MesoNet and XceptionNet, both being convolutional neural networks (CNNs). Our experiments use samples from six state-of-the-art facial forgery techniques: Deepfakes, Face2Face, FaceSwap, GANnotation, ICface, and X2Face. We find MesoNet and XceptionNet show potential to generalize to multiple spoofing techniques but with a slight trade-off in accuracy, and largely fail against unseen techniques. We loosely extrapolate these results to similar CNN architectures and emphasize the need for better architectures to meet the challenges of generality.