Biblio

Filters: Keyword is facial recognition
Haque, Shaheryar Ehsan I, Saleem, Shahzad.  2020.  Augmented reality based criminal investigation system (ARCRIME). 2020 8th International Symposium on Digital Forensics and Security (ISDFS). :1—6.
Crime scene investigation and preservation are fundamentally the pillars of forensics. Numerous cases are discussed in this paper where mishandling of evidence or improper investigation led to lengthy trials and, even worse, incorrect verdicts. Whether the problem is a lack of training of first responders or any other scenario, it is essential for police officers to properly preserve the evidence. A second problem is criminal profiling, where each district department has its own method of storing information about criminals. ARCRIME intends to digitally transform the way police combat crime. It will allow police officers to create a copy of the crime scene so that it can be presented in courts or in forensics labs. It will take the form of wearable glasses for officers on site, whereas officers in training will wear a headset. Trainee officers will be provided with simulations of cases that have already been resolved. Officers on scene will be provided with intelligence about the crime and the suspect they are interviewing, and will be able to create a case file with audio recordings and images that can be sent digitally to a prosecution lawyer. This paper also explores the risks involved with ARCRIME, weighing their impact and likelihood; contingency plans for responding to emergency situations are highlighted in the same section.
Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58—63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact on the interaction of humans with computers. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect the seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset), which is relatively new, interesting and very challenging, as the main dataset for our study.

Singh, S., Nasoz, F..  2020.  Facial Expression Recognition with Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0324—0328.

Emotions are a powerful tool in communication, and one way humans show their emotions is through their facial expressions. Facial expression recognition is a challenging and powerful task in social communication, since facial expressions are key in non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate FER classification on static images using CNNs, without requiring any pre-processing or feature extraction tasks. The paper also illustrates techniques for improving future accuracy in this area through pre-processing, which includes face detection and illumination correction, and through feature extraction of the most prominent parts of the face: the jaw, mouth, eyes, nose and eyebrows. Furthermore, we discuss the literature, present our CNN architecture, and examine the use of max-pooling and dropout, which eventually aided in better performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared to 75.2% for the state of the art.
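
The max-pooling step the abstract credits for better performance can be sketched in plain Python (an illustrative toy on a small feature map, not the paper's implementation):

```python
# Minimal sketch of the 2x2 max-pooling step used in CNN-based FER
# pipelines such as the one described above.

def max_pool_2x2(image):
    """Downsample a 2D feature map by taking the max of each 2x2 block."""
    rows, cols = len(image), len(image[0])
    pooled = []
    for r in range(0, rows - 1, 2):
        pooled_row = []
        for c in range(0, cols - 1, 2):
            block = (image[r][c], image[r][c + 1],
                     image[r + 1][c], image[r + 1][c + 1])
            pooled_row.append(max(block))
        pooled.append(pooled_row)
    return pooled

feature_map = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 2],
    [2, 2, 3, 4],
]
print(max_pool_2x2(feature_map))  # [[4, 2], [2, 5]]
```

Pooling halves each spatial dimension while keeping the strongest activation per block, which is what lets the network tolerate small shifts in facial features.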

Zhou, J., Zhang, X., Liu, Y., Lan, X..  2020.  Facial Expression Recognition Using Spatial-Temporal Semantic Graph Network. 2020 IEEE International Conference on Image Processing (ICIP). :1961—1965.

Motions of facial components convey significant information about facial expressions. Although remarkable advances have been made, the dynamics of facial topology have not been fully exploited. In this paper, a novel facial expression recognition (FER) algorithm called Spatial-Temporal Semantic Graph Network (STSGN) is proposed to automatically learn spatial and temporal patterns through end-to-end feature learning from the facial topology structure. The proposed algorithm not only has greater discriminative power to capture the dynamic patterns of facial expressions and stronger generalization capability to handle different variations, but also higher interpretability. Experimental evaluation on two popular datasets, CK+ and Oulu-CASIA, shows that our algorithm achieves more competitive results than other state-of-the-art methods.

Makovetskii, A., Kober, V., Voronin, A., Zhernov, D..  2020.  Facial recognition and 3D non-rigid registration. 2020 International Conference on Information Technology and Nanotechnology (ITNT). :1—4.

One of the most efficient tools for human face recognition is neural networks. However, recognition results can be spoiled by facial expressions and other deviations from the canonical face representation. In this paper, we propose a resampling method for human faces represented by 3D point clouds. The method is based on a non-rigid Iterative Closest Point (ICP) algorithm. To improve facial recognition performance, we use a combination of the proposed method and a convolutional neural network (CNN). Computer simulation results are provided to illustrate the performance of the proposed method.

Pranav, E., Kamal, S., Chandran, C. Satheesh, Supriya, M. H..  2020.  Facial Emotion Recognition Using Deep Convolutional Neural Network. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). :317—320.

The rapid growth of artificial intelligence has contributed greatly to the technology world. As traditional algorithms failed to meet human needs in real time, machine learning and deep learning algorithms have achieved great success in applications such as classification systems, recommendation systems and pattern recognition. Emotion plays a vital role in determining the thoughts, behaviour and feelings of a human. An emotion recognition system can be built by utilizing the benefits of deep learning, and applications such as feedback analysis and face unlocking can be implemented with good accuracy. The main focus of this work is to create a Deep Convolutional Neural Network (DCNN) model that classifies 5 different human facial emotions. The model is trained, tested and validated using a manually collected image dataset.

John, A., MC, A., Ajayan, A. S., Sanoop, S., Kumar, V. R..  2020.  Real-Time Facial Emotion Recognition System With Improved Preprocessing and Feature Extraction. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :1328—1333.

Human emotion recognition plays a vital role in interpersonal communication and the human-machine interaction domain. Emotions are expressed through speech, hand gestures, the movements of other body parts, and facial expressions. Facial emotions are one of the most important factors in human communication, helping us understand what the other person is trying to communicate. People understand only one-third of a message verbally; the other two-thirds are conveyed through non-verbal means. Many face emotion recognition (FER) systems exist today, but in real-life scenarios they do not perform efficiently, even though many claim to be near-perfect systems that achieve their results only under favourable, optimal conditions. The wide variety of expressions shown by people and the diversity in facial features of different people make it difficult to build a system that is definitive in nature; hence, developing a reliable system without the flaws shown by existing systems is a challenging task. This paper aims to build an enhanced system that can analyse the exact facial expression of a user at a particular time and generate the corresponding emotion. Datasets like JAFFE and FER2013 were used for performance analysis. Pre-processing methods like facial landmarks and HOG were incorporated into a convolutional neural network (CNN), which achieved good accuracy when compared with already existing models.

Oğuz, K., Korkmaz, İ, Korkmaz, B., Akkaya, G., Alıcı, C., Kılıç, E..  2020.  Effect of Age and Gender on Facial Emotion Recognition. 2020 Innovations in Intelligent Systems and Applications Conference (ASYU). :1—6.

New research fields and applications of human-computer interaction will emerge based on the recognition of emotions on faces. With that aim, our study evaluates features extracted from faces to recognize emotions. To increase the success rate of these features, we have run several tests to demonstrate how age and gender affect the results. Artificial neural networks were trained on the apparent regions of the face, such as the eyes, eyebrows, nose, mouth and jawline, and then tested with different age and gender groups. According to the results, faces of older people have a lower emotion recognition performance rate. Age- and gender-based groups were then created manually, and we show that facial emotion recognition performance rates increased for the networks trained on these particular groups.

Alamri, M., Mahmoodi, S..  2020.  Facial Profiles Recognition Using Comparative Facial Soft Biometrics. 2020 International Conference of the Biometrics Special Interest Group (BIOSIG). :1—4.

This study extends previous advances in soft biometrics and describes to what extent soft biometrics can be used for facial profile recognition. The purpose of this research is to explore human recognition based on facial profiles in a comparative setting based on soft biometrics. Moreover, we describe and use a ranking system to determine the recognition rate: the Elo rating system is employed to rank subjects by their face profiles in a comparative setting. The crucial features responsible for providing useful information describing facial profiles have been identified using relative methods. Experiments based on a subset of the XM2VTSDB database demonstrate a recognition rate of 96% using 33 features over 50 subjects.
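
The Elo update behind the paper's comparative ranking can be sketched as follows (the K-factor and 400-point scale are the standard chess defaults, not necessarily the paper's settings):

```python
# Hedged sketch of an Elo-style rating update, as used by the paper to
# rank subjects from pairwise facial-profile comparisons.

def elo_update(rating_a, rating_b, score_a, k=32):
    """Return updated ratings after one comparison.
    score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a draw."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

# Equal ratings, A wins: A gains 16 points, B loses 16.
print(elo_update(1500, 1500, 1.0))  # (1516.0, 1484.0)
```

Repeated pairwise comparisons drive each subject's rating toward a stable rank, which is what makes the scheme usable for comparative soft-biometric recognition.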

Ozdemir, M. A., Elagoz, B., Soy, A. Alaybeyoglu, Akan, A..  2020.  Deep Learning Based Facial Emotion Recognition System. 2020 Medical Technologies Congress (TIPTEKNO). :1—4.

This study aimed to recognize emotional states from facial images using a deep learning method. In the study, which was approved by the ethics committee, a custom dataset was created from videos of 20 male and 20 female participants simulating 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the videos were divided into image frames, and face images were then segmented from the frames using Haar cascades. The custom dataset obtained after image preprocessing contains more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained on this custom dataset. In the experiments with the proposed CNN architecture, the training loss was 0.0115, the training accuracy 99.62%, the validation loss 0.0109, and the validation accuracy 99.71%.

Jia, C., Li, C. L., Ying, Z..  2020.  Facial expression recognition based on the ensemble learning of CNNs. 2020 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). :1—5.

As a part of body language, facial expression reflects the current emotional and psychological state of a person. Recognizing facial expressions can help us understand others and enhance communication with them. In this paper, we propose a facial expression recognition method based on ensemble learning with convolutional neural networks. Our model is composed of three sub-networks and uses an SVM classifier to integrate the outputs of the three networks into the final result. The model's expression recognition accuracy on the FER2013 dataset reached 71.27%. The results show that the method has high test accuracy and short prediction time, and can realize real-time, high-performance facial expression recognition.
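
The paper fuses its three sub-networks with an SVM; as a simpler illustrative stand-in for that fusion step, this sketch averages the sub-networks' class-probability vectors and picks the argmax (the class names and probabilities are invented for the example):

```python
# Illustrative ensemble fusion: average per-class probabilities from
# three sub-networks, then pick the most likely emotion.

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def ensemble_predict(prob_vectors):
    """Average class probabilities across models and return the label."""
    n = len(prob_vectors)
    avg = [sum(p[i] for p in prob_vectors) / n for i in range(len(EMOTIONS))]
    return EMOTIONS[avg.index(max(avg))]

net1 = [0.10, 0.00, 0.10, 0.60, 0.10, 0.05, 0.05]
net2 = [0.20, 0.10, 0.10, 0.40, 0.10, 0.05, 0.05]
net3 = [0.10, 0.10, 0.10, 0.50, 0.10, 0.05, 0.05]
print(ensemble_predict([net1, net2, net3]))  # happy
```

A trained SVM over the concatenated outputs, as in the paper, can weight the sub-networks' opinions instead of treating them equally, which is why it tends to beat plain averaging.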

Xu, X., Ruan, Z., Yang, L..  2020.  Facial Expression Recognition Based on Graph Neural Network. 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC). :211—214.

Facial expressions are among the most powerful, natural and immediate means for human beings to present their emotions and intentions. In this paper, we present a novel method for fully automatic facial expression recognition. Facial landmarks are detected to characterize facial expressions, and a graph convolutional neural network is proposed for feature extraction and facial expression classification. Experiments were performed on three facial expression databases. The results show that the proposed FER method achieves good recognition accuracy, up to 95.85%.

Muñoz, C. M. Blanco, Cruz, F. Gómez, Valero, J. S. Jimenez.  2020.  Software architecture for the application of facial recognition techniques through IoT devices. 2020 Congreso Internacional de Innovación y Tendencias en Ingeniería (CONIITI). :1–5.

Facial recognition is becoming increasingly important over time due to its wide range of applications, but it remains challenging when facing large variations in the characteristics of the biometric data used in the process, especially regarding the transportation of information over the internet in an Internet of Things context. Based on a systematic review and rigorous study supporting the extraction of the most relevant information on this topic [1], we generated a software architecture proposal containing the basic security requirements necessary for the treatment of the data involved in the application of facial recognition techniques, oriented to an IoT environment. We conclude that the security and privacy of the information registered in IoT devices represent a challenge, and that it is a priority to guarantee that data circulating on the network are accessible only to the users for whom they are intended.

Pradhan, Chittaranjan, Banerjee, Debanjan, Nandy, Nabarun, Biswas, Udita.  2019.  Generating Digital Signature using Facial Landmark Detection. 2019 International Conference on Communication and Signal Processing (ICCSP). :0180—0184.
Information security has developed rapidly over recent years, with a key factor being the emergence of social media. To standardize this discipline, the security of the individual becomes an urgent concern. In 2019, it is estimated that there will be over 2.5 billion social media users around the globe. Unfortunately, anonymous identity has become a major concern for security advisors, and due to technological advancements, phishers are able to access confidential information. To resolve these issues, numerous solutions have been proposed, such as biometric identification and facial and audio recognition prior to access to any highly secure forum on the web. Generating digital signatures is a recent trend in digital security. We have designed an algorithm that, after generating a 68-point facial landmark map, converts the image into a highly compressed and secure digital signature. The proposed algorithm generates a unique signature for an individual which, when stored in the user account information database, will limit the creation of fake or multiple accounts. At the same time, the algorithm reduces database storage overhead, as it stores the facial identity of an individual as a compressed textual signature rather than as the traditionally stored image file, occupying less space and making searching, fetching and manipulation more efficient. A new analysis of the features produced at intermediate layers has been applied: we opt to use the normal and the two opposite angular measures of the triangle as the invariant. This acts as a real-time, optimized encryption procedure to achieve the reliable security goals explained in detail in the later sections.
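
The triangle-angle invariant the abstract alludes to can be illustrated as follows; the landmark coordinates here are invented for demonstration, and the point is that interior angles are unchanged by translation, rotation and uniform scaling of the face image:

```python
# Illustrative sketch: interior angles of a triangle formed by three
# facial landmarks, a geometry-based invariant in the spirit of the
# abstract (not the authors' exact algorithm).

import math

def triangle_angles(p1, p2, p3):
    """Return the three interior angles (degrees) of triangle p1-p2-p3."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)  # opposite sides
    angle1 = math.degrees(math.acos((b * b + c * c - a * a) / (2 * b * c)))
    angle2 = math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c)))
    return angle1, angle2, 180.0 - angle1 - angle2

angles = triangle_angles((0, 0), (4, 0), (0, 3))  # a 3-4-5 right triangle
print([round(x, 1) for x in angles])  # [90.0, 36.9, 53.1]
```

Because a tuple of such angles depends only on face shape, it can be serialized and hashed into a compact textual signature far smaller than the source image.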
Ly, Son Thai, Do, Nhu-Tai, Lee, Guee-Sang, Kim, Soo-Hyung, Yang, Hyung-Jeong.  2019.  A 3d Face Modeling Approach for in-The-Wild Facial Expression Recognition on Image Datasets. 2019 IEEE International Conference on Image Processing (ICIP). :3492—3496.

This paper explores the benefits of 3D face modeling for in-the-wild facial expression recognition (FER). Since in-the-wild 3D FER datasets are limited, we first construct 3D facial data from an available 2D dataset using recent advances in 3D face reconstruction. A 3D facial geometry representation is then extracted with deep learning. In addition, we take advantage of manipulating the 3D face, for example using 2D projected images of the 3D face as additional input for FER. These features are then fused with those of a typical 2D FER network. By doing so, despite using common approaches, we achieve competent recognition accuracy on the Real-World Affective Faces (RAF) database and Static Facial Expressions in the Wild (SFEW 2.0) compared with state-of-the-art reports. To the best of our knowledge, this is the first time such a deep learning combination of 3D and 2D facial modalities has been presented in the context of in-the-wild FER.

Saboor khan, Abdul, Shafi, Imran, Anas, Muhammad, Yousuf, Bilal M, Abbas, Muhammad Jamshed, Noor, Aqib.  2019.  Facial Expression Recognition using Discrete Cosine Transform Artificial Neural Network. 2019 22nd International Multitopic Conference (INMIC). :1—5.

Humans often utilize non-verbal gestures such as facial expressions to convey information or emotions, and countless facial gestures are expressed throughout the day. The channels of these expressions and emotions include activities, postures, behaviors and facial expressions. Extensive research has revealed a strong relationship between these channels and emotions that merits further investigation. An Automatic Facial Expression Recognition (AFER) framework is proposed in this work that can predict the seven universal expressions. To evaluate the proposed approach, the frontal-face image database known as the Japanese Female Facial Expression (JAFFE) database is used as input. This database is processed with a frequency-domain technique, the Discrete Cosine Transform (DCT), and then classified using Artificial Neural Networks (ANNs). To check the robustness of this novel strategy, random trials of K-fold cross validation, leave-one-out and person-independent methods are repeated many times to provide an overview of recognition rates. The experimental results demonstrate promising performance for this application.
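
The DCT step can be sketched with a one-dimensional Type-II DCT in plain Python; the paper applies the 2D transform to whole images, so this 1D version only illustrates the principle that most signal energy concentrates in a few low-frequency coefficients:

```python
# Minimal pure-Python Type-II DCT (unnormalized), the frequency-domain
# transform applied to JAFFE images before ANN classification.

import math

def dct_1d(signal):
    """Type-II Discrete Cosine Transform of a 1D signal."""
    n = len(signal)
    return [
        sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
            for i in range(n))
        for k in range(n)
    ]

# A constant signal concentrates all energy in the DC coefficient:
coeffs = dct_1d([1.0, 1.0, 1.0, 1.0])
print(round(coeffs[0], 6))  # 4.0 (all higher coefficients are ~0)
```

Keeping only the leading coefficients yields a compact feature vector, which is why DCT features pair well with a small ANN classifier.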

Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Facial Expression Recognition Using Merged Convolution Neural Network. 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE). :296—298.

In this paper, a merged convolution neural network (MCNN) is proposed to improve the accuracy and robustness of real-time facial expression recognition (FER). Although there are many ways to improve the performance of facial expression recognition, revamping the training framework and image preprocessing renders better results in applications. When the camera is capturing images at high speed, however, changes in image characteristics may occur at certain moments due to the influence of light and other factors, and such changes can result in incorrect recognition of the facial expression. To solve this problem, we propose a statistical method that draws on the recognition results of previous images instead of using only the current recognition output. Experimental results show that the proposed method can satisfactorily recognize seven basic facial expressions in real time.

Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184—1189.

Automatic emotion recognition using computer vision is significant for many real-world applications such as photojournalism, virtual reality, sign language recognition, and Human-Robot Interaction (HRI). Psychological research findings suggest that humans depend on the collective visual cues of the face and body to comprehend human emotional behaviour. A plethora of studies have analysed human emotions using facial expressions, EEG signals, speech, and so on, but most of this work is based on a single modality. Our objective is to efficiently integrate the emotions recognized from facial expressions and from the upper body pose of humans in images. Our work on bimodal emotion recognition provides the combined benefits of the accuracy of both modalities.

Mundra, Saloni, Sujata, Mitra, Suman K..  2019.  Modular Facial Expression Recognition on Noisy Data Using Robust PCA. 2019 IEEE 16th India Council International Conference (INDICON). :1—4.
Chen, Yuedong, Wang, Jianfeng, Chen, Shikai, Shi, Zhongchao, Cai, Jianfei.  2019.  Facial Motion Prior Networks for Facial Expression Recognition. 2019 IEEE Visual Communications and Image Processing (VCIP). :1—4.

Deep learning based facial expression recognition (FER) has received a lot of attention in the past few years. Most existing deep learning based FER methods do not consider domain knowledge well and thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). In particular, we introduce an additional branch to generate a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.

Yang, Jiannan, Zhang, Fan, Chen, Bike, Khan, Samee U..  2019.  Facial Expression Recognition Based on Facial Action Unit. 2019 Tenth International Green and Sustainable Computing Conference (IGSC). :1—6.

In the past few years, there has been increasing interest in the perception of human expressions and mental states by machines, and Facial Expression Recognition (FER) has attracted growing attention. The Facial Action Unit (AU) is an early proposed method for describing facial muscle movements, which can effectively reflect changes in people's facial expressions. In this paper, we propose a high-performance facial expression recognition method based on facial action units that can run on a low-configuration computer and realize FER on video and real-time camera input. Our method is divided into two parts. In the first part, 68 facial landmarks and image Histograms of Oriented Gradients (HOG) are obtained, and the feature values of action units are calculated accordingly. The second part uses three classification methods to realize the mapping from AUs to FER. We have conducted many experiments on popular FER benchmark datasets (CK+ and Oulu-CASIA) to demonstrate the effectiveness of our method.
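
One way to turn landmarks into an AU-style feature value, in the spirit of the paper's first stage, is a simple distance ratio; the landmark coordinates and the formula below are illustrative assumptions, not the paper's exact AU computations:

```python
# Illustrative landmark-based feature: mouth aspect ratio
# (opening height / mouth width), a scale-tolerant proxy for a
# jaw-drop-style action unit.

import math

def mouth_aspect_ratio(left, right, top, bottom):
    """Ratio of mouth opening height to mouth width from 4 landmarks."""
    width = math.hypot(right[0] - left[0], right[1] - left[1])
    height = math.hypot(bottom[0] - top[0], bottom[1] - top[1])
    return height / width

# A wide-open mouth yields a high ratio (suggesting e.g. surprise).
ratio = mouth_aspect_ratio((10, 50), (50, 50), (30, 40), (30, 70))
print(round(ratio, 2))  # 0.75
```

A vector of such per-AU feature values is what the second stage's classifiers would then map to an expression label.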

Ahmed, Syed Umaid, Sabir, Arbaz, Ashraf, Talha, Ashraf, Usama, Sabir, Shahbaz, Qureshi, Usama.  2019.  Security Lock with Effective Verification Traits. 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). :164–169.
To manage and handle the issues of physical security in the modern world, there is a dire need for a multilevel security system to ensure the safety of precious belongings, whether money, military equipment or life-saving medical drugs. A security locker solution is proposed: a multiple-layer security system consisting of various levels of authentication. In most cases, only the relevant persons should have access to their belongings, and unlocking the box is only possible when all of the security levels are successfully cleared. The five levels of security include entering a password on an interactive GUI, thumbprint, facial recognition, speech pattern recognition, and vein pattern recognition. This project is unique and effective in that it incorporates five levels of security in a single prototype using cost-effective equipment. Assessing our security system, security is increased manifold, as it is nearly impossible to breach all five levels. Raspberry Pi microcomputers handle all the traits efficiently and smartly, making it easy to perform all verification tasks. The traits used involve checking, training and verifying processes with the application of machine learning operations.
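
The all-levels-must-pass logic can be sketched as a sequential check; the level names come from the abstract, while the pass/fail booleans stand in for the real biometric verifiers:

```python
# Minimal sketch of multilevel authentication: the lock opens only if
# every verification level passes, evaluated in order.

def unlock(checks):
    """Return True only if all security levels pass, in order."""
    for name, passed in checks:
        if not passed:
            print(f"Access denied at level: {name}")
            return False
    return True

levels = [
    ("password", True),
    ("thumbprint", True),
    ("facial recognition", True),
    ("speech pattern", False),  # fails here: the lock stays closed
    ("vein pattern", True),
]
print(unlock(levels))  # False
```

Evaluating levels in order also means a failure at an early, cheap check (password) avoids running the more expensive biometric verifiers at all.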
Toliupa, Serhiy, Tereikovskiy, Ihor, Dychka, Ivan, Tereikovska, Liudmyla, Trush, Alexander.  2019.  The Method of Using Production Rules in Neural Network Recognition of Emotions by Facial Geometry. 2019 3rd International Conference on Advanced Information and Communications Technologies (AICT). :323–327.
The article is devoted to improving neural network based recognition of emotions from facial geometry for use in general-purpose information systems. It is shown that modern emotion recognition tools based on conventional neural networks have a critical disadvantage: recognition accuracy degrades under the distortions characteristic of general-purpose information systems, in particular rotation of the face and changes in image size. The typical approach of overcoming this disadvantage through additional training is unacceptable in many protection scenarios, for reasons of duration and the difficulty of compiling the required training sample. It is proposed to increase recognition accuracy by supplying an expert data model to the neural network, and an appropriate method for representing expert knowledge is developed. A feature of the method is the use of production rules and a PNN neural network. Experimental verification of the developed solutions has been carried out. The obtained results make it possible to recognize emotions for face orientations and image sizes whose characteristics are not represented in the registered statistical data.
Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Real-Time Facial Expression Recognition Based on CNN. 2019 International Conference on System Science and Engineering (ICSSE). :120–123.
In this paper, we propose a method for improving the robustness of real-time facial expression recognition. Although there are many ways to improve the accuracy of facial expression recognition, revamping the training framework and image preprocessing allows better results in applications. One existing problem is that when the camera is capturing images at high speed, changes in image characteristics may occur at certain moments due to the influence of light and other factors, and such changes can result in incorrect recognition of the human facial expression. To solve this problem while maintaining smooth system operation and recognition speed, we take these high-speed capture changes into account: the proposed method does not use the immediate output directly for reference, but averages it with results from previous images to facilitate recognition. In this way, we are able to reduce interference from changing image characteristics. The experimental results show that after adopting this method, the overall robustness and accuracy of facial expression recognition are greatly improved compared to those obtained with the convolutional neural network (CNN) alone.
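
The "refer to previous images" idea can be approximated with a sliding-window majority vote over recent per-frame labels; the window size here is an assumption, not the paper's value:

```python
# Hedged sketch of temporal smoothing for per-frame expression labels:
# a majority vote over the last few frames suppresses one-frame flickers
# caused by lighting changes at high capture speeds.

from collections import Counter, deque

class SmoothedRecognizer:
    def __init__(self, window=5):
        self.recent = deque(maxlen=window)

    def update(self, label):
        """Record the current frame's raw label, return the smoothed one."""
        self.recent.append(label)
        return Counter(self.recent).most_common(1)[0][0]

rec = SmoothedRecognizer(window=5)
stream = ["happy", "happy", "neutral", "happy", "happy"]  # one flicker
smoothed = [rec.update(label) for label in stream]
print(smoothed)  # ['happy', 'happy', 'happy', 'happy', 'happy']
```

The single "neutral" flicker is outvoted by its neighbours, so the displayed expression stays stable across frames.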
Wang, XuMing, Huang, Jin, Zhu, Jia, Yang, Min, Yang, Fen.  2018.  Facial Expression Recognition with Deep Learning. Proceedings of the 10th International Conference on Internet Multimedia Computing and Service. :10:1–10:4.
Automatic recognition of facial expression images is a challenge for computers due to variations in expression, background, position and label noise. This paper proposes a new method for static facial expression recognition. Experiments are performed on the FER-2013 dataset, with the primary goal of using our CNN model to classify a set of static images into 7 basic emotions automatically and effectively. Two preprocessing steps enhance the face images for recognition: first, the FER datasets are preprocessed with standard histogram equalization; then we employ ImageDataGenerator to shift and rotate the facial images to enhance model robustness. Finally, the output of the softmax activation function (also known as multinomial logistic regression) is stacked with an SVM. The softmax + SVM combination performs better than the softmax activation function alone, and the facial expression recognition accuracy reaches 68.79% on the test set.
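
The softmax output that the paper stacks with an SVM can be sketched in plain Python; this is illustrative only, not the authors' implementation:

```python
# Pure-Python softmax (multinomial logistic regression output): converts
# raw CNN scores into class probabilities that sum to 1. It is these
# probability vectors that the paper feeds to an SVM as a final stage.

import math

def softmax(logits):
    """Convert raw scores to a probability distribution."""
    shifted = [x - max(logits) for x in logits]  # numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs.index(max(probs)))  # 0 (highest logit keeps highest probability)
```

Stacking an SVM on top of these probabilities lets a margin-based classifier redraw the decision boundaries that the softmax layer alone gets wrong.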