Facial Recognition 2015

SoS Newsletter - Advanced Book Block



Facial Recognition


Facial recognition tools have long been the stuff of action-adventure films. In the real world, they present opportunities and complex problems being examined by researchers. The research works cited here, presented or published in 2015, address various techniques and issues, such as local binary pattern descriptors, principal component analysis, convolutional neural networks, support vector machine and nearest-neighbor classifiers, and applications in access control, surveillance, and biometric security.

Wan, Qianwen; Panetta, Karen; Agaian, Sos, “Autonomous Facial Recognition Based on the Human Visual System,” in Imaging Systems and Techniques (IST), 2015 IEEE International Conference on, vol., no., pp. 1–6, 16–18 Sept. 2015. doi:10.1109/IST.2015.7294580
Abstract: This paper presents a real-time facial recognition system utilizing our human visual system algorithms coupled with logarithmic Local Binary Pattern feature descriptors and our region weighted model. The architecture can quickly find and rank the closest matches of a test image to a database of stored images. There are many potential applications for this work, including homeland security applications such as identifying persons of interest and other robot vision applications such as search and rescue missions. This new method significantly improves the performance of the previous Local Binary Pattern method. For our prototype application, we supplied the system testing images and found their best matches in the database of training images. In addition, the results were further improved by weighting the contribution of the most distinctive facial features. The system evaluates and selects the best matching image using the chi-squared statistic.
Keywords: Databases; Face; Face recognition; Feature extraction; Histograms; Training; Visual systems; Facial Recognition; Human Visual System; Pattern; Region Weighting; Similarity (ID#: 15-7359)
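The LBP-plus-chi-squared matching that this entry builds on can be sketched in a few lines. The following is a minimal illustrative version (plain 8-neighbour LBP with unweighted 256-bin histograms), not the authors' human-visual-system pipeline with region weighting; the function names are my own.

```python
# Minimal sketch of Local Binary Pattern matching with the chi-squared
# statistic. Assumes grayscale images given as lists of lists of ints.

def lbp_histogram(img):
    """256-bin histogram of 8-neighbour LBP codes over interior pixels."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # Offsets of the 8 neighbours, clockwise from top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist

def chi_squared(h1, h2):
    """Chi-squared distance between two histograms (smaller = more similar)."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def best_match(test, gallery):
    """Index of the gallery image whose LBP histogram is closest to the test image."""
    t = lbp_histogram(test)
    dists = [chi_squared(t, lbp_histogram(g)) for g in gallery]
    return dists.index(min(dists))
```

A real system would split the face into regions, weight distinctive regions more heavily, and sum the per-region chi-squared scores.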


Wodo, W.; Zientek, S., “Biometric Linkage Between Identity Document Card and Its Holder Based on Real-Time Facial Recognition,” in Science and Information Conference (SAI), 2015, pp. 1380–1383, 28–30 July 2015. doi:10.1109/SAI.2015.7237322
Abstract: Access control systems based on RFID cards are very popular in many companies and public institutions. The assumption is that if one holds a card and passes verification, access is granted. Such an approach entails a few threats: use of cards in the possession of unauthorized people (stolen or merely borrowed) and the risk of card cloning. We strongly believe that prevention is a better way to obtain the desired security level. The usability of the system is essential, but it has to be pragmatic; that is why we accept in this case a higher False Acceptance Rate while obtaining a lower False Rejection Rate. We aim to discourage, in a significant way, any attempts to steal or borrow access cards from third parties or to clone them. Our solution verifies the biometric linkage between the signed facial image of the document holder embedded in a personal identity document and the user's facial image captured by a camera during use of the card. To prevent real-time facial substitution, we introduced a depth camera and an infrared flash. Our goal is to compare the similarity of the faces of the document holder and the user, but at the same time to prevent misuse of this high-quality digital data. To support that goal, we process the captured image in the reader and send it to the card for matching (match-on-card).
Keywords: authorisation; biometrics (access control); cameras; face recognition; radiofrequency identification; RFID cards; access control systems; biometric linkage; camera; false acceptance rate; false rejection rate; high quality digital data; identity document card; infrared flash; personal identity document; real-time facial recognition; real-time facial substitution; security level; signed facial image; user facial image; Ash; Cameras; Couplings; Databases; Face detection; Face recognition; Radiofrequency identification; MRTD; biometrics; credentials; facial recognition; personal data protection; smart card (ID#: 15-7360)


Sgroi, Amanda; Garvey, Hannah; Bowyer, Kevin; Flynn, Patrick, “Location Matters: A Study of the Effects of Environment on Facial Recognition for Biometric Security,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, vol. 02, pp. 1–7, 4–8 May 2015. doi:10.1109/FG.2015.7284812
Abstract: The term “in the wild” has become wildly popular in face recognition research. The term refers generally to use of datasets that are somehow less controlled or more realistic. In this work, we consider how face recognition accuracy varies according to the composition of the dataset on which the decision threshold is learned and the dataset on which performance is then measured. We identify different acquisition locations in the FRVT 2006 dataset, examine face recognition accuracy for within-environment image matching and cross-environment image matching, and suggest a way to improve biometric systems that encounter images taken in multiple locations. We find that false non-matches are more likely to occur when the gallery and probe images are acquired in different locations, and that false matches are more likely when the gallery and probe images were acquired in the same location. These results show that measurements of face recognition accuracy are dependent on environment.
Keywords: Accuracy; Face; Face recognition; Indoor environments; Lighting; Probes; Security (ID#: 15-7361)
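The core mechanism this study varies, learning a decision threshold on one dataset and measuring error rates on another, can be illustrated with a toy sketch. The scores and function names below are hypothetical, not drawn from the FRVT 2006 protocol.

```python
# Toy sketch: threshold learning and error-rate measurement for a verification
# system. A comparison is accepted as a match when score >= threshold.

def error_rates(genuine, impostor, threshold):
    """False match rate and false non-match rate at a fixed threshold."""
    fnmr = sum(s < threshold for s in genuine) / len(genuine)    # missed true matches
    fmr = sum(s >= threshold for s in impostor) / len(impostor)  # accepted impostors
    return fmr, fnmr

def learn_threshold(impostor, target_fmr):
    """Smallest observed threshold whose FMR on these impostor scores is <= target."""
    for t in sorted(set(impostor)):
        if sum(s >= t for s in impostor) / len(impostor) <= target_fmr:
            return t
    return max(impostor) + 1
```

The paper's point is that a threshold learned from scores gathered in one acquisition environment can yield quite different FMR/FNMR when the probe images come from another environment.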


Shaukat, Arslan; Aziz, Mansoor; Akram, Usman, “Facial Expression Recognition Using Multiple Feature Sets,” in IT Convergence and Security (ICITCS), 2015 5th International Conference on, vol., no., pp. 1–5, 24–27 Aug. 2015. doi:10.1109/ICITCS.2015.7292981
Abstract: Over the years, human facial expression recognition has been a challenging problem in computer vision. In this paper, we work toward recognizing facial expressions from the images in the JAFFE database. From the literature, a set of features has been identified as potentially useful for recognizing facial expressions. We therefore propose to use a combination of three types of features: the Scale Invariant Feature Transform (SIFT), Gabor wavelets, and the Discrete Cosine Transform (DCT). Some pre-processing steps are applied before extracting these features. A Support Vector Machine (SVM) with a radial basis kernel function is used for classifying facial expressions. We evaluate our results on the JAFFE database under the same experimental setup followed in the literature. Experimental results show that our proposed methodology gives better results than existing work.
Keywords: Databases; Discrete cosine transforms; Face; Face recognition; Feature extraction; Image recognition; Support vector machines (ID#: 15-7362)


Pattabhi Ramaiah, N.; Ijjina, E.P.; Mohan, C.K., “Illumination Invariant Face Recognition Using Convolutional Neural Networks,” in Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, vol., no., pp. 1–4, 19–21 Feb. 2015. doi:10.1109/SPICES.2015.7091490
Abstract: The face is one of the most widely used biometrics in security systems. Despite its wide usage, face recognition is not a fully solved problem, due to the challenges associated with varying illumination conditions and pose. In this paper, we address the problem of face recognition under non-uniform illumination using deep convolutional neural networks (CNN). The ability of a CNN to learn local patterns from data is used for facial recognition. The symmetry of facial information is exploited to improve the performance of the system by considering horizontal reflections of the facial images. Experiments conducted on the Yale facial image dataset demonstrate the efficacy of the proposed approach.
Keywords: biometrics (access control); face recognition; neural nets; security; CNN; Yale facial image dataset; biometric; deep convolutional neural networks; horizontal reflections; illumination invariant face recognition; nonuniform illumination; security systems; Face; Face recognition; Lighting; Neural networks; Pattern analysis; Training; biometrics; convolutional neural networks; facial recognition; non-uniform illumination (ID#: 15-7363)
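The horizontal-reflection idea mentioned in the abstract amounts to mirror-augmenting the training set. A minimal sketch, with hypothetical function names (the paper's CNN itself is not reproduced here):

```python
# Sketch of exploiting facial symmetry by adding mirrored training samples.
# Images are 2D lists of pixel rows; labels identify the subject.

def hflip(img):
    """Horizontal reflection of a 2D image (reverse each row)."""
    return [list(reversed(row)) for row in img]

def augment(dataset):
    """Double an (image, label) dataset with mirrored copies, one way to
    make a learned model less sensitive to left-right lighting asymmetry."""
    return dataset + [(hflip(img), label) for img, label in dataset]
```

A mirrored face keeps the same identity label, so the model sees each illumination pattern from both sides of the face.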


Xiaoxia Li, “Application of Biometric Identification Technology for Network Security in the Network and Information Era, Which Will Greatly Change the Life-Style of People,” in Networking, Sensing and Control (ICNSC), 2015 IEEE 12th International Conference on, vol., no., pp. 566–569, 9–11 April 2015. doi:10.1109/ICNSC.2015.7116099
Abstract: The global revolution in information and information technology is playing a decisive role in social change. The Internet has become the most effective medium for information transmission, which makes guaranteeing network security a serious and worrying problem. Biometric identification technology has advantages including universality, uniqueness, stability, and resistance to theft. Compared with other biometric identification methods, such as fingerprint recognition, palm recognition, facial recognition, signature recognition, iris recognition, and retina recognition, gene recognition has the advantages of exclusiveness, permanence, convenience, and a large amount of information, and is considered the most important biometric identification method. With the development of modern technology, the fusion of biological technology and information technology has become an inevitable trend. Biometric identification technology will necessarily replace traditional identification technology and greatly change people's lifestyles in the near future.
Keywords: Internet; biometrics (access control); security of data; biological technology; biometric identification technology; facial recognition; fingerprint recognition; gene recognition; information technology; information transmission; iris recognition; network security; palm recognition; retina recognition; signature recognition; social change; Computer viruses; DNA; Encyclopedias; Internet; Servers; traditional identification technology (ID#: 15-7364)


Reney, Dolly; Tripathi, Neeta, “An Efficient Method to Face and Emotion Detection,” in Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, vol., no., pp. 493–497, 4–6 April 2015. doi:10.1109/CSNT.2015.155
Abstract: Face detection and emotion recognition are current topics in the security field, offering solutions to various challenges beyond the traditional difficulties of facial images captured under uncontrolled settings, such as varying poses, lighting, and expressions for face recognition, and different sound frequencies for emotion recognition. For any face and emotion detection system, the database is the most important part for comparing face features and sound Mel-frequency components. For database creation, features of the face are calculated and stored in the database. This database is then used for the evaluation of face and emotion by different algorithms. In this paper we implement an efficient method to create a face and emotion feature database, which is then used for face and emotion recognition of a person. For detecting a face in the input image we use the Viola-Jones face detection algorithm, and to evaluate face and emotion detection a KNN classifier is used.
Keywords: Classification algorithms; Databases; Detectors; Face; Face detection; Face recognition; Feature extraction; Face Detection; Facial Expression Recognition; Feature Extraction; KNN Classifier; Mel Frequency Component; Viola-Jones algorithm (ID#: 15-7365)
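The KNN classification step this entry relies on is simple enough to sketch directly. The feature vectors and labels below are hypothetical placeholders for the paper's face/emotion features.

```python
# Minimal k-nearest-neighbours classifier over feature vectors, the kind of
# matching step used once face/emotion features are stored in a database.
from collections import Counter

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=3):
    """Majority label among the k training samples nearest to the query.
    train is a list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda fv: euclidean(fv[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

In a full pipeline, Viola-Jones would first localize the face, feature extraction would produce the vectors, and this step would assign the identity or emotion label.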


Khare, S., “Finger Gesture and Pattern Recognition Based Device Security System,” in Signal Processing and Communication (ICSC), 2015 International Conference on, vol., no., pp. 443–447, 16–18 March 2015. doi:10.1109/ICSPCom.2015.7150694
Abstract: This research introduces a hand gesture recognition based system to recognize real-time gestures in natural environments and compare patterns against an image database, matching image pairs to trigger unlocking of mobile devices. Past efforts on security systems for mobile devices, such as draw-pattern unlock, passcodes, and facial and voice recognition technologies, have been employed to a fair extent, but these are quite susceptible to hacks and show a high ratio of recognition failures (especially voice and facial). A next step in HMI is a fingertip-tracking based unlocking mechanism, which employs minimalistic hardware such as a webcam or smartphone front camera. Image acquisition through MATLAB is followed by conversion to grayscale and application of an optimal edge-detection filter, utilized under different conditions for optimal results in recognizing fingertips to a precise level of accuracy. The pattern is traced at 60 fps for tracking and tracing, and cross-referenced with the training image using neural networks for improved recognition efficiency. Data is registered in real time, and the device is unlocked when SSIM takes a value above a predefined threshold. The mechanism is delivered through a user-friendly GUI frontend, with computational modelling through MATLAB for the backend.
Keywords: gesture recognition; image motion analysis; mobile handsets; neural nets; security; GUI frontend; MATLAB; SSIM; computational modelling; device security system; draw pattern unlock; edge detection; facial recognition technologies; failure error recognition; finger gesture; fingertip tracking; hand image acquisition; image database; image pair matching; mobile devices security systems; mobile devices unlocking; neural networks deployment; optimal filter; passcodes; pattern recognition; smartphone front camera; unlocking mechanism; voice recognition technologies; webcam; Biological neural networks; Pattern matching; Security; Training; Computer vision; HMI (Human Machine Interface); ORB;SIFT (Scale Invariant Feature Transform); SSIM (Structural Similarity Index Measure); SURF (Speed Up Robust Features) (ID#: 15-7366)
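The SSIM-above-threshold unlock decision described here can be sketched compactly. This is a simplified global SSIM over whole images given as flat pixel lists (real SSIM is computed over local windows and averaged); the constants follow the common choice C1 = (0.01 * 255)^2 and C2 = (0.03 * 255)^2, and the function names are mine.

```python
# Global SSIM between two equal-size grayscale images (flat lists of pixels),
# plus a threshold-based unlock decision like the one the paper describes.

def ssim(x, y, c1=6.5025, c2=58.5225):
    """Structural similarity in [-1, 1]; 1.0 means identical images."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def unlock(traced, stored, threshold=0.9):
    """Unlock when the traced pattern matches the stored training image."""
    return ssim(traced, stored) >= threshold
```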


Taleb, I.; Mammar, M.O., “Parking Access System Based on Face Recognition,” in Programming and Systems (ISPS), 2015 12th International Symposium on, vol., no., pp. 1–5, 28–30 April 2015. doi:10.1109/ISPS.2015.7244982
Abstract: The human face plays an important role in our social interaction, conveying people's identity. Using the human face as a key to security, biometric face recognition technology has received significant attention. Face recognition technology is very popular and widely used because it does not require any physical contact between the user and the device: a camera scans the user's face and matches it against a database for verification. Furthermore, it is easy to install and does not require any expensive hardware. Facial recognition technology is widely used in a variety of security systems, such as physical access control or computer user accounts. In this paper, we present a vehicle access control system for a parking facility, based on a camera installed at the parking entry. First, we use a nonadaptive method to detect the moving object, and we propose an algorithm to detect and recognize the face of the driver who wants to enter the parking facility and to verify whether entry is allowed. We use the Viola-Jones method for face detection, and we propose a new technique based on the PCA and LDA algorithms for face recognition.
Keywords: access control; face recognition; image motion analysis; object detection; principal component analysis; traffic engineering computing; LDA algorithm; PCA; Viola-Jones method; access control vehicle system; biometric face recognition technology; computer user accounts; face detection; human face; moving object detection; nonadaptive method; parking access system; physical access control; security; social interaction; Access control; Databases; Face; Face detection; Face recognition; Principal component analysis; Vehicles; Linear Discriminant Analysis (LDA); Moving object; Principal Component Analysis (PCA) (ID#: 15-7367)
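The "nonadaptive method" for detecting the moving vehicle is, in its simplest form, differencing each frame against a fixed background. A minimal sketch under that assumption (thresholds and names are hypothetical):

```python
# Nonadaptive background subtraction: compare each frame against a fixed
# background image and report motion when enough pixels change.

def detect_motion(background, frame, diff_thresh=25, min_pixels=4):
    """Return (motion_detected, change_mask) for one grayscale frame.
    A pixel counts as changed when its absolute difference from the
    fixed background exceeds diff_thresh."""
    changed = [[1 if abs(f - b) > diff_thresh else 0
                for f, b in zip(frow, brow)]
               for frow, brow in zip(frame, background)]
    return sum(map(sum, changed)) >= min_pixels, changed
```

Because the background model is never updated, this works best for a camera with a stable view, such as one fixed at a parking entry; face detection and recognition would then run only on frames where motion is flagged.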


Mohan, M.; R. Prem Kumar; Agrawal, R.; Sharma, S.; Dutta, M.K.; Travieso, C.M.; Alonso-Hernandez, J.B., “Finger Vein Recognition Using Integrated Responses of Texture Features,” in Bioinspired Intelligence (IWOBI), 2015 4th International Work Conference on, vol., no., pp. 209–214, 10–12 June 2015. doi:10.1109/IWOBI.2015.7160168
Abstract: The finger vein recognition system is a secure and reliable system with the advantage of robustness against malicious attacks. This biometric feature is more convenient to operate than others, such as facial and iris recognition systems. The paper proposes a unique technique to find local and global features using Integrated Responses of Texture (IRT) features from finger veins, which improves the overall accuracy of the system and is invariant to rotation. Segmentation of the region of interest at different resolution levels makes the system highly efficient: the lower-resolution data gives the overall global features and the higher-resolution data gives the distinct local features. The complete feature set is descriptive in nature and reduces the Equal Error Rate to 0.523%. A Multi-Support Vector Machine (Multi-SVM) is used to classify and match the obtained results. The experimental results indicate that the system is highly accurate, with an accuracy of 94%.
Keywords: biometrics (access control); feature extraction; fingerprint identification; image texture; security of data; support vector machines; vein recognition; IRT features; Multi-SVM; biometric feature; equal error rate; facial recognition system; finger vein recognition system; global features; integrated responses; integrated responses of texture; iris recognition system; malicious attacks; multi-support vector machine; reliable system; secure system; texture features; Accuracy; Databases; Feature extraction; Histograms; Thumb; Veins; Integrated Responses; Local Binary Pattern; Multi-Support Vector Machine (Multi- SVM); Pyramid Levels (ID#: 15-7368)


Matzner, S.; Heredia-Langner, A.; Amidan, B.; Boettcher, E.J.; Lochtefeld, D.; Webb, T., “Standoff Human Identification Using Body Shape,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225300
Abstract: The ability to identify individuals is a key component of maintaining safety and security in public spaces and around critical infrastructure. Monitoring an open space is challenging because individuals must be identified and re-identified from a standoff distance non-intrusively, making methods like fingerprinting and even facial recognition impractical. We propose using body shape features as a means for identification from standoff sensing, either complementing other identifiers or as an alternative. An important challenge in monitoring open spaces is reconstructing identifying features when only a partial observation is available, because of view-angle limitations and occlusion or subject pose changes. To address this challenge, we investigated the minimum number of features required for a high probability of correct identification, and we developed models for predicting a key body feature (height) from a limited set of observed features. We found that any set of nine randomly selected body measurements was sufficient to correctly identify an individual in a dataset of 4041 subjects. For predicting height, anthropometric measures were investigated for correlation with height; their correlation coefficients and associated linear models are reported. These results, a sufficient number of features for identification and height prediction from a single feature, contribute to developing systems for standoff identification when views of a subject are limited.
Keywords: biomedical measurement; body sensor networks; correlation methods; height measurement; probability; shape measurement; anthropometric measurement; associated linear model; body shape; correlation coefficient; facial recognition; feature reconstruction; fingerprinting; open space monitoring; probability; safety; security; sensor; standoff human identification; view-angle limitation; Correlation; Elbow; Feature extraction; Length measurement; Neck; Shape; Shoulder; anthropometrics; biometrics; feature selection (ID#: 15-7369)
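The "associated linear models" for predicting height from one observed measurement are ordinary least-squares fits. A minimal sketch; the shoulder-width/height numbers below are invented for illustration, not the paper's data.

```python
# Least-squares fit of height against a single anthropometric measurement,
# the kind of per-feature linear model the study reports.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares; return (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical (shoulder width cm, height cm) samples for illustration only.
widths = [38, 40, 42, 44, 46]
heights = [160, 165, 170, 175, 180]
a, b = fit_line(widths, heights)

def predict_height(width_cm):
    """Predicted height from one partial observation of the body."""
    return a * width_cm + b
```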


Dong Yi; Zhen Lei; Li, Stan Z., “Shared Representation Learning for Heterogenous Face Recognition,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, vol.1, no., pp. 1–7, 4–8 May 2015. doi:10.1109/FG.2015.7163093
Abstract: After intensive research, heterogenous face recognition is still a challenging problem. The main difficulties are owing to the complex relationship between heterogenous face image spaces. The heterogeneity is always tightly coupled with other variations, which makes the relationship of heterogenous face images highly nonlinear. Many excellent methods have been proposed to model the nonlinear relationship, but they are apt to overfit the training set, due to limited samples. Inspired by the unsupervised algorithms in deep learning, this paper proposes a novel framework for heterogeneous face recognition. We first extract Gabor features at some localized facial points, and then use Restricted Boltzmann Machines (RBMs) to learn a shared representation locally to remove the heterogeneity around each facial point. Finally, the shared representations of local RBMs are connected together and processed by PCA. The near infrared (NIR) to visible (VIS) face recognition problem and two databases are selected to evaluate the performance of the proposed method. On the CASIA HFB database, we obtain comparable results to state-of-the-art methods. On a more difficult database, CASIA NIR-VIS 2.0, we outperform other methods significantly.
Keywords: Boltzmann machines; face recognition; infrared imaging; learning (artificial intelligence); principal component analysis; CASIA HFB database; CASIA NIR-VIS 2.0; PCA; RBM; deep learning; heterogenous face image spaces; heterogenous face recognition; near infrared; nonlinear relationship; restricted Boltzmann machines; shared representation learning; training set; unsupervised algorithms; Databases; Face; Face recognition; Feature extraction; Principal component analysis; Standards; Training (ID#: 15-7370)


Varghese, Ashwini Ann; Cherian, Jacob P; Kizhakkethottam, Jubilant J, “Overview on Emotion Recognition System,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp. 1–5, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292443
Abstract: Human emotion recognition plays an important role in interpersonal relationships, and the automatic recognition of emotions has long been an active research topic, with several advances made in the field. Emotions are reflected in speech, in hand and body gestures, and in facial expressions; hence, extracting and understanding emotion is highly important for human-machine interaction. This paper describes the advances made in this field and the various approaches used for recognition of emotions. The main objective of the paper is to propose a real-time implementation of an emotion recognition system.
Keywords: Active appearance model; Emotion recognition; Face; Face recognition; Feature extraction; Speech; Speech recognition; Active Appearance Model; Decision level function; Facial Action Encoding; Feature level fusion; Hidden Markov Model; State Sequence ML classifier; affective states (ID#: 15-7371)


Brady, K., “Robust Face Recognition-Based Search and Retrieval Across Image Stills and Video,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–8, 14–16 April 2015. doi:10.1109/THS.2015.7225320
Abstract: Significant progress has been made in addressing face recognition channel, sensor, and session effects in both still images and video. These effects include the classic PIE (pose, illumination, expression) variation, as well as variations in other characteristics such as age and facial hair. While much progress has been made, there has been little formal work in characterizing and compensating for the intrinsic differences between faces in still images and video frames. These differences include that faces in still images tend to have neutral expressions and frontal poses, while faces in videos tend to have more natural expressions and poses. Typically faces in videos are also blurrier, have lower resolution, and are framed differently than faces in still images. Addressing these issues is important when comparing face images between still images and video frames. Also, face recognition systems for video applications often rely on legacy face corpora of still images and associated meta data (e.g. identifying information, landmarks) for development, which are not formally compensated for when applied to the video domain. In this paper we will evaluate the impact of channel effects on face recognition across still images and video frames for the search and retrieval task. We will also introduce a novel face recognition approach for addressing the performance gap across these two respective channels. The datasets and evaluation protocols from the Labeled Faces in the Wild (LFW) still image and YouTube Faces (YTF) video corpora will be used for the comparative characterization and evaluation. Since the identities of subjects in the YTF corpora are a subset of those in the LFW corpora, this enables an apples-to-apples comparison of in-corpus and cross-corpora face comparisons.
Keywords: face recognition; pose estimation; social networking (online); video retrieval; LFW; YTF; YouTube faces; classic PIE variation; frontal poses; image retrieval; image search; image stills; labeled faces in the wild; legacy face corpora; neutral expressions; robust face recognition; video frames; Face recognition; Gabor filters; Lighting; Gabor features; computer vision; formatting; pattern recognition (ID#: 15-7372)


Chia-Chin Tsao; Yan-Ying Chen; Yu-Lin Hou; Hsu, W.H., “Identify Visual Human Signature in Community via Wearable Camera,” in Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, vol., no., pp. 2229–2233, 19–24 April 2015. doi:10.1109/ICASSP.2015.7178367
Abstract: With the increasing popularity of wearable devices, information becomes much more easily available. However, personal information sharing still poses great challenges because of privacy issues. We propose the idea of a Visual Human Signature (VHS), which can represent each person uniquely even when captured in different views/poses by wearable cameras. We evaluate the performance of multiple effective modalities for recognizing an identity, including facial appearance, visual patches, facial attributes, and clothing attributes. We propose to emphasize significant dimensions and perform weighted voting fusion to incorporate the modalities and improve VHS recognition. By jointly considering multiple modalities, the VHS recognition rate can reach 51% in frontal images and 48% in the more challenging environment, and our approach surpasses the baseline with average fusion by 25% and 16%. We also introduce the Multiview Celebrity Identity Dataset (MCID), a new dataset containing hundreds of identities with different views and clothing for comprehensive evaluation.
Keywords: cameras; image recognition; security of data; sensor fusion; wearable computers; MCID; VHS recognition; clothing attributes; facial appearance; facial attributes; information sharing; multiview celebrity identity dataset; visual human signature; visual patches; wearable camera; wearable devices; weighted voting fusion; Clothing; Communities; Databases; Face; Feature extraction; Robustness; Visualization; Human Attributes; Visual Human Signature; Wearable Device; Weighted Voting (ID#: 15-7373)
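The weighted voting fusion step can be sketched as a weighted sum of per-modality candidate scores. The modality names, weights, and scores below are hypothetical, chosen only to illustrate the mechanism.

```python
# Sketch of weighted voting fusion across recognition modalities
# (e.g. facial appearance, clothing attributes), each of which scores
# every candidate identity independently.

def weighted_vote(modality_scores, weights):
    """Fuse per-modality scores with fixed weights.
    modality_scores: {modality: {candidate_id: score}}
    weights:         {modality: weight}
    Returns (best_candidate, fused_totals)."""
    totals = {}
    for modality, scores in modality_scores.items():
        w = weights.get(modality, 0.0)
        for cand, s in scores.items():
            totals[cand] = totals.get(cand, 0.0) + w * s
    best = max(totals, key=totals.get)
    return best, totals
```

Emphasizing "significant dimensions," as the paper does, corresponds to giving the more discriminative modalities larger weights instead of averaging them uniformly.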


Anjusree V.K.; Darsan, Gopu, “Interactive Email System Based on Blink Detection,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 1274–1277, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275788
Abstract: This work develops an interactive email application. An email system is considered personal private property nowadays, and it is not easy for people with disabilities to use normal devices to check their email; users need more interaction with their email. This interactive technology is based on eye-blink detection, so persons with disabilities can also use the system efficiently. The system is divided into two modules. First, to use such a system securely, it is vital to have a secure login module. Face recognition is used for login because it is the option that works as a security module with the lowest failure rate and good reliability; the Fisherface algorithm is used for face recognition because it is fast and robust to variations in lighting and facial expression. Second is a tracking phase, based on eye-blink detection, which lets the user interact with the email system after the email is loaded. In this phase a threshold-based approach detects whether a blink occurred and interprets blinks as control commands for interacting with the system. This application helps people check their email faster and more interactively without touching any device.
Keywords: Algorithm design and analysis; Computers; Correlation; Electronic mail; Face; Face recognition; Feature extraction; Blink Detection; Face Detection; Fisherface (ID#: 15-7374)
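A threshold-based blink detector of the kind described here can be sketched over a per-frame eye-openness signal. The signal values, threshold, and function names are hypothetical; in practice the openness measure would come from eye-region tracking.

```python
# Threshold-based blink detection over a per-frame eye-openness signal
# (1.0 = fully open, 0.0 = fully closed). A blink is a sustained run of
# frames below the threshold, so single-frame noise is ignored.

def detect_blinks(openness, threshold=0.2, min_closed_frames=2):
    """Count blinks in a sequence of per-frame openness values."""
    blinks, run = 0, 0
    for v in openness:
        if v < threshold:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    if run >= min_closed_frames:  # blink still in progress at end of signal
        blinks += 1
    return blinks
```

An interactive system would then map blink counts or long deliberate closures to control commands such as "open message" or "scroll."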


Babutain, Khalid; Alaklobi, Saied; Alghamdi, Anwar; Sasi, Sreela, “Automated Surveillance of Computer Monitors in Labs,” in Advances in Computing, Communications and Informatics (ICACCI), 2015 International Conference on, vol., no., pp. 1026–1030, 10–13 Aug. 2015. doi:10.1109/ICACCI.2015.7275745
Abstract: Object detection and recognition are still challenging, and they find application in surveillance systems. University computer labs may or may not have surveillance video cameras, and even when a camera is present it may lack an intelligent software system for automatic monitoring. A system for automated surveillance of computer monitors in labs was designed and developed for automatic detection of any missing monitors. The system can also detect the person responsible for removing a monitor: if this is an authorized person, the system displays the name and facial image from the database of all employees; if the person is unauthorized, the system generates an alarm, displays that person's face, and instantly sends an automated email with the facial image to the security department of the university. The simulation results confirm that this automated system could be used for monitoring any computer lab.
Keywords: Computers; Face; Face recognition; Object detection; Security; Surveillance; Face Detection and Recognition; Missing Monitor Detection; Object Detection and Recognition; Scanning Technique; Surveillance Systems (ID#: 15-7375)


Alkandari, A.; Aljaber, S.J., “Principle Component Analysis Algorithm (PCA) for Image Recognition,” in Computing Technology and Information Management (ICCTIM), 2015 Second International Conference on, vol., no., pp. 76–80, 21–23 April 2015. doi:10.1109/ICCTIM.2015.7224596
Abstract: This paper aims mainly to show the importance of algorithmic computing for identifying a facial image without human intervention. The current era demands a higher level of security and greater speed in searching for information, and among the most important such information is the capability of recognizing and identifying a person by his face. The Principle Component Analysis algorithm (PCA) is a useful statistical technique for finding patterns in high-dimensional data; it has found application in face recognition and image compression, where it is used to reduce the dimension of feature vectors so that images can be recognized more effectively.
Keywords: data compression; face recognition; principal component analysis; PCA; dimension vector reduction; facial image identification; image compression; image recognition; person identification; person recognition; principle component analysis algorithm; security level; statistical technique; Algorithm design and analysis; Face; Face recognition; Feature extraction; Image recognition; Principal component analysis; Training; Image analysis; Principle Component Analysis algorithm (PCA) (ID#: 15-7376)
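The dimensionality-reduction step PCA contributes to face recognition can be made concrete. Below is a minimal eigenfaces-style sketch, with random vectors standing in for flattened face images; the names `pca_project` and `nearest_face` are illustrative, not from the paper:

```python
import numpy as np

def pca_project(faces, k):
    """Project flattened face vectors onto the top-k principal components."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                       # k x d projection basis
    return (centered @ basis.T), basis, mean

def nearest_face(probe, gallery_coords, basis, mean):
    """Identify a probe by nearest neighbour in the reduced PCA space."""
    coords = (probe - mean) @ basis.T
    dists = np.linalg.norm(gallery_coords - coords, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 64))       # 5 "faces", 64-dim pixel stand-ins
coords, basis, mean = pca_project(gallery, k=3)
probe = gallery[2] + rng.normal(scale=0.01, size=64)  # noisy copy of face 2
print(nearest_face(probe, coords, basis, mean))  # → 2
```

Matching in the reduced k-dimensional space rather than on raw pixels is what makes the recognition step both faster and less noise-sensitive.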


Shalin Eliabeth S; Thomas, Bino; Kizhakkethottam, Jubilant J; “Analysis of Effective Biometric Identification on Monozygotic Twins,” in Soft-Computing and Networks Security (ICSNS), 2015 International Conference on, vol., no., pp.1–6, 25–27 Feb. 2015. doi:10.1109/ICSNS.2015.7292444
Abstract: One of the major challenges that biometric detection systems face is distinguishing monozygotic, or identical, twins, and the number of multiple births has been increasing over the past two decades. Identical twins cannot be distinguished by DNA, so the lack of a proper identification system can complicate many criminal cases. This paper examines biometric identification technologies based on the face, fingerprint, palm print, iris, retina, and voice for the verification of identical twins. The analysis shows that face detection based on facial marks is the most efficient approach for identifying identical twins. An automatic facial mark detector based on the fast radial symmetry transform helps locate facial marks properly, since manual annotation of facial marks does not give reliable results. The other features (fingerprint, palm print, iris, retina, etc.) are not unique between identical twins.
Keywords: Face; Face recognition; Fingerprint recognition; Iris recognition; Retina; Transforms; Biometric Identification; Face detection; Facial mark detection; Monozygotic twins; Multibiometric system (ID#: 15-7377)
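The full fast radial symmetry transform also normalises votes by radius and smooths the result with a Gaussian; the sketch below keeps only the core gradient-voting step, run on a synthetic image rather than real facial marks, to show why centres of small symmetric blobs (moles, marks) accumulate votes:

```python
import numpy as np

def radial_symmetry(img, radius):
    """Minimal fast-radial-symmetry-style vote map: each pixel's gradient
    casts a vote `radius` pixels along the gradient direction, so the
    centres of bright radially symmetric blobs collect many votes."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    votes = np.zeros_like(img, dtype=float)
    h, w = img.shape
    ys, xs = np.nonzero(mag > 1e-6)
    for y, x in zip(ys, xs):
        dy = int(round(radius * gy[y, x] / mag[y, x]))
        dx = int(round(radius * gx[y, x] / mag[y, x]))
        py, px = y + dy, x + dx   # "positively affected" pixel (bright side)
        if 0 <= py < h and 0 <= px < w:
            votes[py, px] += mag[y, x]
    return votes

# A dark image with one bright spot: votes should peak at the spot's centre.
img = np.zeros((15, 15))
img[7, 7] = 10.0
votes = radial_symmetry(img, radius=1)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(votes), votes.shape))
print(peak)  # → (7, 7)
```

Thresholding the vote map then yields candidate facial-mark locations without manual annotation.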


Zhencheng Hu; Uchida, Naoko; Yanming Wang; Yanchao Dong, “Face Orientation Estimation for Driver Monitoring with a Single Depth Camera,” in Intelligent Vehicles Symposium (IV), 2015 IEEE, pp. 958–963, 28 June–01 July 2015. doi:10.1109/IVS.2015.7225808
Abstract: Careless driving is a major factor in most traffic accidents. In the last decade, research on estimating face orientation and tracking facial features through consecutive images has shown very promising results for determining a driver’s level of concentration. The image source can be a monochrome camera, a stereo camera, or a depth camera. In this paper, we propose a novel approach to facial feature and face orientation detection using a single uncalibrated depth camera whose IR depth-pattern emitter is switched on and off. With this simple setup, we obtain both a depth image and an infrared image in a continuously alternating grab mode. The infrared images are used for facial feature detection and tracking, while the depth information is used for face region detection and face orientation estimation. The system is not limited to driver monitoring; it also suits other human-interface applications such as security and avatar systems.
Keywords: accidents; estimation theory; face recognition; feature extraction; monitoring; traffic engineering computing; careless driving; consequence images; driver monitoring system; face orientation estimation; facial features detection; monochrome camera; single depth camera; traffic accidents; Cameras; Estimation; Face; Facial features; Feature extraction; Lighting; Mouth
(ID#: 15-7378)
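The abstract does not detail how orientation is read from depth. One common baseline, which this sketch assumes rather than reproduces from the paper, is to fit a plane to the facial depth points and derive coarse yaw/pitch from the plane normal:

```python
import numpy as np

def face_normal(points):
    """Estimate the dominant facial plane's unit normal from 3-D points
    (least-squares fit via SVD); its tilt gives coarse yaw/pitch angles."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    normal = vt[-1]                      # direction of least variance
    if normal[2] < 0:                    # orient toward the camera (+z)
        normal = -normal
    yaw = np.degrees(np.arctan2(normal[0], normal[2]))
    pitch = np.degrees(np.arctan2(normal[1], normal[2]))
    return normal, yaw, pitch

# Synthetic frontal face: a flat grid of points on the z = 1 plane.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
pts = np.stack([xs.ravel(), ys.ravel(), np.ones(25)], axis=1)
_, yaw, pitch = face_normal(pts)
print(yaw, pitch)
```

A frontal face yields yaw and pitch near zero; a sustained large yaw would indicate the driver is looking away from the road.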


Xun Gong; Zehua Fu; Xinxin Li; Lin Feng, “A Two-Stage Estimation Method for Depth Estimation of Facial Landmarks,” in Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, vol., no., pp. 1–6, 23–25 March 2015. doi:10.1109/ISBA.2015.7126355
Abstract: To address the problem of 3D face modeling from a set of landmarks on images, the traditional feature-based morphable model, using face class-specific information, makes direct use of these 2D points to infer a dense 3D face surface. However, the unknown depth of the landmarks degrades accuracy considerably. A promising solution is to predict the depth of the landmarks first. Based on this idea, a two-stage estimation method is proposed to compute the depth of landmarks from two images. The estimated 3D landmarks are then fed into a deformation algorithm to produce a precise, dense 3D facial shape. Tests on synthesized images with known ground truth show that the proposed two-stage method obtains landmark depth both effectively and efficiently, and that reconstruction accuracy is greatly enhanced by the estimated 3D landmarks. Reconstruction results on real-world photos are quite realistic.
Keywords: face recognition; image reconstruction; 3D face modeling; 3D landmarks; deformation algorithm; dense 3D face surface; depth estimation; face class-specific information; facial landmarks; feature-based morphable model; precise 3D dense facial shape; synthesized images; two-stage estimation method; Computational modeling; Estimation; Face; Image reconstruction; Shape; Solid modeling; Three-dimensional displays (ID#: 15-7379)
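The paper's two-stage estimator is not reproduced in the abstract. As a reminder of the geometry that makes landmark depth recoverable from two images at all, here is the textbook disparity-to-depth relation for rectified stereo (the focal length and baseline values are made up):

```python
def stereo_depth(x_left, x_right, focal, baseline):
    """Classic two-view depth: Z = f * B / disparity, where disparity is the
    horizontal shift of a landmark between rectified left/right images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("landmark must shift leftward in the right image")
    return focal * baseline / disparity

# A landmark at pixel column 320 in the left image and 300 in the right,
# with a 500-pixel focal length and a 6 cm baseline:
z = stereo_depth(320, 300, focal=500, baseline=0.06)
print(z)  # → 1.5 (metres)
```

Any refinement stage, such as the paper's second estimation pass, starts from depths of this kind and corrects them before the dense surface deformation.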


Srividhya, K.; Manikanthan, S.V., “An Android Based Secure Access Control Using ARM and Cloud Computing,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 1486–1489, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124833
Abstract: Biometrics in the cloud infrastructure improves the security of the system. The physical characteristics used in biometrics are fingerprint, facial structure, iris pattern, voice, etc. Any of these characteristics can be used to identify and authenticate a person. This paper describes enrollment and identification for a system that restricts access to persons approved by higher officials. The physical traits are scanned using an Android mobile phone, and the enroll and recognize operations are carried out with the help of cloud computing. An LPC2148 ARM processor controls the overall system. The primary goal is to make the system highly secure and reliable. In this system, there is no need for a password.
Keywords: authorisation; biometrics (access control); cloud computing; mobile computing; smart phones; ARM processor; Android mobile phone; access control; cloud infrastructure biometrics; system security; Authentication; Cloud computing; Databases; Fingerprint recognition; Smart phones; authentication; enrollment and identification (ID#: 15-7380)
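The abstract names enrollment and identification as the two core operations. A toy in-memory sketch of that flow follows; the feature vectors, threshold, and class name are illustrative, and in the described system the matching would happen on the cloud side:

```python
import math

class BiometricStore:
    """Toy enrol/identify flow: templates are feature vectors, and
    identification is nearest-neighbour matching with a rejection threshold."""
    def __init__(self, threshold=1.0):
        self.templates = {}
        self.threshold = threshold

    def enroll(self, person, features):
        self.templates[person] = features

    def identify(self, features):
        best, best_d = None, float("inf")
        for person, tpl in self.templates.items():
            d = math.dist(tpl, features)
            if d < best_d:
                best, best_d = person, d
        return best if best_d <= self.threshold else None  # None = reject

db = BiometricStore(threshold=1.0)
db.enroll("alice", [0.1, 0.9, 0.4])
db.enroll("bob",   [0.8, 0.2, 0.7])
print(db.identify([0.15, 0.85, 0.45]))  # → alice
print(db.identify([5.0, 5.0, 5.0]))     # → None
```

The rejection threshold is what lets such a system replace a password: a probe that matches no enrolled template is simply denied access.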


Bin Yang; Junjie Yan; Zhen Lei; Li, Stan Z., “Fine-Grained Evaluation on Face Detection in the Wild,” in Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, vol.1, no., pp.1–7, 4–8 May 2015. doi:10.1109/FG.2015.7163158
Abstract: Current evaluation datasets for face detection, a task of great value in real-world applications, are somewhat out-of-date. We propose a new face detection dataset, MALF (short for Multi-Attribute Labelled Faces), which contains 5,250 images collected from the Internet and ~12,000 labelled faces. The MALF dataset has two main highlights: 1) it is the largest dataset for evaluation of face detection in the wild, and the annotation of multiple facial attributes makes fine-grained performance analysis possible; 2) to reveal the ‘true’ performance of algorithms in practice, MALF adopts an evaluation metric that emphasizes the recall rate at a relatively low false-alarm rate. Besides providing a large dataset for face detection evaluation, this paper also collects more than 20 state-of-the-art algorithms, from both academia and industry, and conducts a fine-grained comparative evaluation of them, which can be considered a summary of past advances in face detection. The dataset and up-to-date evaluation results can be found at http://
Keywords: Internet; face recognition; object detection; MALF; face detection dataset; fine-grained comparative evaluation; multiattribute labelled faces; multiple facial attribute annotation; recall rate; relatively low false alarm rate; Algorithm design and analysis; Benchmark testing; Detectors; Face; Face detection; Measurement; Object detection (ID#: 15-7381)
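The evaluation metric described, recall at a fixed false-alarm budget, can be computed directly from scored detections. A minimal sketch, with made-up detector scores:

```python
def recall_at_fppi(detections, num_true, num_images, max_fppi):
    """Recall at a false-positives-per-image budget: sweep detections by
    descending score, counting true positives until the number of false
    positives exceeds the budget."""
    budget = max_fppi * num_images
    tp = fp = 0
    for _, is_true in sorted(detections, key=lambda d: -d[0]):
        if is_true:
            tp += 1
        else:
            fp += 1
            if fp > budget:
                break
    return tp / num_true

# (score, is_true_face) pairs from a hypothetical detector on 10 images:
dets = [(0.95, True), (0.9, True), (0.85, False), (0.8, True),
        (0.6, False), (0.5, True), (0.4, False)]
print(recall_at_fppi(dets, num_true=5, num_images=10, max_fppi=0.1))  # → 0.6
```

Reporting recall at a low false-alarm operating point, rather than area under the full curve, rewards detectors that stay usable at the thresholds deployed in practice.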


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.