Biblio

Filters: Keyword is lighting
Szolga, L.A., Groza, R.G..  2020.  Phosphor Based White LED Driver by Taking Advantage on the Remanence Effect. 2020 IEEE 26th International Symposium for Design and Technology in Electronic Packaging (SIITME). :265–269.
This paper presents the development of a control circuit to enhance the performance of LED lamps. To this end, the luminous intensity of normal LED-based lamps and mid-power ones was compared under both continuous and switching conditions. The already well-known control technologies were analyzed, and a study was conducted to increase lighting performance by raising the operating frequency and magnifying the contribution of the remanence effect, thus increasing the efficiency of the light source. To achieve this, in the first stage of the project the power and control circuits were modeled according to the desired parameters and tested in simulation software. In the second stage, the proposed circuit was implemented as functional blocks, and in the last stage, tests were made on the circuit and on light sources in order to process the results. Power consumption was nearly halved, and the luminous flux rose by 15% due to the overcurrent and remanence effects exploited.
Bezzine, Ismail, Khan, Zohaib Amjad, Beghdadi, Azeddine, Al-Maadeed, Noor, Kaaniche, Mounir, Al-Maadeed, Somaya, Bouridane, Ahmed, Cheikh, Faouzi Alaya.  2020.  Video Quality Assessment Dataset for Smart Public Security Systems. 2020 IEEE 23rd International Multitopic Conference (INMIC). :1–5.
Security and monitoring systems, especially those dedicated to video surveillance, are increasingly demanding in terms of quality, reliability and flexibility. The quality of the acquired video signal strongly affects the performance of high-level tasks such as visual tracking, face detection and recognition. The design of a video quality assessment metric dedicated to this particular application requires a preliminary study of the common distortions encountered in video surveillance. To this end, we present in this paper a dataset dedicated to video quality assessment in the context of video surveillance. This database consists of a set of common distortions at different levels of annoyance. The subjective tests are performed using a classical pair comparison protocol with some new configurations. The subjective results obtained through the psycho-visual tests are analyzed and compared to some objective video quality assessment metrics. The preliminary results are encouraging and open a new framework for building smart video-surveillance-based security systems.
Beghdadi, Azeddine, Bezzine, Ismail, Qureshi, Muhammad Ali.  2020.  A Perceptual Quality-driven Video Surveillance System. 2020 IEEE 23rd International Multitopic Conference (INMIC). :1–6.
Video-based surveillance systems often suffer from poor-quality video in uncontrolled environments. This may strongly affect the performance of high-level tasks such as visual tracking, abnormal event detection or, more generally, scene understanding and interpretation. This work aims to demonstrate the impact and importance of video quality in video surveillance systems. Here, we focus on the most important challenges and difficulties related to the perceptual quality of acquired or transmitted images/videos in uncontrolled environments. In this paper, we propose an architecture of a smart surveillance system that incorporates the perceptual quality of acquired scenes. We study the behaviour of some state-of-the-art video quality metrics on original and distorted sequences from a dedicated surveillance dataset. This study shows that some state-of-the-art image/video quality metrics do not work in the context of video surveillance. It opens a new research direction: developing video quality metrics suited to video surveillance, and proposing a new quality-driven framework for video surveillance systems.
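As a hedged illustration of the kind of objective metric such a study benchmarks (PSNR is a standard full-reference metric, though the abstract does not name the specific metrics used), a minimal sketch over flat pixel lists:

```python
# Illustrative sketch, not the paper's code: PSNR between a reference
# frame and a distorted version of it, over flattened 8-bit pixel values.
import math

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

ref  = [52, 55, 61, 59, 79, 61, 76, 61]
dist = [54, 55, 60, 59, 78, 62, 76, 60]      # mildly distorted copy
assert psnr(ref, dist) > psnr(ref, [0] * 8)  # heavier distortion, lower PSNR
```

A surveillance-specific finding like the one above would show such pixel-fidelity scores failing to track perceived quality under surveillance distortions.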
Plager, Trenton, Zhu, Ying, Blackmon, Douglas A..  2020.  Creating a VR Experience of Solitary Confinement. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). :692–693.
The goal of this project is to create a realistic VR experience of solitary confinement and study its impact on users. Although there have been active debates and studies on this subject, very few people have personal experience of solitary confinement. Our first aim is to create such an experience in VR to raise the awareness of solitary confinement. We also want to conduct user studies to compare the VR solitary confinement experience with other types of media experiences, such as films or personal narrations. Finally, we want to study people’s sense of time in such a VR environment.
Olejnik, Lukasz.  2020.  Shedding light on web privacy impact assessment: A case study of the Ambient Light Sensor API. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :310–313.
As modern web browsers gain new and increasingly powerful features, impact assessments of the new functionality become crucial. A web privacy impact assessment of a planned web browser feature, the Ambient Light Sensor API, indicated risks arising from the exposure of overly precise information about the lighting conditions in the user's environment. The analysis led to the demonstration of direct risks of leaks of user data, such as the list of visited websites or the exfiltration of sensitive content across distinct browser contexts. Our work contributed to the creation of web standards, leading to decisions by browser vendors (i.e. obsolescence, non-implementation or modification of the operation of browser features). We highlight the need to consider broad risks when reviewing new features. We offer practically driven, high-level observations lying at the intersection of web security, privacy risk engineering and modeling, and standardization. We structure our work as a case study of activities spanning three years.
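The class of leak described works because a page can modulate screen brightness and read the reflected light back through a precise sensor. A simulated sketch of that covert channel (all numbers and names are illustrative, not from the paper):

```python
# Illustrative side-channel sketch: an attacker page flashes the screen
# bright/dark to encode bits (e.g. visited-link state), and overly precise
# ambient light readings pick up the reflected brightness change.
import random

def screen_to_lux(bit, ambient=30.0, reflection=20.0, noise=1.5):
    """Simulated sensor reading: bright frames raise the measured lux."""
    return ambient + (reflection if bit else 0.0) + random.gauss(0, noise)

def decode(readings, ambient=30.0, reflection=20.0):
    """Threshold each reading halfway between the two expected levels."""
    threshold = ambient + reflection / 2
    return [1 if r > threshold else 0 for r in readings]

random.seed(7)
secret = [1, 0, 1, 1, 0, 0, 1, 0]          # bits the attacker wants to leak
readings = [screen_to_lux(b) for b in secret]
assert decode(readings) == secret          # precise lux values leak the bits
```

Coarsening the reported lux values or requiring a permission, as the vendors' decisions mentioned above did, destroys the margin this decoder relies on.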

Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58—63.
Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact on human-computer interaction. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset), which is relatively new, interesting and very challenging, as the main dataset for our study.

Singh, S., Nasoz, F..  2020.  Facial Expression Recognition with Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0324–0328.
Emotions are a powerful tool in communication, and one way humans show their emotions is through facial expressions. Facial expression recognition is a challenging and important task in social communication, as facial expressions are key to non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate the classification of FER based on static images, using CNNs, without requiring any pre-processing or feature extraction tasks. The paper also illustrates techniques to further improve accuracy using pre-processing, which includes face detection and illumination correction, and feature extraction to extract the most prominent parts of the face: the jaw, mouth, eyes, nose, and eyebrows. Furthermore, we review the literature and present our CNN architecture, including the use of max-pooling and dropout, which eventually led to better performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared to the 75.2% state-of-the-art accuracy.
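The two layers this abstract credits for the performance gain are standard CNN building blocks. A minimal pure-Python sketch of both (parameters and code are mine, not the authors'):

```python
# Sketch of 2x2 max-pooling and (inverted) dropout, the two mechanisms
# the paper reports as aiding performance.
import random

def max_pool2x2(fmap):
    """Downsample a 2D feature map by taking the max of each 2x2 block."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, w, 2)] for i in range(0, h, 2)]

def dropout(vec, p=0.5, training=True):
    """Zero each activation with probability p; rescale to keep the mean."""
    if not training:
        return vec
    return [0.0 if random.random() < p else v / (1 - p) for v in vec]

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 5],
        [0, 1, 9, 2],
        [3, 2, 4, 8]]
assert max_pool2x2(fmap) == [[4, 5], [3, 9]]   # strongest response per block
```

Pooling gives the network tolerance to small facial shifts; dropout fights the overfitting that small FER datasets invite.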

John, A., MC, A., Ajayan, A. S., Sanoop, S., Kumar, V. R..  2020.  Real-Time Facial Emotion Recognition System With Improved Preprocessing and Feature Extraction. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :1328–1333.
Human emotion recognition plays a vital role in interpersonal communication and the human-machine interaction domain. Emotions are expressed through speech, hand gestures, the movements of other body parts, and facial expressions. Facial emotions are one of the most important factors in human communication that help us understand what the other person is trying to communicate; people understand only one-third of a message verbally, and two-thirds of it through non-verbal means. There are many face emotion recognition (FER) systems at present, but in real-life scenarios they do not perform efficiently, even though many claim to be near-perfect systems that achieve their results only under favourable, optimal conditions. The wide variety of expressions shown by people and the diversity of facial features across individuals make it difficult to arrive at a definitive system. Hence, developing a reliable system without the flaws shown by existing systems is a challenging task. This paper aims to build an enhanced system that can analyse the exact facial expression of a user at a particular time and output the corresponding emotion. Datasets like JAFFE and FER2013 were used for performance analysis. Pre-processing methods like facial landmarks and HOG were incorporated into a convolutional neural network (CNN), which achieved good accuracy compared with already existing models.

Chiang, M., Lau, S..  2011.  Automatic multiple faces tracking and detection using improved edge detector algorithm. 2011 7th International Conference on Information Technology in Asia. :1–5.
Automatic face tracking and detection has been one of the fastest-developing areas due to its wide range of applications, security and surveillance in particular. It remains a subject of great interest that has yet to be wholly explored, owing to various distinctive factors: varying ethnic groups, sizes, orientations, poses, occlusions and lighting conditions. The focus of this paper is to propose an improved algorithm that speeds up the face tracking and detection process, using a simple and efficient novel edge detector to reject non-face-like regions and hence reduce the false detection rate in automatic face tracking and detection in still images with multiple faces, for a facial expression system. The combination of Haar face detection and the proposed novel edge detector achieves a correct rate of 95.9%, which is 6.1% higher than the primitive integration of the Haar and Canny edge detectors.
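The rejection idea can be sketched cheaply: windows whose edge content is implausible for a face are discarded before the costlier Haar stage. This is my own hedged simplification of that pipeline stage, not the paper's detector:

```python
# Illustrative pre-filter: measure edge density in a candidate window and
# reject windows too smooth (blank wall) or too busy to contain a face.
def edge_density(win, thresh=4):
    """Fraction of horizontal pixel pairs whose intensity jump > thresh."""
    edges = total = 0
    for row in win:
        for a, b in zip(row, row[1:]):
            edges += abs(a - b) > thresh
            total += 1
    return edges / total

def face_like(win, lo=0.05, hi=0.6):
    """Keep only windows with a face-plausible amount of edge structure."""
    return lo <= edge_density(win) <= hi

flat_wall = [[10] * 8 for _ in range(8)]   # textureless region
assert not face_like(flat_wall)            # rejected before the Haar stage
```

Every window rejected here is a window the Haar cascade never has to score, which is where the reported speed-up and false-detection reduction come from.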

Matern, F., Riess, C., Stamminger, M..  2019.  Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations. 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). :83–92.
High-quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. As a consequence, we ask how difficult it is to expose artificial faces from current generators. To this end, we review current facial editing methods and several characteristic artifacts from their processing pipelines. We also show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easily explained, even to non-technical audiences. The methods are easy to implement and offer the capability of rapid adjustment to new manipulation types with little data available. Despite their simplicity, the methods achieve AUC values of up to 0.866.
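The 0.866 figure is an area under the ROC curve. For reference, a minimal sketch of how AUC is computed from per-image artifact scores (the scores below are invented for illustration):

```python
# AUC as the probability that a randomly chosen fake scores higher than a
# randomly chosen real image (ties count half).
def auc(scores_fake, scores_real):
    wins = 0.0
    for f in scores_fake:
        for r in scores_real:
            wins += 1.0 if f > r else (0.5 if f == r else 0.0)
    return wins / (len(scores_fake) * len(scores_real))

fake = [0.9, 0.8, 0.4]   # artifact scores on manipulated faces (made up)
real = [0.3, 0.5, 0.1]   # artifact scores on genuine faces (made up)
assert abs(auc(fake, real) - 8 / 9) < 1e-9
```

An AUC of 0.866 thus means a random manipulated face outranks a random genuine one about 87% of the time, notably far from the 0.5 of a blind guess.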
Kobayashi, Hiroyuki.  2019.  CEPHEID: the infrastructure-less indoor localization using lighting fixtures' acoustic frequency fingerprints. IECON 2019 - 45th Annual Conference of the IEEE Industrial Electronics Society. 1:6842–6847.
This paper presents a new indoor localization scheme called “CEPHEID” that uses ceiling lighting fixtures. It is based on the fact that each lighting fixture has its own characteristic flickering pattern. The author proposes a technique to identify individual lights using simple instruments and a DNN classifier. Thanks to its modest hardware requirements, CEPHEID can be implemented with a few simple discrete electronic components and an ordinary smartphone. A prototype “CEPHEID dongle” is also introduced in this paper. Finally, the validity of the author's method is examined through indoor positioning experiments.
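The fingerprinting premise can be sketched without the DNN: if each fixture flickers at a slightly different rate, the dominant frequency of a short light-intensity recording identifies the fixture. All frequencies and names below are illustrative assumptions, not CEPHEID's actual parameters:

```python
# Toy fixture identification: pick the candidate flicker frequency with
# the largest DFT magnitude in a short photodiode recording.
import math

def dominant_freq(samples, rate, candidates):
    def power(f):
        re = sum(s * math.cos(2 * math.pi * f * n / rate)
                 for n, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * f * n / rate)
                 for n, s in enumerate(samples))
        return re * re + im * im
    return max(candidates, key=power)

rate = 8000                      # audio-rate sampling of light intensity
fixtures = {100.0: "lamp A", 120.0: "lamp B", 143.0: "lamp C"}
# 0.1 s recording under a fixture flickering at 120 Hz:
signal = [math.sin(2 * math.pi * 120.0 * n / rate) for n in range(800)]
assert fixtures[dominant_freq(signal, rate, fixtures)] == "lamp B"
```

Real ballasts and drivers have messier, harmonic-rich signatures, which is presumably why the paper classifies them with a DNN rather than a single peak.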
Manaka, Keisuke, Chen, Liyuan, Habuchi, Hiromasa, Kozawa, Yusuke.  2019.  Proposal of Equal-Weight (2, 2) Visual Secret Sharing Scheme on VN-CSK Illumination Light Communication. 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS). :1–5.
The variable N-parallel code-shift-keying (VN-CSK) system has been proposed to solve the dimming control problem and adjacent-light interference in illumination light communication. However, VN-CSK only focuses on separating the light signals in areas where illumination from adjacent fixtures overlaps, whereas it is conceivable to transmit new data using the light overlap itself. The visual secret sharing (VSS) scheme is a kind of secret sharing scheme that distributes secret data into shares for security and restores it by overlapping the shares; it therefore has a high affinity with visible light communication. In this paper, a system combining visible light communication with a (2,2)-VSS scheme is proposed. The proposed system uses a modified pseudo-orthogonal M-sequence (MPOM) so that 0 and 1 each occur in a share with probability one-half, which keeps the illuminance constant and thus preserves the lighting function. The bit error rate performance of the proposed system is evaluated by simulation under an indoor visible light communication channel.
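A (2,2)-VSS with the equal-weight property can be sketched as follows. This is my simplified binary model of the scheme's structure, not the paper's MPOM construction:

```python
# Sketch of equal-weight (2,2) visual secret sharing: each secret bit
# expands to a subpixel pair, every share carries exactly one '1' per pair
# (constant illuminance), and only stacking both shares reveals the bit.
import random

def make_shares(secret_bits):
    share1, share2 = [], []
    for bit in secret_bits:
        pair = random.choice([(1, 0), (0, 1)])           # equal-weight pattern
        share1.append(pair)
        # secret 0: identical pairs (stack stays half-dark);
        # secret 1: complementary pairs (stack is fully covered)
        share2.append(pair if bit == 0 else (1 - pair[0], 1 - pair[1]))
    return share1, share2

def stack(s1, s2):
    """Overlapping two light patterns acts like a per-subpixel OR."""
    return [1 if (a0 | b0) and (a1 | b1) else 0
            for (a0, a1), (b0, b1) in zip(s1, s2)]

random.seed(1)
secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
assert stack(s1, s2) == secret                 # restored only by overlapping
assert all(a + b == 1 for a, b in s1 + s2)     # constant illuminance per pair
```

Each share alone is a uniformly random half-on pattern, so a receiver seeing only one lamp's signal learns nothing, which is the security property the combination with VN-CSK exploits.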
Bundalo, Zlatko, Veljko, Momčilo, Bundalo, Dušanka, Kuzmić, Goran, Sajić, Mirko, Ramakić, Adnan.  2019.  Energy Efficient Embedded Systems for LED Lighting Control in Traffic. 2019 8th Mediterranean Conference on Embedded Computing (MECO). :1–4.
The paper considers, proposes and describes possibilities and ways to apply, design and implement energy-efficient microprocessor-based embedded systems for LED lighting control in traffic. Using LED lighting technology and appropriately designed embedded systems, it is possible to implement very efficient, smart systems for a very wide range of traffic applications. Systems of this type can be used in the many places in traffic where quality lighting and low energy consumption are needed. Their application increases energy efficiency, lighting quality and traffic safety, and decreases the total cost of lighting. A way to design and use such a digital embedded system to effectively increase the functionality and efficiency of lighting in traffic is proposed and described, along with one practically designed and implemented simple and universal embedded system for LED lighting control suitable for many traffic applications.
Lian, J., Wang, X., Noshad, M., Brandt-Pearce, M..  2018.  Optical Wireless Interception Vulnerability Analysis of Visible Light Communication System. 2018 IEEE International Conference on Communications (ICC). :1–6.
Visible light communication is a solution for high-security wireless data transmission. In this paper, we first analyze the potential vulnerability of the system to eavesdropping from outside the room. By setting a signal-to-noise ratio threshold, we define a vulnerable area outside the room, through a window. We compute the receiver aperture needed to capture the signal and determine which portion of the space is most vulnerable to eavesdropping. Based on this analysis, we propose a solution that improves security by optimizing the modulation efficiency of each LED in the indoor lamp. Simulation results show that the proposed solution improves security considerably while maintaining indoor communication performance.
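The SNR-threshold idea reduces to a link-budget check: received optical power falls with distance, and the region where SNR still clears the threshold is the vulnerable area. A back-of-the-envelope sketch (every number here is illustrative, not taken from the paper's channel model):

```python
# Toy line-of-sight link budget: inverse-square falloff from the lamp to
# an eavesdropper's aperture outside the window.
import math

def snr_db(distance_m, tx_power_w=1.0, aperture_m2=1e-4, noise_w=1e-9):
    """SNR in dB for an isotropic-ish source and a small receiver aperture."""
    rx_power = tx_power_w * aperture_m2 / (4 * math.pi * distance_m ** 2)
    return 10 * math.log10(rx_power / noise_w)

def vulnerable(distance_m, threshold_db=10.0):
    """Inside the vulnerable area if SNR still exceeds the threshold."""
    return snr_db(distance_m) >= threshold_db

# Close to the window the signal is usable; far away it drops below threshold.
assert vulnerable(3.0) and not vulnerable(100.0)
```

Sweeping `distance_m` (and angle, in the paper's full model) until `vulnerable` flips traces out the boundary of the vulnerable area; enlarging `aperture_m2` shows how a determined eavesdropper extends it.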
Alhafidh, B. M. H., Allen, W. H..  2017.  High Level Design of a Home Autonomous System Based on Cyber Physical System Modeling. 2017 IEEE 37th International Conference on Distributed Computing Systems Workshops (ICDCSW). :45–52.
The process of building an autonomous smart home system using Cyber-Physical Systems (CPS) principles has received much attention from researchers and developers. However, there are many challenges in the design and implementation of such a system, such as portability, timing, prediction, and integrity. This paper presents a novel modeling methodology for a smart home system, at the cyber-physical interface, that attempts to overcome these issues. We discuss a high-level design approach that simulates the first three levels of the 5C architecture of CPS layers in a smart home environment. A detailed description of the model design and architecture, and a software implementation via NetLogo simulation, are presented in this paper.
Jin, Y., Eriksson, J..  2017.  Fully Automatic, Real-Time Vehicle Tracking for Surveillance Video. 2017 14th Conference on Computer and Robot Vision (CRV). :147–154.
We present an object tracking framework which fuses multiple unstable video-based methods and supports automatic tracker initialization and termination. To evaluate our system, we collected a large dataset of hand-annotated 5-minute traffic surveillance videos, which we are releasing to the community. To the best of our knowledge, this is the first publicly available dataset of such long videos, providing a diverse range of real-world object variation, scale change, interaction, different resolutions and illumination conditions. In our comprehensive evaluation using this dataset, we show that our automatic object tracking system often outperforms state-of-the-art trackers, even when these are provided with proper manual initialization. We also demonstrate tracking throughput improvements of 5× or more vs. the competition.
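One building block such tracking systems rely on, including for the evaluation against manually initialized trackers, is matching a detection to an existing track across frames. A hedged sketch of the standard intersection-over-union association step (this is a generic illustration, not the paper's fusion method):

```python
# Associate a track with the detection that overlaps it most, using IoU
# over axis-aligned boxes given as (x1, y1, x2, y2).
def iou(a, b):
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # intersection height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

track = (10, 10, 50, 50)                   # vehicle position in last frame
detections = [(200, 40, 240, 80), (12, 14, 52, 54)]
best = max(detections, key=lambda d: iou(track, d))
assert best == (12, 14, 52, 54)            # the overlapping detection wins
```

Automatic initialization and termination then amount to starting a new track for detections no track claims, and retiring tracks that go unmatched for several frames.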