Biblio

Filters: Keyword is object detection
2021-06-30
Liu, Ming, Chen, Shichao, Lu, Fugang, Xing, Mengdao, Wei, Jingbiao.  2020.  A Target Detection Method in SAR Images Based on Superpixel Segmentation. 2020 IEEE 3rd International Conference on Electronic Information and Communication Technology (ICEICT). :528—530.
A synthetic aperture radar (SAR) target detection method based on the fusion of multiscale superpixel segmentations is proposed in this paper. First, SAR images are segmented into land and sea regions using superpixel techniques at different scales. Second, the image segmentation results are fused with the constant false alarm rate (CFAR) detection result. Finally, target detection is realized by fusing the results across scales. The effectiveness of the proposed algorithm is tested on Sentinel-1A data.
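The CFAR stage fused with the superpixel maps is a standard, well-defined component. The sketch below shows a minimal cell-averaging CFAR detector over a 2D intensity image, assuming exponentially distributed clutter; the window sizes and false-alarm rate are illustrative, not the paper's settings.

```python
import numpy as np

def ca_cfar_2d(image, guard=2, train=6, pfa=1e-3):
    """Cell-averaging CFAR over a 2D intensity image.

    For each cell under test, the clutter level is estimated from a ring
    of training cells (excluding a guard band around the cell), and the
    cell is declared a detection when it exceeds alpha * clutter level.
    """
    h, w = image.shape
    half = guard + train
    n_train = (2 * half + 1) ** 2 - (2 * guard + 1) ** 2
    # Scaling factor for the desired false-alarm rate (exponential clutter model).
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros_like(image, dtype=bool)
    for i in range(half, h - half):
        for j in range(half, w - half):
            window_sum = image[i - half:i + half + 1, j - half:j + half + 1].sum()
            guard_sum = image[i - guard:i + guard + 1, j - guard:j + guard + 1].sum()
            noise = (window_sum - guard_sum) / n_train
            detections[i, j] = image[i, j] > alpha * noise
    return detections
```

In the paper's pipeline, a map like this would then be fused with the multiscale land/sea superpixel segmentations to suppress false alarms on land.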
2021-05-13
Jain, Harsh, Vikram, Aditya, Mohana, Kashyap, Ankit, Jain, Ayush.  2020.  Weapon Detection using Artificial Intelligence and Deep Learning for Security Applications. 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). :193—198.
Security is a primary concern in every domain, due to rising crime rates in crowded events and in isolated areas. Abnormality detection and monitoring are major applications of computer vision for tackling such problems. With the growing demand for protection of safety, security, and personal property, video surveillance systems that can recognize and interpret scenes and anomalous events play a vital role in intelligent monitoring. This paper implements automatic gun (weapon) detection using convolutional neural network (CNN) based SSD and Faster R-CNN algorithms. The proposed implementation uses two types of datasets: one with pre-labelled images and another with images that were labelled manually. Results are tabulated; both algorithms achieve good accuracy, but their application in real situations depends on the trade-off between speed and accuracy.
2021-04-08
Zheng, Y., Cao, Y., Chang, C..  2020.  A PUF-Based Data-Device Hash for Tampered Image Detection and Source Camera Identification. IEEE Transactions on Information Forensics and Security. 15:620—634.
With the increasing prevalence of digital devices and their abuse for digital content creation, forgeries of digital images and video footage are more rampant than ever. Digital forensics is challenged to seek advanced technologies for forged-content detection and acquisition-device identification. Unfortunately, existing solutions that address image tampering fail to identify the device that produced the images or footage, while techniques that can identify the camera are incapable of locating the tampered content of its captured images. In this paper, a new perceptual data-device hash is proposed to locate maliciously tampered image regions and identify the source camera of the received image data as a non-repudiable attestation in digital forensics. The presented image may have been tampered with, or may have gone through benign content-preserving geometric transforms or image processing operations. The proposed image hash is generated by projecting the invariant image features into a physical unclonable function (PUF)-defined Bernoulli random space. The tamper-resistant random PUF response is unique for each camera and can only be generated upon being triggered by a challenge, which is provided by the image acquisition timestamp. The proposed hash is evaluated on the modified CASIA database and a CMOS image sensor-based PUF simulated using 180 nm TSMC technology. It achieves a high tamper detection rate of 95.42% with the regions of tampered content successfully located, a good authentication performance of above 98.5% against standard content-preserving manipulations, and 96.25% and 90.42%, respectively, for the more challenging geometric transformations of rotation (0–360°) and scaling (scale factor in each dimension: 0.5). It is demonstrated to identify the source camera with 100% accuracy and is secure against attacks on the PUF.
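The core hashing step described here, projecting invariant features into a PUF-defined Bernoulli random space, can be sketched as follows. The PUF response is simulated by an integer seed (the real response would come from camera hardware), and the feature vector and hash length are illustrative assumptions.

```python
import numpy as np

def data_device_hash(features, puf_response, n_bits=64):
    """Hash a feature vector through a Bernoulli {-1, +1} random projection.

    The projection matrix is derived from the (here simulated) PUF
    response, so the same image features hashed on a different device
    yield a different hash, binding image content to the source camera.
    """
    rng = np.random.default_rng(puf_response)
    # Bernoulli random projection: entries are +/-1 with equal probability.
    proj = rng.choice([-1.0, 1.0], size=(n_bits, len(features)))
    # Sign of each projection gives one hash bit.
    return (proj @ np.asarray(features, dtype=float) >= 0).astype(np.uint8)
```

Two devices (seeds) produce different hashes for the same features, while the same device reproduces its hash exactly, which is the binding property the paper relies on.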
2021-03-29
Xu, X., Ruan, Z., Yang, L..  2020.  Facial Expression Recognition Based on Graph Neural Network. 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC). :211—214.

Facial expressions are among the most powerful, natural, and immediate means for human beings to present their emotions and intentions. In this paper, we present a novel method for fully automatic facial expression recognition. Facial landmarks are detected to characterize facial expressions, and a graph convolutional neural network is proposed for feature extraction and facial expression classification. Experiments were performed on three facial expression databases. Results show that the proposed FER method achieves good recognition accuracy, up to 95.85%.

2021-02-08
Nikouei, S. Y., Chen, Y., Faughnan, T. R..  2018.  Smart Surveillance as an Edge Service for Real-Time Human Detection and Tracking. 2018 IEEE/ACM Symposium on Edge Computing (SEC). :336—337.

Monitoring for security and well-being in highly populated areas is a critical issue for city administrators, policy makers, and urban planners. As an essential part of many dynamic and critical data-driven tasks, situational awareness (SAW) provides decision-makers with deeper insight into the meaning of urban surveillance, so surveillance measures are increasingly needed. However, traditional surveillance platforms are not scalable when more cameras are added to the network. In this work, smart surveillance as an edge service is proposed. To accomplish the object detection, identification, and tracking tasks at the edge-fog layers, two novel lightweight algorithms are proposed, for detection and tracking respectively. A prototype has been built to validate the feasibility of the idea, and the test results are very encouraging.

Nisperos, Z. A., Gerardo, B., Hernandez, A..  2020.  Key Generation for Zero Steganography Using DNA Sequences. 2020 12th International Conference on Electronics, Computers and Artificial Intelligence (ECAI). :1–6.
Some of the key challenges in steganography are imperceptibility and resistance to detection by steganalysis algorithms. Zero steganography is an approach to data hiding in which the cover image is not modified. This paper focuses on the generation of the stego-key, which is an essential component of this steganographic approach. The approach utilizes DNA sequences together with shifting and flipping operations on their binary code representation. Experimental results show that the key generation algorithm has a low cracking probability and satisfies the avalanche criterion.
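The abstract does not give the exact encoding, shift, or flip rules, so the sketch below is only a hedged illustration of the described construction: nucleotides mapped to a binary code, then circularly shifted and selectively flipped to derive a stego-key. The 2-bit mapping, shift amount, and flip pattern are all illustrative assumptions.

```python
def dna_stego_key(dna_sequence, shift=3, key_bits=128):
    """Derive a stego-key bit string from a DNA sequence (illustrative sketch).

    Each nucleotide is mapped to 2 bits, the bit string is repeated or
    truncated to the key length, circularly shifted, and every other bit
    is flipped. The paper's exact rules are not given in the abstract.
    """
    encode = {"A": "00", "C": "01", "G": "10", "T": "11"}
    bits = "".join(encode[n] for n in dna_sequence.upper())
    bits = (bits * ((key_bits // len(bits)) + 1))[:key_bits]  # repeat/truncate
    bits = bits[shift:] + bits[:shift]                        # circular shift
    # Flip every even-indexed bit.
    return "".join(b if i % 2 else str(1 - int(b)) for i, b in enumerate(bits))
```

A deterministic derivation like this lets sender and receiver regenerate the same key from a shared DNA sequence without ever modifying the cover image.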
2021-02-03
Sabu, R., Yasuda, K., Kato, R., Kawaguchi, S., Iwata, H..  2020.  Does visual search by neck motion improve hemispatial neglect?: An experimental study using an immersive virtual reality system. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :262—267.

Unilateral spatial neglect (USN) is a higher cognitive dysfunction that can occur after a stroke. It is defined as an impairment in finding, reporting, reacting to, and directing attention toward stimuli on the side opposite the damaged side of the brain. We have proposed a system to identify neglected regions in USN patients in three dimensions using three-dimensional virtual reality. The objectives of this study are twofold: first, to propose a system for numerically identifying the neglected regions using an object detection task in a virtual space; and second, to compare the neglected regions during object detection when the patient's neck is immobilized ('fixed-neck' condition) versus when the neck can be moved freely to search ('free-neck' condition). We performed the test using an immersive virtual reality system, once with the patient's neck fixed and once with it free to move. Comparing the results in two patients, we found that the neglected areas were similar in the fixed-neck condition. In the free-neck condition, however, one patient's neglect improved while the other patient's neglect worsened. These results suggest that exploratory ability affects the symptoms of USN and is crucial for clinical evaluation of USN patients.

2021-01-15
Brockschmidt, J., Shang, J., Wu, J..  2019.  On the Generality of Facial Forgery Detection. 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW). :43—47.
A variety of architectures have been designed or repurposed for the task of facial forgery detection. While many of these designs have seen great success, they largely fail to address challenges these models may face in practice. A major challenge is posed by generality, wherein models must be prepared to perform in a variety of domains. In this paper, we investigate the ability of state-of-the-art facial forgery detection architectures to generalize. We first propose two criteria for generality: reliably detecting multiple spoofing techniques and reliably detecting unseen spoofing techniques. We then devise experiments which measure how a given architecture performs against these criteria. Our analysis focuses on two state-of-the-art facial forgery detection architectures, MesoNet and XceptionNet, both being convolutional neural networks (CNNs). Our experiments use samples from six state-of-the-art facial forgery techniques: Deepfakes, Face2Face, FaceSwap, GANnotation, ICface, and X2Face. We find MesoNet and XceptionNet show potential to generalize to multiple spoofing techniques but with a slight trade-off in accuracy, and largely fail against unseen techniques. We loosely extrapolate these results to similar CNN architectures and emphasize the need for better architectures to meet the challenges of generality.
Kharbat, F. F., Elamsy, T., Mahmoud, A., Abdullah, R..  2019.  Image Feature Detectors for Deepfake Video Detection. 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA). :1—4.
Detecting DeepFake videos is one of the challenges in digital media forensics. This paper proposes a method to detect deepfake videos using Support Vector Machine (SVM) regression. The SVM classifier can be trained with feature points extracted using one of several feature-point detectors, such as the HOG, ORB, BRISK, KAZE, SURF, and FAST algorithms. A comprehensive test of the proposed method is conducted using a dataset of original and fake videos from the literature, with different feature point detectors tested. The results show that the proposed method of using feature detector-descriptors to train the SVM can be used effectively to detect fake videos.
Khalid, H., Woo, S. S..  2020.  OC-FakeDect: Classifying Deepfakes Using One-class Variational Autoencoder. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2794—2803.
An image forgery method called Deepfakes can cause security and privacy issues by changing the identity of a person in a photo through the replacement of his/her face with a computer-generated image or another person's face. Therefore, a new challenge of detecting Deepfakes arises to protect individuals from potential misuse. Many researchers have proposed various binary-classification based detection approaches. However, binary-classification based methods generally require a large amount of both real and fake face images for training, and it is challenging to collect sufficient fake image data in advance. Besides, when new deepfake generation methods are introduced, little deepfake data will be available, and detection performance may be mediocre. To overcome these data scarcity limitations, we formulate deepfake detection as a one-class anomaly detection problem. We propose OC-FakeDect, which uses a one-class Variational Autoencoder (VAE) to train only on real face images and detects non-real images such as deepfakes by treating them as anomalies. Our preliminary result shows that our one-class-based approach can be promising for detecting Deepfakes, achieving 97.5% accuracy on the NeuralTextures data of the well-known FaceForensics++ benchmark dataset without using any fake images for the training process.
Younus, M. A., Hasan, T. M..  2020.  Effective and Fast DeepFake Detection Method Based on Haar Wavelet Transform. 2020 International Conference on Computer Science and Software Engineering (CSASE). :186—190.
DeepFake videos tampered using Generative Adversarial Networks (GANs) reveal a new challenge in today's life. With the inception of GANs, generating high-quality fake videos has become much easier and far more realistic, so the development of efficient tools that can automatically detect these fake videos is of paramount importance. The proposed DeepFake detection method takes advantage of the fact that current DeepFake generation algorithms cannot generate face images at varied resolutions: they can only generate new faces at a limited size and resolution, so further distortion and blur are needed to match and fit the fake face to the background and surrounding context in the source video. This transformation causes a distinctive blur inconsistency between the generated face and its background in the resulting DeepFake videos. These artifacts can be spotted effectively by examining, in each frame, the edge pixels of the face in the wavelet domain compared to the rest of the frame. This paper presents a blur inconsistency detection scheme based on edge type and sharpness analysis using the Haar wavelet transform; with this feature, the method can determine whether the face region in a video has been blurred, and to what extent, leading to the detection of DeepFake videos. The effectiveness of the proposed scheme is demonstrated experimentally on the "UADFV" dataset, where a very successful detection rate of more than 90.5% was achieved.
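The blur-inconsistency cue can be illustrated with a single-level 2D Haar decomposition: detail sub-band energy drops sharply in blurred regions, so a face patch much smoother than its own background is suspicious. This is a hedged sketch, not the paper's full edge-type analysis, and the ratio threshold is illustrative.

```python
import numpy as np

def haar_edge_energy(patch):
    """Mean high-frequency energy from a single-level 2D Haar transform.

    Sharp regions concentrate energy in the LH/HL/HH detail sub-bands;
    a blurred (composited) face region shows markedly less.
    """
    p = patch[:patch.shape[0] // 2 * 2, :patch.shape[1] // 2 * 2].astype(float)
    a = p[0::2, 0::2]; b = p[0::2, 1::2]; c = p[1::2, 0::2]; d = p[1::2, 1::2]
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return float(np.mean(lh**2 + hl**2 + hh**2))

def blur_inconsistent(face_patch, background_patch, ratio=0.5):
    """Flag a frame when the face region is much blurrier than its background."""
    return haar_edge_energy(face_patch) < ratio * haar_edge_energy(background_patch)
```

Comparing the face crop against the surrounding frame rather than using an absolute threshold makes the cue robust to globally soft footage.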
Zhu, K., Wu, B., Wang, B..  2020.  Deepfake Detection with Clustering-based Embedding Regularization. 2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC). :257—264.

In recent months, AI-synthesized face-swapping videos, referred to as deepfakes, have become an emerging problem. Fake video is becoming more and more difficult to distinguish, which brings a series of challenges to social security. Some scholars are devoted to studying how to improve the detection accuracy of deepfake video, and to support this research, datasets for deepfake detection have been made. Companies such as Google and Facebook have also spent huge sums of money to produce datasets for deepfake video detection, as well as holding deepfake detection competitions. The continuous advancement of video tampering technology and the improvement of video quality have brought great challenges to deepfake detection: some scholars have achieved good results on existing datasets, while results on some high-quality datasets are not as good as expected. In this paper, we propose a new method with clustering-based embedding regularization for deepfake detection. We use open-source algorithms to generate videos that simulate the distinctive artifacts in deepfake videos. To improve the local smoothness of the representation space, we integrate a clustering-based embedding regularization term into the classification objective, so that the obtained model learns to resist adversarial examples. We evaluate our method on three of the latest deepfake datasets, and experimental results demonstrate its effectiveness.

2021-01-11
Mihanpour, A., Rashti, M. J., Alavi, S. E..  2020.  Human Action Recognition in Video Using DB-LSTM and ResNet. 2020 6th International Conference on Web Research (ICWR). :133—138.

Human action recognition in video is one of the most widely applied topics in the field of image and video processing, with many applications in surveillance (security, sports, etc.), activity detection, video-content-based monitoring, man-machine interaction, and health/disability care. Action recognition is a complex process that faces several challenges such as occlusion, camera movement, viewpoint change, background clutter, and brightness variation. In this study, we propose a novel human action recognition method using convolutional neural networks (CNN) and deep bidirectional LSTM (DB-LSTM) networks, using only raw video frames. First, deep features are extracted from video frames using a pre-trained CNN architecture called ResNet152. The sequential information of the frames is then learned using the DB-LSTM network, where multiple layers are stacked together in both the forward and backward passes of the DB-LSTM to increase depth. The evaluation results of the proposed method using PyTorch, compared to the state-of-the-art methods, show a considerable increase in the efficiency of action recognition on the UCF101 dataset, reaching 95% recognition accuracy. The choice of the CNN architecture, proper tuning of input parameters, and techniques such as data augmentation contribute to the accuracy boost in this study.

Khadka, A., Argyriou, V., Remagnino, P..  2020.  Accurate Deep Net Crowd Counting for Smart IoT Video acquisition devices. 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS). :260—264.

A novel deep neural network is proposed for accurate and robust crowd counting. Crowd counting is a complex task, as it strongly depends on the deployed camera's characteristics and, above all, the scene perspective. Crowd counting is essential in security applications where Internet of Things (IoT) cameras are deployed to help with crowd management tasks. The complexity of a scene varies greatly, and a medium- to large-scale security system based on IoT cameras must cater for changes in perspective and in how people appear from different vantage points. To address this, our deep architecture extracts multi-scale features with a pyramid contextual module to provide long-range contextual information and enlarge the receptive field. Experiments were run on three major crowd counting datasets to test the proposed method. Results demonstrate that our method surpasses the performance of state-of-the-art methods.

Khudhair, A. B., Ghani, R. F..  2020.  IoT Based Smart Video Surveillance System Using Convolutional Neural Network. 2020 6th International Engineering Conference “Sustainable Technology and Development” (IEC). :163—168.

Video surveillance plays an important role in our times. It is a great help in reducing the crime rate, and it can also help to monitor the status of facilities. The performance of video surveillance systems is limited by human factors such as fatigue, time efficiency, and human resources. It would be beneficial for all if fully automatic video surveillance systems were employed to do the job. The automation of video surveillance systems is still not satisfactory, with problems such as detector accuracy, bandwidth consumption, and storage usage. This paper focuses on a video surveillance system using Convolutional Neural Networks (CNN), IoT, and the cloud. The system contains multiple nodes; each node consists of a microprocessor (Raspberry Pi) and a camera, and the nodes communicate with each other using a client-server architecture. The nodes detect humans using a pretrained MobileNetv2-SSDLite model trained on the Common Objects in Context (COCO) dataset. Captured video is streamed to the main node (only one node communicates with the cloud), which streams the video to the cloud; the main node also sends an SMS notification to inform the security team of a human detection. The security team can check the captured videos using a mobile application or web application. Running a deep learning object detection model requires a large amount of computational power, while the Raspberry Pi is limited in performance; for that reason the MobileNetv2-SSDLite model was used.

Fomin, I., Burin, V., Bakhshiev, A..  2020.  Research on Neural Networks Integration for Object Classification in Video Analysis Systems. 2020 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM). :1—5.

Object recognition with outdoor video surveillance cameras is an important task in ensuring security at enterprises, public places, and even private premises. Systems that detect moving objects in the image sequence from a video surveillance system have long existed; such a system is partially considered in this research. It detects moving objects using a background model, which has certain problems: some objects are missed or detected falsely. We propose combining the moving-object detection results with classification using a deep neural network. This makes it possible to determine whether a detected object belongs to a certain class, to sort out false detections, to discard unnecessary ones (sometimes individual classes are unwanted), to divide detected people into uniformed employees and all others, etc. The authors perform network training in the developer-friendly Keras environment, which provides for quick building, changing, and training of network architectures. The performance of the Keras integration into a video analysis system, using direct Python script execution, is between 6 and 52 ms, while precision is between 59.1% and 97.2% for different architectures. An integration made by freezing a selected network architecture with its weights is chosen after testing; the frozen architecture can then be imported into the video analysis system through the TensorFlow interface for C++. The performance of this type of integration is between 3 and 49 ms, with precision between 63.4% and 97.8% for different architectures.

Liu, X., Gao, W., Feng, D., Gao, X..  2020.  Abnormal Traffic Congestion Recognition Based on Video Analysis. 2020 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). :39—42.

The incidence of abnormal road traffic events, especially abnormal traffic congestion, is becoming more and more prominent in daily traffic management in China, and timely detection and identification of traffic congestion incidents has become a main research focus of urban traffic management. Efficient and accurate detection of congestion incidents can inform good traffic management strategy. At present, detection and recognition of congestion events mainly rely on integrating road traffic flow data with passing data collected by electronic police or checkpoint devices, and then estimating and forecasting road conditions through big data analysis. Such methods often suffer from low timeliness, low precision, and a small prediction range. Therefore, using the video surveillance equipment already deployed by public security and traffic police in large and medium-sized cities, this paper applies computer vision technology to analyze traffic flow from video monitoring: the motion state and changing trend of the vehicle flow are obtained using deep learning based vehicle detection and multi-target tracking, so as to realize the perception and recognition of traffic congestion. The method achieves real-time recognition within 60 seconds, a congestion event detection rate of more than 80%, and a detection accuracy of more than 82.5%. At the same time, it breaks through the restrictions of traditional big data prediction based on traffic flow data, truck-pass data, and GPS floating car data, and enlarges the scene and scope of detection.
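The final congestion decision from multi-target tracking output can be reduced to a density-plus-speed rule, sketched below under illustrative thresholds (the abstract does not specify the paper's actual decision logic or units).

```python
def congestion_state(tracks, speed_threshold=10.0, density_threshold=15):
    """Classify a road segment from multi-target tracking output.

    `tracks` maps a vehicle id to a list of (frame, x, y) positions.
    Congestion is declared when many vehicles are present and their
    average displacement per frame is low. All thresholds are illustrative.
    """
    if len(tracks) < density_threshold:
        return "free-flow"
    speeds = []
    for positions in tracks.values():
        if len(positions) < 2:
            continue
        (f0, x0, y0), (f1, x1, y1) = positions[0], positions[-1]
        frames = max(f1 - f0, 1)
        speeds.append(((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / frames)
    mean_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return "congested" if mean_speed < speed_threshold else "free-flow"
```

In practice the per-frame pixel displacements would be calibrated to road-plane speeds via the camera's perspective transform before thresholding.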

Gautam, A., Singh, S..  2020.  A Comparative Analysis of Deep Learning based Super-Resolution Techniques for Thermal Videos. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :919—925.

Video streams acquired from thermal cameras have proven beneficial in a diverse number of fields, including the military, healthcare, law enforcement, and security. Despite the hype, thermal imaging suffers from poor resolution, expensive optical sensors, and an inability to attain optical precision. In recent years, deep learning based super-resolution algorithms have been developed to enhance video frame resolution with high accuracy. This paper presents a comparative analysis of super-resolution (SR) techniques based on deep neural networks (DNNs) applied to a thermal video dataset. SRCNN, EDSR, Auto-encoder, and SRGAN are discussed and investigated, and results on benchmark thermal datasets, including FLIR, the OSU thermal pedestrian database, and the OSU color-thermal database, are evaluated and analyzed. Based on the experimental results, it is concluded that SRGAN delivers superior performance on thermal frames compared to the other techniques and has the ability to provide state-of-the-art performance in real-time operations.

2020-12-28
Slavic, G., Campo, D., Baydoun, M., Marin, P., Martin, D., Marcenaro, L., Regazzoni, C..  2020.  Anomaly Detection in Video Data Based on Probabilistic Latent Space Models. 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS). :1—8.

This paper proposes a method for detecting anomalies in video data. A Variational Autoencoder (VAE) is used to reduce the dimensionality of video frames, generating latent-space information that is comparable to low-dimensional sensory data (e.g., positioning, steering angle) and making feasible the development of a consistent multi-modal architecture for autonomous vehicles. An Adapted Markov Jump Particle Filter defined by discrete and continuous inference levels is employed to predict the following frames and detect anomalies in new video sequences. Our method is evaluated on different video scenarios where a semi-autonomous vehicle performs a set of tasks in a closed environment.

2020-12-17
Iskhakov, A., Jharko, E..  2020.  Approach to Security Provision of Machine Vision for Unmanned Vehicles of “Smart City”. 2020 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM). :1—5.

By analogy with nature, sight is the main integral component of robotic complexes, including unmanned vehicles. One of the urgent tasks in the modern development of unmanned vehicles is therefore to provide security for the new advanced systems, algorithms, methods, and principles of robot navigation. In this paper, we present an approach to the protection of machine vision systems based on deep learning technologies. At the heart of the approach lies the "Feature Squeezing" method, which works during model operation and allows "adversarial" examples to be detected. Considering the urgency and importance of the target process, the features of unmanned-vehicle hardware platforms, and the necessity of detecting objects in real time, we propose carrying out an additional simple computational procedure for localizing and classifying the required objects whenever a predefined threshold of the "adversarial" object test is crossed.
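Feature Squeezing (Xu et al.) compares a model's prediction on an input with its prediction on a "squeezed" version of the same input, flagging large disagreement as a suspected adversarial example. A minimal sketch, using bit-depth reduction as the squeezer and an illustrative L1 threshold:

```python
import numpy as np

def squeeze_bit_depth(x, bits=4):
    """Reduce color bit depth: a basic feature-squeezing transform."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

def is_adversarial(model, x, threshold=0.5):
    """Feature Squeezing test.

    `model` maps an image array to a probability vector. A large L1 gap
    between the prediction on the original and on the squeezed input
    flags a suspected adversarial example; the threshold is illustrative.
    """
    p_orig = np.asarray(model(x), dtype=float)
    p_squeezed = np.asarray(model(squeeze_bit_depth(x)), dtype=float)
    return float(np.abs(p_orig - p_squeezed).sum()) > threshold
```

The intuition is that adversarial perturbations live in the low-order bits the squeezer destroys, so a legitimate input's prediction barely moves while an adversarial one's shifts sharply.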

Maram, S. S., Vishnoi, T., Pandey, S..  2019.  Neural Network and ROS based Threat Detection and Patrolling Assistance. 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP). :1—5.

By bringing a uniform development platform that seamlessly combines hardware components and software architectures from developers across the globe, and by reducing the complexity of producing robots that help people in their daily lives, ROS has turned out to be a game changer. It is disappointing to see the lack of penetration of technology in verticals that involve protection, defense, and security. By leveraging the power of ROS in the field of robotic automation and computer vision, this research paves a path for the identification of suspicious activity by autonomously moving bots that run on ROS. The paper proposes and validates a flow in which ROS and computer vision algorithms such as YOLO work in sync with each other to provide smarter and more accurate methods for indoor and limited outdoor patrolling. Identification of age, gender, weapons, and other elements that can disturb public harmony is an integral part of the research and development process. Simulation and testing reflect the efficiency and speed of the designed software architecture.

Lagraa, S., Cailac, M., Rivera, S., Beck, F., State, R..  2019.  Real-Time Attack Detection on Robot Cameras: A Self-Driving Car Application. 2019 Third IEEE International Conference on Robotic Computing (IRC). :102—109.

The Robot Operating System (ROS) is being deployed for multiple life-critical activities such as self-driving cars, drones, and industrial applications. However, security has been persistently neglected, especially for the image flows incoming from camera robots. In this paper, we perform a structured security assessment of robot cameras using ROS. We point out a relevant number of security flaws that can be used to take over the flows incoming from the robot cameras. Furthermore, we propose an intrusion detection system to detect abnormal flows. Our defense approach is based on image comparisons and an unsupervised anomaly detection method. We experiment with our approach on robot cameras embedded in a self-driving car.

2020-12-14
Lim, K., Islam, T., Kim, H., Joung, J..  2020.  A Sybil Attack Detection Scheme based on ADAS Sensors for Vehicular Networks. 2020 IEEE 17th Annual Consumer Communications Networking Conference (CCNC). :1–5.
Vehicular Ad Hoc Network (VANET) is a promising technology for autonomous driving, as it provides many benefits and user conveniences to improve road safety and driving comfort. The Sybil attack is one of the most serious threats in vehicular communications, because attackers can generate multiple forged identities to disseminate false messages, disrupting safety-related services or misusing the systems. To address this issue, we propose a Sybil attack detection scheme using ADAS (Advanced Driving Assistant System) sensors installed on modern passenger vehicles, without the assistance of trusted third-party authorities or infrastructure. A deep learning based object detection technique is used to accurately identify nearby objects for Sybil attack detection, and a multi-step verification process minimizes false positives.
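The cross-check at the heart of such a scheme can be sketched as follows: every identity claimed over V2V whose reported position has no matching ADAS-sensed object nearby is flagged as a suspected Sybil. The matching tolerance and the data shapes are illustrative assumptions, not the paper's protocol.

```python
def detect_sybil(claimed_positions, sensed_positions, tolerance=2.0):
    """Cross-check V2V position claims against ADAS object detections.

    `claimed_positions` maps a claimed vehicle id to its reported (x, y);
    `sensed_positions` is a list of (x, y) objects detected by the ego
    vehicle's own sensors. A claim with no physically sensed object
    within `tolerance` meters is flagged as a suspected Sybil identity.
    """
    suspects = []
    for vehicle_id, (cx, cy) in claimed_positions.items():
        matched = any(((cx - sx) ** 2 + (cy - sy) ** 2) ** 0.5 <= tolerance
                      for sx, sy in sensed_positions)
        if not matched:
            suspects.append(vehicle_id)
    return suspects
```

Because the check uses only the ego vehicle's own sensors, it needs no trusted infrastructure, matching the paper's stated design goal; the multi-step verification would then re-test suspects over several frames before raising an alarm.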
2020-12-11
Abratkiewicz, K., Gromek, D., Samczynski, P..  2019.  Chirp Rate Estimation and micro-Doppler Signatures for Pedestrian Security Radar Systems. 2019 Signal Processing Symposium (SPSympo). :212—215.

A new approach to micro-Doppler signal analysis is presented in this article. Novel chirp rate estimators in the time-frequency domain are used for this purpose, providing the chirp rate of micro-Doppler signatures and allowing the classification of objects in the urban environment. As an example verifying the method, a signal from a high-resolution linear frequency modulated continuous wave (FMCW) radar, recording an echo reflected from a pedestrian, was used to validate the proposed chirp rate estimation algorithms. The obtained results are plotted on saturated accelerograms, giving an additional parameter dedicated to target classification in security systems that utilize radar sensors for target detection.
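For a clean linear FM component, the chirp rate can be recovered by fitting a quadratic to the unwrapped instantaneous phase. The sketch below uses this basic time-domain estimator as an illustration only; it is not the paper's time-frequency-domain estimator, which must cope with multi-component micro-Doppler returns.

```python
import numpy as np

def estimate_chirp_rate(signal, fs):
    """Estimate the chirp rate k (Hz/s) of a complex linear FM signal.

    A linear chirp has instantaneous phase
        phi(t) = 2*pi*(f0*t + 0.5*k*t**2),
    so fitting a quadratic to the unwrapped phase recovers k from the
    t**2 coefficient: c2 = pi * k.
    """
    t = np.arange(len(signal)) / fs
    phase = np.unwrap(np.angle(signal))
    coeffs = np.polyfit(t, phase, 2)   # phase ~ c2*t^2 + c1*t + c0
    return coeffs[0] / np.pi           # c2 = pi*k  ->  k = c2/pi
```

This only works when one chirp dominates and the sampling rate keeps the per-sample phase step below pi, which is why multi-component pedestrian echoes call for time-frequency estimators instead.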

2020-12-07
Handa, A., Garg, P., Khare, V..  2018.  Masked Neural Style Transfer using Convolutional Neural Networks. 2018 International Conference on Recent Innovations in Electrical, Electronics Communication Engineering (ICRIEECE). :2099–2104.

In painting, humans can draw an interrelation between the style and the content of a given image in order to enhance visual experiences. Deep neural networks such as convolutional neural networks are being used to draw satisfying conclusions to this problem of neural style transfer due to their exceptional results in key areas of visual perception such as object detection and face recognition. In this study, along with style transfer on the whole image, it is also outlined how the transfer of style can be performed only on specific parts of the content image, which is accomplished using masks. The style is transferred so that there is a minimal loss to the content image, i.e., the semantics of the image are preserved.