
Found 109 results

Filters: Keyword is Cameras
Kaur, Ketanpreet, Sharma, Vikrant, Sachdeva, Monika.  2020.  Framework for FOGIoT based Smart Video Surveillance System (SVSS). 2020 International Conference on Computational Performance Evaluation (ComPE). :797–799.
In today's increasingly digitalized world, everything is connected and only a few touches away. Our phones are connected to the things around us; we can even watch live video of our home, shop, institute, or company from a phone. However, we cannot track suspicious activity 24/7, so a smart system is needed to detect any suspicious activity as it takes place and automatically notify us before a robbery or other dangerous activity occurs. We propose a framework that tackles this security problem using sensor-enabled IoT cameras connected through a fog layer (hence FOGIoT), which consists of small servers configured with a human activity analysis algorithm. Any suspicious activity detected is reported to the responsible personnel so that due action can be taken.
Elbasi, Ersin.  2020.  Reliable abnormal event detection from IoT surveillance systems. 2020 7th International Conference on Internet of Things: Systems, Management and Security (IOTSMS). :1–5.
Surveillance systems are widely used in airports, streets, banks, military areas, borders, hospitals, and schools. There are two types of surveillance systems: real-time systems and offline systems. Usually, security personnel watch the video feeds in monitoring rooms to spot abnormal human activities. Real-time human tracking from videos is very expensive, especially in airports, at borders, and on streets, due to the huge number of surveillance cameras. A great deal of research has been done on automated surveillance systems. In this paper, we present a new surveillance system that recognizes human activities from several cameras using machine learning algorithms. Sequences of images are collected from cameras in indoor or outdoor areas using Internet of Things technology. A feature vector is created for each recognized moving object, and machine learning algorithms are then applied to extract the object's activities. The proposed abnormal event detection system gives very promising results, with more than 96% accuracy for the Multilayer Perceptron, Iterative Classifier Optimizer, and Random Forest algorithms.
Seneviratne, Piyumi, Perera, Dilanka, Samarasekara, Harinda, Keppitiyagama, Chamath, Thilakarathna, Kenneth, De Soyza, Kasun, Wijesekara, Primal.  2020.  Impact of Video Surveillance Systems on ATM PIN Security. 2020 20th International Conference on Advances in ICT for Emerging Regions (ICTer). :59–64.
ATM transactions are verified using two-factor authentication: the PIN is one factor (something you know) and the ATM card is the other (something you have). Banks therefore make significant investments in PIN mailers and HSMs to preserve security and confidentiality in the generation, validation, management, and delivery of PINs to their customers. Moreover, banks install surveillance cameras inside ATM cubicles as a physical security measure to prevent fraud and theft. However, in some cases the ATM PIN pad and the PIN entry process are revealed in the surveillance camera footage itself. We demonstrate that visibility of forearm movements is sufficient to infer PINs with a significant level of accuracy. Video footage of the PIN entry process, simulated in an experimental setup, was analyzed using two approaches. The human-observer-based approach shows that a PIN can be guessed with 30% accuracy within 3 attempts, whilst computer-assisted analysis of the footage gave an accuracy of 50%. The results confirm that ad hoc installation of surveillance cameras can weaken ATM PIN security significantly by potentially exposing one factor of a two-factor authentication system. Our investigation also revealed that there are no guidelines, standards, or regulations governing the placement of surveillance cameras inside ATM cubicles in Sri Lanka.
Lehman, Sarah M., Alrumayh, Abrar S., Ling, Haibin, Tan, Chiu C..  2020.  Stealthy Privacy Attacks Against Mobile AR Apps. 2020 IEEE Conference on Communications and Network Security (CNS). :1—5.
The proliferation of mobile augmented reality applications and the toolkits to create them have serious implications for user privacy. In this paper, we explore how malicious AR app developers can leverage capabilities offered by commercially available AR libraries, and describe how edge computing can be used to address this privacy problem.
Wu, Chongke, Shao, Sicong, Tunc, Cihan, Hariri, Salim.  2020.  Video Anomaly Detection using Pre-Trained Deep Convolutional Neural Nets and Context Mining. 2020 IEEE/ACS 17th International Conference on Computer Systems and Applications (AICCSA). :1—8.
Anomaly detection is critically important for intelligent surveillance systems to detect malicious activities in a timely manner. Many video anomaly detection approaches using deep learning focus on a single camera video stream with a fixed scenario, and they require large-scale training data and have high model complexity. As a solution, in this paper we show how to use pre-trained convolutional neural network models to perform feature extraction and context mining, and then use a denoising autoencoder of relatively low model complexity to provide efficient and accurate anomaly detection, which can be useful for resource-constrained devices such as edge devices of the Internet of Things (IoT). Our anomaly detection model makes decisions based on high-level features derived from selected embedded computer vision models for object classification and object detection. Additionally, we derive contextual properties from the high-level features to further improve the performance of our video anomaly detection method. We use two UCSD datasets to demonstrate that our approach, despite its relatively low model complexity, achieves performance comparable to state-of-the-art approaches.
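The decision rule the abstract describes (score a frame's high-level features by how well a low-complexity model fitted to normal data reconstructs them) can be sketched as follows. A low-rank projection stands in for the trained denoising autoencoder, and the feature vectors are synthetic:

```python
import numpy as np

def fit_normal_subspace(normal_feats, k=2):
    """Fit a low-rank subspace to features of normal frames (a crude
    stand-in for training the paper's denoising autoencoder)."""
    mu = normal_feats.mean(axis=0)
    _, _, vt = np.linalg.svd(normal_feats - mu, full_matrices=False)
    return mu, vt[:k]                      # mean and top-k directions

def anomaly_score(feat, mu, basis):
    """Reconstruction error of a feature vector; high error => anomaly."""
    centered = feat - mu
    recon = basis.T @ (basis @ centered)
    return float(np.linalg.norm(centered - recon))

rng = np.random.default_rng(0)
# normal high-level features lie near a 2-D subspace of an 8-D feature space
normal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8))
normal += 0.01 * rng.normal(size=normal.shape)
mu, basis = fit_normal_subspace(normal)

# threshold from the worst score seen on normal data, with some margin
thresh = 1.5 * max(anomaly_score(f, mu, basis) for f in normal)

# an anomalous feature vector: a strong component orthogonal to the subspace
v = rng.normal(size=8)
v -= basis.T @ (basis @ v)                 # remove the in-subspace part
anomalous = mu + 5.0 * v / np.linalg.norm(v)
assert anomaly_score(anomalous, mu, basis) > thresh
```

Anything that the normal-data model cannot reconstruct scores high, which is the same mechanism the paper applies to autoencoder reconstructions of deep features.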
Chu, Wen-Yi, Yu, Ting-Guang, Lin, Yu-Kai, Lee, Shao-Chuan, Hsiao, Hsu-Chun.  2020.  On Using Camera-based Visible Light Communication for Security Protocols. 2020 IEEE Security and Privacy Workshops (SPW). :110–117.
In security protocol design, Visible Light Communication (VLC) has often been abstracted as an ideal channel that is resilient to eavesdropping, manipulation, and jamming. Camera Communication (CamCom), a subcategory of VLC, further strengthens the level of security by providing a visually verifiable association between the transmitter and the extracted information. However, the ideal security guarantees of visible light channels may not hold in practice due to limitations and tradeoffs introduced by hardware, software, configuration, environment, etc. This paper presents our experience and lessons learned from implementing CamCom for security protocols. We highlight CamCom's security-enhancing properties and security applications that it enables. Backed by real implementation and experiments, we also systematize the practical considerations of CamCom-based security protocols.
Zhang, Mingyue, Zhou, Junlong, Cao, Kun, Hu, Shiyan.  2020.  Trusted Anonymous Authentication For Vehicular Cyber-Physical Systems. 2020 International Conferences on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics). :37—44.
In vehicular cyber-physical systems, cameras mounted on vehicles, together with fixed roadside cameras, produce pictorial data for multiple purposes. In this process, ensuring the security and privacy of vehicles while guaranteeing efficient data transmission among them is critical. This motivates us to propose a trusted anonymous authentication scheme for vehicular cyber-physical systems and the Internet of Things. Our scheme is designed on a three-tier architecture comprising a cloud layer, a fog layer, and a user layer. It utilizes bilinear-pairing-free certificateless signcryption to realize secure and trusted anonymous authentication efficiently. We verify its effectiveness through theoretical analyses of correctness, security, and efficiency. Furthermore, our simulation results demonstrate that the communication overhead, computation overhead, and packet loss rate of the proposed scheme are significantly better than those of the state-of-the-art techniques. In particular, the proposed scheme speeds up the computation process by at least 10× compared to all the state-of-the-art approaches.
Mayer, O., Stamm, M. C..  2020.  Forensic Similarity for Digital Images. IEEE Transactions on Information Forensics and Security. 15:1331—1346.
In this paper, we introduce a new digital image forensics approach called forensic similarity, which determines whether two image patches contain the same forensic trace or different forensic traces. One benefit of this approach is that prior knowledge, e.g., training samples, of a forensic trace is not required to make a forensic similarity decision on it in the future. To do this, we propose a two-part deep-learning system composed of a convolutional neural network-based feature extractor and a three-layer neural network, called the similarity network. This system maps the pairs of image patches to a score indicating whether they contain the same or different forensic traces. We evaluated the system accuracy of determining whether two image patches were captured by the same or different camera model and manipulated by the same or a different editing operation and the same or a different manipulation parameter, given a particular editing operation. Experiments demonstrate applicability to a variety of forensic traces and importantly show efficacy on “unknown” forensic traces that were not used to train the system. Experiments also show that the proposed system significantly improves upon prior art, reducing error rates by more than half. Furthermore, we demonstrated the utility of the forensic similarity approach in two practical applications: forgery detection and localization, and database consistency verification.
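The two-part structure described above (a feature extractor followed by a similarity function over pairs of patches) might be sketched roughly as below. The spread of high-pass residuals is a crude stand-in for the CNN feature extractor, and a simple exponential score stands in for the learned similarity network; the synthetic "cameras" differ only in sensor noise level:

```python
import numpy as np

def trace_feature(patch):
    """Crude stand-in for the CNN feature extractor: the spread of
    high-pass residuals, which carries sensor-noise traces."""
    return float(np.std(np.diff(patch, axis=1)))

def forensic_similarity(p1, p2):
    """Stand-in for the learned similarity network: maps a pair of trace
    features to a score in (0, 1]; near 1 means "same forensic trace"."""
    return float(np.exp(-abs(trace_feature(p1) - trace_feature(p2))))

rng = np.random.default_rng(1)
scene = np.linspace(0, 255, 64)[None, :] * np.ones((64, 1))  # smooth scene
cam_a1 = scene + rng.normal(0, 2, size=(64, 64))    # camera A, low noise
cam_a2 = scene.T + rng.normal(0, 2, size=(64, 64))  # camera A, other scene
cam_b = scene + rng.normal(0, 12, size=(64, 64))    # camera B, high noise

same = forensic_similarity(cam_a1, cam_a2)
diff = forensic_similarity(cam_a1, cam_b)
assert same > diff    # same-camera pair scores higher
```

The sketch keeps the key property of the paper's design: the decision depends only on the pair of extracted traces, not on prior knowledge of which camera produced them.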
Zheng, Y., Cao, Y., Chang, C..  2020.  A PUF-Based Data-Device Hash for Tampered Image Detection and Source Camera Identification. IEEE Transactions on Information Forensics and Security. 15:620—634.
With the increasing prevalence of digital devices and their abuse for digital content creation, forgeries of digital images and video footage are more rampant than ever, and digital forensics is challenged to seek advanced technologies for forged-content detection and acquisition-device identification. Unfortunately, existing solutions that address image tampering fail to identify the device that produced the images or footage, while techniques that can identify the camera are incapable of locating the tampered content in its captured images. In this paper, a new perceptual data-device hash is proposed to locate maliciously tampered image regions and to identify the source camera of the received image data as a non-repudiable attestation in digital forensics. The presented image may have been tampered with or have gone through benign content-preserving geometric transforms or image processing operations. The proposed image hash is generated by projecting invariant image features into a physical unclonable function (PUF)-defined Bernoulli random space. The tamper-resistant random PUF response is unique to each camera and can only be generated when triggered by a challenge, which is provided by the image acquisition timestamp. The proposed hash is evaluated on a modified CASIA database and on a CMOS image-sensor-based PUF simulated using 180 nm TSMC technology. It achieves a high tamper detection rate of 95.42% with the regions of tampered content successfully located, a good authentication performance of above 98.5% against standard content-preserving manipulations, and 96.25% and 90.42%, respectively, for the more challenging geometric transformations of rotation (0–360°) and scaling (scale factor 0.5 in each dimension). It is demonstrated to identify the source camera with 100% accuracy and to be secure against attacks on the PUF.
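The hashing step can be illustrated with a toy sketch: a device-unique response, simulated here by seeding a generator from a hashed secret and the timestamp challenge (a real PUF response comes from hardware), defines a Bernoulli random projection, and hashes are compared by Hamming distance. The feature values and secrets are hypothetical:

```python
import hashlib
import numpy as np

def puf_response(device_secret: bytes, timestamp: str) -> int:
    """Toy PUF model: the acquisition timestamp is the challenge and the
    device-unique response seeds the projection."""
    digest = hashlib.sha256(device_secret + timestamp.encode()).digest()
    return int.from_bytes(digest[:8], "big")

def data_device_hash(features, device_secret, timestamp, n_bits=64):
    """Project invariant image features into a PUF-defined Bernoulli
    random space; the hash bits are the signs of the projections."""
    rng = np.random.default_rng(puf_response(device_secret, timestamp))
    proj = rng.choice([-1.0, 1.0], size=(n_bits, len(features)))
    return (proj @ np.asarray(features) > 0).astype(np.uint8)

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

feats = np.array([3.1, -0.4, 2.2, 5.0, -1.3, 0.7])   # invariant features
benign = feats + 0.001                               # content-preserving op
tampered = feats.copy(); tampered[2] = -13.2         # a region replaced

h = data_device_hash(feats, b"camera-A", "2020-06-01T12:00")
d_benign = hamming(h, data_device_hash(benign, b"camera-A", "2020-06-01T12:00"))
d_tamper = hamming(h, data_device_hash(tampered, b"camera-A", "2020-06-01T12:00"))
d_other = hamming(h, data_device_hash(feats, b"camera-B", "2020-06-01T12:00"))
# benign processing barely moves the hash; tampering or a different
# camera moves it far
```

This captures the dual binding the paper exploits: the hash changes if either the content or the device changes.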
Hynes, E., Flynn, R., Lee, B., Murray, N..  2020.  An Evaluation of Lower Facial Micro Expressions as an Implicit QoE Metric for an Augmented Reality Procedure Assistance Application. 2020 31st Irish Signals and Systems Conference (ISSC). :1–6.
Augmented reality (AR) has been identified as a key technology to enhance worker utility in the context of increasing automation of repeatable procedures. AR can achieve this by assisting the user in performing complex and frequently changing procedures. Crucial to the success of procedure assistance AR applications is user acceptability, which can be measured by user quality of experience (QoE). An active research topic in QoE is the identification of implicit metrics that can be used to continuously infer user QoE during a multimedia experience. A user's QoE is linked to their affective state. Affective state is reflected in facial expressions. Emotions shown in micro facial expressions resemble those expressed in normal expressions but are distinguished from them by their brief duration. The novelty of this work lies in the evaluation of micro facial expressions as a continuous QoE metric by means of correlation analysis to the more traditional and accepted post-experience self-reporting. In this work, an optimal Rubik's Cube solver AR application was used as a proof of concept for complex procedure assistance. This was compared with a paper-based procedure assistance control. QoE expressed by affect in normal and micro facial expressions was evaluated through correlation analysis with post-experience reports. The results show that the AR application yielded higher task success rates and shorter task durations. Micro facial expressions reflecting disgust correlated moderately to the questionnaire responses for instruction disinterest in the AR application.
Wöhnert, S.-J., Wöhnert, K. H., Almamedov, E., Skwarek, V..  2020.  Trusted Video Streams in Camera Sensor Networks. 2020 IEEE 18th International Conference on Embedded and Ubiquitous Computing (EUC). :17—24.

Proving the integrity of video data produced by surveillance cameras requires active forensic methods such as signatures; otherwise authenticity and integrity can be compromised and the data becomes unusable, e.g., as legal evidence. But a simple file or stream signature loses its validity when the stream is cut into parts or when data and signature are separated. Using principles of security in distributed systems similar to those of blockchain and distributed ledger technologies (BC/DLT), we present a chain built from the frames of a video, whose frame hash values are distributed among a camera sensor network. The backbone of this Framechain within the camera sensor network is a camera identity concept that ensures accountability, integrity, and authenticity according to the extended CIA triad. Modularity through secure sequences, self-contained proofs, and robustness against natural modulation of the data are the key properties of this new approach. It allows the standalone data, and even parts of it, to be used as hard evidence.
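The core chaining idea (each frame's hash depends on the previous hash and a camera identity, so cutting, reordering, or substituting frames breaks verification) can be sketched in a few lines; the distribution of hash values across the sensor network is omitted, and the identifiers are illustrative:

```python
import hashlib

def framechain(frames, camera_id: bytes):
    """Chain each frame's hash to its predecessor and the camera identity,
    so any reordering, removal, or substitution breaks verification."""
    links = []
    prev = hashlib.sha256(camera_id).digest()   # genesis link: camera identity
    for frame in frames:
        prev = hashlib.sha256(prev + frame).digest()
        links.append(prev)
    return links

frames = [b"frame-0", b"frame-1", b"frame-2"]
chain = framechain(frames, b"camera-42")

# verification succeeds on the untouched stream ...
assert framechain(frames, b"camera-42") == chain
# ... and fails if a single frame is forged or the wrong camera claims it
assert framechain([b"frame-0", b"FORGED", b"frame-2"], b"camera-42") != chain
assert framechain(frames, b"other-camera") != chain
```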

Illing, B., Westhoven, M., Gaspers, B., Smets, N., Brüggemann, B., Mathew, T..  2020.  Evaluation of Immersive Teleoperation Systems using Standardized Tasks and Measurements. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :278—285.

Despite advances in autonomous functionality for robots, teleoperation remains the means of performing delicate tasks in safety-critical contexts like explosive ordnance disposal (EOD) and in ambiguous environments. Immersive stereoscopic displays have been proposed and developed for this purpose, but they bring their own specific problems, e.g., simulator sickness. This work builds upon standardized test environments to yield reproducible comparisons between different robotic platforms. The focus was placed on testing three optronic systems with differing degrees of immersion: (1) a laptop display showing multiple monoscopic camera views, (2) an off-the-shelf virtual reality headset coupled with a pan-tilt-based stereoscopic camera, and (3) a so-called Telepresence Unit providing fast pan, tilt, and yaw rotation, stereoscopic view, and spatial audio. The stereoscopic systems yielded significantly faster task completion only for the maneuvering task. As expected, they also induced simulator sickness. However, the amount of simulator sickness varied between the two stereoscopic systems. The collected data suggest that a higher degree of immersion, combined with careful system design, can reduce the expected increase in simulator sickness compared to the monoscopic camera baseline, while making the interface subjectively more effective for certain tasks.

Martin, S., Parra, G., Cubillo, J., Quintana, B., Gil, R., Perez, C., Castro, M..  2020.  Design of an Augmented Reality System for Immersive Learning of Digital Electronic. 2020 XIV Technologies Applied to Electronics Teaching Conference (TAEE). :1—6.

This article describes the development of two mobile applications for learning digital electronics. The first is an interactive iOS app for studying different digital circuits, which serves as the basis for the second: an augmented reality quiz game.

Romashchenko, V., Brutscheck, M., Chmielewski, I..  2020.  Organisation and Implementation of ResNet Face Recognition Architectures in the Environment of Zigbee-based Data Transmission Protocol. 2020 Fourth International Conference on Multimedia Computing, Networking and Applications (MCNA). :25—30.

This paper describes a realisation of ResNet-based face recognition over a Zigbee wireless protocol. The system uses a CC2530 Zigbee radio-frequency chip with a VC0706 camera connected to it. An Arduino Nano was used to organise data compression and effective division of Zigbee packets. The proposed solution also simplifies data transmission within the strict bandwidth of the Zigbee protocol and provides reliable packet forwarding in case of frequency distortion. The investigation model uses a Raspberry Pi 3 with a connected Zigbee End Device (ZED) for reliable reception of the images and acceleration of the deep learning inference. The model is integrated into a smart security system based on Zigbee modules, a MySQL database, and an Android application, and it runs in the background as a daemon. To protect data, all wireless connections are encrypted with the 128-bit Advanced Encryption Standard (AES-128) algorithm. Experimental results show that it is possible to implement complex systems under the restricted requirements of the available transmission protocols.
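The packet-division step might look roughly like this sketch, which assumes an 80-byte usable payload per Zigbee frame (the real limit depends on the stack, addressing, and security headers):

```python
PAYLOAD = 80  # assumed usable bytes per Zigbee application-layer frame

def fragment(image_bytes: bytes, payload=PAYLOAD):
    """Split a compressed image into (sequence_number, total, chunk) triples
    small enough for one Zigbee frame each."""
    chunks = [image_bytes[i:i + payload]
              for i in range(0, len(image_bytes), payload)]
    return [(i, len(chunks), c) for i, c in enumerate(chunks)]

def reassemble(packets):
    """Reorder fragments by sequence number and verify nothing was lost."""
    packets = sorted(packets, key=lambda p: p[0])
    total = packets[0][1]
    assert [p[0] for p in packets] == list(range(total)), "lost fragment"
    return b"".join(p[2] for p in packets)

data = bytes(range(256)) * 3           # stands in for a compressed frame
pkts = fragment(data)
assert reassemble(pkts[::-1]) == data  # tolerant of out-of-order delivery
```

Sequence numbers plus a total count are the minimum needed for the receiver (the ZED-connected Raspberry Pi in the paper's setup) to detect loss and reorder fragments.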

Goswami, U., Wang, K., Nguyen, G., Lagesse, B..  2020.  Privacy-Preserving Mobile Video Sharing using Fully Homomorphic Encryption. 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). :1—3.

Increased availability of mobile cameras has led to more opportunities for people to record videos of significantly more of their lives. People often want to share these videos, but only with certain people who were co-present. Since the videos may be of a large event where the attendees are not necessarily known, we need a method for proving co-presence without revealing information before co-presence is proven. In this demonstration, we present a privacy-preserving method for comparing the similarity of two videos without revealing the contents of either video. The technique leverages the Similarity of Simultaneous Observation technique for detecting hidden webcams and modifies the existing algorithms so that they are computationally feasible to run under a fully homomorphic encryption scheme on modern mobile devices. The demonstration consists of a variety of devices preloaded with our software, performing video comparisons in real time. We will also make the software available to Android devices via a QR code so that participants can record and exchange their own videos.
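Setting the homomorphic encryption layer aside, the underlying Similarity of Simultaneous Observation comparison can be sketched on plaintext: two videos of the same scene share a per-frame brightness time series, and the correlation of those series indicates co-presence. The signature choice, sizes, and threshold here are illustrative, not the authors' exact algorithm:

```python
import numpy as np

def brightness_signature(frames):
    """Per-frame mean brightness: the time series compared between videos."""
    return np.array([f.mean() for f in frames])

def co_presence_score(sig_a, sig_b):
    """Pearson correlation of the two signatures; in the paper this
    comparison runs under fully homomorphic encryption, omitted here."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
lighting = rng.uniform(50, 200, size=30)            # shared scene lighting
video_a = [np.full((8, 8), v) + rng.normal(0, 1, (8, 8)) for v in lighting]
video_b = [np.full((8, 8), v) + rng.normal(0, 1, (8, 8)) for v in lighting]
video_c = [rng.uniform(0, 255, (8, 8)) for _ in range(30)]  # elsewhere

same = co_presence_score(brightness_signature(video_a),
                         brightness_signature(video_b))
other = co_presence_score(brightness_signature(video_a),
                          brightness_signature(video_c))
assert same > other   # co-present pair correlates far more strongly
```

Because only low-dimensional signatures are compared, the computation stays small enough to be a plausible candidate for evaluation under FHE on a phone.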

Naz, M. T., Zeki, A. M..  2020.  A Review of Various Attack Methods on Air-Gapped Systems. 2020 International Conference on Innovation and Intelligence for Informatics, Computing and Technologies (3ICT). :1—6.

In the past, air-gapped systems, which are isolated from networks, were considered very secure. Yet there have been reports of such systems being breached. These breaches used unconventional means of communication, known as covert channels (acoustic, electromagnetic, magnetic, electric, optical, and thermal), to transfer data. In this paper, a review of various attack methods that can compromise an air-gapped system is presented, along with a summary of how efficient and dangerous each method can be. The capabilities of each covert channel are listed to better understand the threat it poses, and some countermeasures to safeguard against such attack methods are mentioned. These attack methods have already been proven to work, and awareness of such covert channels for data exfiltration is crucial in various industries.

McCloskey, S., Albright, M..  2019.  Detecting GAN-Generated Imagery Using Saturation Cues. 2019 IEEE International Conference on Image Processing (ICIP). :4584—4588.
Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g. `deepfakes'. Leveraging large training sets and extensive computing resources, recent GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation [1], and show that the network's treatment of exposure is markedly different from a real camera. We further show that this cue can be used to distinguish GAN-generated imagery from camera imagery, including effective discrimination between GAN imagery and real camera images used to train the GAN.
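The exposure cue can be illustrated with a toy sketch: camera images clip at the ends of the 8-bit range, while a generator with a squashing output nonlinearity rarely reaches the extremes. The threshold and synthetic images below are purely illustrative, not the paper's detector:

```python
import numpy as np

def saturation_fraction(img):
    """Fraction of pixels clipped at the ends of the 8-bit range."""
    return float(np.mean((img == 0) | (img == 255)))

def looks_generated(img, thresh=0.01):
    """Hypothetical decision rule: almost no clipped pixels => GAN-like."""
    return saturation_fraction(img) < thresh

rng = np.random.default_rng(3)
# camera-like image: a bright scene clipped to the sensor's 8-bit range
camera = np.clip(rng.normal(200, 80, size=(64, 64)), 0, 255).round()
# GAN-like image: a tanh output rescaled, so values avoid the extremes
gan = (127.5 + 120.0 * np.tanh(rng.normal(0, 1, size=(64, 64)))).round()

assert looks_generated(gan) and not looks_generated(camera)
```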
Yang, X., Li, Y., Lyu, S..  2019.  Exposing Deep Fakes Using Inconsistent Head Poses. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8261—8265.
In this paper, we propose a new method to expose AI-generated fake face images or videos (commonly known as the Deep Fakes). Our method is based on the observations that Deep Fakes are created by splicing synthesized face region into the original image, and in doing so, introducing errors that can be revealed when 3D head poses are estimated from the face images. We perform experiments to demonstrate this phenomenon and further develop a classification method based on this cue. Using features based on this cue, an SVM classifier is evaluated using a set of real face images and Deep Fakes.
Shin, H. C., Chang, J., Na, K..  2020.  Anomaly Detection Algorithm Based on Global Object Map for Video Surveillance System. 2020 20th International Conference on Control, Automation and Systems (ICCAS). :793—795.

Smart video security systems have recently become an active area. Existing video security systems mainly detect local abnormalities from a single camera, which makes it difficult to capture the characteristics of each local region and the situation across the entire watched area. In this paper, we develop an object map of the entire surveillance area using a combination of surveillance cameras, together with an algorithm that detects anomalies by learning normal situations. The surveillance camera in each area detects and tracks people and cars, creates a local object map, and transmits it to the server. The surveillance server combines the local maps to generate a global map of the entire area. Probability maps are automatically calculated from the global maps, and normal/abnormal decisions are made using data collected about normal situations. For the three reporting states (normal, caution, and warning), the caution-report performance shows 99.99% normal detection and 86.6% abnormal detection.
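The local-map/global-map pipeline can be sketched as below; the grid sizes, camera offsets, and probability threshold are illustrative, not taken from the paper:

```python
import numpy as np

def merge_local_maps(local_maps, offsets, global_shape):
    """Place each camera's local object counts at its offset in the global map."""
    g = np.zeros(global_shape)
    for m, (r, c) in zip(local_maps, offsets):
        g[r:r + m.shape[0], c:c + m.shape[1]] += m
    return g

def update_probability_map(prob, global_map, n_frames):
    """Running estimate of how often each cell is occupied in normal scenes."""
    return (prob * n_frames + (global_map > 0)) / (n_frames + 1)

def anomaly_cells(global_map, prob, min_prob=0.05):
    """Occupied cells that were almost never occupied during training."""
    return np.argwhere((global_map > 0) & (prob < min_prob))

# two cameras covering halves of a 4x8 area; people normally use row 1
camA = np.zeros((4, 4)); camA[1, 2] = 1
camB = np.zeros((4, 4)); camB[1, 1] = 1
prob = np.zeros((4, 8)); n = 0
for _ in range(100):                    # "normal situation" training
    g = merge_local_maps([camA, camB], [(0, 0), (0, 4)], (4, 8))
    prob = update_probability_map(prob, g, n); n += 1

camB_now = np.zeros((4, 4)); camB_now[3, 3] = 1   # object in an unusual spot
g_now = merge_local_maps([camA, camB_now], [(0, 0), (0, 4)], (4, 8))
flagged = anomaly_cells(g_now, prob)    # only the unusual global cell (3, 7)
```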

Khadka, A., Argyriou, V., Remagnino, P..  2020.  Accurate Deep Net Crowd Counting for Smart IoT Video acquisition devices. 2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS). :260—264.

A novel deep neural network is proposed for accurate and robust crowd counting. Crowd counting is a complex task, as it strongly depends on the deployed camera's characteristics and, above all, the scene perspective. Crowd counting is essential in security applications where Internet of Things (IoT) cameras are deployed to help with crowd management tasks. The complexity of a scene varies greatly, and a medium-to-large-scale security system based on IoT cameras must cater for changes in perspective and in how people appear from different vantage points. To address this, our deep architecture extracts multi-scale features with a pyramid contextual module that provides long-range contextual information and enlarges the receptive field. Experiments were run on three major crowd counting datasets to test the proposed method. Results demonstrate that our method surpasses the performance of state-of-the-art methods.

Amrutha, C. V., Jyotsna, C., Amudha, J..  2020.  Deep Learning Approach for Suspicious Activity Detection from Surveillance Video. 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA). :335—339.

Video surveillance plays a pivotal role in today's world, and the technology has advanced greatly with the introduction of artificial intelligence, machine learning, and deep learning. Using combinations of these, various systems are in place that help to distinguish suspicious behaviours in live footage. Human behaviour is the most unpredictable, and it is very difficult to determine whether it is suspicious or normal. A deep learning approach is used to detect suspicious or normal activity in an academic environment and to send an alert message to the corresponding authority when a suspicious activity is predicted. Monitoring is performed on consecutive frames extracted from the video. The framework is divided into two parts: in the first, features are computed from video frames, and in the second, a classifier predicts the class as suspicious or normal based on the obtained features.

Bhat, P., Batakurki, M., Chari, M..  2020.  Classifier with Deep Deviation Detection in PoE-IoT Devices. 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). :1–3.
With the rapid growth in the diversity of PoE-IoT devices and the concept of "edge intelligence", PoE-IoT security and behavior analysis are major concerns. These PoE-IoT devices lack visibility when the entire network infrastructure is taken into account, and they are prone to design faults in their security capabilities. The entire network may be put at risk by attacks on vulnerable IoT devices, and malware may be introduced into IoT devices even by routine operations such as firmware upgrades. Various machine learning (ML) approaches have been used to classify PoE-IoT devices based on network traffic characteristics obtained, for example, via Deep Packet Inspection (DPI). In this paper, we propose a novel method for PoE-IoT classification using the Decision Tree ML algorithm. In addition to classification, this method provides useful insights into the network deployment based on the deviations detected. These insights can further be used for shaping policies, troubleshooting, and behavior analysis of PoE-IoT devices.
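As an illustration of the classification step, a hand-built depth-limited decision tree over simple traffic statistics might look like the sketch below; the features, thresholds, and class names are all hypothetical, and a real deployment would learn the splits from labelled traffic rather than hard-code them:

```python
def classify_poe_device(mean_pkt_size, pkt_rate, dst_port_entropy):
    """Hand-built decision tree over illustrative traffic features:
    mean packet size (bytes), packet rate (pkts/s), and the entropy of
    destination ports seen in a time window."""
    if mean_pkt_size > 800:                # large, steady payloads
        if pkt_rate > 50:                  # continuous media stream
            return "ip-camera"
        return "access-point"
    if dst_port_entropy > 2.0:             # suddenly talks to many services
        return "deviation-suspect"         # flag for behavior analysis
    return "voip-phone"

# classification and deviation insight, mirroring the paper's two outputs
assert classify_poe_device(1100, 80, 0.4) == "ip-camera"
assert classify_poe_device(180, 4, 2.7) == "deviation-suspect"
```

The "deviation-suspect" leaf is the kind of insight the abstract mentions: a split that fires only when a device's traffic drifts away from the profile its class normally exhibits.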
Amrouche, F., Lagraa, S., Frank, R., State, R..  2020.  Intrusion detection on robot cameras using spatio-temporal autoencoders: A self-driving car application. 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). :1—5.

Robot Operating System (ROS) is becoming more and more important and is widely used by developers and researchers in various domains. One of the most important fields where it is used is the self-driving car industry. However, the framework is far from totally secure, and the existing security breaches lack robust solutions. In this paper we focus on camera vulnerabilities, as the camera is often the most important source for environment discovery and the decision-making process. We propose an unsupervised anomaly detection tool for detecting suspicious frames in camera flows. Our solution is based on spatio-temporal autoencoders that faithfully reconstruct the camera frames and detect abnormal ones by measuring the difference from the input. We test our approach on a real-world dataset, i.e., flows coming from the embedded cameras of self-driving cars. Our solution outperforms existing works in different scenarios.
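The detection principle (reconstruct each frame and flag those with large reconstruction error) can be sketched with a temporal-interpolation stand-in for the spatio-temporal autoencoder; the frames and the injected attack are synthetic:

```python
import numpy as np

def reconstruction_errors(frames):
    """Score each interior frame by how badly its temporal neighbours
    reconstruct it (a stand-in for the spatio-temporal autoencoder)."""
    errs = []
    for t in range(1, len(frames) - 1):
        recon = (frames[t - 1] + frames[t + 1]) / 2.0
        errs.append(float(np.mean((frames[t] - recon) ** 2)))
    return errs

rng = np.random.default_rng(7)
drift = np.cumsum(rng.normal(0, 0.5, size=20))       # slowly changing scene
frames = [np.full((16, 16), 100.0 + d) + rng.normal(0, 1, (16, 16))
          for d in drift]
frames[10] = np.zeros((16, 16))                      # injected blank frame

errs = reconstruction_errors(frames)
suspect = int(np.argmax(errs)) + 1                   # +1: scores start at t=1
assert suspect == 10                                 # the injected frame
```

An injected or frozen frame breaks the temporal consistency the model has learned from normal driving footage, which is exactly the signal the paper's autoencoder thresholds on.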

Maram, S. S., Vishnoi, T., Pandey, S..  2019.  Neural Network and ROS based Threat Detection and Patrolling Assistance. 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP). :1—5.

ROS has emerged as a game changer by providing a uniform development platform that seamlessly combines hardware components and software architectures from developers across the globe, reducing the complexity of building robots that assist people in their daily lives. It is disappointing to see the lack of penetration of such technology in verticals that involve protection, defense, and security. By leveraging the power of ROS in robotic automation and computer vision, this research paves the way for identifying suspicious activity with autonomously moving bots that run on ROS. The paper proposes and validates a workflow in which ROS and computer vision algorithms like YOLO work in sync to provide smarter and more accurate methods for indoor and limited outdoor patrolling. Identification of age, gender, weapons, and other elements that can disturb public harmony is an integral part of the research and development process. Simulation and testing reflect the efficiency and speed of the designed software architecture.

Lagraa, S., Cailac, M., Rivera, S., Beck, F., State, R..  2019.  Real-Time Attack Detection on Robot Cameras: A Self-Driving Car Application. 2019 Third IEEE International Conference on Robotic Computing (IRC). :102—109.

The Robot Operating System (ROS) is being deployed for multiple life-critical activities such as self-driving cars, drones, and industrial automation. However, its security has been persistently neglected, especially for the image flows incoming from camera robots. In this paper, we perform a structured security assessment of robot cameras using ROS. We point out a relevant number of security flaws that can be used to take over the flows incoming from the robot cameras. Furthermore, we propose an intrusion detection system to detect abnormal flows. Our defense approach is based on image comparison and an unsupervised anomaly detection method. We evaluate our approach on robot cameras embedded in a self-driving car.