Biblio

Filters: Keyword is Computer vision
2022-08-12
Chen, Wenhu, Gan, Zhe, Li, Linjie, Cheng, Yu, Wang, William, Liu, Jingjing.  2021.  Meta Module Network for Compositional Visual Reasoning. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). :655–664.
Neural Module Network (NMN) exhibits strong interpretability and compositionality thanks to its handcrafted neural modules with explicit multi-hop reasoning capability. However, most NMNs suffer from two critical drawbacks: 1) scalability: customized modules for specific functions render it impractical to scale up to a larger set of functions in complex tasks; 2) generalizability: a rigid pre-defined module inventory makes it difficult to generalize to unseen functions in new tasks/domains. To design a more powerful NMN architecture for practical use, we propose Meta Module Network (MMN), centered on a novel meta module that can take in function recipes and morph into diverse instance modules dynamically. The instance modules are then woven into an execution graph for complex visual reasoning, inheriting the strong explainability and compositionality of NMN. With such a flexible instantiation mechanism, the parameters of instance modules are inherited from the central meta module, retaining the same model complexity as the function set grows, which promises better scalability. Meanwhile, as functions are encoded into the embedding space, unseen functions can be readily represented based on their structural similarity with previously observed ones, which ensures better generalizability. Experiments on the GQA and CLEVR datasets validate the superiority of MMN over state-of-the-art NMN designs. Synthetic experiments on held-out unseen functions from the GQA dataset also demonstrate the strong generalizability of MMN. Our code and model are released on GitHub.
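To make the instantiation mechanism concrete, the following is a minimal sketch (not the paper's actual implementation) of a single shared meta module whose parameters are modulated by a function-recipe embedding, so that model complexity stays constant as the function set grows; the FiLM-style scale/shift modulation and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MetaModule(nn.Module):
    """Illustrative sketch: one shared parameter set that morphs into
    different instance modules conditioned on a function embedding."""

    def __init__(self, feat_dim=512, fn_dim=64):
        super().__init__()
        self.core = nn.Linear(feat_dim, feat_dim)    # shared weights
        self.film = nn.Linear(fn_dim, 2 * feat_dim)  # recipe -> (scale, shift)

    def forward(self, features, fn_embedding):
        # The same core parameters serve every function; only the
        # modulation derived from the recipe changes per instance.
        scale, shift = self.film(fn_embedding).chunk(2, dim=-1)
        return torch.relu(scale * self.core(features) + shift)

meta = MetaModule()
feats = torch.randn(8, 512)        # e.g. visual region features
relate_fn = torch.randn(64)        # embedding of a "relate" recipe
filter_fn = torch.randn(64)        # embedding of a "filter" recipe
out_relate = meta(feats, relate_fn)
out_filter = meta(feats, filter_fn)
```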
2022-06-14
Vanitha, C. N., Malathy, S., Anitha, K., Suwathika, S..  2021.  Enhanced Security using Advanced Encryption Standards in Face Recognition. 2021 2nd International Conference on Communication, Computing and Industry 4.0 (C2I4). :1–5.
Nowadays, face recognition is used everywhere, in all fields. Although face recognition is used for security purposes, there is also a chance of the face data used for recognition being hacked. To enhance face security, encryption and decryption techniques are used. Face recognition has been engaged in several security-connected purposes such as surveillance, e-passports, etc. The significant use of biometrics raises vital privacy concerns, in particular if biometric matching is carried out at a central or untrusted server, and calls for the implementation of privacy-enhancing technologies. To address these privacy concerns, encoding and decoding are used. To achieve this result we use the Open Computer Vision (OpenCV) tool. With the help of this tool we encrypt the face and decrypt the face with Advanced Encryption Standard techniques. OpenCV is the tool used in this project.
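As a rough illustration of the OpenCV-plus-AES pipeline the abstract describes, the sketch below detects a face with an OpenCV Haar cascade and AES-encrypts the face region; the choice of PyCryptodome and CBC mode are assumptions, not details from the paper.

```python
import cv2
import numpy as np
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)  # AES-128 secret key

def encrypt_faces(image, key):
    """Detect faces with OpenCV and AES-encrypt their pixel blocks."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    records = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        roi = image[y:y+h, x:x+w]
        cipher = AES.new(key, AES.MODE_CBC)
        ct = cipher.encrypt(pad(roi.tobytes(), AES.block_size))
        records.append((cipher.iv, ct, roi.shape, (x, y, w, h)))
        # Overwrite the plaintext face with ciphertext bytes so the
        # stored frame no longer reveals the face.
        n = roi.size
        image[y:y+h, x:x+w] = np.frombuffer(ct[:n], np.uint8).reshape(roi.shape)
    return image, records

def decrypt_faces(image, records, key):
    for iv, ct, shape, (x, y, w, h) in records:
        cipher = AES.new(key, AES.MODE_CBC, iv)
        roi = np.frombuffer(unpad(cipher.decrypt(ct), AES.block_size), np.uint8)
        image[y:y+h, x:x+w] = roi.reshape(shape)
    return image
```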
2022-06-08
Ong, Ding Sheng, Seng Chan, Chee, Ng, Kam Woh, Fan, Lixin, Yang, Qiang.  2021.  Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attacks. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :3629–3638.
Ever since Machine Learning as a Service emerged as a viable business that utilizes deep learning models to generate lucrative revenue, Intellectual Property Rights (IPR) have become a major concern, because these deep learning models can easily be replicated, shared, and re-distributed by any unauthorized third party. To the best of our knowledge, one of the prominent deep learning models, the Generative Adversarial Network (GAN), which has been widely used to create photorealistic images, is totally unprotected despite the existence of pioneering IPR protection methodology for Convolutional Neural Networks (CNNs). This paper therefore presents a complete protection framework, in both black-box and white-box settings, to enforce IPR protection on GANs. Empirically, we show that the proposed method does not compromise the original GANs' performance (i.e. image generation, image super-resolution, style transfer), and at the same time is able to withstand both removal and ambiguity attacks against the embedded watermarks. Code is available at https://github.com/dingsheng-ong/ipr-gan.
2022-05-10
Hassan, Salman, Bari, Safioul, Shuvo, A S M Muktadiru Baized, Khan, Shahriar.  2021.  Implementation of a Low-Cost IoT Enabled Surveillance Security System. 2021 7th International Conference on Applied System Innovation (ICASI). :101–104.
Security is a requirement in society, yet its wide implementation is held back by high expenses and barriers to the use of technology. Experimental implementation of security at low cost will help promote the technology at more affordable prices. This paper describes the design of a surveillance security system using a Raspberry Pi and an Arduino UNO. The design senses the presence of a human in a surveillance area, immediately sets off the buzzer, simultaneously starts capturing video of the motion it has detected, and stores it in a folder. When the design senses motion, it immediately sends an SMS to the user, who can then watch live video of the detected motion over an internet connection from a remote area. Our objective of making a low-cost surveillance-area security system has been mostly fulfilled. Although this is a low-cost project, its features can be compared with existing commercially available systems.
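A minimal sketch of the sensing loop described above, assuming a PIR motion sensor and buzzer wired to the Raspberry Pi pins given below, and a hypothetical send_sms() hook standing in for the actual SMS gateway.

```python
import time
import cv2
import RPi.GPIO as GPIO

PIR_PIN, BUZZER_PIN = 11, 13               # assumed wiring

GPIO.setmode(GPIO.BOARD)
GPIO.setup(PIR_PIN, GPIO.IN)
GPIO.setup(BUZZER_PIN, GPIO.OUT)

def send_sms(msg):                         # hypothetical SMS hook
    print("SMS:", msg)

camera = cv2.VideoCapture(0)               # assumes a 640x480 camera
while True:
    if GPIO.input(PIR_PIN):                # motion detected
        GPIO.output(BUZZER_PIN, GPIO.HIGH)  # sound the buzzer
        send_sms("Motion detected in surveillance area")
        # Record a short clip into the storage folder.
        fourcc = cv2.VideoWriter_fourcc(*"XVID")
        out = cv2.VideoWriter("clips/%d.avi" % time.time(),
                              fourcc, 20.0, (640, 480))
        t0 = time.time()
        while time.time() - t0 < 10:       # 10-second clip
            ok, frame = camera.read()
            if ok:
                out.write(frame)
        out.release()
        GPIO.output(BUZZER_PIN, GPIO.LOW)
    time.sleep(0.1)
```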
Ji, Xiaoyu, Cheng, Yushi, Zhang, Yuepeng, Wang, Kai, Yan, Chen, Xu, Wenyuan, Fu, Kevin.  2021.  Poltergeist: Acoustic Adversarial Machine Learning against Cameras and Computer Vision. 2021 IEEE Symposium on Security and Privacy (SP). :160–175.
Autonomous vehicles increasingly exploit computer-vision-based object detection systems to perceive environments and make critical driving decisions. To increase the quality of images, image stabilizers with inertial sensors are added to alleviate image blurring caused by camera jitters. However, such a trend opens a new attack surface. This paper identifies a system-level vulnerability resulting from the combination of the emerging image stabilizer hardware susceptible to acoustic manipulation and the object detection algorithms subject to adversarial examples. By emitting deliberately designed acoustic signals, an adversary can control the output of an inertial sensor, which triggers unnecessary motion compensation and results in a blurred image, even if the camera is stable. The blurred images can then induce object misclassification affecting safety-critical decision making. We model the feasibility of such acoustic manipulation and design an attack framework that can accomplish three types of attacks, i.e., hiding, creating, and altering objects. Evaluation results demonstrate the effectiveness of our attacks against four academic object detectors (YOLO V3/V4/V5 and Fast R-CNN), and one commercial detector (Apollo). We further introduce the concept of AMpLe attacks, a new class of system-level security vulnerabilities resulting from a combination of adversarial machine learning and physics-based injection of information-carrying signals into hardware.
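The image-level effect of the attack can be approximated in simulation: a spoofed inertial reading triggers motion compensation on a stable camera, which behaves roughly like a linear motion blur. The sketch below is an assumption-laden approximation of that effect, not the authors' attack code; the blur length and direction stand in for what the acoustic signal would let an adversary control.

```python
import cv2
import numpy as np

def stabilizer_blur(frame, shift_px, angle_deg):
    """Approximate the linear motion blur that spurious inertial-sensor
    readings would make the stabilizer introduce on a static scene."""
    k = np.zeros((shift_px, shift_px), np.float32)
    k[shift_px // 2, :] = 1.0 / shift_px               # horizontal streak
    rot = cv2.getRotationMatrix2D((shift_px / 2, shift_px / 2),
                                  angle_deg, 1.0)
    k = cv2.warpAffine(k, rot, (shift_px, shift_px))   # rotate the streak
    return cv2.filter2D(frame, -1, k)

frame = cv2.imread("road_scene.jpg")                   # assumed test image
# The blurred frame is then fed to the object detector, where the blur
# can hide, create, or alter detections.
attacked = stabilizer_blur(frame, shift_px=15, angle_deg=30)
```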
2022-04-25
Khalil, Hady A., Maged, Shady A..  2021.  Deepfakes Creation and Detection Using Deep Learning. 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC). :1–4.
Deep learning has been used in a wide range of applications such as computer vision, natural language processing, and image detection. Advances in deep learning algorithms for image detection and manipulation have led to the creation of deepfakes; deepfakes use deep learning algorithms to create fake images that are at times very hard to distinguish from real images. With rising concern around personal privacy and security, many methods to detect deepfake images have emerged. In this paper, the use of deep learning for creating as well as detecting deepfakes is explored, and the use of a deep learning image enhancement method to improve the quality of the created deepfakes is also proposed.
Khasanova, Aliia, Makhmutova, Alisa, Anikin, Igor.  2021.  Image Denoising for Video Surveillance Cameras Based on Deep Learning Techniques. 2021 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM). :713–718.
Nowadays, video surveillance cameras are widely used in many smart-city applications for ensuring road safety. We can use their video data to solve tasks such as traffic management, driving control, environmental monitoring, etc. Most of these applications are based on object recognition and tracking algorithms. However, the video image quality does not always meet the requirements of such algorithms due to the influence of different external factors. A variety of adverse weather conditions produce noise on the images, which often makes it difficult to detect objects correctly. Lately, deep learning methods have shown good results in image processing, including denoising tasks. This work is devoted to the study of using these methods for image quality enhancement in difficult weather conditions such as snow, rain, and fog. Different deep learning techniques were evaluated in terms of their impact on the quality of object detection/recognition. Finally, a system for automatic image denoising was developed.
2022-04-13
Deepika, P., Kaliraj, S..  2021.  A Survey on Pest and Disease Monitoring of Crops. 2021 3rd International Conference on Signal Processing and Communication (ICPSC). :156–160.
Maintaining crop health is essential for successful farming, for both yield and product quality. Pests and diseases in crops are a serious problem that must be monitored; they occur at different stages or phases of crop development. Due to the introduction of genetically modified seeds, the natural resistance of crops to pests and diseases is reduced, and major crop loss is caused by pest and disease attacks, which damage the leaves, buds, flowers, and fruits of the crops. The affected areas and damage levels of pest and disease attacks are growing rapidly with global climate change, and weather conditions play a major role in such attacks. Naked-eye inspection for pests and diseases is complex and difficult over a wide field, while taking lab samples to detect disease is an inefficient and time-consuming process. Early identification of diseases is important for taking the actions necessary to prevent crop loss and to stop disease from spreading, so timely and effective monitoring of crop health matters. Several technologies have been developed to detect pests and diseases in crops. In this paper we discuss the various technologies implemented using AI and deep learning for pest and disease detection, and also briefly discuss their advantages and limitations for crop monitoring.
2022-03-10
Yang, Mengde.  2021.  A Survey on Few-Shot Learning in Natural Language Processing. 2021 International Conference on Artificial Intelligence and Electromechanical Automation (AIEA). :294–297.
Annotated datasets are the foundation of supervised natural language processing; however, the cost of obtaining such datasets is high. In recent years, few-shot learning has gradually attracted the attention of researchers. Starting from its definition, this paper summarizes the differences between few-shot learning in natural language processing and in computer vision. On that basis, current few-shot learning work in natural language processing is summarized, including transfer learning, meta learning, and knowledge distillation. Furthermore, we survey solutions to few-shot learning in natural language processing, such as methods based on distant supervision, meta learning, and knowledge distillation. Finally, we present the challenges facing few-shot learning in natural language processing.
2022-02-09
Guo, Hao, Dolhansky, Brian, Hsin, Eric, Dinh, Phong, Ferrer, Cristian Canton, Wang, Song.  2021.  Deep Poisoning: Towards Robust Image Data Sharing against Visual Disclosure. 2021 IEEE Winter Conference on Applications of Computer Vision (WACV). :686–696.
Due to the limited training data at each site, different entities addressing the same vision task based on certain sensitive images may be unable to train a robust deep network. This paper introduces a new vision task in which various entities share task-specific image data to enlarge each other's training data volume without visually disclosing sensitive contents (e.g. illegal images). We then present a new structure-based training regime that enables different entities to learn task-specific, reconstruction-proof image representations for image data sharing. Specifically, each entity learns a private Deep Poisoning Module (DPM) and inserts it into a pre-trained deep network designed to perform the specific vision task. The DPM deliberately poisons convolutional image features to prevent image reconstruction, while ensuring that the altered image data remain functionally equivalent to the non-poisoned data for the specific vision task. Given this equivalence, the poisoned features shared by one entity can be used by another entity for further model refinement. Experimental results on image classification prove the efficacy of the proposed method.
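One way to picture the setup is to split a pre-trained classifier and insert a small poisoning block at the sharing boundary. The sketch below uses a ResNet-18 split point and a two-layer convolutional DPM purely as illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18()  # load pretrained ImageNet weights in practice
stem = nn.Sequential(*list(backbone.children())[:6])    # conv1 .. layer2
head = nn.Sequential(*list(backbone.children())[6:-1])  # layer3 .. avgpool

class DeepPoisoningModule(nn.Module):
    """Sketch of a DPM: a small conv block trained so that poisoned
    features stay useful for the task but defeat image reconstruction."""
    def __init__(self, ch=128):
        super().__init__()
        self.poison = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
    def forward(self, f):
        return self.poison(f)

dpm = DeepPoisoningModule()
x = torch.randn(1, 3, 224, 224)
f = stem(x)                          # private entity-side features
shared = dpm(f)                      # what actually gets shared
logits = backbone.fc(head(shared).flatten(1))  # task output is preserved
# Training (not shown) would combine a task loss on `logits` with an
# adversarial reconstruction loss that the DPM is optimized to defeat.
```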
2022-01-31
Zhao, Rui.  2021.  The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms. 2021 2nd International Conference on Computing and Data Science (CDS). :287–295.
With further development in fields such as computer vision, network security, and natural language processing, deep learning technology has gradually exposed certain security risks. Existing deep learning algorithms cannot effectively describe the essential characteristics of data, leaving them unable to give correct results in the face of malicious input. Based on the current security threats faced by deep learning, this paper introduces the problem of adversarial examples in deep learning, sorts out the existing black-box and white-box attack and defense methods, and classifies them. It briefly describes the application of some adversarial examples in different scenarios in recent years, compares several defense technologies against adversarial examples, and finally summarizes the problems in this research field and the prospects for its future development. The paper introduces the common white-box attack methods in detail, and further compares the similarities and differences between black-box and white-box attacks. Correspondingly, the author also introduces the defense methods and analyzes their performance against black-box and white-box attacks.
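As a concrete instance of the white-box attacks such surveys cover, here is the canonical fast gradient sign method (FGSM) in a few lines of PyTorch; the epsilon budget and the [0,1] image range are the usual assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Fast Gradient Sign Method: one white-box gradient step that
    moves the input in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # stay in the valid image range
```

Iterating this step with a small step size and projecting back into the epsilon-ball yields PGD, a stronger white-box attack.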
2022-01-25
Lee, Jungbeom, Yi, Jihun, Shin, Chaehun, Yoon, Sungroh.  2021.  BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :2643–2651.
Weakly supervised segmentation methods using bounding box annotations focus on obtaining a pixel-level mask from each box containing an object. Existing methods typically depend on a class-agnostic mask generator, which operates on the low-level information intrinsic to an image. In this work, we utilize higher-level information from the behavior of a trained object detector, by seeking the smallest areas of the image from which the object detector produces almost the same result as it does from the whole image. These areas constitute a bounding-box attribution map (BBAM), which identifies the target object in its bounding box and thus serves as pseudo ground-truth for weakly supervised semantic and instance segmentation. This approach significantly outperforms recent comparable techniques on both the PASCAL VOC and MS COCO benchmarks in weakly supervised semantic and instance segmentation. In addition, we provide a detailed analysis of our method, offering deeper insight into the behavior of the BBAM.
2021-12-22
Guerdan, Luke, Raymond, Alex, Gunes, Hatice.  2021.  Toward Affective XAI: Facial Affect Analysis for Understanding Explainable Human-AI Interactions. 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). :3789–3798.
As machine learning approaches are increasingly used to augment human decision-making, eXplainable Artificial Intelligence (XAI) research has explored methods for communicating system behavior to humans. However, these approaches often fail to account for the affective responses of humans as they interact with explanations. Facial affect analysis, which examines human facial expressions of emotions, is one promising lens for understanding how users engage with explanations. Therefore, in this work, we aim to (1) identify which facial affect features are pronounced when people interact with XAI interfaces, and (2) develop a multitask feature embedding for linking facial affect signals with participants' use of explanations. Our analyses and results show that the occurrence and values of facial AU1 and AU4, and Arousal are heightened when participants fail to use explanations effectively. This suggests that facial affect analysis should be incorporated into XAI to personalize explanations to individuals' interaction styles and to adapt explanations based on the difficulty of the task performed.
2021-12-20
Wen, Peisong, Xu, Qianqian, Jiang, Yangbangyan, Yang, Zhiyong, He, Yuan, Huang, Qingming.  2021.  Seeking the Shape of Sound: An Adaptive Framework for Learning Voice-Face Association. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :16342–16351.
Recently, we have witnessed early progress on learning the association between voice and face automatically, which brings a new wave of studies to the computer vision community. However, most prior art along this line (a) merely adopts local information to perform modality alignment and (b) ignores the diversity of learning difficulty across different subjects. In this paper, we propose a novel framework to jointly address these issues. Targeting (a), we propose a two-level modality alignment loss in which both global and local information are considered. Compared with existing methods, we introduce a global loss into the modality alignment process, whose global component is driven by identity classification. Theoretically, we show that minimizing the loss maximizes the distance between embeddings of different identities while minimizing the distance between embeddings belonging to the same identity, in a global sense (instead of within a mini-batch). Targeting (b), we propose a dynamic reweighting scheme to better explore hard but valuable identities while filtering out unlearnable ones. Experiments show that the proposed method outperforms previous methods in multiple settings, including voice-face matching, verification, and retrieval.
2021-11-30
Subramanian, Vinod, Pankajakshan, Arjun, Benetos, Emmanouil, Xu, Ning, McDonald, SKoT, Sandler, Mark.  2020.  A Study on the Transferability of Adversarial Attacks in Sound Event Classification. ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :301–305.
An adversarial attack is an algorithm that perturbs the input of a machine learning model in an intelligent way in order to change the output of the model. An important property of adversarial attacks is transferability: it is possible to generate adversarial perturbations on one model and apply them to the input of a different model to fool its output. Our work focuses on studying the transferability of adversarial attacks in sound event classification. We are able to demonstrate differences in transferability properties from those observed in computer vision. We show that dataset normalization techniques such as z-score normalization do not affect the transferability of adversarial attacks, and that techniques such as knowledge distillation do not increase it.
2021-10-12
Gouk, Henry, Hospedales, Timothy M..  2020.  Optimising Network Architectures for Provable Adversarial Robustness. 2020 Sensor Signal Processing for Defence Conference (SSPD). :1–5.
Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any Lp norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.
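For intuition, a generic (and looser) version of such a guarantee can be computed by multiplying induced weight-matrix norms, since a feedforward network with 1-Lipschitz activations is Lipschitz with constant at most that product. The sketch below shows this textbook bound for fully-connected layers; it is not the tighter bound the paper introduces.

```python
import torch
import torch.nn as nn

def lipschitz_upper_bound(model, p=2):
    """Product of induced weight norms: an upper bound on the Lipschitz
    constant of a feedforward net with 1-Lipschitz activations."""
    L = 1.0
    for m in model.modules():
        if isinstance(m, nn.Linear):
            L *= torch.linalg.matrix_norm(m.weight, ord=p).item()
    return L

net = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
print(lipschitz_upper_bound(net, p=2))             # spectral-norm product
print(lipschitz_upper_bound(net, p=float("inf")))  # max-row-sum product
# One standard margin certificate: with bound L, a logit margin larger
# than sqrt(2) * L * eps implies no L2 perturbation of norm eps can
# change the prediction.
```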
2021-09-07
Ahmed, Faruk, Mahmud, Md Sultan, Yeasin, Mohammed.  2020.  Assistive System for Navigating Complex Realistic Simulated World Using Reinforcement Learning. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Finding a free path without obstacles or situation that pose minimal risk is critical for safe navigation. People who are sighted and people who are blind or visually impaired require navigation safety while walking on a sidewalk. In this paper we develop assistive navigation on a sidewalk by integrating sensory inputs using reinforcement learning. We train the reinforcement model in a simulated robotic environment which is used to avoid sidewalk obstacles. A conversational agent is built by training with real conversation data. The reinforcement learning model along with a conversational agent improved the obstacle avoidance experience about 2.5% from the base case which is 78.75%.
2021-05-20
Maung Maung, April Pyone, Kiya, Hitoshi.  2020.  Encryption Inspired Adversarial Defense For Visual Classification. 2020 IEEE International Conference on Image Processing (ICIP). :1681–1685.
Conventional adversarial defenses reduce classification accuracy whether or not a model is under attack. Moreover, most image-processing-based defenses are defeated due to the problem of obfuscated gradients. In this paper, we propose a new adversarial defense, a defensive transform for both training and test images, inspired by perceptual image encryption methods. The proposed method utilizes block-wise pixel shuffling with a secret key. The experiments are carried out on both adaptive and non-adaptive maximum-norm-bounded white-box attacks while considering obfuscated gradients. The results show that the proposed defense achieves high accuracy (91.55%) on clean images and (89.66%) on adversarial examples with a noise distance of 8/255 on the CIFAR-10 dataset. Thus, the proposed defense outperforms state-of-the-art adversarial defenses including latent adversarial training, adversarial training, and thermometer encoding.
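The core transform is easy to sketch: shuffle the pixels inside each non-overlapping block with a permutation seeded by the secret key, applying the same transform to training and test images. The block size and the reuse of one permutation per block below are assumptions rather than the paper's exact settings.

```python
import numpy as np

def blockwise_shuffle(img, key, block=4):
    """Key-based defensive transform in the spirit of the paper:
    permute pixels inside each non-overlapping block using a
    secret-key-seeded permutation."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(block * block)   # same permutation per block
    h, w, c = img.shape
    out = img.copy()
    for i in range(0, h - h % block, block):
        for j in range(0, w - w % block, block):
            blk = out[i:i+block, j:j+block].reshape(-1, c)
            out[i:i+block, j:j+block] = blk[perm].reshape(block, block, c)
    return out

x = np.random.randint(0, 256, (32, 32, 3), np.uint8)  # CIFAR-10-sized image
x_t = blockwise_shuffle(x, key=1234)
# The classifier is trained and tested on shuffled inputs; an attacker
# without the key faces a mismatched input distribution.
```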
2021-05-13
Jain, Harsh, Vikram, Aditya, Mohana, Kashyap, Ankit, Jain, Ayush.  2020.  Weapon Detection using Artificial Intelligence and Deep Learning for Security Applications. 2020 International Conference on Electronics and Sustainable Communication Systems (ICESC). :193–198.
Security is always a main concern in every domain, due to rising crime rates in crowded events and suspicious, isolated areas. Anomaly detection and monitoring are major applications of computer vision for tackling such problems. Owing to the growing demand for the protection of safety, security, and personal property, video surveillance systems that can recognize and interpret scenes and anomalous events play a vital role in intelligent monitoring. This paper implements automatic gun (weapon) detection using convolutional neural network (CNN) based SSD and Faster R-CNN algorithms. The proposed implementation uses two types of datasets: one with pre-labelled images, and another set of images that were labelled manually. Results are tabulated; both algorithms achieve good accuracy, but their application in real situations depends on the trade-off between speed and accuracy.
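A hedged inference sketch of the detection side, using torchvision's off-the-shelf Faster R-CNN as a stand-in: for actual weapon detection the model would first be fine-tuned on the labelled gun datasets described above, since the stock COCO classes include no weapon category.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Stand-in detector; fine-tune on the gun datasets before real use.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

frame = to_tensor(Image.open("cctv_frame.jpg"))  # assumed input frame
with torch.no_grad():
    pred = model([frame])[0]                     # boxes, labels, scores

for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score > 0.8:                              # confidence threshold
        print(int(label), float(score), box.tolist())
```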
2021-03-09
Cui, W., Li, X., Huang, J., Wang, W., Wang, S., Chen, J..  2020.  Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation. 2020 IEEE International Conference on Image Processing (ICIP). :648–652.
Although deep convolutional neural networks (CNNs) perform well in many computer vision tasks, their classification mechanism is very vulnerable when exposed to the perturbations of adversarial attacks. In this paper, we propose a new algorithm that generates a substitute model for black-box CNN models using knowledge distillation. The proposed algorithm distills multiple CNN teacher models into a compact student model that serves as a substitute for other black-box CNN models to be attacked. Black-box adversarial samples can consequently be generated on this substitute model using various white-box attack methods. According to our experiments on ResNet18 and DenseNet121, our algorithm boosts the attack success rate (ASR) by 20% by training the substitute model based on knowledge distillation.
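The distillation step is the standard soft-label objective: the student matches temperature-softened teacher predictions, and the trained student then serves as a white-box substitute for crafting transferable adversarial samples. A minimal sketch, where the temperature and the teacher-averaging scheme are assumptions:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Match the student's softened predictions to the teacher's;
    the T*T factor keeps gradient magnitudes comparable across T."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Multiple teachers can be distilled by averaging their soft outputs:
# teacher_logits = torch.stack([t(x) for t in teachers]).mean(0)
```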
Bronzin, T., Prole, B., Stipić, A., Pap, K..  2020.  Individualization of Anonymous Identities Using Artificial Intelligence (AI). 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO). :1058–1063.
Individualization of anonymous identities using artificial intelligence enables innovative human-computer interaction through the personalization of communication which is, at the same time, individual and anonymous. This paper presents a possible approach for individualizing anonymous identities in real time. It uses computer vision and artificial intelligence to automatically detect and recognize a person's age group, gender, body measurements, proportions, and other specific personal characteristics. The collected data constitute the person's so-called biometric footprint and are linked to a unique (but still anonymous) identity recorded in the computer system, along with other information that makes up the person's profile. Identity anonymization can be achieved by appropriate asymmetric encryption of the biometric footprint (with no additional personal information being stored), and integrity can be ensured using blockchain technology. Data collected in this manner are GDPR compliant.
2021-03-01
Hynes, E., Flynn, R., Lee, B., Murray, N..  2020.  An Evaluation of Lower Facial Micro Expressions as an Implicit QoE Metric for an Augmented Reality Procedure Assistance Application. 2020 31st Irish Signals and Systems Conference (ISSC). :1–6.
Augmented reality (AR) has been identified as a key technology to enhance worker utility in the context of increasing automation of repeatable procedures. AR can achieve this by assisting the user in performing complex and frequently changing procedures. Crucial to the success of procedure assistance AR applications is user acceptability, which can be measured by user quality of experience (QoE). An active research topic in QoE is the identification of implicit metrics that can be used to continuously infer user QoE during a multimedia experience. A user's QoE is linked to their affective state. Affective state is reflected in facial expressions. Emotions shown in micro facial expressions resemble those expressed in normal expressions but are distinguished from them by their brief duration. The novelty of this work lies in the evaluation of micro facial expressions as a continuous QoE metric by means of correlation analysis to the more traditional and accepted post-experience self-reporting. In this work, an optimal Rubik's Cube solver AR application was used as a proof of concept for complex procedure assistance. This was compared with a paper-based procedure assistance control. QoE expressed by affect in normal and micro facial expressions was evaluated through correlation analysis with post-experience reports. The results show that the AR application yielded higher task success rates and shorter task durations. Micro facial expressions reflecting disgust correlated moderately to the questionnaire responses for instruction disinterest in the AR application.
2021-02-08
Wang Xiao, Mi Hong, Wang Wei.  2010.  Inner edge detection of PET bottle opening based on the Balloon Snake. 2010 2nd International Conference on Advanced Computer Control. 4:56–59.
Edge detection of the bottle opening is a primary stage of a machine-vision-based bottle opening detection system. This paper extracts the opening from PET (polyethylene terephthalate) bottle images sampled on rotating bottle-blowing machine production pipelines, taking advantage of the Balloon Snake. It first uses a grayscale-weighted average method to calculate the centroid as the initial position of the Snake, and then, based on energy minimization, extracts the opening. Experiments show that, compared with conventional edge detection and center location methods, the Balloon Snake is robust and can easily step over weak noise points. Edges extracted through the Balloon Snake are more complete and continuous, which provides a guarantee of correctly judging the opening.
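The centroid initialization mentioned in the abstract is straightforward: each pixel's grayscale intensity weights its coordinates. A small sketch, with the input image name assumed:

```python
import cv2
import numpy as np

def grayscale_weighted_centroid(gray):
    """Intensity-weighted average of pixel coordinates, used as the
    initial position of the Snake."""
    ys, xs = np.indices(gray.shape, dtype=np.float64)
    w = gray.astype(np.float64)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

gray = cv2.imread("pet_opening.png", cv2.IMREAD_GRAYSCALE)  # assumed sample
cx, cy = grayscale_weighted_centroid(gray)
# The Balloon Snake contour is then inflated outward from (cx, cy)
# until energy minimization locks it onto the inner edge of the opening.
```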
2021-02-01
Rathi, P., Adarsh, P., Kumar, M..  2020.  Deep Learning Approach for Arbitrary Image Style Fusion and Transformation using SANET model. 2020 4th International Conference on Trends in Electronics and Informatics (ICOEI)(48184). :1049–1057.
For real-time applications of arbitrary style transformation, there is a trade-off between the quality of the results and the running time of existing algorithms; the quality of the generated artwork must therefore be kept in equilibrium with the speed of execution. It is difficult for present arbitrary style-transformation procedures to preserve the structure of the content image while blending in the design and pattern of the style image. This paper presents the implementation of a network using SANET models for generating impressive artworks. It is flexible in fusing new style characteristics while sustaining the semantic structure of the content image. An identity-loss function helps to minimize the overall loss and conserves the spatial arrangement of the content. The results demonstrate that this method is practically efficient and can therefore be employed for real-time fusion and transformation using arbitrary styles.
2021-01-15
Matern, F., Riess, C., Stamminger, M..  2019.  Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations. 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). :83–92.
High-quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. As a consequence, we ask how difficult it is to expose artificial faces from current generators. To this end, we review current facial editing methods and several characteristic artifacts from their processing pipelines. We also show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easily explicable even to non-technical experts. The methods are easy to implement and offer capabilities for rapid adjustment to new manipulation types with little data available. Despite their simplicity, the methods are able to achieve AUC values of up to 0.866.