Biblio

Found 1458 results

Filters: Keyword is human factors
Zeid, R. B., Moubarak, J., Bassil, C..  2020.  Investigating The Darknet. 2020 International Wireless Communications and Mobile Computing (IWCMC). :727—732.

Cybercrime is growing dramatically in today's technological world. Criminals on the World Wide Web exploit the personal information of internet users and use it to their advantage. Unethical users leverage the dark web to buy and sell illegal products or services, and sometimes manage to gain access to classified government information. Illegal activities found on the dark web include the sale of hacking tools and stolen data, digital fraud, terrorist activity, drugs, weapons, and more. The aim of this project is to collect evidence of malicious activity on the dark web using computer security mechanisms known as honeypots, which act as traps for attackers.
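As an illustration of the trap mechanism the abstract describes, a low-interaction honeypot can be as simple as a listener that presents a fake service banner and records whatever a visitor sends. The sketch below is a minimal, hypothetical example (the fake SSH banner, port, and JSON record layout are our assumptions, not the authors' implementation):

```python
import datetime
import json
import socket

def log_record(peer_ip, peer_port, banner_sent, data):
    """Build a JSON evidence record for one connection attempt."""
    return json.dumps({
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "peer_ip": peer_ip,
        "peer_port": peer_port,
        "banner": banner_sent,
        "first_bytes": data.hex(),   # raw probe bytes, hex-encoded
    })

def serve(host="127.0.0.1", port=2222, banner=b"SSH-2.0-OpenSSH_7.4\r\n"):
    """Accept connections, present a fake SSH banner, record what arrives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, (ip, p) = srv.accept()
            with conn:
                conn.sendall(banner)          # pretend to be a real service
                data = conn.recv(1024)        # capture the attacker's probe
                print(log_record(ip, p, banner.decode().strip(), data))

# To run the trap: serve()  (blocks, printing one JSON record per connection)
```

Real deployments add timeouts, rotation of the captured logs, and multiple emulated services; this sketch only shows the evidence-collection shape of the idea.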

Pete, I., Hughes, J., Chua, Y. T., Bada, M..  2020.  A Social Network Analysis and Comparison of Six Dark Web Forums. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :484—493.

With increasing monitoring and regulation by platforms, communities with criminal interests are moving to the dark web, which hosts content ranging from whistle-blowing and privacy to drugs, terrorism, and hacking. Using post-discussion data from six dark web forums, we construct six interaction graphs and use social network analysis tools to study these underground communities. We examine the structure of each network to highlight structural patterns and identify nodes of importance through network centrality analysis. Our findings suggest that in the majority of the forums, some members are highly connected and form hubs, while most members have far fewer connections. When examining the posting activities of central nodes, we found that most of them post in sub-forums with broader topics, such as general discussions and tutorials. These members play different roles across forums, and within each forum we identified diverse user profiles.
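The hub-versus-periphery pattern the authors report can be made concrete with degree centrality, the simplest of the centrality measures used in social network analysis. A minimal sketch (the toy forum graph and member names are invented for illustration):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality: the fraction of other members each
    member interacts with, given an undirected interaction edge list."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# Toy forum: 'admin' replies across many threads, most members to one or two.
edges = [("admin", "u1"), ("admin", "u2"), ("admin", "u3"),
         ("admin", "u4"), ("u1", "u2"), ("u3", "u4")]
scores = degree_centrality(edges)
hub = max(scores, key=scores.get)   # the highly connected member ('admin')
```

The same edge lists feed richer measures (betweenness, eigenvector centrality) in libraries such as NetworkX; degree centrality alone already surfaces the hub structure described above.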

Ebrahimi, M., Samtani, S., Chai, Y., Chen, H..  2020.  Detecting Cyber Threats in Non-English Hacker Forums: An Adversarial Cross-Lingual Knowledge Transfer Approach. 2020 IEEE Security and Privacy Workshops (SPW). :20—26.

The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals closely examine the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts, and while machine translation (MT) is the prevailing approach to processing non-English text, applying MT to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Network (GAN) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language and leverages collective knowledge about cyber threats across languages to create a language-invariant representation, without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber threats in French and Russian forums.

Korolev, D., Frolov, A., Babalova, I..  2020.  Classification of Websites Based on the Content and Features of Sites in Onion Space. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1680—1683.

This paper describes a method for classifying onion sites. Based on the results of the research, a model of the most common type of site in onion space is built. To create this model, a specially trained neural network is used. The classification is based on five features: use of an authentication system, corporate email, a readable URL, feedback, and the type of onion site. Statistics on the most common types of websites on the dark net are given.

Kadoguchi, M., Kobayashi, H., Hayashi, S., Otsuka, A., Hashimoto, M..  2020.  Deep Self-Supervised Clustering of the Dark Web for Cyber Threat Intelligence. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1—6.

In recent years, cyberattack techniques have grown more sophisticated by the day. Even when defensive measures are taken, it is difficult to prevent cyberattacks completely; defenders can otherwise only fight reactively against cyber criminals. To address this situation, it is necessary to predict cyberattacks and take appropriate measures in advance, and the use of intelligence is what makes this possible. Many malicious hackers share information and tools that can be used for attacks on the dark web or in specific communities, so we assume that a great deal of intelligence, including this illegal content, exists in cyberspace. Using such threat intelligence, attacks can be detected in advance and active defenses developed. Today, however, this intelligence is extracted manually. To do this more efficiently, we apply machine learning to forum posts on the dark web with the aim of extracting posts containing threat information. We expect this will make it possible to detect threat information in cyberspace in a timely manner so that optimal preventive measures can be taken in advance.
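As a rough stand-in for the machine learning step the abstract describes (the authors do not specify their classifier here), a minimal bag-of-words Naive Bayes model illustrates how forum posts might be triaged into threat-relevant and benign classes; the training posts and labels below are invented:

```python
import math
from collections import Counter, defaultdict

def train_nb(posts):
    """posts: list of (text, label). Returns per-class word counts,
    class counts, and the vocabulary for a multinomial Naive Bayes model."""
    class_words, class_counts, vocab = defaultdict(Counter), Counter(), set()
    for text, label in posts:
        words = text.lower().split()
        class_words[label].update(words)
        class_counts[label] += 1
        vocab.update(words)
    return class_words, class_counts, vocab

def predict(text, class_words, class_counts, vocab):
    """Pick the class with the highest log-posterior (Laplace smoothing)."""
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total)
        denom = sum(class_words[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((class_words[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

posts = [
    ("selling fresh exploit for smb zero day", "threat"),
    ("new botnet loader ddos service for rent", "threat"),
    ("welcome thread introduce yourself here", "benign"),
    ("forum rules please read before posting", "benign"),
]
model = train_nb(posts)
label = predict("zero day exploit ddos for sale", *model)   # "threat"
```

In practice the feature set would be far richer (n-grams, embeddings) and the labeled corpus far larger, but the triage loop has this shape.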

Kobayashi, H., Kadoguchi, M., Hayashi, S., Otsuka, A., Hashimoto, M..  2020.  An Expert System for Classifying Harmful Content on the Dark Web. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1—6.

In this research, we examine and develop an expert system that automates crime category classification and threat level assessment using information collected by crawling the dark web. We constructed a bag of words from 250 posts on the dark web and developed an expert system that takes term frequencies as input and classifies sample posts into six criminal categories (drugs, stolen credit cards, passwords, counterfeit products, child pornography, and other) and three threat levels (high, middle, low). Contrary to prior expectations, our simple and explainable expert system performs competitively with existing systems. In short, our experiment with 1,500 posts on the dark web shows a 76.4% recall rate for six-category criminal classification and an 83% recall rate for three-level threat discrimination on 100 randomly sampled posts.
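The expert-system idea (term frequencies in, category and threat level out) can be sketched as a keyword-scoring rule set. The keyword lists and thresholds below are hypothetical stand-ins, covering only four of the paper's six categories:

```python
# Hypothetical keyword lists standing in for the paper's learned bag of words.
CATEGORY_TERMS = {
    "drugs":        {"mdma", "cocaine", "pills", "gram"},
    "stolen_cards": {"cvv", "fullz", "dumps", "visa"},
    "passwords":    {"combo", "credentials", "login", "leak"},
    "counterfeit":  {"replica", "counterfeit", "fake", "passport"},
}
HIGH_RISK = {"cvv", "fullz", "credentials", "passport"}  # illustrative only

def classify(post):
    """Return (category, threat_level) from keyword term frequencies:
    the category is the best-matching term list, the threat level is a
    simple rule on how many high-risk terms occur."""
    words = post.lower().split()
    scores = {cat: sum(words.count(t) for t in terms)
              for cat, terms in CATEGORY_TERMS.items()}
    category = max(scores, key=scores.get) if max(scores.values()) else "other"
    risk_hits = sum(words.count(t) for t in HIGH_RISK)
    level = "high" if risk_hits >= 2 else "middle" if risk_hits == 1 else "low"
    return category, level
```

The appeal of this style of system, as the abstract notes, is that every decision is traceable to specific matched terms, which is easy to explain and audit.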

Zhang, N., Ebrahimi, M., Li, W., Chen, H..  2020.  A Generative Adversarial Learning Framework for Breaking Text-Based CAPTCHA in the Dark Web. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1—6.

Cyber threat intelligence (CTI) necessitates automated monitoring of dark web platforms (e.g., Dark Net Markets and carding shops) on a large scale. While there are existing methods for collecting data from the surface web, large-scale dark web data collection is commonly hindered by anti-crawling measures. Text-based CAPTCHA, which requires the user to recognize a combination of hard-to-read characters, serves as the most prohibitive of these measures. Dark web CAPTCHA patterns are intentionally designed with additional background noise and variable character length to prevent automated CAPTCHA breaking. Existing CAPTCHA-breaking methods cannot remedy these challenges and are therefore not applicable to the dark web. In this study, we propose a novel framework for breaking text-based CAPTCHAs in the dark web. The proposed framework utilizes a Generative Adversarial Network (GAN) to counteract dark web-specific background noise and leverages an enhanced character segmentation algorithm. Our method was evaluated on both benchmark and dark web CAPTCHA testbeds and significantly outperformed the state-of-the-art baseline methods on all datasets, achieving a success rate of over 92.08% on the dark web testbeds. Our research enables the CTI community to develop advanced capabilities for large-scale dark web monitoring.

Park, W..  2020.  A Study on Analytical Visualization of Deep Web. 2020 22nd International Conference on Advanced Communication Technology (ICACT). :81—83.

Nowadays there is a flood of harmful content, such as nonconsensual nude photos and child pornography, and drugs are distributed through hidden channels. In particular, most such transactions are made through the deep web. “Deep Web refers to an encrypted network that is not detected on search engine like Google etc. Users must use Tor to visit sites on the dark web” [4]. In other words, the dark web relies on Tor's encryption client: users can visit sites on the dark web without knowing who operates them. In this paper, we propose a key idea based on the current status of such crimes, and a crime-information visualization system for the deep web has been developed. The status of the deep web is analyzed and the data are visualized using Java. We expect the program to support more efficient management and monitoring of crime on hidden networks such as the deep web, torrents, etc.

Liu, Y., Lin, F. Y., Ahmad-Post, Z., Ebrahimi, M., Zhang, N., Hu, J. L., Xin, J., Li, W., Chen, H..  2020.  Identifying, Collecting, and Monitoring Personally Identifiable Information: From the Dark Web to the Surface Web. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1—6.

Personally identifiable information (PII) has become a major target of cyber-attacks, causing severe losses to data breach victims. To protect data breach victims, researchers focus on collecting exposed PII to assess privacy risk and identify at-risk individuals. However, existing studies mostly rely on exposed PII collected from either the dark web or the surface web. Due to the wide exposure of PII on both the dark web and surface web, collecting from only the dark web or the surface web could result in an underestimation of privacy risk. Despite its research and practical value, jointly collecting PII from both sources is a non-trivial task. In this paper, we summarize our effort to systematically identify, collect, and monitor a total of 1,212,004,819 exposed PII records across both the dark web and surface web. Our effort resulted in 5.8 million stolen SSNs, 845,000 stolen credit/debit cards, and 1.2 billion stolen account credentials. From the surface web, we identified and collected over 1.3 million PII records of the victims whose PII is exposed on the dark web. To the best of our knowledge, this is the largest academic collection of exposed PII, which, if properly anonymized, enables various privacy research inquiries, including assessing privacy risk and identifying at-risk populations.

Brockschmidt, J., Shang, J., Wu, J..  2019.  On the Generality of Facial Forgery Detection. 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW). :43—47.
A variety of architectures have been designed or repurposed for the task of facial forgery detection. While many of these designs have seen great success, they largely fail to address challenges these models may face in practice. A major challenge is posed by generality, wherein models must be prepared to perform in a variety of domains. In this paper, we investigate the ability of state-of-the-art facial forgery detection architectures to generalize. We first propose two criteria for generality: reliably detecting multiple spoofing techniques and reliably detecting unseen spoofing techniques. We then devise experiments which measure how a given architecture performs against these criteria. Our analysis focuses on two state-of-the-art facial forgery detection architectures, MesoNet and XceptionNet, both being convolutional neural networks (CNNs). Our experiments use samples from six state-of-the-art facial forgery techniques: Deepfakes, Face2Face, FaceSwap, GANnotation, ICface, and X2Face. We find MesoNet and XceptionNet show potential to generalize to multiple spoofing techniques but with a slight trade-off in accuracy, and largely fail against unseen techniques. We loosely extrapolate these results to similar CNN architectures and emphasize the need for better architectures to meet the challenges of generality.
McCloskey, S., Albright, M..  2019.  Detecting GAN-Generated Imagery Using Saturation Cues. 2019 IEEE International Conference on Image Processing (ICIP). :4584—4588.
Image forensics is an increasingly relevant problem, as it can potentially address online disinformation campaigns and mitigate problematic aspects of social media. Of particular interest, given its recent successes, is the detection of imagery produced by Generative Adversarial Networks (GANs), e.g. `deepfakes'. Leveraging large training sets and extensive computing resources, recent GANs can be trained to generate synthetic imagery which is (in some ways) indistinguishable from real imagery. We analyze the structure of the generating network of a popular GAN implementation [1], and show that the network's treatment of exposure is markedly different from a real camera. We further show that this cue can be used to distinguish GAN-generated imagery from camera imagery, including effective discrimination between GAN imagery and real camera images used to train the GAN.
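The exposure cue can be illustrated with a simple statistic: camera images routinely contain clipped (fully saturated or fully dark) pixels, while generator outputs squashed through a bounded activation rarely hit the intensity rails exactly. A toy sketch of that feature (the tiny hand-made "images" and the comparison rule are ours, not the paper's pipeline):

```python
def clipped_fraction(pixels, lo=0, hi=255):
    """Fraction of pixel intensities at the extremes of the range.
    Real camera images often contain over/under-exposed (clipped) pixels;
    generator outputs passed through tanh rarely reach the rails exactly."""
    flat = [v for row in pixels for v in row]
    return sum(v <= lo or v >= hi for v in flat) / len(flat)

camera_like = [[255, 255, 130], [0, 12, 255]]   # blown highlights, deep shadow
gan_like    = [[201, 130, 94], [77, 160, 183]]  # no values at the rails
is_gan = clipped_fraction(gan_like) < clipped_fraction(camera_like)
```

A real detector would compute such saturation statistics over many images and learn a decision boundary rather than comparing a single pair.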
Yang, X., Li, Y., Lyu, S..  2019.  Exposing Deep Fakes Using Inconsistent Head Poses. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8261—8265.
In this paper, we propose a new method to expose AI-generated fake face images or videos (commonly known as Deep Fakes). Our method is based on the observation that Deep Fakes are created by splicing a synthesized face region into the original image, which introduces errors that can be revealed when 3D head poses are estimated from the face images. We perform experiments to demonstrate this phenomenon and develop a classification method based on this cue: using features derived from it, an SVM classifier is evaluated on a set of real face images and Deep Fakes.
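The head-pose cue can be sketched as follows: estimate (yaw, pitch, roll) once from all facial landmarks and once from the central region that face swaps replace, then measure the disagreement. The pose values and threshold below are invented for illustration; the paper estimates poses from landmarks and trains an SVM rather than thresholding:

```python
import math

def pose_difference(pose_whole, pose_center):
    """Euclidean distance between two (yaw, pitch, roll) estimates in
    degrees: one from all facial landmarks, one from the central
    (potentially spliced) region only."""
    return math.dist(pose_whole, pose_center)

def looks_spliced(pose_whole, pose_center, threshold=5.0):
    """Flag the frame when the two estimates disagree by more than
    `threshold` degrees; the threshold is illustrative only."""
    return pose_difference(pose_whole, pose_center) > threshold

real_frame = ((12.0, -3.0, 1.5), (11.2, -2.6, 1.8))   # estimates agree
fake_frame = ((12.0, -3.0, 1.5), (21.5, 4.0, -2.0))   # splice shifts the pose
```

The intuition: a swapped face region carries the donor's 3D orientation, so a pose fit to the center disagrees with a pose fit to the untouched outer landmarks.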
Matern, F., Riess, C., Stamminger, M..  2019.  Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations. 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). :83—92.
High-quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. As a consequence, we ask how difficult it is to expose artificial faces produced by current generators. To this end, we review current facial editing methods and several characteristic artifacts of their processing pipelines. We show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they are easily explained even to non-technical experts. They are easy to implement and allow rapid adjustment to new manipulation types with little data available. Despite their simplicity, the methods achieve AUC values of up to 0.866.
Akhtar, Z., Dasgupta, D..  2019.  A Comparative Evaluation of Local Feature Descriptors for DeepFakes Detection. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1—5.
The global proliferation of affordable photographing devices and readily available face image and video editing software has caused a remarkable rise in face manipulations, e.g., altering face skin color using FaceApp. Such synthetic manipulations are becoming a perilous problem, as altered faces not only can fool human experts but also have detrimental consequences for automated face identification systems (AFIS). Thus, it is vital to formulate techniques that improve the robustness of AFIS against digital face manipulations. The most prominent countermeasure is face manipulation detection, which aims at discriminating genuine samples from manipulated ones. Over the years, analysis of microtextural features using local image descriptors has been successfully applied in various applications owing to its flexibility, computational simplicity, and performance. In this paper, we therefore study the possibility of identifying manipulated faces via local feature descriptors, and report a comparative experimental investigation of ten local feature descriptors on the new, publicly available DeepfakeTIMIT database.
Bose, A. J., Aarabi, P..  2019.  Virtual Fakes: DeepFakes for Virtual Reality. 2019 IEEE 21st International Workshop on Multimedia Signal Processing (MMSP). :1—1.
The proliferation of data and computational resources has led to many advancements in computer vision for facial data, including easily replacing the face in one video with another, the so-called DeepFake. In this paper, we apply techniques to generate DeepFakes for virtual reality applications. We empirically validate our method by generating, for the first time, DeepFake videos in virtual reality.
Korshunov, P., Marcel, S..  2019.  Vulnerability assessment and detection of Deepfake videos. 2019 International Conference on Biometrics (ICB). :1—6.
It is becoming increasingly easy to automatically replace the face of one person in a video with the face of another by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help develop such methods, we present the first publicly available set of Deepfake videos, generated from videos of the VidTIMIT database. We used open-source GAN-based software to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulting videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We show that state-of-the-art face recognition systems based on the VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates on the high-quality versions, respectively, which means methods for detecting Deepfake videos are necessary. Considering several baseline approaches, we found that the best-performing method, based on visual quality metrics often used in the presentation attack detection domain, achieves an 8.97% equal error rate on high-quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and further development of face swapping technology will make them even more so.
Amerini, I., Galteri, L., Caldelli, R., Bimbo, A. Del.  2019.  Deepfake Video Detection through Optical Flow Based CNN. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :1205—1207.
Recent advances in visual media technology have led to new tools for processing and, above all, generating multimedia content. In particular, modern AI-based technologies provide easy-to-use tools to create extremely realistic manipulated videos. Such synthetic videos, named Deep Fakes, may constitute a serious threat, attacking the reputation of public figures or steering public opinion on a certain event. Accordingly, being able to identify this kind of fake information becomes fundamental. In this work, a new forensic technique able to discern between fake and original video sequences is given; unlike other state-of-the-art methods, which operate on single video frames, we propose adopting optical flow fields to exploit possible inter-frame dissimilarities. This clue is then used as a feature to be learned by CNN classifiers. Preliminary results obtained on the FaceForensics++ dataset highlight very promising performance.
Yadav, D., Salmani, S..  2019.  Deepfake: A Survey on Facial Forgery Technique Using Generative Adversarial Network. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). :852—857.
"Deepfake" is an emerging face-video forgery technique based on AI technology that is used to create fake videos. It takes images and video as sources and combines them into a new video using a generative adversarial network, and the output is very convincing. The technique can generate a spurious video of real people saying and doing things they never did by swapping the face of a person in the video. Deepfakes can create disputes in countries by influencing elections through defamation of politicians, and the technique is already being used to defame celebrities and high-profile politicians simply by swapping in someone else's face. Used unethically, this could lead to serious problems; for example, someone could take revenge on a person by swapping their face into a video and posting it to a social media platform. In this paper, the working of the Deepfake technique, along with how it can swap faces with maximum precision in a video, is presented. Further, different ways to identify whether a video was generated by Deepfake are explained, and its advantages and drawbacks are listed.
Kharbat, F. F., Elamsy, T., Mahmoud, A., Abdullah, R..  2019.  Image Feature Detectors for Deepfake Video Detection. 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA). :1—4.
Detecting DeepFake videos is one of the challenges in digital media forensics. This paper proposes a method to detect deepfake videos using a Support Vector Machine (SVM). The SVM classifier can be trained with feature points extracted using one of several feature-point detectors, such as the HOG, ORB, BRISK, KAZE, SURF, and FAST algorithms. A comprehensive test of the proposed method is conducted using a dataset of original and fake videos from the literature, and different feature-point detectors are tested. The results show that training the SVM on feature-detector descriptors can be used effectively to detect fake videos.
Đorđević, M., Milivojević, M., Gavrovska, A..  2019.  DeepFake Video Analysis using SIFT Features. 2019 27th Telecommunications Forum (TELFOR). :1—4.
Recent advances in changing faces using DeepFake algorithms, which replace the face of one person with the face of another, truly show what artificial intelligence and deep learning are capable of. Deepfakes in still images or video clips are forgeries: tampered visual information. They are becoming increasingly successful and in some cases difficult to notice. In this paper we analyze deepfakes using SIFT (Scale-Invariant Feature Transform) features. The experimental results show that SIFT keypoints can be a valuable tool in deepfake analysis.
Kumar, A., Bhavsar, A., Verma, R..  2020.  Detecting Deepfakes with Metric Learning. 2020 8th International Workshop on Biometrics and Forensics (IWBF). :1—6.
With the arrival of several face-swapping applications such as FaceApp, SnapChat, MixBooth, FaceBlender, and many more, the authenticity of digital media content is hanging by a very loose thread. On social media platforms, videos are widely circulated, often at a high compression factor. In this work, we analyze several deep learning approaches to deepfake classification in high-compression scenarios and demonstrate that a proposed approach based on metric learning can be very effective for such classification. Using fewer frames per video to assess its realism, the metric-learning approach using a triplet network architecture proves fruitful: it learns to enhance the feature-space distance between the clusters of real and fake video embedding vectors. We validated our approaches on two datasets to analyze their behavior in different environments, achieving a state-of-the-art AUC score of 99.2% on the Celeb-DF dataset and an accuracy of 90.71% on a highly compressed Neural Texture dataset. Our approach is especially helpful on social media platforms, where data compression is inevitable.
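The triplet objective at the heart of this metric-learning approach can be written in a few lines: it penalizes an anchor embedding that sits closer to a different-class (fake) embedding than to a same-class (real) one, up to a margin. A minimal sketch with invented 2-D embeddings:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on the distance gap: pull same-class embeddings together,
    push different-class embeddings at least `margin` apart."""
    return max(euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin, 0.0)

# Toy 2-D embeddings: anchor and positive come from real videos,
# negative from a fake video.
a, p, n = (0.0, 0.0), (0.1, 0.0), (2.0, 0.0)
loss = triplet_loss(a, p, n)          # already separated beyond the margin
bad_n = (0.3, 0.0)
loss_hard = triplet_loss(a, p, bad_n)  # nonzero: training would push bad_n away
```

In the paper's setting the embeddings are produced by a shared CNN over video frames, and this loss is what the triplet network minimizes during training.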
Li, Y., Yang, X., Sun, P., Qi, H., Lyu, S..  2020.  Celeb-DF: A Large-Scale Challenging Dataset for DeepFake Forensics. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :3204—3213.
AI-synthesized face-swapping videos, commonly known as DeepFakes, are an emerging problem threatening the trustworthiness of online information. The need to develop and evaluate DeepFake detection algorithms calls for datasets of DeepFake videos. However, current DeepFake datasets suffer from low visual quality and do not resemble the DeepFake videos circulated on the Internet. We present a new large-scale, challenging DeepFake video dataset, Celeb-DF, which contains 5,639 high-quality DeepFake videos of celebrities generated using an improved synthesis process. We conduct a comprehensive evaluation of DeepFake detection methods and datasets to demonstrate the escalated level of challenge posed by Celeb-DF.
Katarya, R., Lal, A..  2020.  A Study on Combating Emerging Threat of Deepfake Weaponization. 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :485—490.
A breakthrough in the emerging use of machine learning and deep learning is the concept of autoencoders and GANs (Generative Adversarial Networks), architectures that can generate believable synthetic content called deepfakes. The threat arises when these low-tech doctored images, videos, and audio blur the line between fake and genuine content and are used as weapons to cause damage to an unprecedented degree. This paper presents a survey of the underlying technology of deepfakes and the methods proposed for their detection. Based on a detailed study of the proposed detection models, the paper identifies SSTNet, which uses spatial, temporal, and steganalysis features, as the best model to date. The threat posed by document and signature forgery, which is yet to be explored by researchers, is also highlighted. The paper concludes with a discussion of research directions in this field and the development of more robust techniques to deal with the increasing threats surrounding deepfake technology.
Khalid, H., Woo, S. S..  2020.  OC-FakeDect: Classifying Deepfakes Using One-class Variational Autoencoder. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2794—2803.
An image forgery method called Deepfakes can cause security and privacy issues by changing the identity of a person in a photo through the replacement of his or her face with a computer-generated image or another person's face. This raises the new challenge of detecting Deepfakes to protect individuals from potential misuse. Many researchers have proposed binary-classification-based detection approaches; however, these generally require a large amount of both real and fake face images for training, and it is challenging to collect sufficient fake image data in advance. Moreover, when new deepfake generation methods are introduced, little deepfake data is available and detection performance may be mediocre. To overcome these data scarcity limitations, we formulate deepfake detection as a one-class anomaly detection problem. We propose OC-FakeDect, which uses a one-class Variational Autoencoder (VAE) trained only on real face images and detects non-real images, such as deepfakes, by treating them as anomalies. Our preliminary result shows that this one-class approach can be promising for detecting Deepfakes, achieving 97.5% accuracy on the NeuralTextures data of the well-known FaceForensics++ benchmark dataset without using any fake images in the training process.
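The one-class idea (train only on real faces, flag poor reconstructions as anomalies) can be illustrated without a neural network at all. The sketch below substitutes a mean-vector "reconstruction" and a distance threshold for the VAE's reconstruction error; the feature vectors are invented, and this is only an analogy to OC-FakeDect, not its architecture:

```python
import math

def fit_one_class(real_vectors):
    """Train on real-face features only: store the mean vector and a
    threshold derived from the largest training deviation (a crude
    stand-in for a VAE's reconstruction-error distribution)."""
    dim = len(real_vectors[0])
    mean = [sum(v[i] for v in real_vectors) / len(real_vectors)
            for i in range(dim)]
    errs = [math.dist(v, mean) for v in real_vectors]
    return mean, max(errs) * 1.1   # small slack above the training error

def is_fake(vector, mean, threshold):
    """Anything that 'reconstructs' poorly (sits far from the learned
    manifold of real faces) is treated as an anomaly, i.e. a suspected fake."""
    return math.dist(vector, mean) > threshold

real = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1), (1.1, 1.0)]
mean, thr = fit_one_class(real)
```

The key property carried over from the paper is that no fake samples are needed at training time; only deviation from the real-data model drives the decision.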
Gandhi, A., Jain, S..  2020.  Adversarial Perturbations Fool Deepfake Detectors. 2020 International Joint Conference on Neural Networks (IJCNN). :1—8.
This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors. We created adversarial perturbations using the Fast Gradient Sign Method and the Carlini and Wagner L2-norm attack in both black-box and white-box settings. Detectors achieved over 95% accuracy on unperturbed deepfakes, but less than 27% accuracy on perturbed deepfakes. We also explore two improvements to deepfake detectors: (i) Lipschitz regularization, and (ii) Deep Image Prior (DIP). Lipschitz regularization constrains the gradient of the detector with respect to the input in order to increase robustness to input perturbations. The DIP defense removes perturbations using generative convolutional neural networks in an unsupervised manner. Regularization improved the detection of perturbed deepfakes on average, including a 10% accuracy boost in the black-box case. The DIP defense achieved 95% accuracy on perturbed deepfakes that fooled the original detector, while retaining 98% accuracy in other cases on a 100-image subsample.
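The Fast Gradient Sign Method used here perturbs the input in the sign direction of the loss gradient. A minimal sketch against a hypothetical logistic-regression "detector" (the weights, features, and large step size are invented for illustration; real attacks target deep CNNs via backpropagation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method against a logistic 'detector'
    p(fake|x) = sigmoid(w.x + b). The cross-entropy loss gradient
    w.r.t. the input is (p - y) * w; step eps in its sign direction."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical 3-feature detector that currently flags x as fake (p > 0.5).
w, b = [2.0, -1.0, 0.5], 0.0
x, y = [1.0, 0.2, 0.4], 1          # true label: fake
x_adv = fgsm(x, y, w, b, eps=1.0)  # oversized step, for a visible flip

p_before = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
p_after  = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
```

After the step, the perturbed sample is misclassified as real (p_after falls below 0.5), which is exactly the failure mode the paper measures on its detectors, and which Lipschitz regularization tries to dampen by bounding this input gradient.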