Biblio

Filters: Keyword is dark web
Singh, Hoshiyar, Balamurgan, K M.  2022.  Implementation of Privacy and Security in the Wireless Networks. 2022 International Conference on Futuristic Technologies (INCOFT). :1–6.

The amount of information that is shared regularly has increased as a direct result of the rapid growth in the number of network administrators, Internet of Things devices, and online users. Cybercriminals constantly work to gain access to the data that is stored and transferred online, whether to sell it on the dark web or to commit another type of crime. A thorough literature review of the causes of and problems with wireless networks' security and privacy revealed a number of factors that can make these networks unpredictable, particularly the evolving skills of cybercriminals and the lack of significant effort by relevant bodies to combat them. Wireless networks have a built-in security flaw that renders them more vulnerable to attack than their wired counterparts. Additional problems arise in networks with node mobility and dynamic network topology, and intermittent connectivity, whether caused by mobility or periodic node sleep, poses further unanticipated problems. In addition, the limited resources of individual nodes make it difficult, if not impossible, to implement recently developed security measures. The Wireless Communication Network Security and Privacy research project examines large-scale problems that arise in relation to wireless networks and mobile computing. The security aspects considered include authentication, access control and authorization, non-repudiation, privacy and confidentiality, integrity, and auditing. Any product or service should be able to protect a client's personal information. The study follows a qualitative approach and uses a questionnaire as a research tool for IT and public-sector employees; this strategy reflects a higher level of precision among IT faculties.

Nikoletos, Sotirios, Raftopoulou, Paraskevi.  2022.  Employing social network analysis to dark web communities. 2022 IEEE International Conference on Cyber Security and Resilience (CSR). :311–316.

The deep web refers to sites that cannot be found by search engines and makes up 96% of the digital world. The dark web is the part of the deep web that can only be accessed through specialised tools and anonymity networks. To avoid monitoring and control, communities that seek anonymity are moving to the dark web. In this work, we scrape five dark web forums and construct five graphs to model user connections. These networks are then studied and compared using data mining techniques and social network analysis tools; for each community we identify the key actors, study the social connections and interactions, observe the small-world effect, and highlight the type of discussions among the users. Our results indicate that only a small subset of users is influential, while the rapid dissemination of information and resources between users may affect behaviours and shape ideas for future members.
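The graph construction and small-world measurement described in this abstract can be sketched in a few lines. The pure-Python toy below uses a fabricated edge list (not the paper's forum data): it builds an undirected user-interaction graph and computes the average shortest-path length via breadth-first search, the quantity that stays small relative to network size in small-world networks.

```python
from collections import deque

# Hypothetical interaction graph: an edge means two forum users replied to each other.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
         ("d", "e"), ("e", "f"), ("c", "f")]
graph = {}
for u, v in edges:
    graph.setdefault(u, set()).add(v)
    graph.setdefault(v, set()).add(u)

def shortest_path_len(graph, src, dst):
    """Hop count between two users, via breadth-first search."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nxt in graph[node] - seen:
            seen.add(nxt)
            frontier.append((nxt, dist + 1))
    return None

# Average shortest path over all user pairs; small values hint at a small world.
nodes = sorted(graph)
pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
avg_path = sum(shortest_path_len(graph, u, v) for u, v in pairs) / len(pairs)
```

On this six-user toy graph the average distance is 1.6 hops, far below the number of users.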

Liang, Dingyang, Sun, Jianing, Zhang, Yizhi, Yan, Jun.  2022.  Lightweight Neural Network-based Web Fingerprinting Model. 2022 International Conference on Networking and Network Applications (NaNA). :29–34.

Onion Routing is an encrypted communication system developed by the U.S. Naval Research Laboratory that uses existing Internet equipment to communicate anonymously. Miscreants use this means to conduct illegal transactions on the dark web, posing a security risk to citizens and the country. Website fingerprinting methods have been used in existing studies to attack this means of anonymous communication, but these methods often have high overhead and need to run on high-performance devices, which makes them inflexible. In this paper, we propose a lightweight method to address the high-overhead problem that deep-learning website fingerprinting methods generally have, so that the method can be applied on common devices while still ensuring a certain level of accuracy. The proposed method draws on the structure of Inception net: it divides the original larger convolutional kernels into smaller ones and uses group convolution to reduce the model size and computation without causing too much negative impact on accuracy. The method was evaluated on the dataset collected by Rimmer et al. to verify its effectiveness.
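The parameter saving from splitting one large kernel into several smaller grouped kernels can be illustrated with simple arithmetic. The channel counts and kernel sizes below are illustrative only, not the paper's actual configuration.

```python
def conv1d_params(c_in, c_out, k, groups=1):
    """Weights (c_out x c_in/groups x k) plus one bias per output channel."""
    assert c_in % groups == 0
    return c_out * (c_in // groups) * k + c_out

# One large kernel versus three smaller kernels using group convolution.
large = conv1d_params(64, 64, 31)
split = 3 * conv1d_params(64, 64, 11, groups=8)
reduction = 1 - split / large
```

Even stacking three of the smaller grouped layers uses well under a fifth of the parameters of the single large layer, which is the kind of saving that makes the model fit on common devices.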

Sharad Sonawane, Hritesh, Deshmukh, Sanika, Joy, Vinay, Hadsul, Dhanashree.  2022.  Torsion: Web Reconnaissance using Open Source Intelligence. 2022 2nd International Conference on Intelligent Technologies (CONIT). :1–4.

Internet technology has made surveillance widespread and access to resources easier than ever before. This boon has countless advantages, yet it makes protecting privacy more challenging for the greater masses while supplying anonymity to the few hacktivists. The ever-increasing frequency and scale of cyber-attacks have not only crippled private organizations but have also left Law Enforcement Agencies (LEAs) in a fix: data depict a surge in cases relating to cyber-bullying and ransomware attacks, while the force lacks adequate manpower to tackle such cases on a more microscopic level. What is needed is a tool, an automated assistant, that helps security officers cut down the precious time needed in the very first phase of an investigation: reconnaissance. Confronting the surface web along with the deep and dark web is not only a tedious job but one that also requires documenting the digital footprint of the perpetrator and identifying any Indicators of Compromise (IoCs). TORSION automates web reconnaissance using the Open Source Intelligence paradigm: it extracts metadata from popular indexed social sites and from un-indexed dark web onion sites, provided it has some related intel on the target. TORSION's workflow allows account matching across various top indexed sites, generates a dossier on the target, and exports the collected metadata to a PDF file which can later be referenced.

Labrador, Víctor, Pastrana, Sergio.  2022.  Examining the trends and operations of modern Dark-Web marketplaces. 2022 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :163–172.

Currently, the Dark Web is one key platform for the online trading of illegal products and services. Analysing the .onion sites hosting marketplaces is of interest for law enforcement and security researchers. This paper presents a study of 123k listings obtained from 6 different Dark Web markets. While most current works leverage existing datasets, these are outdated and might not contain new products, e.g., those related to the 2020 COVID pandemic. Thus, we build a custom focused crawler to collect the data. Being able to conduct analyses on current data is of considerable importance, as these marketplaces continue to change and grow both in terms of products offered and users. Also, several anti-crawling mechanisms are being improved, making this task more difficult and, consequently, reducing the amount of data obtained from these marketplaces in recent years. We conduct a data analysis evaluating multiple characteristics of the products, sellers, and markets, including, among others, the number of sales, the existing categories in the markets, and the origin of the products and the sellers. Our study sheds light on the products and services being offered in these markets nowadays. Moreover, we have conducted a case study on one particularly productive and dynamic drug market, Cannazon. Our initial goal was to understand its evolution over time, analyzing the variation of products in stock and their prices longitudinally. We realized, though, that during the period of study the market suffered a DDoS attack which damaged its reputation and affected users' trust in it, a likely reason for the subsequent closure of the market by its operators. Consequently, our study provides insights into the last days of operation of such a productive market, and showcases the effectiveness of a potential intervention approach based on disrupting the service and fostering mistrust.

Abdellatif, Tamer Mohamed, Said, Raed A., Ghazal, Taher M.  2022.  Understanding Dark Web: A Systematic Literature Review. 2022 International Conference on Cyber Resilience (ICCR). :1–10.

Web evolution and Web 2.0 social media tools facilitate communication and support the online economy. On the other hand, these tools are actively used by extremist, terrorist and criminal groups. These malicious groups use new communication channels, such as forums, blogs and social networks, to spread their ideologies, recruit new members, market their malicious goods and raise funds, relying on the anonymous communication methods provided by the new Web. This malicious part of the web is called the “dark web”. Dark web analysis has become an active research area in the last few decades, and multiple studies have been conducted in order to understand our enemy and plan countermeasures. We have conducted a systematic literature review to identify the state of the art and open research areas in dark web analysis. We filtered the available research papers in order to obtain the most relevant work; this filtration yielded 28 studies out of 370. Our systematic review is based on four main factors: the research trends used to analyze the dark web, the employed analysis techniques, the analyzed artifacts, and the accuracy and confidence of the available work. Our review shows that most dark web research relies on content analysis and that forum threads are the most analyzed artifacts. The most significant observation is the lack of accuracy metrics or validation techniques in most of the relevant studies. As a result, researchers are advised to use acceptance metrics and validation techniques in their future work in order to guarantee the confidence of their results. In addition, our review has identified some open research areas in dark web analysis which can be considered for future research.

Al-Omari, Ahmad, Allhusen, Andrew, Wahbeh, Abdullah, Al-Ramahi, Mohammad, Alsmadi, Izzat.  2022.  Dark Web Analytics: A Comparative Study of Feature Selection and Prediction Algorithms. 2022 International Conference on Intelligent Data Science Technologies and Applications (IDSTA). :170–175.

The value and volume of information exchanged through dark-web pages are remarkable. Recently, many studies have shown the value of and interest in using machine-learning methods to extract security-related knowledge from those dark-web pages. Within this scope, our goal in this research is to evaluate the best prediction models while analyzing traffic-level data coming from the dark web. Results and analysis showed that feature selection played an important role in identifying the best models: sometimes the right combination of features would increase a model's accuracy. For some feature-set and classifier combinations, the Src Port and Dst Port both proved to be important features. When available, they were always selected over most other features; when absent, many other features were selected to compensate for the information they provided. The Protocol feature was never selected, regardless of whether Src Port and Dst Port were available.
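A feature-selection pass of the kind described, scoring each traffic feature by its mutual information with the label, can be sketched in pure Python. The flow records below are fabricated for illustration; the paper's dataset and selection algorithm may differ.

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Empirical I(X;Y) in bits over paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Fabricated flow records: (src port, dst port, protocol), label 1 = dark-web traffic.
flows = [(443, 9001, "tcp", 1), (443, 9001, "tcp", 1), (80, 53, "udp", 0),
         (443, 80, "tcp", 0), (443, 9030, "tcp", 1), (80, 25, "tcp", 0)]
labels = [f[3] for f in flows]
scores = {name: mutual_information([f[i] for f in flows], labels)
          for i, name in enumerate(["SrcPort", "DstPort", "Protocol"])}
best = max(scores, key=scores.get)
```

On this toy sample the destination port carries the full bit of label information while the protocol carries far less, mirroring the paper's observation that the port features dominate and Protocol is never selected.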

Dalvi, Ashwini, Patil, Gunjan, Bhirud, S G.  2022.  Dark Web Marketplace Monitoring - The Emerging Business Trend of Cybersecurity. 2022 International Conference on Trends in Quantum Computing and Emerging Business Technologies (TQCEBT). :1–6.

Cyber threat intelligence (CTI) is vital for enabling effective cybersecurity decisions by providing timely, relevant, and actionable information about emerging threats. Monitoring the dark web to generate CTI is one of the upcoming trends in cybersecurity. As a result, developing CTI capabilities through dark web investigation is a significant focus for cybersecurity companies like Deepwatch, DarkOwl, SixGill, ThreatConnect, CyLance, ZeroFox, and many others. In addition, dark web marketplace (DWM) monitoring tools are of much interest to law enforcement agencies (LEAs). The fact that darknet market participants operate anonymously and online transactions are pseudo-anonymous makes it challenging to identify and investigate them, so keeping up with the DWMs poses significant challenges for LEAs today. Nevertheless, the offerings on the DWMs give LEAs insights into the dark web economy. The present work is one such attempt to describe and analyze dark web market data collected for CTI using a dark web crawler. After processing and labeling, the authors obtained 53 DWMs with their product listings and pricing.

Dalvi, Ashwini, Bhoir, Soham, Siddavatam, Irfan, Bhirud, S G.  2022.  Dark Web Image Classification Using Quantum Convolutional Neural Network. 2022 International Conference on Trends in Quantum Computing and Emerging Business Technologies (TQCEBT). :1–5.

Researchers have investigated the dark web for various purposes and with various approaches. Most dark web data investigations have focused on analysing text collected from HTML pages of websites hosted on the dark web. In addition, researchers have documented work on dark web image analysis in specific domains, such as identifying and analyzing Child Sexual Abuse Material (CSAM) on the dark web. However, image data from dark web marketplace postings and forums could also be helpful in the forensic analysis of a dark web investigation. The presented work attempts to conduct image classification on classes other than CSAM. Manually scanning thousands of dark web websites for visual evidence of criminal activity is time- and resource-intensive, so the proposed work uses quantum computing to classify the images with a Quantum Convolutional Neural Network (QCNN). The authors classified dark web images into four categories: alcohol, drugs, devices, and cards. The dataset used in this work consists of around 1,242 images, combining an open-source dataset with data collected by the authors. The paper discusses the implementation of the QCNN and reports related performance measures.

Sweigert, Devin, Chowdhury, Md Minhaz, Rifat, Nafiz.  2022.  Exploit Security Vulnerabilities by Penetration Testing. 2022 IEEE International Conference on Electro Information Technology (eIT). :527–532.

When we set up a computer network, we need to know whether an attacker can get into the system, which requires a series of tests that expose the vulnerabilities of the network setup. This series of tests is commonly known as a penetration test. The need for penetration testing was not always well recognized. This paper highlights how penetration testing started and how it became as popular as it is today; the growth of the internet played a big part in pushing the idea of penetration testing forward. The styles of penetration testing can vary from physical to network or virtual based testing, each of which can help a company become more secure. This paper presents the steps of penetration testing that a company or organization needs to carry out to find its own security flaws.

Shams, Montasir, Pavia, Sophie, Khan, Rituparna, Pyayt, Anna, Gubanov, Michael.  2021.  Towards Unveiling Dark Web Structured Data. 2021 IEEE International Conference on Big Data (Big Data). :5275–5282.

Anecdotal evidence suggests that Web-search engines, together with Knowledge Graphs and Bases such as YAGO [46], DBPedia [13], Freebase [16], and the Google Knowledge Graph [52], provide rapid access to most structured information on the Web. However, a closer look reveals a so-called "knowledge gap" [18] that is largely in the dark. For example, a person searching for a relevant job opening has to spend at least 3 hours per week for several months [2] just searching job postings on numerous online job-search engines and employer websites. The reason this seemingly simple task cannot be completed by typing a few keyword queries into a search engine and getting all relevant results in seconds instead of hours is that access to structured data on the Web is still rudimentary. While searching for a job we have many parameters in mind: not just the job title, but usually also location, salary range, a remote-work option given the recent shift to hybrid workplaces, and many others. Ideally, we would like to write a SQL-style query selecting all job postings satisfying our requirements, but this is currently impossible, because job postings (and all other Web tables) are structured in many different ways and scattered all over the Web. There is neither a Web-scale generalizable algorithm nor a system to locate and normalize all relevant tables in a category of interest from millions of sources. Here we describe, and evaluate on a corpus of hundreds of millions of Web tables [39], a new scalable iterative training-data generation algorithm that produces the high-quality training data required to train Deep- and Machine-learning models capable of generalizing to Web scale. Models trained on such enriched training data efficiently deal with Web-scale heterogeneity, in contrast to the poor generalization performance of models trained without enrichment [20], [25], [38]. Such models are instrumental in bridging the knowledge gap for structured data on the Web.

Evangelatos, Pavlos, Iliou, Christos, Mavropoulos, Thanassis, Apostolou, Konstantinos, Tsikrika, Theodora, Vrochidis, Stefanos, Kompatsiaris, Ioannis.  2021.  Named Entity Recognition in Cyber Threat Intelligence Using Transformer-based Models. 2021 IEEE International Conference on Cyber Security and Resilience (CSR). :348–353.

The continuous increase in the sophistication of threat actors over the years has made the use of actionable threat intelligence a critical part of the defence against them. Such Cyber Threat Intelligence is published daily on several online sources, including vulnerability databases, CERT feeds, and social media, as well as on forums and web pages from the Surface and the Dark Web. Named Entity Recognition (NER) techniques can be used to extract this information in an actionable form from such sources. In this paper we investigate how the latest advances in the NER domain, and in particular transformer-based models, can facilitate this process. To this end, we use the dataset for NER in Threat Intelligence (DNRTI), which contains more than 300 threat intelligence reports from open-source threat intelligence websites. Our experimental results demonstrate that transformer-based techniques are very effective in extracting cybersecurity-related named entities, considerably outperforming the previous state-of-the-art approaches tested with DNRTI.

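Transformer models are beyond a short sketch, but the underlying extraction task can be illustrated with a regex baseline that pulls two common entity types, IP addresses and SHA-256 hashes, out of a report snippet. The patterns and the text are illustrative only and are not drawn from DNRTI.

```python
import re

IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[0-9a-f]{64}\b"),
}

def extract_iocs(text):
    """Return every match of each pattern, keyed by entity type."""
    return {name: pat.findall(text) for name, pat in IOC_PATTERNS.items()}

# Fabricated report snippet.
report = ("The dropper beacons to 185.220.101.34 and drops a payload whose SHA-256 is "
          "9f2b5c0d8e3a41b6c7d8e9f0a1b2c3d4e5f60718293a4b5c6d7e8f9012345678.")
found = extract_iocs(report)
```

A learned NER model replaces these brittle patterns with contextual tagging, which is exactly what the transformer-based approaches in the paper evaluate.
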
K M, Akshobhya.  2021.  Machine learning for anonymous traffic detection and classification. 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence). :942–947.

Anonymity is one of the biggest concerns in web security and traffic management. Though web users are rightly concerned about privacy and security, the various methods adopted to browse anonymously also make the web more vulnerable. Browsing the web anonymously not only threatens integrity but also raises questions about the motive of such activity. It is important to classify network traffic and prevent source and destination from hiding from each other unless the activity is benign. This paper proposes methods to classify dark web traffic at different levels or hierarchies, along with various preprocessing techniques for feature selection and dimensionality reduction. The Anon17 dataset is used for training and testing the model. Three levels of classification are proposed, based on the network, the traffic type, and the application.

Ma, Haoyu, Cao, Jianqiu, Mi, Bo, Huang, Darong, Liu, Yang, Zhang, Zhenyuan.  2021.  Dark web traffic detection method based on deep learning. 2021 IEEE 10th Data Driven Control and Learning Systems Conference (DDCLS). :842–847.

Network traffic detection is closely related to network security and is a hot research topic. With the development of encryption technology, traffic detection has become more and more difficult, and many crimes occur on the dark web, so how to detect dark web traffic is the subject of this study. In this paper, we propose a dark web traffic (Tor traffic) detection scheme based on deep learning and conduct experiments on public datasets. Analysis of the experimental results shows that our detection precision reached 95.47%.

Li, Junyan.  2021.  Threats and data trading detection methods in the dark web. 2021 6th International Conference on Innovative Technology in Intelligent System and Industrial Applications (CITISIA). :1–9.

The dark web has become a major trading platform for cybercriminals; its anonymity and encrypted content make it possible to exchange hacked information and sell illegal goods without being traced. The types of items traded on the dark web have increased with the number of users and their demands. In recent years, in addition to the main items sold in the past, including drugs, firearms and child pornography, a growing number of cybercriminals are targeting various types of private information, including different types of account data, identity information and visual data. This paper further discusses the issue of threat detection on the dark web by reviewing the past literature on the subject. An approach is also proposed to identify criminals who commit crimes offline or on the surface network using private information purchased from the dark web, and to trace the original sources of that information, by building a database of historical victim records for keyword matching and traffic analysis.

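The keyword matching against historical victim records proposed here might look like the following sketch. The records and the listing text are entirely fabricated, and the paper does not specify a database schema.

```python
# Fabricated historical victim records: identifier -> case file.
victim_records = {
    "jdoe@example.com": "case-1041",
    "4532771022849911": "case-0977",
}

def match_listing(listing_text):
    """Return (identifier, case id) pairs for identifiers found in a dark-web listing."""
    text = listing_text.lower()
    return [(ident, case) for ident, case in victim_records.items()
            if ident.lower() in text]

# A fabricated dark-web listing containing one known victim identifier.
listing = "Fresh dump for sale, incl. JDoe@example.com with matching CVV"
hits = match_listing(listing)
```

A production system would normalise identifiers (hash them, strip separators from card numbers) and index them for scale, but the matching idea is the same.
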
Dalvi, Ashwini, Siddavatam, Irfan, Thakkar, Viraj, Jain, Apoorva, Kazi, Faruk, Bhirud, Sunil.  2021.  Link Harvesting on the Dark Web. 2021 IEEE Bombay Section Signature Conference (IBSSC). :1–5.

In this information age, web crawling is a prime source for data collection on the internet. With the surface web already dominated by giants like Google and Microsoft, much attention has turned to the Dark Web. While research on crawling approaches is generally available, a considerable gap exists for URL extraction on the dark web. Most literature uses the regular-expression methodology or built-in parsers; the problem with these methods is the higher number of false positives generated on the Dark Web, which makes the crawler less efficient. This paper proposes a dedicated-parsers methodology for extracting URLs from the dark web which, in comparison, proves to be better than the regular-expression methodology. Factors that make link harvesting on the Dark Web a challenge are also discussed.

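The gap between the two methodologies can be seen in a small sketch: a bare regular expression happily matches a v3-style onion address mentioned in plain text, while a dedicated parser only harvests addresses that actually appear in href attributes. The addresses below are made up and almost certainly not live services.

```python
import re
from html.parser import HTMLParser

LINKED = "a" * 40 + "234567" + "b" * 10   # 56 base32 chars, a v3-style address
MENTIONED = "c" * 50 + "234567"           # same length, but not a hyperlink
page = (f'<html><body><a href="http://{LINKED}.onion/">market</a>'
        f'<p>mirror rumoured at {MENTIONED}.onion, unconfirmed</p></body></html>')

class OnionLinkParser(HTMLParser):
    """Dedicated parser: keep only href values that are well-formed onion URLs."""
    onion_url = re.compile(r"^https?://[a-z2-7]{56}\.onion(/.*)?$")
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and self.onion_url.match(value):
                    self.links.append(value)

naive = re.findall(r"[a-z2-7]{56}\.onion", page)  # regex over the raw page text
parser = OnionLinkParser()
parser.feed(page)
```

The regex harvests two candidates, one of which is only a textual mention (a false positive for a crawler that wants followable links); the dedicated parser harvests exactly the one real hyperlink.
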
Furumoto, Keisuke, Umizaki, Mitsuhiro, Fujita, Akira, Nagata, Takahiko, Takahashi, Takeshi, Inoue, Daisuke.  2021.  Extracting Threat Intelligence Related IoT Botnet From Latest Dark Web Data Collection. 2021 IEEE International Conferences on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics). :138–145.

Because it is easy to ensure the confidentiality of users on the Dark Web, malware and exploit kits are sold on its markets, and attack methods are discussed in its forums. Some services provide IoT botnets to perform distributed denial-of-service attacks (DDoS as a Service: DaaS), and it is speculated that such services are purchased on the Dark Web. By crawling such information and storing it in a database, threat intelligence can be obtained that cannot otherwise be obtained from information on the Surface Web. However, crawling sites on the Dark Web presents technical challenges. For this paper, we implemented a crawler that solves these challenges and used it to collect information on Dark Web markets and forums. Results confirmed that the dataset collected by crawling contains threat intelligence that is useful for analyzing cyber attacks, particularly those related to IoT botnets and DaaS. Moreover, by uncovering relationships with security reports, we demonstrated that data collected from the Dark Web can provide more extensive threat intelligence than information collected only from the Surface Web.

Nair, Viswajit Vinod, van Staalduinen, Mark, Oosterman, Dion T.  2021.  Template Clustering for the Foundational Analysis of the Dark Web. 2021 IEEE International Conference on Big Data (Big Data). :2542–2549.

The rapid rise of the Dark Web and its supporting technologies has served as the backbone facilitating online illegal activity worldwide. Illegal activities supported by anonymisation technologies such as Tor have become increasingly elusive to law enforcement agencies, and despite several successful law enforcement operations, illegal activity on the Dark Web is still growing. There are approaches to monitor, mine, and research the Dark Web, all with varying degrees of success. Given the complexity and dynamics of the services offered, we recognize the need for in-depth analysis of the Dark Web with regard to its infrastructures, actors, types of abuse and their relationships. This involves the challenging task of information extraction from the very heterogeneous collection of web pages that make up the Dark Web. Most providers deploy their services on top of standard frameworks such as WordPress, Simple Machines Forum, and phpBB, and as a result publish a significant number of pages based on similar structural and stylistic templates. We propose an efficient, scalable, repeatable and accurate approach to cluster Dark Web pages based on those structural and stylistic features. Extracting relevant information from those clusters should make it feasible to conduct in-depth Dark Web analysis. This paper presents our clustering algorithm to accelerate information extraction and, as a result, improve the attribution of digital traces to infrastructures or individuals in the fight against cybercrime.

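A minimal version of such structural clustering might use tag frequencies from Python's built-in HTMLParser as the stylistic fingerprint and cosine similarity to group pages. This is a toy stand-in for the paper's actual feature set and clustering algorithm, with made-up page snippets.

```python
from collections import Counter
from html.parser import HTMLParser
from math import sqrt

class TagProfile(HTMLParser):
    """Count each opening tag as a crude structural fingerprint of a page."""
    def __init__(self):
        super().__init__()
        self.tags = Counter()
    def handle_starttag(self, tag, attrs):
        self.tags[tag] += 1

def profile(html):
    p = TagProfile()
    p.feed(html)
    return p.tags

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two listings from the same (made-up) template, and one structurally different page.
page_a = "<div><ul><li>listing one</li><li>price</li></ul></div>"
page_b = "<div><ul><li>listing two</li><li>fee</li></ul></div>"
page_c = "<table><tr><td>forum post</td></tr></table>"
same = cosine(profile(page_a), profile(page_b))
diff = cosine(profile(page_a), profile(page_c))
```

Pages generated from the same template score near 1 regardless of their text content, while pages from a different framework score near 0, which is what lets clustering proceed without reading the content at all.
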
Mahor, Vinod, Rawat, Romil, Kumar, Anil, Chouhan, Mukesh, Shaw, Rabindra Nath, Ghosh, Ankush.  2021.  Cyber Warfare Threat Categorization on CPS by Dark Web Terrorist. 2021 IEEE 4th International Conference on Computing, Power and Communication Technologies (GUCON). :1–6.

The Industrial Internet of Things (IIoT), also referred to as Cyber-Physical Systems (CPS), comprises critical elements expected to play a key role in Industry 4.0 that have always been vulnerable to cyber-attacks. Terrorists use cyber vulnerabilities as weapons for mass destruction, and the dark web's strong anonymity and hard-to-track systems offer a safe haven for criminal activity. A wide variety of illicit material is posted regularly on the dark web (DW). Traditional DW categorization uses large-scale web pages for supervised training, but new study is hampered by the difficulty of gathering sufficient illicit DW material and the time spent manually tagging web pages. We propose a system for accurately classifying criminal activity on the DW. Rather than depending on a vast DW training corpus, we used authoritative descriptions of various types of illicit activity to train Machine Learning (ML) classifiers and obtained appreciable categorization results. Espionage, sabotage, attacks on the electrical power grid, propaganda and economic disruption are the cyber-warfare motivations considered; we chose appropriate data from open-source links for supervised learning and ran a categorization experiment on illicit material obtained from the actual DW. The results show that, in the experimental setting, using TF-IDF feature extraction and an AdaBoost classifier, we were able to achieve an accuracy of 0.942. Our method enables researchers and authorities to verify whether their DW corpus includes such illicit activity, depending on the applicable rules of the illicit categories they are interested in, allowing them to identify and track possible illicit websites in real time. Because a broad training set and expert-supplied seed keywords are not required, this categorization approach offers another option for defining illicit activities on the DW.

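The TF-IDF step can be sketched in pure Python with whitespace tokenisation, raw term counts, and log inverse document frequency. The AdaBoost classifier and the real corpus are omitted, and the posts below are fabricated.

```python
from collections import Counter
from math import log

def tfidf_vectors(docs):
    """One {term: tf * idf} vector per document, idf = log(N / document frequency)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    vectors = []
    for d in docs:
        tf = Counter(d.split())
        vectors.append({t: tf[t] * log(n / df[t]) for t in tf})
    return vectors

# Fabricated forum posts.
posts = ["the botnet rental includes ddos panel",
         "the stolen grid schematics for sale",
         "the weather is mild today"]
vectors = tfidf_vectors(posts)
```

Terms appearing in every post (here "the") receive zero weight, while category-discriminating terms such as "botnet" receive positive weight, which is what makes the downstream classifier work.
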
Dalvi, Ashwini, Ankamwar, Lukesh, Sargar, Omkar, Kazi, Faruk, Bhirud, S.G.  2021.  From Hidden Wiki 2020 to Hidden Wiki 2021: What Dark Web Researchers Comprehend with Tor Directory Services? 2021 5th International Conference on Information Systems and Computer Networks (ISCON). :1–4.

The dark web searching mechanism is unlike surface web searching. On the popular Tor dark web, search is often directed by directory-like services such as the Hidden Wiki. Numerous dark web data collection mechanisms are discussed and implemented via crawling. A dark web crawler assumes a seed link, i.e., a hidden service from which the crawling begins, and one popular Tor directory service is the Hidden Wiki. Most of the hidden services listed on the Hidden Wiki 2020 page became unreachable with a recent upgrade in the Tor version, and the Hidden Wiki 2021 page has a limited listing of services compared to the Hidden Wiki 2020 page. This motivated the authors of the present work to examine the role of Hidden Wiki-like services in dark web research and to propose the hypothesis that the dark web can be reached better through customized harvested links than through Hidden Wiki-like services. The work collects unique hidden services (onion links) using the open-source crawler TorBot and runs a similarity analysis on the collected pages to map them to corresponding categories.

Zeid, R. B., Moubarak, J., Bassil, C.  2020.  Investigating The Darknet. 2020 International Wireless Communications and Mobile Computing (IWCMC). :727–732.

Cybercrime is growing dramatically in today's technological world. World Wide Web criminals exploit the personal information of internet users and use it to their advantage. Unethical users leverage the dark web to buy and sell illegal products or services, and sometimes they manage to gain access to classified government information. Illegal activities found on the dark web include selling or buying hacking tools, stolen data, digital fraud, terrorist activities, drugs, weapons, and more. The aim of this project is to collect evidence of malicious activity on the dark web by using computer security mechanisms known as honeypots.

Pete, I., Hughes, J., Chua, Y. T., Bada, M.  2020.  A Social Network Analysis and Comparison of Six Dark Web Forums. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :484–493.

With increasing monitoring and regulation by platforms, communities with criminal interests are moving to the dark web, which hosts content ranging from whistle-blowing and privacy, to drugs, terrorism, and hacking. Using post discussion data from six dark web forums we construct six interaction graphs and use social network analysis tools to study these underground communities. We observe the structure of each network to highlight structural patterns and identify nodes of importance through network centrality analysis. Our findings suggest that in the majority of the forums some members are highly connected and form hubs, while most members have a lower number of connections. When examining the posting activities of central nodes we found that most of the central nodes post in sub-forums with broader topics, such as general discussions and tutorials. These members play different roles in the different forums, and within each forum we identified diverse user profiles.
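Identifying the highly connected hub members described here comes down to degree centrality, which reduces to a few lines. The reply edges below are hypothetical, not the paper's forum data.

```python
from collections import Counter

# Hypothetical reply edges (author, replier) harvested from one forum.
edges = [("admin", "u1"), ("admin", "u2"), ("admin", "u3"),
         ("u1", "u2"), ("admin", "u4"), ("u3", "u4"), ("admin", "u5")]
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

n = len(degree)  # number of distinct users
centrality = {u: d / (n - 1) for u, d in degree.items()}  # normalised degree centrality
hub = max(centrality, key=centrality.get)
```

In this toy graph one member touches every other user and stands out as a hub while everyone else sits well below, the skewed pattern the paper reports across most forums.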

Ebrahimi, M., Samtani, S., Chai, Y., Chen, H.  2020.  Detecting Cyber Threats in Non-English Hacker Forums: An Adversarial Cross-Lingual Knowledge Transfer Approach. 2020 IEEE Security and Privacy Workshops (SPW). :20–26.

The regularity of devastating cyber-attacks has made cybersecurity a grand societal challenge. Many cybersecurity professionals are closely examining the international Dark Web to proactively pinpoint potential cyber threats. Despite its potential, the Dark Web contains hundreds of thousands of non-English posts, and while machine translation (MT) is the prevailing approach to process non-English text, applying MT to hacker forum text results in mistranslations. In this study, we draw upon Long Short-Term Memory (LSTM), Cross-Lingual Knowledge Transfer (CLKT), and Generative Adversarial Network (GAN) principles to design a novel Adversarial CLKT (A-CLKT) approach. A-CLKT operates on untranslated text to retain the original semantics of the language and leverages the collective knowledge about cyber threats across languages to create a language-invariant representation without any manual feature engineering or external resources. Three experiments demonstrate how A-CLKT outperforms state-of-the-art machine learning, deep learning, and CLKT algorithms in identifying cyber threats in French and Russian forums.

Korolev, D., Frolov, A., Babalova, I.  2020.  Classification of Websites Based on the Content and Features of Sites in Onion Space. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1680–1683.

This paper describes a method for classifying onion sites and, based on the results of the research, builds a model of the most common type of site in onion space. To create this model, a specially trained neural network is used. The neural network's classification is based on five different features: use of an authentication system, corporate email, readable URL, feedback, and type of onion site. Statistics on the most common types of websites on the Dark Net are given.

Kadoguchi, M., Kobayashi, H., Hayashi, S., Otsuka, A., Hashimoto, M.  2020.  Deep Self-Supervised Clustering of the Dark Web for Cyber Threat Intelligence. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1–6.

In recent years, cyberattack techniques have become more sophisticated by the day. Even when defensive measures are taken, it is difficult to prevent cyberattacks completely, and defenders can only fight reactively against cyber criminals. To address this situation, it is necessary to predict cyberattacks and take appropriate measures in advance, and the use of intelligence is important to make this possible. In general, many malicious hackers share information and tools that can be used for attacks on the dark web or in specific communities, so we assume that a great deal of intelligence, including this illegal content, exists in cyberspace. Using such threat intelligence to detect attacks in advance and develop active defences is an expected trend. However, this intelligence is currently extracted manually. To do this more efficiently, we apply machine learning to forum posts on the dark web, with the aim of extracting posts that contain threat information. We expect this will make it possible to detect threat information in cyberspace in a timely manner so that optimal preventive measures can be taken in advance.