
SoS Newsletter - Advanced Book Block



Botnets 2015

Botnets, a common security threat, are used for a variety of attacks: spam, distributed denial of service (DDoS), ad fraud and spyware, scareware, and brute-forcing services. Their reach, and the challenge of detecting and neutralizing them, are compounded in the cloud and on mobile networks. The research cited here was presented in 2015.

Carvalho, M., "Resilient Command and Control Infrastructures for Cyber Operations," in Software Engineering for Adaptive and Self-Managing Systems (SEAMS), 2015 IEEE/ACM 10th International Symposium on, pp. 97-97, 18-19 May 2015. doi: 10.1109/SEAMS.2015.17

Abstract: The concept of command and control (C2) is generally associated with the exercise of authority, direction and coordination of assets and capabilities. Traditionally, the concept has encompassed important operational functions such as the establishment of intent, allocation of roles and responsibilities, definition of rules and constraints, and the monitoring and estimation of system state, situation, and progress. More recently, the notion of C2 has been extended beyond military applications to include cyber operation environments and assets. Unfortunately this evolution has enjoyed faster progress and adoption on the offensive, rather than defensive side of cyber operations. One example is the adoption of advanced peer-to-peer C2 infrastructures for the control of malicious botnets and coordinated attacks, which have successfully yielded very effective and resilient control infrastructures in many instances. Defensive C2 is normally associated with a system's ability to monitor, interpret, reason, and respond to cyber events, often through advanced human-machine interfaces, or automated actions. For defensive operations, the concept is gradually evolving and gaining momentum. Recent research activities in this area are now showing great potential to enable truly resilient cyber defense infrastructures. In this talk I will introduce some of the motivations, requirements, and challenges associated with the design of distributed command and control infrastructures for cyber operations. The talk will primarily focus on the resilience aspects of distributed C2, and will cover a brief overview of the prior research in the field, as well as discussions on some of the current and future challenges in this important research domain.

Keywords: command and control systems; security of data; advanced peer-to-peer C2 infrastructures; coordinated attacks; cyber operation environments; malicious botnets; military applications; resilient command and control infrastructures; resilient cyber defense infrastructures; Adaptive systems; Command and control systems; Computer science; Computer security; Mechanical engineering; Monitoring; Software engineering; command and control; resilience; self-adaptation (ID#: 15-8224)



Al-Hakbani, M.M.; Dahshan, M.H., "Avoiding Honeypot Detection in Peer-to-Peer Botnets," in Engineering and Technology (ICETECH), 2015 IEEE International Conference on, pp. 1-7, 20-20 March 2015. doi: 10.1109/ICETECH.2015.7275017

Abstract: A botnet is a group of compromised computers that are controlled by a botmaster, who uses them to perform illegal activities. Centralized and P2P (Peer-to-Peer) botnets are the most commonly used botnet types. Honeypots have been used in many systems as a computer defense. They are deployed to attract botmasters into adding them to their botnets, where they become spies exposing botnet attacker behaviors. In recent research works, improved mechanisms for honeypot detection have been proposed. Such mechanisms would enable botmasters to distinguish honeypots from real bots, making it more difficult for honeypots to join botnets. This paper presents a new method that can be used by security defenders to overcome the authentication procedure used by the advanced two-stage reconnaissance worm (ATSRW). The presented method utilizes the peer list information sent by an infected host during the ATSRW authentication process and uses a combination of IP address spoofing and a fake TCP three-way handshake. The paper provides an analytical study of the performance and the success probability of the presented method. We show that the presented method provides a higher chance for honeypots to join botnets despite security measures taken by botmasters.

Keywords: message authentication; peer-to-peer computing; ATSRW authentication process; IP address spoofing; advanced two-stage reconnaissance worm; centralized botnet; fake TCP three-way handshake; honeypot detection; peer-to-peer botnets; success probability; Authentication; Computers; Delays; Grippers; IP networks; Peer-to-peer computing; P2P; botnet; detecting; honeypot; honeypot aware; peer-to-peer (ID#: 15-8225)



Bock, Leon; Karuppayah, Shankar; Grube, Tim; Muhlhauser, Max; Fischer, Mathias, "Hide and Seek: Detecting Sensors in P2P Botnets," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 731-732, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346908

Abstract: Many cyber-crimes, such as Denial of Service (DoS) attacks and banking frauds, originate from botnets. To prevent botnets from being taken down easily, botmasters have adopted peer-to-peer (P2P) mechanisms to avoid any single point of failure. However, sensor nodes, which are often used both for monitoring and for executing sinkholing attacks, are threatening such botnets. In this paper, we introduce a novel mechanism to detect sensor nodes in P2P botnets using the clustering coefficient as a metric. We evaluated our mechanism on the real-world botnet Sality over the course of a week and were able to detect an average of 25 sensors per day with a false positive rate of 20%.

Keywords: Monitoring; Peer-to-peer computing (ID#: 15-8226)
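
The metric at the heart of this paper is easy to sketch: a sensor typically knows many bots that do not know each other, so its local clustering coefficient is unusually low compared to ordinary bots. The toy graph below is our own illustration of the coefficient itself, not the authors' full detection mechanism:

```python
from itertools import combinations

def clustering_coefficient(graph, node):
    """Local clustering coefficient: the fraction of a node's neighbor
    pairs that are themselves directly connected."""
    nbrs = graph[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in graph[a])
    return 2.0 * links / (k * (k - 1))

# Toy overlay: bots a, b, c form a clique; the suspected sensor 's'
# also pads its view with 'd', which none of the bots know.
g = {
    "a": {"b", "c", "s"},
    "b": {"a", "c", "s"},
    "c": {"a", "b", "s"},
    "s": {"a", "b", "c", "d"},
    "d": {"s"},
}
print(clustering_coefficient(g, "a"))   # 1.0 -- ordinary bot
print(clustering_coefficient(g, "s"))   # 0.5 -- stands out as low
```

In a real deployment the graph would have to be reconstructed from crawled or observed neighbor lists, which is exactly where the 20% false positive rate reported above comes into play.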



Eslahi, M.; Rohmad, M.S.; Nilsaz, H.; Naseri, M.V.; Tahir, N.M.; Hashim, H., "Periodicity Classification of HTTP Traffic to Detect HTTP Botnets," in Computer Applications & Industrial Electronics (ISCAIE), 2015 IEEE Symposium on, pp. 119-123, 12-14 April 2015. doi: 10.1109/ISCAIE.2015.7298339

Abstract: Recently, the HTTP based Botnet threat has become a serious challenge for security experts as Bots can be distributed quickly and stealthily. With the HTTP protocol, Bots hide their communication flows within the normal HTTP flows making them more stealthy and difficult to detect. Furthermore, since the HTTP service is being widely used by the Internet applications, it is not easy to block this service as a precautionary measure and other techniques are required to detect and deter the Bot menace. The HTTP Bots periodically connect to particular web pages or URLs to get commands and updates from the Botmaster. In fact, this identifiable periodic connection pattern has been used in several studies as a feature to detect HTTP Botnets. In this paper, we review the current studies on detection of periodic communications in HTTP Botnets as well as the shortcomings of these methods. Consequently, we propose three metrics to be used in identifying the types of communication patterns according to their periodicity. Test results show that in addition to detecting HTTP Botnet communication patterns with 80% accuracy, the proposed method is able to efficiently classify communication patterns into several periodicity categories.

Keywords: Internet; invasive software; pattern classification; telecommunication security; telecommunication traffic; transport protocols; HTTP based botnet threat; HTTP botnet communication patterns; HTTP botnets detection; HTTP flows; HTTP protocol; HTTP service; HTTP traffic; Internet applications; URL; Web pages; botmaster; communication flows; periodic communications detection; periodic connection pattern; periodicity categories; periodicity classification; Command and control systems; Decision trees; Internet; Measurement; Radio frequency; Security; Servers; Botnet Detection; Command and Control Mechanism; HTTP Botnet; Internet Security; Mobile Botnets; Periodic Pattern (ID#: 15-8227)
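
The abstract does not spell out the three proposed metrics, so as a hedged illustration, one common way to score the periodicity of HTTP connections is the coefficient of variation (CV) of the gaps between successive requests; the thresholds below are arbitrary placeholders, not the paper's values:

```python
from statistics import mean, stdev

def periodicity_score(timestamps):
    """Coefficient of variation (CV) of inter-connection gaps: close to
    0 for strictly periodic C&C polling, large for human browsing."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")
    return stdev(gaps) / mean(gaps)

def classify(timestamps, strict=0.1, loose=0.5):
    """Bucket a connection pattern into a periodicity category."""
    cv = periodicity_score(timestamps)
    if cv < strict:
        return "strictly periodic"
    if cv < loose:
        return "loosely periodic"
    return "non-periodic"

bot_polls = [0, 60, 120, 180, 240, 300]   # fetches commands every 60 s
browsing = [0, 5, 47, 160, 171, 300]      # bursty human activity
print(classify(bot_polls))   # strictly periodic
print(classify(browsing))    # non-periodic
```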



Venkatesan, Sridhar; Albanese, Massimiliano; Jajodia, Sushil, "Disrupting Stealthy Botnets through Strategic Placement of Detectors," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 95-103, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346816

Abstract: In recent years, botnets have gained significant attention due to their extensive use in various kinds of criminal or otherwise unauthorized activities. Botnets have become increasingly sophisticated, and studies have shown that they can significantly reduce their footprint and increase their dwell time. Therefore, modern botnets can operate in stealth mode and evade detection for extended periods of time. In order to address this problem, we propose a proactive approach to strategically deploy detectors on selected network nodes, so as to either completely disrupt communication between bots and command and control nodes, or at least force the attacker to create more bots, thereby increasing the footprint of the botnet and the likelihood of detection. As the detector placement problem is intractable, we propose heuristics based on several centrality measures. Simulation results confirm that our approach can effectively increase complexity for the attacker.

Keywords: Command and control systems; Communication networks; Detectors; Mission critical systems; Peer-to-peer computing; Security; Servers (ID#: 15-8228)
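
The paper's heuristics are based on "several centrality measures" that the abstract does not enumerate; the sketch below uses degree centrality, the simplest such measure, to pick detector locations in a hypothetical network (the node names and topology are made up for illustration):

```python
def degree_centrality(adj):
    """Fraction of all other nodes each node is directly linked to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def place_detectors(adj, budget):
    """Greedy heuristic: spend the detector budget on the most central
    nodes, so detectors sit on as many bot-to-C&C paths as possible."""
    cent = degree_centrality(adj)
    return sorted(cent, key=cent.get, reverse=True)[:budget]

# Hypothetical segment: 'gw' is a gateway that every host traverses.
net = {
    "gw":  {"h1", "h2", "h3", "srv"},
    "h1":  {"gw"},
    "h2":  {"gw"},
    "h3":  {"gw", "srv"},
    "srv": {"gw", "h3"},
}
print(place_detectors(net, 2))   # the gateway 'gw' ranks first
```

Betweenness or closeness centrality would slot in the same way: only the ranking function changes.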



Lysenko, S.; Pomorova, O.; Savenko, O.; Kryshchuk, A.; Bobrovnikova, K., "DNS-Based Anti-Evasion Technique for Botnets Detection," in Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2015 IEEE 8th International Conference on, vol. 1, pp. 453-458, 24-26 Sept. 2015. doi: 10.1109/IDAACS.2015.7340777

Abstract: A new DNS-based anti-evasion technique for botnets detection is proposed. It is based on a cluster analysis of the features obtained from the payload of DNS-messages. The method uses a semi-supervised fuzzy c-means clustering. Usage of the developed method makes it possible to detect botnets that use the DNS-based evasion techniques with high efficiency.

Keywords: fuzzy set theory; invasive software; pattern clustering; DNS; antievasion technique; botnets detection; cluster analysis; semisupervised fuzzy c-means clustering; Buildings; Entropy; Feature extraction; IP networks; Internet; Payloads; Servers; DNS-tunneling; botnet; botnet detection; botnet's evasion technique; domain flux; fast-flux service network (ID#: 15-8229)
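
As a rough illustration of the clustering step, here is plain, unsupervised fuzzy c-means (the paper uses a semi-supervised variant, which this sketch does not implement) applied to a made-up one-dimensional DNS feature:

```python
def fuzzy_cmeans(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means. Returns cluster centers and, for each
    point, a membership grade per cluster (each row sums to 1)."""
    pts = sorted(points)
    # deterministic init: spread initial centers across the data range
    centers = [pts[i * (len(pts) - 1) // (c - 1)] for i in range(c)]
    for _ in range(iters):
        u = []
        for x in points:
            d = [abs(x - ctr) or 1e-9 for ctr in centers]   # avoid /0
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(c)) for i in range(c)])
        centers = [sum(u[k][i] ** m * points[k] for k in range(len(points)))
                   / sum(u[k][i] ** m for k in range(len(points)))
                   for i in range(c)]
    return centers, u

# Hypothetical per-domain feature: entropy of DNS query names, which
# tends to be higher for DGA/tunneling domains than for benign ones.
entropies = [2.1, 2.3, 2.0, 2.2, 3.9, 4.1, 4.0, 2.4, 3.8]
centers, u = fuzzy_cmeans(entropies)
print(sorted(centers))   # one low-entropy and one high-entropy center
```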



An Wang; Mohaisen, A.; Wentao Chang; Songqing Chen, "Delving into Internet DDoS Attacks by Botnets: Characterization and Analysis," in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, pp. 379-390, 22-25 June 2015. doi: 10.1109/DSN.2015.47

Abstract: Internet Distributed Denial of Service (DDoS) attacks are prevalent but hard to defend against, partially due to the volatility of the attacking methods and patterns used by attackers. Understanding the latest DDoS attacks can provide new insights for effective defense. But most existing understandings are based on indirect traffic measures (e.g., backscatters) or traffic seen locally. In this study, we present an in-depth analysis based on 50,704 different Internet DDoS attacks directly observed in a seven-month period. These attacks were launched by 674 botnets from 23 different botnet families with a total of 9,026 victim IPs belonging to 1,074 organizations in 186 countries. Our analysis reveals several interesting findings about today's Internet DDoS attacks. Some highlights include: (1) geolocation analysis shows that the geospatial distribution of the attacking sources follows certain patterns, which enables very accurate source prediction of future attacks for most active botnet families, (2) from the target perspective, multiple attacks to the same target also exhibit strong patterns of inter-attack time interval, allowing accurate start time prediction of the next anticipated attacks from certain botnet families, (3) there is a trend for different botnets to launch DDoS attacks targeting the same victim, simultaneously or in turn. These findings add to the existing literature on the understanding of today's Internet DDoS attacks, and offer new insights for designing new defense schemes at different levels.

Keywords: IP networks; computer network security; telecommunication traffic; Internet DDoS attacks; Internet distributed denial-of-service; botnet families; geolocation analysis; geospatial distribution; indirect traffic measures; interattack time interval; source prediction; start time prediction; victim IP; Cities and towns; Computer crime; Geology; IP networks; Internet; Malware; Organizations (ID#: 15-8230)
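
Finding (2) suggests a very simple predictor once inter-attack intervals are stable; the sketch below is our own toy, not the authors' model, and the attack start times are invented:

```python
from statistics import median

def predict_next_attack(start_times):
    """Naive predictor: next attack start = last observed start plus the
    median inter-attack interval seen for this botnet family/target."""
    gaps = [b - a for a, b in zip(start_times, start_times[1:])]
    return start_times[-1] + median(gaps)

# Hypothetical attack start times (in hours) against one victim
starts = [0, 24, 49, 72, 97]
print(predict_next_attack(starts))   # 121.5
```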



Qiben Yan; Yao Zheng; Tingting Jiang; Wenjing Lou; Hou, Y.T., "PeerClean: Unveiling Peer-to-Peer Botnets Through Dynamic Group Behavior Analysis," in Computer Communications (INFOCOM), 2015 IEEE Conference on, pp. 316-324, April 26 2015-May 1 2015. doi: 10.1109/INFOCOM.2015.7218396

Abstract: Advanced botnets adopt a peer-to-peer (P2P) infrastructure for more resilient command and control (C&C). Traditional detection techniques become less effective in identifying bots that communicate via a P2P structure. In this paper, we present PeerClean, a novel system that detects P2P botnets in real time using only high-level features extracted from C&C network flow traffic. PeerClean reliably distinguishes P2P bot-infected hosts from legitimate P2P hosts by jointly considering flow-level traffic statistics and network connection patterns. Instead of working on individual connections or hosts, PeerClean clusters hosts with similar flow traffic statistics into groups. It then extracts the collective and dynamic connection patterns of each group by leveraging a novel dynamic group behavior analysis. Comparing with the individual host-level connection patterns, the collective group patterns are more robust and differentiable. Multi-class classification models are then used to identify different types of bots based on the established patterns. To increase the detection probability, we further propose to train the model with average group behavior, but to explore the extreme group behavior for the detection. We evaluate PeerClean on real-world flow records from a campus network. Our evaluation shows that PeerClean is able to achieve high detection rates with few false positives.

Keywords: command and control systems; feature extraction; invasive software; pattern classification; peer-to-peer computing; probability; statistical analysis; telecommunication traffic; C&C network flow traffic; P2P bot-infected host; P2P botnet; PeerClean; command and control; detection probability; detection technique; dynamic group behavior analysis; flow level traffic statistic; high-level feature extraction; multiclass classification model; network connection pattern; peer-to-peer botnet; Computers; Conferences; Feature extraction; Peer-to-peer computing; Robustness; Support vector machines; Training (ID#: 15-8231)



Singh, K.J.; De, T., "DDOS Attack Detection and Mitigation Technique Based on Http Count and Verification Using CAPTCHA," in Computational Intelligence and Networks (CINE), 2015 International Conference on, pp. 196-197, 12-13 Jan. 2015. doi: 10.1109/CINE.2015.47

Abstract: With the rapid development of the internet, the number of people who are online has also increased tremendously. But nowadays we find not only growing positive use of the internet but also negative use of it. The misuse and abuse of the internet is growing at an alarming rate. There are many cases of viruses and worms infecting systems that have software vulnerabilities. These systems can even become clients for bot herders. Such infected systems aid in launching DDoS attacks against a target server. In this paper we introduce the concept of IP blacklisting, which blocks all blacklisted IP addresses; an HTTP count filter, which enables us to distinguish normal from suspected IP addresses; and the CAPTCHA technique, which checks whether these suspected IP addresses are controlled by a human or a botnet.

Keywords: Internet; client-server systems; computer network security; computer viruses; transport protocols; CAPTCHA; DDOS attack detection; DDOS attack mitigation technique; HTTP count filter; HTTP verification; IP address; IP blacklisting; Internet; botnet; software vulnerability; target server; virus; worms; CAPTCHAs; Computer crime; IP networks; Internet; Radiation detectors; Servers; bot; botnets; captcha; filter; http; mitigation (ID#: 15-8232)
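
The three-stage pipeline the abstract describes (blacklist, HTTP count filter, CAPTCHA check) can be sketched as follows; the threshold and IP addresses are illustrative placeholders, not values from the paper:

```python
from collections import Counter

class HttpCountFilter:
    """Sketch: blacklisted IPs are blocked outright; IPs whose HTTP
    request count exceeds a threshold are flagged as suspected and
    sent a CAPTCHA challenge to separate humans from bots."""

    def __init__(self, threshold, blacklist=()):
        self.threshold = threshold
        self.blacklist = set(blacklist)
        self.counts = Counter()

    def handle(self, ip):
        if ip in self.blacklist:
            return "block"
        self.counts[ip] += 1
        if self.counts[ip] > self.threshold:
            return "challenge"   # CAPTCHA decides: human or bot?
        return "allow"

f = HttpCountFilter(threshold=3, blacklist={"203.0.113.9"})
print(f.handle("203.0.113.9"))                        # block
print([f.handle("198.51.100.7") for _ in range(4)])   # 4th request: challenge
```

A production version would also age out counts per time window so that long-lived legitimate clients are not eventually challenged.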



Sanatinia, A.; Noubir, G., "OnionBots: Subverting Privacy Infrastructure for Cyber Attacks," in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, pp. 69-80, 22-25 June 2015. doi: 10.1109/DSN.2015.40

Abstract: Over the last decade botnets survived by adopting a sequence of increasingly sophisticated strategies to evade detection and takeovers, and to monetize their infrastructure. At the same time, the success of privacy infrastructures such as Tor opened the door to illegal activities, including botnets, ransomware, and a marketplace for drugs and contraband. We contend that the next waves of botnets will extensively attempt to subvert privacy infrastructure and cryptographic mechanisms. In this work we propose to preemptively investigate the design and mitigation of such botnets. We first introduce OnionBots, which we believe will be the next generation of resilient, stealthy botnets. OnionBots use privacy infrastructures for cyber attacks by completely decoupling their operation from the infected host IP address and by carrying traffic that does not leak information about its source, destination, and nature. Such bots live symbiotically within the privacy infrastructures to evade detection, measurement, scale estimation, observation, and in general all current IP-based mitigation techniques. Furthermore, we show that with an adequate self-healing network maintenance scheme, which is simple to implement, OnionBots can achieve a low diameter and a low degree and be robust to partitioning under node deletions. We develop a mitigation technique, called SOAP, that neutralizes the nodes of the basic OnionBots. In light of the potential of such botnets, we believe that the research community should proactively develop detection and mitigation methods to thwart OnionBots, potentially making adjustments to privacy infrastructure.

Keywords: IP networks; computer network management; computer network security; data privacy; fault tolerant computing; telecommunication traffic; Cyber Attacks; IP-based mitigation techniques; OnionBots; SOAP; Tor; botnets; cryptographic mechanisms; destination information; host IP address; illegal activities; information nature; node deletions; privacy infrastructure subversion; resilient-stealthy botnets; self-healing network maintenance scheme; source information; Cryptography; IP networks; Maintenance engineering; Peer-to-peer computing; Privacy; Relays; Servers; Tor; botnet; cyber security; privacy infrastructure; self-healing network (ID#: 15-8233)



Karuppayah, S.; Roos, S.; Rossow, C.; Muhlhauser, M.; Fischer, M., "Zeus Milker: Circumventing the P2P Zeus Neighbor List Restriction Mechanism," in Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, pp. 619-629, June 29 2015-July 2 2015. doi: 10.1109/ICDCS.2015.69

Abstract: The emerging trend of highly-resilient P2P botnets poses a huge security threat to our modern society. Carefully designed countermeasures as applied in sophisticated P2P botnets such as P2P Zeus impede botnet monitoring and successive takedown. These countermeasures reduce the accuracy of the monitored data, such that an exact reconstruction of the botnet's topology is hard to obtain efficiently. However, an accurate topology snapshot, revealing particularly the identities of all bots, is crucial to execute effective botnet takedown operations. With the goal of obtaining the required snapshot in an efficient manner, we provide a detailed description and analysis of the P2P Zeus neighbor list restriction mechanism. As our main contribution, we propose ZeusMilker, a mechanism for circumventing the existing anti-monitoring countermeasures of P2P Zeus. In contrast to existing approaches, our mechanism deterministically reveals the complete neighbor lists of bots and hence can efficiently provide a reliable topology snapshot of P2P Zeus. We evaluated ZeusMilker on a real-world dataset and found that it outperforms state-of-the-art techniques for botnet monitoring with regard to the number of queries needed to retrieve a bot's complete neighbor list. Furthermore, ZeusMilker is provably optimal in retrieving the complete neighbor list, requiring at most 2n queries for an n-elemental list. Moreover, we also evaluated how the performance of ZeusMilker is impacted by various protocol changes designed to undermine its provable performance bounds.

Keywords: computer network security; invasive software; peer-to-peer computing; telecommunication network topology; P2P Zeus impede botnet monitoring; P2P Zeus neighbor list restriction mechanism; ZeusMilker mechanism; anti-monitoring countermeasures; botnet topology exact reconstruction; effective botnet takedown operations; highly-resilient P2P botnets; n-elemental list; security threat; topology snapshot; Algorithm design and analysis; Complexity theory; Crawlers; Monitoring; Peer-to-peer computing; Protocols; Topology; Anti-monitoring countermeasures; P2P Zeus; XOR metric; botnet; milking (ID#: 15-8234)
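
For context, P2P Zeus organizes neighbor lists around a Kademlia-style XOR metric, and a queried bot returns only the few neighbors closest to the supplied key. The sketch below shows that restriction with toy one-byte IDs (real Zeus identifiers are much longer); ZeusMilker's contribution is choosing query keys strategically so that at most 2n queries reveal all n entries:

```python
def xor_distance(a: bytes, b: bytes) -> int:
    """Kademlia-style XOR metric over node IDs."""
    return int.from_bytes(a, "big") ^ int.from_bytes(b, "big")

def respond(neighbor_ids, queried_key, limit=2):
    """A bot answers a query with only the `limit` neighbors closest to
    the key -- the restriction ZeusMilker circumvents by varying the
    query key until every neighbor-list entry has been returned."""
    return sorted(neighbor_ids,
                  key=lambda n: xor_distance(n, queried_key))[:limit]

# Toy 1-byte IDs standing in for full-length Zeus identifiers
ids = [bytes([x]) for x in (0x10, 0x1F, 0x80, 0xF0)]
print(respond(ids, bytes([0x12])))   # the two IDs closest to 0x12
```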



Garg, V.; Camp, L.J., "Spare the Rod, Spoil the Network Security? Economic Analysis of Sanctions Online," in Electronic Crime Research (eCrime), 2015 APWG Symposium on, pp. 1-10, 26-29 May 2015. doi: 10.1109/ECRIME.2015.7120800

Abstract: When and how should we encourage network providers to mitigate the harm of security and privacy risks? Poorly designed interventions that do not align with economic incentives can lead stakeholders to be less, rather than more, careful. We apply an economic framework that compares two fundamental regulatory approaches: risk based or ex ante and harm based or ex post. We posit that for well known security risks, such as botnets, ex ante sanctions are economically efficient. Systematic best practices, e.g. patching, can reduce the risk of becoming a bot and thus can be implemented ex ante. Conversely risks, which are contextual, poorly understood, and new, and where distribution of harm is difficult to estimate, should incur ex post sanctions, e.g. information disclosure. Privacy preferences and potential harm vary widely across domains; thus, post-hoc consideration of harm is more appropriate for privacy risks. We examine two current policy and enforcement efforts, i.e. Do Not Track and botnet takedowns, under the ex ante vs. ex post framework. We argue that these efforts may worsen security and privacy outcomes, as they distort market forces, reduce competition, or create artificial monopolies. Finally, we address the overlap between security and privacy risks.

Keywords: computer network security; data privacy; invasive software; risk management; Do Not Track approach; botnet takedowns; botnets; economic incentives; ex-ante sanction approach; ex-post sanction approach; fundamental regulatory approaches; harm based approach; information disclosure; network security; online sanction economic analysis; patching method; privacy risks; risk reduction; risk-based approach; security risks; Biological system modeling; Companies; Economics; Google; Government; Privacy; Security (ID#: 15-8235)



Khatri, V.; Abendroth, J., "Mobile Guard Demo: Network Based Malware Detection," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 1177-1179, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.501

Abstract: The growing trend of data traffic in mobile networks brings new security threats such as malware, botnets, premium SMS fraud, etc., and these threats affect the network resources in terms of revenue as well as performance. Some end-user devices use antivirus and anti-malware clients for protection against malware attacks, but the malicious activity affects mobile network elements as well. Therefore, a network based malware detection system, such as Mobile Guard, is essential for detecting malicious activities within a network, as well as protecting end users from malware attacks that are propagated through a mobile operator's network. We present Mobile Guard, a network based malware detection system, and discuss its necessity, solution architecture and key features.

Keywords: computer network security; invasive software; mobile computing; radio networks; antimalware clients; botnets; data traffic; malicious activity detection; malware attacks; mobile network elements; mobile networks; mobile operator network; network based malware detection system; premium SMS frauds; security threats; Conferences; Malware; Mobile communication; Mobile computing; Mobile handsets; Privacy; Antivirus; Malware; Mobile Guard; Mobile Network; Network Based Malware Detection (ID#: 15-8236)



Leszczyna, R.; Wrobel, M.R., "Evaluation of Open Source SIEM for Situation Awareness Platform in the Smart Grid Environment," in Factory Communication Systems (WFCS), 2015 IEEE World Conference on, pp. 1-4, 27-29 May 2015. doi: 10.1109/WFCS.2015.7160577

Abstract: The smart grid, as a large-scale system of systems, has an exceptionally large surface exposed to cyber-attacks, including highly evolved and sophisticated threats such as Advanced Persistent Threats (APTs) or botnets. When addressing this situation, the usual cyber security technologies are a prerequisite, but not sufficient. The smart grid requires developing and deploying an extensive ICT infrastructure that supports significantly increased situational awareness and enables detailed and precise command and control. The paper presents one of the studies related to the development and deployment of the Situation Awareness Platform for the smart grid, namely the evaluation of open source Security Information and Event Management systems. These systems are the key components of the platform.

Keywords: Internet; computer network security; grid computing; public domain software; APT; ICT infrastructure; advanced persistent threats; botnets; command-and-control; cyber-attacks; open source SIEM evaluation; open source security information-and-event management systems; situation awareness platform; smart grid environment; Computer security; NIST; Sensor systems; Smart grids; Software; SIEM; evaluation; situation awareness; smart grid (ID#: 15-8237)



Badis, H.; Doyen, G.; Khatoun, R., "A Collaborative Approach for a Source Based Detection of Botclouds," in Integrated Network Management (IM), 2015 IFIP/IEEE International Symposium on, pp. 906-909, 11-15 May 2015. doi: 10.1109/INM.2015.7140406

Abstract: In recent years, cloud computing has played an important role in providing high-quality IT services. However, beyond legitimate usage, the numerous advantages it presents are now exploited by attackers, and botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. In this paper, we present an original approach that enables a collaborative egress detection of DDoS attacks leveraged by a botcloud. We provide an early evaluation of our approach using simulations that rely on real workload traces, showing our detection system's effectiveness and low overhead, as well as its support for incremental deployment in real cloud infrastructures.

Keywords: cloud computing; computer network security; groupware; software agents; DDoS attacks; IT services; botclouds; botnets; cloud computing; cloud infrastructures; collaborative approach; collaborative egress detection; incremental deployment; source based detection; workload traces; Biomedical monitoring; Cloud computing; Collaboration; Computer crime; Monitoring; Peer-to-peer computing; Principal component analysis (ID#: 15-8238)



Vokorokos, L.; Drienik, P.; Fortotira, O.; Hurtuk, J., "Abusing Mobile Devices for Denial of Service Attacks," in Applied Machine Intelligence and Informatics (SAMI), 2015 IEEE 13th International Symposium on, pp. 21-24, 22-24 Jan. 2015. doi: 10.1109/SAMI.2015.7061886

Abstract: The growing popularity of mobile devices has led to the rise of mobile malware. It is also one of the reasons why the number of new mobile malware families, which are secretly connected over the internet to a remote Command & Control server, is increasing. This gives attackers the possibility to create botnets for Denial of Service attacks or mining of cryptocurrencies. This paper discusses the state of the art in computer and mobile security. The paper also presents a proof of concept which can be used to abuse mobile devices' capabilities for malicious purposes. A Distributed Denial of Service attack scenario is presented using smartphones with the Android operating system against a wireless network. Measured results and techniques are presented, including a description of an Android application specially created for this purpose.

Keywords: Android (operating system);computer crime; computer network security; invasive software; mobile computing; smart phones; Android application; Android operating system; Internet; botnets; computer security; cryptocurrencies mining; distributed denial of service attack; malicious purposes; mobile devices capabilities; mobile malware; mobile security; remote command & control server; smartphones; wireless network; Androids; Computer crime; Humanoid robots; Mobile communication; Servers; Smart phones (ID#: 15-8239)



Shanthi, K.; Seenivasan, D., "Detection Of Botnet by Analyzing Network Traffic Flow Characteristics Using Open Source Tools," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-5, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282353

Abstract: Botnets are emerging as the most serious cyber threat among different forms of malware. Today botnets facilitate many cybercriminal activities such as DDoS, click fraud, and phishing attacks. The main purpose of a botnet is to pose a massive financial threat. Many large organizations, banks and social networks have become targets of bot masters. Botnets can also be leased to support cybercriminal activities. Recently, several research efforts have been carried out to detect bots, C&C channels and bot masters. Bot masters, in turn, also strengthen their activities through sophisticated techniques. Many botnet detection techniques are based on payload analysis. Most of these techniques are inefficient for encrypted C&C channels. In this paper we explore different categories of botnet and propose a detection methodology to classify bot hosts from normal hosts by analyzing traffic flow characteristics based on time intervals instead of payload inspection. As a result, it is possible to detect botnet activity even when encrypted C&C channels are used.

Keywords: computer crime; computer network security; fraud; invasive software; pattern classification; public domain software; C&C channels; DDoS; bot host classification; bot masters; botnet activity detection; botnet detection technique; click fraud; cyber threat; cybercriminal activities; encrypted C&C channel; financial threat; malware; network traffic flow characteristics analysis; open source tools; payload analysis; payload inspection; phishing attack; Bluetooth; Conferences; IP networks; Mobile communication; Payloads; Servers; Telecommunication traffic; Bot; Bot master; Botnet; Botnet cloud; Mobile Botnet (ID#: 15-8240)



Han Zhang; Papadopoulos, C., "BotTalker: Generating Encrypted, Customizable C&C Traces," in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, pp. 1-6, 14-16 April 2015. doi: 10.1109/THS.2015.7225305

Abstract: Encrypted botnets have seen increasing use in recent years. To enable research on detecting encrypted botnets, researchers need samples of encrypted botnet traces with ground truth, which are very hard to obtain. Traces that are available are not customizable, which prevents testing under varied controlled scenarios. To address this problem we introduce BotTalker, a tool that generates customized encrypted botnet communication traffic. BotTalker emulates the actions a bot would take to encrypt communication. It includes a highly configurable encrypted-traffic converter along with real, non-encrypted bot traces and background traffic. The converter turns non-encrypted botnet traces into encrypted ones, providing customization along three dimensions: (a) selection of a real encryption algorithm, (b) flow- or packet-level conversion and SSL emulation, and (c) IP address substitution. To the best of our knowledge, BotTalker is the first tool that provides users with customized encrypted botnet traffic. In the paper we also apply BotTalker to evaluate the damage caused by encrypted botnet traffic to a widely used botnet detection system, BotHunter, and two IDSs, Snort and Suricata. The results show that encrypted botnet traffic foils bot detection in these systems.

Keywords: IP networks; authorisation; computer network security; cryptography; invasive software; telecommunication traffic; BotHunter; BotTalker; IDS; IP address substitution; SSL emulation; Snort; Suricata; background traffic; botnet detection system; configurable encrypted-traffic converter; customized encrypted botnet traffic; encrypted botnet traces; encrypted customizable C&C traces; flow level conversion; ground truth; packet level conversion; real encryption algorithm; Ciphers; Emulation; Encryption; IP networks; Payloads; Servers (ID#: 15-8241)
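The conversion BotTalker performs — encrypting payloads of recorded bot traffic and substituting IP addresses — can be illustrated with a minimal stand-in. The SHA-256-based keystream below merely keeps the sketch dependency-free; BotTalker itself applies real encryption algorithms and SSL emulation, and all names here are hypothetical.

```python
import hashlib

def keystream(key, n):
    # Deterministic byte stream derived from the key; a toy stand-in
    # for the real ciphers BotTalker can apply.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def convert_packet(payload, key, src_ip, ip_map):
    """Packet-level conversion: encrypt the payload, substitute the IP."""
    ks = keystream(key, len(payload))
    cipher = bytes(a ^ b for a, b in zip(payload, ks))
    return ip_map.get(src_ip, src_ip), cipher

new_ip, ct = convert_packet(b"JOIN #botchan", b"k3y", "10.0.0.5",
                            {"10.0.0.5": "203.0.113.7"})
print(new_ip, ct != b"JOIN #botchan")  # → 203.0.113.7 True
```

Run over every packet of a plaintext bot trace, such a converter produces an encrypted trace with known ground truth — exactly the kind of test input the abstract says is otherwise hard to obtain.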



Zigang Cao; Gang Xiong; Li Guo, "MimicHunter: A General Passive Network Protocol Mimicry Detection Framework," in Trustcom/BigDataSE/ISPA, 2015 IEEE, vol. 1, pp. 271-278, 20-22 Aug. 2015. doi: 10.1109/Trustcom.2015.384

Abstract: Network-based intrusions and information-theft events are increasingly common today. To bypass network security devices such as firewalls, intrusion detection/prevention systems (IDS/IPS) and web application firewalls, attackers use evasive techniques, of which protocol mimicry is a particularly useful approach. The technique camouflages malicious communications as common protocols or generally innocent applications to avoid network security audits; it has been widely used in advanced Trojans, botnets, and anonymous communication systems, posing a great challenge to current network management and security. To this end, we propose a general network protocol mimicry behavior discovery framework named MimicHunter to detect such evasive masquerade behaviors. It exploits protocol structure and state-transition verification, as well as primary protocol behavior elements. Experimental results on several datasets demonstrate the effectiveness of our method in practice. Moreover, MimicHunter is flexible to deploy and can be implemented in passive detection systems at little cost compared with active methods.

Keywords: security of data; IDS-IPS; MimicHunter framework; Web application firewall; evasive techniques; firewall; information theft events; intrusion detection system; intrusion prevention system; network based intrusion; network security audit; network security devices; passive network protocol mimicry detection framework; Inspection; Intrusion detection; MIMICs; Malware; Payloads; Protocols; evasive attack; intrusion detection; protocol mimicry; protocol structure; state transition (ID#: 15-8242)
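The two verification ideas named in the abstract — protocol-structure checks and state-transition checks — might look like the following for HTTP. This is an illustrative sketch, not MimicHunter's implementation; the regex and the event model are assumptions.

```python
import re

# Structure check: a flow claiming to be HTTP should begin with a
# syntactically valid request line.
REQUEST_LINE = re.compile(rb"^(GET|POST|HEAD|PUT|DELETE) \S+ HTTP/1\.[01]\r\n")

def verify_http_structure(first_bytes):
    """True if the first bytes form a plausible HTTP request line;
    a flow on port 80 failing this is a mimicry candidate."""
    return bool(REQUEST_LINE.match(first_bytes))

def verify_state_transitions(events):
    """State check: legitimate HTTP alternates request/response.
    Two responses with no intervening request breaks the state machine."""
    state = "idle"
    for ev in events:
        if ev == "request" and state in ("idle", "responded"):
            state = "requested"
        elif ev == "response" and state == "requested":
            state = "responded"
        else:
            return False
    return True

print(verify_http_structure(b"GET /index.html HTTP/1.1\r\nHost: x\r\n"))  # → True
print(verify_state_transitions(["request", "response", "response"]))      # → False
```

Both checks are passive — they observe bytes and event order without injecting traffic — which matches the framework's stated low deployment cost relative to active probing.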



Ichise, H.; Yong Jin; Iida, K., "Analysis of Via-Resolver DNS TXT Queries and Detection Possibility of Botnet Communications," in Communications, Computers and Signal Processing (PACRIM), 2015 IEEE Pacific Rim Conference on, pp. 216-221, 24-26 Aug. 2015. doi: 10.1109/PACRIM.2015.7334837

Abstract: Recent reports on Internet security have indicated that the DNS (Domain Name System) protocol is being used for botnet communication in various botnets; in particular, botnet communication based on the DNS TXT record type has been observed as a new technique in some botnet-based cyber attacks. As one of the most fundamental Internet protocols, DNS is used for basic name resolution as well as many Internet services, so it is not possible to simply block all DNS traffic. To block only malicious DNS TXT record based botnet communications, it would be necessary to distinguish them from legitimate DNS traffic involving DNS TXT records. However, the DNS TXT record is also used in many legitimate ways, since this type is allowed to include any plain text up to a fairly long length. In this paper, we focus on the usage of the DNS TXT record and explain our analysis of about 5.5 million real DNS TXT record queries collected over 3 months on our campus network. Based on the analysis findings, we discuss a new method to detect botnet communication. Our analysis results show that 330 unique destination IP addresses (covering approximately 22.1% of the unknown usages of DNS TXT record queries) may have been involved in malicious communications, a proportion that gives network administrators a reasonable basis for detailed manual checking in many organizations.

Keywords: Internet; invasive software; DNS TXT record type; Internet protocols ;Internet security; botnet-based cyber attacks; domain name system protocol; malicious DNS TXT record based botnet communications; via-resolver DNS TXT queries; Computers; Electronic mail; IP networks; Internet; Postal services; Protocols; Servers; Botnet; C&C; DNS; TXT record; botnet communication; detection method (ID#: 15-8243)
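The triage described above — separating known-legitimate TXT usage from unknown usage and collecting the destination IPs of the latter for manual checking — could be sketched like this. The list of known prefixes and the sample records are assumptions for illustration, not the paper's actual classification rules.

```python
def classify_txt_answer(txt):
    # Common legitimate TXT uses include SPF, DKIM and domain-ownership
    # verification; anything else is treated as "unknown" usage.
    known_prefixes = ("v=spf1", "v=DKIM1", "google-site-verification=")
    return "known" if txt.startswith(known_prefixes) else "unknown"

def suspicious_destinations(queries):
    """queries: iterable of (destination_ip, txt_payload) pairs.
    Returns destination IPs seen only with unknown TXT usage -
    candidate C&C endpoints for manual inspection."""
    dests = set()
    for dst_ip, txt in queries:
        if classify_txt_answer(txt) == "unknown":
            dests.add(dst_ip)
    return dests

sample = [("198.51.100.1", "v=spf1 include:example.com ~all"),
          ("203.0.113.9", "cmd:ZG93bmxvYWQ="),   # opaque blob -> suspicious
          ("203.0.113.9", "v=DKIM1; k=rsa; p=MIGf...")]
print(suspicious_destinations(sample))  # → {'203.0.113.9'}
```

On the paper's campus data this kind of filter reduced millions of TXT queries to 330 destination addresses, a volume small enough for administrators to check by hand.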



Graham, M.; Winckles, A.; Sanchez-Velazquez, E., "Botnet Detection Within Cloud Service Provider Networks Using Flow Protocols," in Industrial Informatics (INDIN), 2015 IEEE 13th International Conference on, pp. 1614-1619, 22-24 July 2015. doi: 10.1109/INDIN.2015.7281975

Abstract: Botnets continue to remain one of the most destructive threats to cyber security. This work aims to detect botnet traffic within an abstracted virtualised infrastructure, such as is found within cloud service providers. To achieve this, an environment is created based on the Xen hypervisor, using Open vSwitch to export NetFlow Version 9. This paper provides experimental evidence for how flow export can capture the network traffic parameters that identify the presence of a command and control botnet within a virtualised infrastructure. The conceptual framework described within this paper presents a non-intrusive detection element for a botnet protection system for cloud service providers. Such a system could protect the type of virtualised environments that will form the building blocks for the Internet of Things.

Keywords: Internet of Things; cloud computing; invasive software; protocols; telecommunication traffic; Internet of Things; NetFlow Version 9;Open vSwitch; Xen hypervisor; abstracted virtualised infrastructure; botnet detection; botnet protection system; cloud service provider networks; command and control botnet; conceptual framework; cyber security; flow protocols; network traffic parameters; non-intrusive detection element;5G mobile communication; Bismuth; botnet detection; cloud service provider; netflow; virtualised infrastructure (ID#: 15-8244)
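Flow export summarizes traffic as compact records (source, destination, byte counts, timestamps), and C&C beaconing tends to show up as many small, similarly sized flows from one VM to one external endpoint. The heuristic below sketches how exported flow records might be screened; the thresholds and field names are invented for illustration and are not from the paper.

```python
from collections import defaultdict

def flag_candidate_cc(flows, min_flows=4, max_bytes=600):
    """flows: NetFlow-style records as dicts with 'src', 'dst', 'bytes'.
    Flags (src, dst) pairs showing repeated small flows - a common
    C&C beaconing signature. Thresholds are hypothetical."""
    buckets = defaultdict(list)
    for f in flows:
        buckets[(f["src"], f["dst"])].append(f["bytes"])
    return {pair for pair, sizes in buckets.items()
            if len(sizes) >= min_flows and max(sizes) <= max_bytes}

flows = ([{"src": "vm1", "dst": "cc.example", "bytes": 120}] * 5 +
         [{"src": "vm2", "dst": "cdn.example", "bytes": 80000}])
print(flag_candidate_cc(flows))  # → {('vm1', 'cc.example')}
```

Because the analysis consumes only flow summaries exported by the virtual switch, it stays non-intrusive: no agent runs inside the tenant VMs, matching the detection element the framework proposes.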



Kalaivani, K.; Suguna, C., "Efficient Botnet Detection Based on Reputation Model and Content Auditing in P2P Networks," in Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, pp. 1-4, 9-10 Jan. 2015. doi: 10.1109/ISCO.2015.7282358

Abstract: A botnet is a collection of Internet-connected computers that can send malicious content such as spam and viruses to other computers without their owners' knowledge. In a peer-to-peer (P2P) architecture it is very difficult to identify botnets because there is no centralized control. In this paper we use a security principle called data provenance integrity, which can verify the origin of data; for this, the peers' certificates are exchanged. A reputation-based trust model is used to identify the authenticated peer during file transmission: the reputation value of each peer is calculated, and a hash table is used for efficient file searching. The proposed system can also verify the trustworthiness of transmitted data through content auditing, in which the data are checked against a trained data set to identify malicious content.

Keywords: authorisation; computer network security; data integrity; information retrieval; invasive software; peer-to-peer computing; trusted computing;P2P networks; authenticated peer; botnet detection; content auditing; data provenance integrity; file searching; file transmission; hash table; malicious content; peer-to-peer architecture; reputation based trust model; reputation model; reputation value; security principle; spam; transmitted data trustworthiness; virus; Computational modeling; Cryptography; Measurement; Peer-to-peer computing; Privacy; Superluminescent diodes; Data provenance integrity; content auditing; reputation value; trained data set (ID#: 15-8245)
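The interplay of the hash-table file index, per-peer reputation values, and content-auditing feedback could be organized as below. The class, scoring increments, and thresholds are hypothetical illustrations of the scheme, not the paper's concrete design.

```python
import hashlib

class P2PIndex:
    """Reputation-weighted file index: a hash table gives O(1) lookup,
    and each peer's reputation is updated after its transfers are audited."""

    def __init__(self):
        self.files = {}        # sha1(filename) -> list of peer ids
        self.reputation = {}   # peer id -> score in [0, 1]

    def publish(self, peer, filename, initial_rep=0.5):
        key = hashlib.sha1(filename.encode()).hexdigest()
        self.files.setdefault(key, []).append(peer)
        self.reputation.setdefault(peer, initial_rep)

    def search(self, filename, min_rep=0.4):
        # Return the most reputable peer holding the file, if any.
        key = hashlib.sha1(filename.encode()).hexdigest()
        peers = self.files.get(key, [])
        trusted = [p for p in peers if self.reputation[p] >= min_rep]
        return max(trusted, key=self.reputation.get, default=None)

    def audit(self, peer, content_clean):
        # Content-auditing outcome feeds back into the reputation value;
        # the +0.1/-0.3 increments are invented for the example.
        delta = 0.1 if content_clean else -0.3
        r = self.reputation.get(peer, 0.5) + delta
        self.reputation[peer] = min(1.0, max(0.0, r))

idx = P2PIndex()
idx.publish("peerA", "song.mp3"); idx.publish("peerB", "song.mp3")
idx.audit("peerA", content_clean=False); idx.audit("peerB", content_clean=True)
print(idx.search("song.mp3"))  # → peerB
```

A peer caught distributing malicious content (here peerA) quickly drops below the trust threshold and is no longer returned by searches, which is how the model isolates bot-controlled peers without any central authority.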



Okayasu, S.; Sasaki, R., "Proposal and Evaluation of Methods Using the Quantification Theory and Machine Learning for Detecting C&C Server Used in a Botnet," in Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, vol. 3, pp. 24-29, 1-5 July 2015. doi: 10.1109/COMPSAC.2015.165

Abstract: In recent years, the damage caused by botnets has increased and become a serious problem. To address it, we proposed a method to detect unjust C&C servers using Hayashi's quantification theory class II. This method can detect unjust C&C servers even if they are not included in a blacklist. However, the detection rate of this method was predicted to decrease over time. We have therefore continued investigating the detection rate and adjusting the optimal detection method across different time periods. This paper reports the results of an investigation for 2014. In addition, we newly introduce a method using a support vector machine (SVM) for comparison with quantification theory class II. We found that the detection rates of quantification theory class II and the SVM are both very good, with very little difference in accuracy between them.

Keywords: invasive software; learning (artificial intelligence); support vector machines; C&C server; Hayashi quantification theory class II;SVM; botnet; detection rate; machine learning; optimal detection method; support vector machine; Accuracy; Data models; Electronic mail; Malware; Mathematical model; Servers; Support vector machines; Botnet; C& C Server; DNS; Hayashi's quantification methods; SVM (ID#: 15-8246)
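For the SVM side of the comparison, a dependency-free sketch is possible with Pegasos-style sub-gradient training of a linear SVM. The features (e.g. standardized domain-age and IP-churn scores per server) and labels are hypothetical; the paper does not publish its feature set here.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Pegasos-style sub-gradient descent on the hinge loss for a
    linear SVM (no bias term; features assumed zero-centered)."""
    random.seed(0)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for i in random.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            w = [(1 - eta * lam) * wj for wj in w]      # regularization shrink
            if margin < 1:                               # hinge-loss update
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Hypothetical zero-centered server features; +1 = C&C, -1 = legitimate.
X = [[-1.0, -0.5], [-0.8, -1.2], [1.0, 0.7], [0.9, 1.1]]
y = [-1, -1, 1, 1]
w = train_linear_svm(X, y)
print([predict(w, x) for x in X])  # → [-1, -1, 1, 1]
```

In practice one would use a mature SVM implementation with a non-linear kernel; the point of the sketch is just the training signal — shrink toward small weights, push in the direction of any example that violates the margin.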



Stevanovic, M.; Pedersen, J.M., "An Analysis of Network Traffic Classification for Botnet Detection," in Cyber Situational Awareness, Data Analytics and Assessment (CyberSA), 2015 International Conference on, pp. 1-8, 8-9 June 2015. doi: 10.1109/CyberSA.2015.7361120

Abstract: Botnets represent one of the most serious threats to Internet security today. This paper explores how network traffic classification can be used for accurate and efficient identification of botnet network activity in local and enterprise networks. The paper examines the effectiveness of detecting botnet network traffic using three methods that target the protocols widely considered the main carriers of botnet Command and Control (C&C) and attack traffic: TCP, UDP and DNS. We propose three traffic classification methods based on the capable Random Forests classifier. The proposed methods have been evaluated through a series of experiments using traffic traces originating from 40 different bot samples and diverse non-malicious applications. The evaluation indicates accurate and time-efficient classification of botnet traffic for all three protocols. Future work will be devoted to optimizing the traffic analysis and correlating the findings from the three analysis methods in order to identify compromised hosts within the network.

Keywords: Botnet; Botnet Detection; Features Selection; MLAs; Random Forests; Traffic Analysis; Traffic Classification (ID#: 15-8247)
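Classifiers like the Random Forests used here operate on per-flow feature vectors rather than raw packets. A plausible (assumed, not the paper's published) feature extraction step for TCP/UDP/DNS flows might look like this; the resulting dictionaries would then be fed to any off-the-shelf Random Forests implementation.

```python
import statistics

def flow_features(pkts):
    """Extract per-flow features of the kind commonly fed to a
    Random Forests traffic classifier: packet-size and
    inter-arrival-time statistics. Field names are illustrative."""
    sizes = [p["len"] for p in pkts]
    times = [p["ts"] for p in pkts]
    gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
    return {
        "n_pkts": len(pkts),
        "total_bytes": sum(sizes),
        "mean_size": statistics.mean(sizes),
        "std_size": statistics.pstdev(sizes),
        "mean_gap": statistics.mean(gaps),
        "std_gap": statistics.pstdev(gaps),
    }

pkts = [{"ts": 0.0, "len": 60}, {"ts": 0.5, "len": 1500}, {"ts": 1.0, "len": 60}]
print(flow_features(pkts))
```

Keeping the features cheap to compute (counts, sums, simple statistics) is what makes the classification time-efficient enough for the enterprise-scale deployment the abstract targets.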



Al-Duwairi, Basheer; Al-Hammouri, Ahmad; Aldwairi, Monther; Paxson, Vern, "GFlux: A Google-based System for Fast Flux Detection," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 755-756, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346920

Abstract: Fast Flux Networks (FFNs) are a technique used by botnets to rapidly change the IP addresses associated with botnet infrastructure and spam websites, adopting mechanisms similar to those used in Content Distribution Networks (CDNs) and Round Robin DNS Systems (RRDNS). In this work we present a novel approach, called GFlux, for fast flux detection. GFlux analyzes the result pages returned by the Google search engine for queries consisting of IP addresses associated with suspect domain names. We base the GFlux approach on the observation that the number of hits returned by Google for queries associated with FFN domains should generally be much lower than for those associated with legitimate domains, particularly those used by CDNs. Our preliminary results show that the number of hits provides a key feature that can aid in accurately classifying domain names as either fast-flux or non-fast-flux domains.

Keywords: Electronic mail; Feature extraction; Google; IP networks; Internet; Search engines; Security (ID#: 15-8248)
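Reduced to its core, the GFlux observation yields a one-feature classifier over search-hit counts. The threshold and hit counts below are hypothetical; a real system would obtain the counts by querying the Google search engine for each resolved IP of the suspect domain.

```python
def classify_domain(hit_counts, threshold=1000):
    """hit_counts: Google result counts for the IPs a domain resolves to.
    CDN/legitimate IPs tend to be well indexed, while FFN IPs (typically
    compromised home machines) rarely appear in search results.
    The threshold is an illustrative assumption."""
    return "fast-flux" if max(hit_counts) < threshold else "benign"

# Hypothetical counts for two domains' resolved IP sets.
print(classify_domain([3, 0, 12]))        # → fast-flux
print(classify_domain([45200, 380000]))   # → benign
```

Using the maximum over the IP set means a single well-indexed address (as a CDN edge node would be) is enough to clear a domain, which keeps false positives on CDN-hosted domains low.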



Pengkui Luo; Torres, Ruben; Zhi-Li Zhang; Saha, Sabyasachi; Sung-Ju Lee; Nucci, Antonio; Mellia, Marco, "Leveraging Client-Side DNS Failure Patterns to Identify Malicious Behaviors," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 406-414, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346852

Abstract: DNS has been increasingly abused by adversaries for cyber-attacks. Recent research has leveraged DNS failures (i.e., DNS queries that result in a Non-Existent-Domain response from the server) to identify malware activities, especially domain-flux botnets that generate many random domains as a rendezvous technique for command-and-control. Using ISP network traces, we conduct a systematic analysis of DNS failure characteristics, with the goal of uncovering how attackers exploit DNS for malicious activities. In addition to DNS failures generated by domain-flux bots, we discover many diverse and stealthy failure patterns that have received little attention. Based on these findings, we present a framework that detects diverse clusters of suspicious domain names that cause DNS failures, by considering multiple types of syntactic as well as temporal patterns. Our evolutionary learning framework evaluates the clusters produced over time to eliminate spurious cases while retaining sustained (i.e., highly suspicious) clusters. One advantage of our framework is that it analyzes DNS failures on a per-client basis and does not hinge on the existence of multiple clients infected by the same malware. Our evaluation on a large ISP network trace shows that our framework detects at least 97% of the clients with suspicious DNS behaviors, with over 81% precision.

Keywords: Clustering algorithms; Conferences; Electronic mail; Feature extraction; Malware; Servers; Syntactics (ID#: 15-8249)
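The per-client, syntax-based angle can be illustrated with a simple entropy heuristic: a single client producing a burst of NXDOMAIN responses for high-entropy (random-looking) names is a domain-flux candidate. The entropy cutoff and failure-count threshold are invented for the example and stand in for the framework's richer syntactic and temporal features.

```python
import math
from collections import Counter, defaultdict

def entropy(s):
    """Shannon entropy (bits/char) of a string - crude randomness score."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def suspicious_clients(failures, min_failures=3, min_entropy=3.0):
    """failures: iterable of (client_ip, failed_domain) pairs.
    Per-client analysis: flag clients with several NXDOMAINs whose
    first labels look algorithmically generated."""
    by_client = defaultdict(list)
    for client, name in failures:
        by_client[client].append(name.split(".")[0])
    flagged = set()
    for client, labels in by_client.items():
        randomish = [l for l in labels if entropy(l) >= min_entropy]
        if len(randomish) >= min_failures:
            flagged.add(client)
    return flagged

failures = [("10.0.0.8", "xk2qv9wrt1.com"), ("10.0.0.8", "m8zp3ltq0a.com"),
            ("10.0.0.8", "q7rw2nv5xj.com"), ("10.0.0.3", "wwww.example.com")]
print(suspicious_clients(failures))  # → {'10.0.0.8'}
```

Note that the verdict for 10.0.0.8 needs no corroborating second infected client — the same per-client property the abstract highlights as an advantage over correlation-based detectors.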



Ghafir, I.; Prenosil, V., "DNS Traffic Analysis for Malicious Domains Detection," in Signal Processing and Integrated Networks (SPIN), 2015 2nd International Conference on, pp. 613-618, 19-20 Feb. 2015. doi: 10.1109/SPIN.2015.7095337

Abstract: The web has become the medium of choice for people to search for information, conduct business, and enjoy entertainment. At the same time, the web has also become the primary platform used by miscreants to attack users. For example, drive-by-download attacks, which can be launched through malicious domains, are a popular choice among bot herders for growing their botnets. In this paper we present our methodology for detecting any connection to a malicious domain. Our detection method is based on a blacklist of malicious domains: we process the network traffic, particularly DNS traffic, analyze all DNS requests, and match each query against the blacklist. The blacklist of malicious domains is updated automatically and the detection runs in real time. We applied our methodology to a packet capture (pcap) file containing traffic to malicious domains and showed that it successfully detects the connections to those domains. We also applied it to live campus traffic and showed that it can detect malicious domain connections in real time.

Keywords: Internet; invasive software; query processing; telecommunication traffic; DNS traffic analysis; Web; bot herders; campus live traffic; drive-by-download attacks; malicious domain blacklist; malicious domain connections; malicious domain detection; network traffic; packet capture file; pcap file; Computers; IP networks; Malware; Monitoring; Real-time systems; Web sites; Cyber attacks; botnet; intrusion detection system; malicious domain; malware (ID#: 15-8250)
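The core of the method — extracting the queried name from each DNS request and matching it against the blacklist — is straightforward to sketch. The minimal QNAME parser below handles only a plain query (12-byte header, uncompressed name) and is an illustrative assumption, not the authors' implementation.

```python
def extract_qname(dns_payload):
    """Parse the QNAME of a DNS query: length-prefixed labels after
    the fixed 12-byte header, terminated by a zero byte."""
    labels, i = [], 12
    while dns_payload[i] != 0:
        n = dns_payload[i]
        labels.append(dns_payload[i + 1:i + 1 + n].decode())
        i += n + 1
    return ".".join(labels)

def check_packet(dns_payload, blacklist):
    """Return the queried domain if it is blacklisted, else None."""
    domain = extract_qname(dns_payload)
    return domain if domain in blacklist else None

# Hand-built query for 'evil.example' (zeroed header + encoded QNAME).
query = bytes(12) + b"\x04evil\x07example\x00"
print(check_packet(query, {"evil.example", "bad.test"}))  # → evil.example
```

Because the blacklist lives in a set, each query is checked in O(1), which is what makes real-time matching on live campus traffic feasible; a production parser would additionally handle name compression and the rest of the DNS message format.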



Compagno, Alberto; Conti, Mauro; Lain, Daniele; Lovisotto, Giulio; Mancini, Luigi Vincenzo, "Boten ELISA: A novel approach for botnet C&C in Online Social Networks," in Communications and Network Security (CNS), 2015 IEEE Conference on, pp. 74-82, 28-30 Sept. 2015. doi: 10.1109/CNS.2015.7346813

Abstract: The Command and Control (C&C) channel of modern botnets is migrating from traditional centralized solutions (such as the ones based on Internet Relay Chat and Hyper Text Transfer Protocol), towards new decentralized approaches. As an example, in order to conceal their traffic and avoid blacklisting mechanisms, recent C&C channels use peer-to-peer networks or abuse popular Online Social Networks (OSNs). A key reason for this paradigm shift is that current detection systems become quite effective in detecting centralized C&C.

Keywords: Command and control systems; Conferences; Facebook; Malware; Proposals; Protocols (ID#: 15-8251)



Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of the links or modifications via email.