Weaknesses 2015

SoS Newsletter- Advanced Book Block


Attackers need only find one or a few exploitable vulnerabilities to mount a successful attack, while defenders must shore up as many weaknesses as practicable. The research presented here covers a range of weaknesses and approaches for identifying and securing against attacks. Many articles focus on key systems, both public and private. Hard problems addressed include human behavior, policy-based governance, resilience, and metrics. The work cited here was presented in 2015.

Schneider, J.; Romanowski, C.; Raj, R.K.; Mishra, S.; Stein, K., “Measurement of Locality Specific Resilience,” in Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, vol., no., pp. 1–6, 14–16 April 2015. doi:10.1109/THS.2015.7225332
Abstract: Resilience has been defined at the local, state, and national levels, and subsequent attempts to refine the definition have added clarity. Quantitative measurements, however, are crucial to a shared understanding of resilience. This paper reviews the evolution of resiliency indicators and metrics and suggests extensions to current indicators to measure functional resilience at a jurisdictional or community level. Using a management systems approach, an input/output model may be developed to demonstrate the abilities, actions, and activities needed to support a desired outcome. Applying systematic gap analysis and an improvement cycle with defined metrics, the paper proposes a model to evaluate a community’s operational capability to respond to stressors. Because each locality is different, with unique risks, strengths, and weaknesses, the model incorporates these characteristics and calculates a relative measure of maturity for that community. Any community can use the resulting model output to plan and improve its resiliency capabilities.
Keywords: emergency management; social sciences; community operational capability; functional resilience measurement; locality specific resilience measurement; quantitative measurement; resiliency capability; resiliency indicators; resiliency metrics; systematic gap analysis; Economics; Emergency services; Hazards; Measurement; Resilience; Standards; Training; AHP; community resilience; operational resilience modeling; resilience capability metrics (ID#: 15-7200)


Bowu Zhang; Jinho Hwang; Liran Ma; Wood, Timothy, “Towards Security-Aware Virtual Server Migration Optimization to the Cloud,” in Autonomic Computing (ICAC), 2015 IEEE International Conference on, vol., no., pp. 71–80, 7–10 July 2015. doi:10.1109/ICAC.2015.45
Abstract: Cloud computing, featuring shared servers and location-independent services, has been widely adopted by various businesses to increase computing efficiency and reduce operational costs. Despite significant benefits and interest, enterprises have a hard time deciding whether or not to migrate thousands of servers into the cloud, for reasons such as the lack of holistic migration planning tools, concerns about data security, and cloud vendor lock-in. In particular, cloud security has become the major concern for decision makers, due to an inherent weakness of virtualization: the fact that the cloud allows multiple users to share resources through Internet-facing interfaces can easily be taken advantage of by attackers. Therefore, setting up a secure environment for resource migration becomes the top priority for both enterprises and cloud providers. To achieve this security, policies such as firewalls and access control have been widely adopted, leading to significant cost, as additional resources need to be employed. In this paper, we address the challenge of security-aware virtual server migration and propose a migration strategy that minimizes the migration cost while meeting the security needs of enterprises. We prove that the proposed security-aware cost minimization problem is NP-hard and that our solution achieves an approximation factor of 2. We perform an extensive simulation study to evaluate the performance of the proposed solution under various settings. Our simulation results demonstrate that our approach saves 53% of the moving cost in the single-enterprise case and 66% in the multiple-enterprise case compared to a random migration strategy.
Keywords: cloud computing; cost reduction; resource allocation; security of data; virtualisation; Internet-facing interfaces; NP hard problem; cloud security; cloud vendor lock-in; data security; moving cost savings; resource migration; resource sharing; security policy; security-aware cost minimization problem; security-aware virtual server migration optimization; Approximation algorithms; Approximation methods; Cloud computing; Clustering algorithms; Home appliances; Security; Servers; Cloud Computing; Cloud Migration; Cloud Security; Cost Minimization (ID#: 15-7201)


Suh-Lee, Candace; Juyeon Jo, “Quantifying Security Risk by Measuring Network Risk Conditions,” in Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, vol., no., pp. 9–14, June 28 2015–July 1 2015. doi:10.1109/ICIS.2015.7166562
Abstract: Software vulnerabilities are weaknesses in software that inadvertently allow dangerous operations. If the vulnerability is in a network service, it poses serious security threats, because a cyber-attacker can exploit it to gain unauthorized access to the system. Hence, rapid discovery and remediation of network vulnerabilities are critical issues in network security. In today’s dynamic IT environment, it is common practice for an organization to prioritize the mitigation of discovered vulnerabilities according to their risk levels. Currently available technologies, however, associate each vulnerability with a static risk level that does not take the unique characteristics of the target network into account. This often leads to inaccurate risk prioritization and less-than-optimal resource allocation. In this research, we introduce a novel way of quantifying the risk of a network vulnerability by augmenting the static risk level with conditions specific to the target network. The method calculates the risk value of each vulnerability by measuring the proximity to the untrusted network and the risk of neighboring hosts. The resulting risk value, RCR, is a composite index of the individual risk, network location, and neighborhood risk conditions, so it can be used effectively for prioritization, comparison, and trending. We tested the methodology through network intrusion simulation. The results show an average correlation of 88.9% between RCR and the number of successful attacks on each vulnerability.
Keywords: computer network security; resource allocation; risk management; RCR; cyber-attacker; dynamic IT environment; less-than-optimal resource allocation; network intrusion simulation; network location; network risk condition measurement; network security; network service; network vulnerability; risk prioritization; security risk quantification; security threats; software vulnerability; Internet; Organizations; Reliability; Security; Servers; Standards organizations; Workstations; quantitative risk analysis; usable security; vulnerability management (ID#: 15-7202)
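To make the composite-index idea in this abstract concrete, here is a hedged Python sketch: a static severity score is augmented by proximity to the untrusted network and by neighborhood risk, as the abstract describes. The weights and the formula below are invented for illustration; they are not the paper’s actual RCR definition.

```python
# Hypothetical composite risk value in the spirit of RCR.
# Weights and formula are illustrative assumptions, not the authors' model.

def rcr(static_risk, hops_to_untrusted, neighbor_risks,
        w_static=0.5, w_proximity=0.3, w_neighbors=0.2):
    """Combine individual risk, network location, and neighborhood risk."""
    # Fewer hops to the untrusted network -> higher exposure.
    proximity = 1.0 / (1 + hops_to_untrusted)
    # Average risk of directly reachable hosts.
    neighborhood = sum(neighbor_risks) / len(neighbor_risks) if neighbor_risks else 0.0
    return (w_static * static_risk
            + w_proximity * 10 * proximity
            + w_neighbors * neighborhood)

# A CVSS-like 7.5 vulnerability one hop from the Internet, surrounded by
# risky hosts, outranks the same vulnerability buried deep in the network.
exposed = rcr(7.5, hops_to_untrusted=1, neighbor_risks=[8.0, 6.0])
buried = rcr(7.5, hops_to_untrusted=5, neighbor_risks=[2.0, 1.0])
```

The point of such a score is exactly the abstract’s claim: two instances of the same CVE can carry very different operational risk depending on where they sit in the network.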


Xiao Chen; Liang Pang; Yuhuan Tang; Hongpeng Yang; Zhi Xue, “Security in MIMO Wireless Hybrid Channel with Artificial Noise,” in Cyber Security of Smart Cities, Industrial Control System and Communications (SSIC), 2015 International Conference on, vol., no., pp. 1–4, 5–7 Aug. 2015. doi:10.1109/SSIC.2015.7245676
Abstract: Security is an important issue in the field of wireless channels. In this paper, the security problem of the Gaussian MIMO wireless hybrid channel is considered, where a transmitter with multiple antennas sends information to an intended receiver with one antenna in the presence of an eavesdropper with multiple antennas. By using some of the transmit power to produce ‘artificial noise’, the transmitter can degrade only the eavesdropper’s channel, ensuring the security of the communication. This scheme, however, has an inherent weakness. A Hybrid Blind Space Elimination (HBSE) scheme is therefore proposed, and proved to fix the design flaw, strengthening the original scheme.
Keywords: Gaussian channels; MIMO communication; wireless channels; Gaussian MIMO wireless hybrid channel; HBSE scheme; artificial noise; hybrid blind space elimination scheme; security problem; Communication system security; Noise; Receiving antennas; Security; Transmitting antennas; Wireless communication; HBSE; MIMO-WHC; secrecy capacity; wireless hybrid channel (ID#: 15-7203)
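The artificial-noise technique this paper builds on can be illustrated with a minimal sketch (this shows only the basic null-space idea, not the proposed HBSE scheme, and the channel coefficients are made up): noise is injected in the direction orthogonal to the legitimate channel, so it cancels at the intended receiver while corrupting any eavesdropper whose channel differs.

```python
# Illustrative artificial-noise transmission for a 2-antenna transmitter
# and a single-antenna receiver. Channel values are invented.
import random

h = (0.8, 0.6)          # legitimate channel (assumed known to transmitter)
null = (-h[1], h[0])    # direction orthogonal to h: h . null = 0

def transmit(symbol, noise_power=1.0):
    n = random.gauss(0, noise_power)
    # Antenna signals = signal beamformed along h + noise along null(h).
    return (symbol * h[0] + n * null[0],
            symbol * h[1] + n * null[1])

def receive(x, channel):
    return x[0] * channel[0] + x[1] * channel[1]

x = transmit(1.0)
y_legit = receive(x, h)   # the noise term cancels: h . null(h) = 0
g = (0.6, -0.8)           # an eavesdropper's (different) channel
y_eve = receive(x, g)     # sees the full artificial noise
```

The legitimate receiver recovers the symbol exactly (up to channel gain), while `y_eve` is dominated by the injected noise whenever `g` is not aligned with `h`.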


Lychev, R.; Jero, S.; Boldyreva, A.; Nita-Rotaru, C., “How Secure and Quick is QUIC? Provable Security and Performance Analyses,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 214–231, 17–21 May 2015. doi:10.1109/SP.2015.21
Abstract: QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions for decreasing latency while providing security properties similar to those of TLS. In this work we shed some light on QUIC’s strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We first introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol’s building blocks. However, we find that QUIC does not satisfy the traditional notion of forward secrecy that is provided by some modes of TLS, e.g., TLS-DHE. Our analyses also reveal that with simple bit-flipping and replay attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency advantages, either by having it fall back to TCP or by causing the client and server to have an inconsistent view of their handshake, leading to a failure to complete the connection. We have implemented these attacks and demonstrated that they are practical. Our results suggest that QUIC’s security weaknesses are introduced by the very mechanisms used to reduce latency, which highlights the seemingly inherent trade-off between minimizing latency and providing “good” security guarantees.
Keywords: client-server systems; computer network security; transport protocols; Chrome; Google; QUIC; TLS-DHE; bit-flipping; performance analysis; performance guarantee; provable security; secure transport protocol; Encryption; IP networks; Protocols; Public key; Servers (ID#: 15-7204)
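The bit-flipping attack described in this abstract exploits handshake parameters that carry no integrity protection. A toy sketch (the field names are invented; this is not QUIC’s wire format) shows the difference an authenticity check makes: flipping one bit of an unprotected field silently changes what the peer negotiates, while a MAC over the same field detects the tampering.

```python
# Toy illustration of unauthenticated vs. MAC-protected handshake fields.
# Field names and key are invented; this is not QUIC's actual format.
import hashlib
import hmac

key = b"session-key"
params = b"version=Q025;cid=1234"

def protect(data):
    return hmac.new(key, data, hashlib.sha256).digest()

tag = protect(params)
tampered = bytes([params[0] ^ 1]) + params[1:]   # attacker flips one bit

accepted_unprotected = True  # with no integrity check, tampering goes unnoticed
accepted_protected = hmac.compare_digest(protect(tampered), tag)  # detected
```

With an unauthenticated field, the peers simply proceed with inconsistent handshake views, which is exactly how the paper forces fallback to TCP or a failed connection.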


Almubark, A.; Hatanaka, N.; Uchida, O.; Ikeda, Y., “Identifying the Organizational Factors of Information Security Incidents,” in Computing Technology and Information Management (ICCTIM), 2015 Second International Conference on, vol., no., pp. 7–12, 21–23 April 2015. doi:10.1109/ICCTIM.2015.7224585
Abstract: Leakage of secret information has increasingly become a social problem. Information leaks typically target specific organizations or persons, which makes the magnitude of the risk involved in information security a part of business activity. This paper aims to identify the causes of information leaks by applying organization theory and statistical analysis to reveal the mechanism of information security incidents. Furthermore, the relationship between organizational objectives and social values is discussed in order to propose solutions for resolving organizational weaknesses.
Keywords: organisational aspects; security of data; social aspects of automation; statistical analysis; business activity; information leaks; information security incidents; organizational factors; organizational objectives; secret information; social values; statistical analysis method; Decision support systems; Information security; Organizations; Corporate Culture; Information Security Incidents (ID#: 15-7205)


Procházková, L.Ď.; Hromada, M., “The Security Risks Associated with Attacks on Soft Targets of State,” in Military Technologies (ICMT), 2015 International Conference on, vol., no., pp. 1–4, 19–21 May 2015. doi:10.1109/MILTECHS.2015.7153731
Abstract: The article discusses attacks on soft targets of the state. The theoretical part establishes the relevant background knowledge and defines the primary situations that emerge as weaknesses exploited in attacks. The document analyses situations closely linked to attempted attacks on such targets, including an analysis of the causes that led to each attack. This causal analysis should identify security vulnerabilities and point to possibilities for introducing changes. The state of affairs is also analyzed from a global perspective spanning several decades. Finally, analysis of past attempted attacks on soft targets points to a possible way of addressing the proposed changes to the protection process.
Keywords: national security; risk analysis; post primary analysis; security risks; security vulnerability; soft target of state attack; Globalization; Proposals; Sociology; Statistics; Terrorism; Weapons; attack; attacker; reason; soft targets (ID#: 15-7206)


Gawron, Marian; Cheng, Feng; Meinel, Christoph, “Automatic Detection of Vulnerabilities for Advanced Security Analytics,” in Network Operations and Management Symposium (APNOMS), 2015 17th Asia-Pacific, vol., no., pp. 471–474, 19–21 Aug. 2015. doi:10.1109/APNOMS.2015.7275369
Abstract: The detection of vulnerabilities in computer systems and computer networks, as well as weakness analysis, are crucial problems. The presented method tackles the problem with automated detection. To identify vulnerabilities, the approach uses a logical representation of the preconditions and postconditions of each vulnerability. This conditional structure models the requirements and impacts of each vulnerability, so an automated analytical function can detect security leaks on a target system based on this logical format. With this method it is possible to scan a system without much expertise, since the automated or computer-aided vulnerability detection does not require special knowledge about the target system. The gathered information is used to provide security advisories and enhanced diagnostics that can also detect attacks exploiting multiple vulnerabilities of the system.
Keywords: Browsers; Complexity theory; Data models; Databases; Operating systems; Security (ID#: 15-7207)
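The precondition/postcondition modeling this abstract describes can be sketched as a simple forward-chaining search (the vulnerability names and conditions below are hypothetical): each vulnerability requires some conditions and grants others, and chaining over a host’s initial state reveals multi-step attacks.

```python
# Hypothetical precondition/postcondition chaining over a vulnerability set.
# Names and conditions are illustrative, not the paper's data model.

vulns = {
    "CVE-A": {"pre": {"network_access"}, "post": {"user_shell"}},
    "CVE-B": {"pre": {"user_shell"}, "post": {"root"}},
}

def reachable(initial, vulns):
    """Forward-chain: apply any vulnerability whose preconditions hold."""
    state, chain = set(initial), []
    changed = True
    while changed:
        changed = False
        for name, v in vulns.items():
            if name not in chain and v["pre"] <= state:
                state |= v["post"]       # grant the vulnerability's impact
                chain.append(name)
                changed = True
    return state, chain

# Starting with only network access, the chain CVE-A -> CVE-B yields root.
state, chain = reachable({"network_access"}, vulns)
```

This is the mechanism by which such a representation “could also detect attacks that exploit multiple vulnerabilities”: no single entry grants root, but the chain does.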


Kassicieh, Sul; Lipinski, Valerie; Seazzu, Alessandro F., “Human Centric Cyber Security: What Are the New Trends in Data Protection?,” in Management of Engineering and Technology (PICMET), 2015 Portland International Conference on, vol., no., pp. 1321–1338, 2–6 Aug. 2015. doi:10.1109/PICMET.2015.7273084
Abstract: The debate about the use of automated security measures versus training and awareness of people with access to data (such as employees) to protect sensitive and/or private information has been going on for some time. In this paper, we outline the thinking behind security, what hackers are trying to accomplish and the best ways of combating these efforts using the latest techniques that combine multiple lines of defense. Different major categories of automated security measures as well as major training and awareness techniques are discussed outlining strengths and weaknesses of each method.
Keywords: Companies; Computer crime; Media; Training (ID#: 15-7208)


Akgün, Mete; Çağlayan, M.Ufuk, “Weaknesses of Two RFID Protocols Regarding De-Synchronization Attacks,” in Wireless Communications and Mobile Computing Conference (IWCMC), 2015 International, vol., no., pp. 828–833, 24–28 Aug. 2015. doi:10.1109/IWCMC.2015.7289190
Abstract: Radio Frequency Identification (RFID) protocols should have a secret-updating phase in order to protect the privacy of RFID tags against tag tracing attacks. In the literature, there are many lightweight RFID authentication protocols that try to provide key updating with lightweight cryptographic primitives. In this paper, we analyze the security of two recently proposed lightweight RFID authentication protocols against de-synchronization attacks. We show that the secret values shared between the back-end server and any given tag can easily be desynchronized. This weakness stems from the insufficient design of these protocols.
Keywords: Authentication; Privacy; Protocols; RFID tags; Servers; RFID; authentication; de-synchronization (ID#: 15-7209)
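The de-synchronization weakness has a simple shape that a toy model can illustrate (this is not either analyzed protocol; the update function is an arbitrary stand-in): tag and server must update a shared secret in lockstep after each successful run, so an attacker who drops one message splits their state.

```python
# Toy model of key-update de-synchronization in an RFID-style protocol.
# The hash-based update is an illustrative stand-in, not either protocol.
import hashlib

def update(secret):
    return hashlib.sha256(secret).digest()

server = tag = b"shared-secret"

# Normal run: both sides update in lockstep and stay synchronized.
server, tag = update(server), update(tag)
in_sync = (server == tag)

# Attacked run: the attacker drops the server's final confirmation,
# so only the tag updates its secret.
tag = update(tag)
desynced = (server != tag)
```

Once the two secrets diverge, subsequent authentications fail and the tag is effectively disabled, which is exactly the availability impact of a de-synchronization attack.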


Arden, O.; Liu, J.; Myers, A.C., “Flow-Limited Authorization,” in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, vol., no., pp. 569–583, 13–17 July 2015. doi:10.1109/CSF.2015.42
Abstract: Because information flow control mechanisms often rely on an underlying authorization mechanism, their security guarantees can be subverted by weaknesses in authorization. Conversely, the security of authorization can be subverted by information flows that leak information or that influence how authority is delegated between principals. We argue that interactions between information flow and authorization create security vulnerabilities that have not been fully identified or addressed in prior work. We explore how the security of decentralized information flow control (DIFC) is affected by three aspects of its underlying authorization mechanism: first, delegation of authority between principals; second, revocation of previously delegated authority; and third, information flows created by the authorization mechanisms themselves. It is no surprise that revocation poses challenges, but we show that even delegation is problematic because it enables unauthorized downgrading. Our solution is a new security model, the Flow-Limited Authorization Model (FLAM), which offers a new, integrated approach to authorization and information flow control. FLAM ensures robust authorization, a novel security condition for authorization queries that ensures attackers cannot influence authorization decisions or learn confidential trust relationships. We discuss our prototype implementation and its algorithm for proof search.
Keywords: authorisation; FLAM; authorization queries; confidential trust relationships; decentralized information flow control; flow-limited authorization model; proof search; robust authorization; security condition; security model; security vulnerabilities; Authorization; Buildings; Cognition; Fabrics; Lattices; Robustness; access control; authorization logic; distributed systems; dynamic policies; information flow control; language-based security; security; trust management (ID#: 15-7210)


Kapur, P.K.; Yadavali, V.S.S.; Shrivastava, A.K., “A Comparative Study of Vulnerability Discovery Modeling and Software Reliability Growth Modeling,” in Futuristic Trends on Computational Analysis and Knowledge Management (ABLAZE), 2015 International Conference on, vol., no., pp. 246–251, 25–27 Feb. 2015. doi:10.1109/ABLAZE.2015.7155000
Abstract: Technological advancements are reaching greater heights with each passing day, and information technology is one of the areas developing at the most rapid pace. It has evolved in such a way that we are all interconnected through some medium, e.g., the Internet or telecommunications, and these advancements affect everyone’s day-to-day life. With this increasing dependency on software systems, security has become a major challenge. The problem is critical because of the presence of attackers, and it has drawn many researchers toward identifying the major attributes of security. The security attribute considered in this paper is software vulnerability. A software security vulnerability is a weakness in a software product that could allow an attacker to compromise the integrity, availability, or confidentiality of that product. In the past, vulnerabilities have been reported in various operating systems, and to mitigate the associated risk both developers and users must commit significant resources. Recently, some researchers have investigated the potential number of vulnerabilities in software using a quantitative approach. In this paper we analytically describe existing models and compare them with our proposed models by evaluating all of them on actual data for various software systems. Our proposed models capture the discovery process better than the existing discovery models. We also show that some existing SRGMs can be used to predict security vulnerabilities in software.
Keywords: program verification; risk management; security of data; software reliability; Internet; SRGM; information technology; model evaluation; product availability; product confidentiality; product integrity; quantitative approach; risk mitigation; security attributes; security problem; software product; software reliability growth modeling; software security vulnerability prediction; software system dependency; technical advancement; technological advancement; telecommunication; vulnerability discovery modeling; Analytical models; Computational modeling; Mathematical model; Security; Software reliability; Software systems; Non Homogeneous Poisson Process (NHPP); Software Reliability Growth Model (SRGM); Software Security; Vulnerability; Vulnerability Discovery Model (VDM) (ID#: 15-7211)
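As one concrete example of the kind of vulnerability discovery model being compared, the well-known Alhazmi-Malaiya logistic VDM gives an S-shaped cumulative discovery curve (the parameter values below are invented for illustration; the paper does not publish its fitted parameters in the abstract):

```python
# Alhazmi-Malaiya logistic vulnerability discovery model:
#   Omega(t) = B / (1 + C * exp(-A*B*t))
# B is the total number of vulnerabilities; A and C shape the curve.
# Parameter values here are illustrative, not fitted to real data.
import math

def aml(t, A=0.002, B=100.0, C=20.0):
    return B / (1 + C * math.exp(-A * B * t))

# S-shaped growth: slow start, rapid middle phase, saturation near B.
early, mid, late = aml(1), aml(20), aml(80)
```

Fitting such a curve to reported-vulnerability data is what lets both VDMs and repurposed SRGMs predict how many vulnerabilities remain to be discovered.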


D’Lima, N.; Mittal, J., “Password Authentication Using Keystroke Biometrics,” in Communication, Information & Computing Technology (ICCICT), 2015 International Conference on, vol., no., pp. 1–6, 15–17 Jan. 2015. doi:10.1109/ICCICT.2015.7045681
Abstract: The majority of applications prompt for a username and password. Passwords are recommended to be unique, long, complex, alphanumeric, and non-repetitive. The very properties that make passwords secure, however, may prove to be a point of weakness: the complexity of the password is a challenge for the user, who may choose to write it down, compromising the password’s security and taking away its advantage. An alternative method of security is keystroke biometrics. This approach uses the natural typing pattern of a user for authentication. This paper proposes a new method for reducing error rates and creating a robust technique. The new method makes use of multiple sensors to obtain information about a user. An artificial neural network is used to model a user’s behavior as well as to retrain the system. An alternate user verification mechanism is used in case a user is unable to match their typing pattern.
Keywords: authorisation; biometrics (access control); neural nets; pattern matching; artificial neural network; error rates; keystroke biometrics; password authentication; password security; robust security technique; typing pattern matching; user behavior; user natural typing pattern; user verification mechanism; Classification algorithms; Error analysis; Europe; Hardware; Monitoring; Support vector machines; Text recognition; Artificial Neural Networks; Authentication; Keystroke Biometrics; Password; Security
(ID#: 15-7212)
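A minimal sketch of the keystroke-dynamics idea (deliberately simpler than the paper’s neural-network approach; the timing values and threshold are invented): enrollment builds a timing profile, and a login attempt is accepted only when its timings stay close to that profile.

```python
# Illustrative keystroke-dynamics verification; timings and threshold
# are made-up, and a simple distance check stands in for the paper's ANN.

def profile(samples):
    """Average each timing feature over the enrollment samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def matches(attempt, prof, tolerance=0.25):
    """Accept if mean absolute relative deviation from the profile is small."""
    dev = sum(abs(a - p) / p for a, p in zip(attempt, prof)) / len(prof)
    return dev <= tolerance

# Enrollment: inter-key timings (seconds) recorded while typing the password.
enroll = [[0.11, 0.24, 0.18], [0.10, 0.26, 0.20], [0.12, 0.25, 0.19]]
prof = profile(enroll)

genuine = matches([0.11, 0.25, 0.19], prof)   # same typing rhythm
imposter = matches([0.30, 0.10, 0.40], prof)  # knows the password, wrong rhythm
```

The key property is the second case: even an attacker who knows the password is rejected because the typing rhythm does not match.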


Maler, Eve, “Extending the Power of Consent with User-Managed Access: A Standard Architecture for Asynchronous, Centralizable, Internet-Scalable Consent,” in Security and Privacy Workshops (SPW), 2015 IEEE, vol., no., pp. 175–179, 21–22 May 2015. doi:10.1109/SPW.2015.34
Abstract: The inherent weaknesses of existing notice-and-consent paradigms of data privacy are becoming clear, not just to privacy practitioners but to ordinary online users as well. The corporate privacy function is a maturing discipline, but greater maturity often equates just to greater regulatory compliance. At a time when many users are disturbed by the status quo, new trends in web security and data sharing are demonstrating useful new consent paradigms. Benefiting from these trends, the emerging standard User-Managed Access (UMA) allows apps to extend the power of consent. UMA corrects a power imbalance that favors companies over individuals, enabling privacy solutions that move beyond compliance.
Keywords: Internet; authorisation; data privacy; Internet-scalable consent; UMA; Web security; asynchronous consent; centralizable consent; corporate privacy function; data sharing; notice-and-consent paradigms; user-managed access; Authorization; Automation; Data privacy; Market research; Privacy; Servers; Standards; privacy; consent; authorization; permission; access control; security; personal data; digital identity; Internet of Things (ID#: 15-7213)


Zeb, K.; Baig, O.; Asif, M.K., “DDoS Attacks and Countermeasures in Cyberspace,” in Web Applications and Networking (WSWAN), 2015 2nd World Symposium on, vol., no., pp. 1–6, 21–23 March 2015. doi:10.1109/WSWAN.2015.7210322
Abstract: In cyberspace, availability of resources is a key component of cyber security, along with confidentiality and integrity. The Distributed Denial of Service (DDoS) attack has become one of the major threats to the availability of resources in computer networks, and it is a challenging problem on the Internet. In this paper, we present a detailed study of DDoS attacks on the Internet, specifically attacks due to protocol vulnerabilities in the TCP/IP model, their countermeasures, and various DDoS attack mechanisms. We thoroughly review DDoS attack defenses and analyze the strengths and weaknesses of the different proposed mechanisms.
Keywords: Internet; computer network security; transport protocols; DDoS attack mechanisms; Internet; TCP-IP model; computer networks; cyber security; cyberspace; distributed denial of service attacks; Computer crime; Filtering; Floods; IP networks; Internet; Protocols; Servers; Cyber security; Cyber-attack; Cyberspace; DDoS Defense; DDoS attack; Mitigation; Vulnerability (ID#: 15-7214)


Lounis, O.; Malika, B., “A New Vision for Intrusion Detection System in Information Systems,” in Science and Information Conference (SAI), 2015, vol., no., pp. 1352–1356, 28–30 July 2015. doi:10.1109/SAI.2015.7237318
Abstract: In recent years, information systems have seen an enormous increase in attacks, and intrusion detection systems have become the mainstream of information assurance. While firewalls and the two basic forms of cryptography (symmetric and asymmetric) do provide some protection, they do not provide complete protection and still need to be supplemented by an intrusion detection system. Most work on IDSs is based on two approaches, the anomaly approach and the misuse approach, and each of these approaches, whether implemented in a HIDS or a NIDS, has weaknesses. To address these limitations, we propose a new way of looking at intrusion detection systems. This vision can be described as follows: instead of capturing and analyzing each attack separately (maintaining several signatures for each type of attack, given the many kinds of attacks and the many variants of each), or analyzing the system’s log files, why not look at the consequences of these attacks and try to ensure that the security properties they affect will not be compromised? To do so, we adopt the language developed by Jonathan Rouzaud-Cornabas to model the system entities to be protected. This paper presents only the idea on which we will base the design of an effective IDS for an operating system running in user space.
Keywords: cryptography; firewalls; information systems; operating systems (computers); IDS; anomaly approach; information assurance; intrusion detection system; misuse approach; operating system; security properties; Access control; Computational modeling; Computers; Databases; Intrusion detection; Operating systems; realtime system; security (ID#: 15-7215)


Ahmadi, Mohammad; Chizari, Milad; Eslami, Mohammad; Golkar, Mohammad Javad; Vali, Mostafa, “Access Control and User Authentication Concerns in Cloud Computing Environments,” in Telematics and Future Generation Networks (TAFGEN), 2015 1st International Conference on, vol., no., pp. 39–43, 26–28 May 2015. doi:10.1109/TAFGEN.2015.7289572
Abstract: Cloud computing is a new kind of service that has grown rapidly in the IT industry in recent years. Despite the several advantages of this technology, issues such as security and privacy affect the reliability of cloud computing models. Access control and user authentication are the most important security issues in cloud computing. This survey therefore provides overall information about these security concerns and specific details about the issues identified in access control and user authentication research. The first part explains the benefits and disadvantages of cloud computing. The second part reviews access control and user authentication algorithms, identifying the benefits and weaknesses of each. The main aim of this survey is to consider the limitations and problems of previous research in order to find the most challenging issues in access control and user authentication algorithms.
Keywords: Access control; Authentication; Cloud computing; Computational modeling; Encryption; Servers; Access Control; Cloud Computing; Privacy; Security; User Authentication (ID#: 15-7216)


Chakhchoukh, Y.; Ishii, H., “Cyber Attacks Scenarios on the Measurement Function of Power State Estimation,” in American Control Conference (ACC), 2015, pp. 3676–3681, 1–3 July 2015. doi:10.1109/ACC.2015.7171901
Abstract: Cyber security provided by robust power system state estimation (SE) methods is evaluated. Tools and estimators developed in robust statistics theory are considered, and the least trimmed squares (LTS) based diagnostic is studied. The impact of cyber attacks on the Jacobian matrix or observation function, which generate coordinated outliers known as leverage points, is assessed. Two attack scenarios are proposed: the first generates a masked attack resulting in a contaminated state uncontrolled by the intruder, while the second leads to a stealthy attack with an estimated targeted state fixed by the same intruder. Intervals for the number of attacks necessary for each scenario, and their positions, are given. Theoretical derivations based on a projection framework highlight the conditions that minimize detection with robust SE approaches developed from the regression model assumption. More specifically, affine equivariant robust estimators present a weakness toward such intrusions. Simulations on IEEE power system test beds illustrate the behavior of the robust LTS with decomposition, and of popular detection methods that analyze the weighted least squares (WLS) residuals, when subject to both scenarios’ attacks.
Keywords: least mean squares methods; power system state estimation; regression analysis; IEEE power system test beds; Jacobian matrix; affine equivariant robust estimators; coordinated outliers; cyber attacks scenarios; cyber security; least trimmed squares based diagnostic; leverage points; observation function; regression model assumption; robust power systems state estimation; robust statistics theory; stealthy attack; weighted least squares residuals; Electric breakdown; Jacobian matrices; Least squares approximations; Power systems; Redundancy; Robustness; Topology (ID#: 15-7217)
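The least trimmed squares estimator at the center of this analysis can be illustrated on a toy scalar regression (brute-force grid search is used only for simplicity; real LTS solvers are far more efficient, and real SE problems are multivariate): by minimizing the sum of the h smallest squared residuals, a few injected leverage-point measurements are trimmed away.

```python
# Toy least-trimmed-squares fit of a scalar slope; a stand-in for the
# robust state estimation discussed in the paper. Grid search is only
# for illustration.

def lts_slope(xs, ys, h, grid=None):
    """Minimize the sum of the h smallest squared residuals over a slope grid."""
    grid = grid or [i / 100 for i in range(-500, 501)]
    best, best_cost = None, float("inf")
    for b in grid:
        sq = sorted((y - b * x) ** 2 for x, y in zip(xs, ys))
        cost = sum(sq[:h])            # trim the largest residuals
        if cost < best_cost:
            best, best_cost = b, cost
    return best

xs = [1, 2, 3, 4, 5, 6]
ys = [2.0, 4.0, 6.0, 8.0, 50.0, -30.0]   # last two are injected attack values
b_lts = lts_slope(xs, ys, h=4)           # recovers slope 2 despite the outliers
```

An ordinary least squares fit on the same data would be dragged far from the true slope by the two corrupted measurements; trimming makes the estimate insensitive to them, which is why the paper’s attack scenarios must be carefully coordinated to evade LTS.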


Ergün, Salih, “Cryptanalysis of a Double Scroll Based ‘True’ Random Bit Generator,” in Circuits and Systems (MWSCAS), 2015 IEEE 58th International Midwest Symposium on, vol., no., pp. 1–4, 2–5 Aug. 2015. doi:10.1109/MWSCAS.2015.7282066
Abstract: An algebraic cryptanalysis of a “true” random bit generator (RBG) based on a double-scroll attractor is provided. An attack system is proposed to analyze the security weaknesses of the RBG. Convergence of the attack system is proved using synchronization of chaotic systems with unknown parameters, called auto-synchronization. All secret parameters of the RBG are recovered from a scalar time series using auto-synchronization, where the only other information available is the structure of the RBG and the output bit sequence it produces. Simulation and numerical results verifying the feasibility of the attack system are given. The RBG does not pass the NIST 800-22 statistical test suite, the next bit can be predicted, and the same output bit stream of the RBG can be reproduced.
Keywords: Chaotic communication; Generators; Oscillators; Random number generation; Synchronization (ID#: 15-7218)
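As an illustration of the statistical failure the abstract reports, the simplest test in the NIST SP 800-22 suite, the frequency (monobit) test, can be sketched as follows. The bit streams below are invented examples, not output of the analyzed RBG:

```python
import math

def monobit_pvalue(bits):
    """NIST SP 800-22 frequency (monobit) test: p = erfc(|S_n| / sqrt(2n)),
    where S_n sums the bits mapped to +1/-1. Small p-values indicate bias."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * n))

# A heavily biased stream (as a broken RBG might emit) fails at the
# usual 0.01 significance level, while a balanced stream passes.
biased = [1] * 90 + [0] * 10
balanced = [i % 2 for i in range(100)]
print(monobit_pvalue(biased) < 0.01, monobit_pvalue(balanced) > 0.01)  # True True
```

A generator that fails even this first test, as the paper reports for the attacked RBG, fails the suite as a whole.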


Naderi-Afooshteh, Abbas; Nguyen-Tuong, Anh; Bagheri-Marzijarani, Mandana; Hiser, Jason D.; Davidson, Jack W., “Joza: Hybrid Taint Inference for Defeating Web Application SQL Injection Attacks,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 172–183, 22–25 June 2015. doi:10.1109/DSN.2015.13
Abstract: Despite years of research on taint-tracking techniques to detect SQL injection attacks, taint tracking is rarely used in practice because it suffers from high performance overhead, intrusive instrumentation, and other deployment issues. Taint inference techniques address these shortcomings by obviating the need to track the flow of data during program execution by inferring markings based on either the program’s input (negative taint inference), or the program itself (positive taint inference). We show that existing taint inference techniques are insecure by developing new attacks that exploit inherent weaknesses of the inferencing process. To address these exposed weaknesses, we developed Joza, a novel hybrid taint inference approach that exploits the complementary nature of negative and positive taint inference to mitigate their respective weaknesses. Our evaluation shows that Joza prevents real-world SQL injection attacks, exhibits no false positives, incurs low performance overhead (4%), and is easy to deploy.
Keywords: Approximation algorithms; Databases; Encoding; Inference algorithms; Optimization; Payloads; Security; SQL injection; Taint inference; Taint tracking; Web application security (ID#: 15-7219)
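The two complementary inference styles can be sketched on a toy query builder. The template, helper names, and attack patterns below are illustrative assumptions, not Joza's implementation: the trusted template stands in for positive taint, and matching request input against the final query stands in for negative taint inference.

```python
import re

# Hypothetical query builder: the template is trusted (positive taint);
# anything matched from the request input is untrusted (negative taint).
TRUSTED_TEMPLATE = "SELECT name FROM users WHERE name = '{}'"

def build_query(user_input):
    return TRUSTED_TEMPLATE.format(user_input)

def infer_taint(query, user_input):
    """Negative inference: locate query substrings that match the input."""
    start = query.find(user_input)
    if start < 0:
        return []
    return [(start, start + len(user_input))]

def is_attack(query, user_input):
    """Flag the query if an inferred-tainted region contributes SQL
    structure (quote characters or keywords), not just literal data."""
    for start, end in infer_taint(query, user_input):
        tainted = query[start:end]
        if re.search(r"(')|(\b(or|and|union|select)\b)", tainted, re.I):
            return True
    return False

benign = "alice"
malicious = "alice' OR '1'='1"
print(is_attack(build_query(benign), benign))        # False
print(is_attack(build_query(malicious), malicious))  # True
```

The attacks the paper develops exploit exactly the gaps in such single-sided inference (e.g., input transformations that defeat the matching step), which is what motivates the hybrid design.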


Aditya, S.; Mittal, V., “Multi-Layered Crypto Cloud Integration of oPass,” in Computer Communication and Informatics (ICCCI), 2015 International Conference on, vol., no., pp. 1–7, 8–10 Jan. 2015. doi:10.1109/ICCCI.2015.7218114
Abstract: One of the most popular forms of user authentication is the text password, owing to its convenience and simplicity. Passwords, however, remain susceptible to theft and compromise under various threats and weaknesses. To overcome these problems, a protocol called oPass was proposed. A cryptanalysis of oPass was performed, revealing four kinds of attacks against it: abuse of the SMS service, attacks on the oPass communication links, unauthorized intruder access using the master password, and network attacks on an untrusted web browser. One of these is impersonation of the user. To address these problems in a cloud environment, a protocol based on oPass is proposed that implements multi-layered crypto-cloud integration and can handle this kind of attack.
Keywords: cloud computing; cryptography; SMS service; Short Messaging Service; cloud environment; cryptanalysis; master password; multilayered crypto cloud integration; oPass communication links; oPass protocol; text password; user authentication; user impersonation; Authentication; Cloud computing; Encryption; Protocols; Servers; Cloud; Digital Signature; Impersonation; Network Security; RSA; SMS; oPass (ID#: 15-7220)


You, I.; Leu, F., “Comments on ‘SPAM: A Secure Password Authentication Mechanism for Seamless Handover in Proxy Mobile IPv6 Networks’,” in Systems Journal, IEEE, vol. PP, no. 99, pp. 1–4. doi:10.1109/JSYST.2015.2477415
Abstract: Recently, Chuang et al. introduced a secure password authentication mechanism for seamless handover in Proxy Mobile IPv6 (SPAM). SPAM aims to provide high-security properties while optimizing handover latency and computation overhead. However, it is still vulnerable to replay and malicious insider attacks, as well as to the compromise of a single node. This paper formally and precisely analyzes SPAM using the Burrows–Abadi–Needham logic, and then presents its weaknesses and the related attacks.
Keywords: Authentication; Handover; Manganese; Mobile communication; Unsolicited electronic mail; Burrows–Abadi–Needham (BAN) logic; Proxy Mobile IPv6 (PMIPv6); fast handover security; formal security analysis (ID#: 15-7221)


Eldib, H.; Chao Wang; Taha, M.; Schaumont, P., “Quantitative Masking Strength: Quantifying the Power Side-Channel Resistance of Software Code,” in Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 34, no. 10, pp. 1558–1568, Oct. 2015. doi:10.1109/TCAD.2015.2424951
Abstract: Many commercial systems in the embedded space have shown weakness against power analysis-based side-channel attacks in recent years. Random masking is a commonly used technique for removing the statistical dependency between the sensitive data and the side-channel information. However, the process of designing masking countermeasures is both labor intensive and error prone. Furthermore, there is a lack of formal methods for quantifying the actual strength of a countermeasure implementation. Security design errors may therefore go undetected until the side-channel leakage is physically measured and evaluated. We show a better solution based on static analysis of C source code. We introduce the new notion of quantitative masking strength (QMS) to estimate the amount of information leakage from software through side channels. Once the user has identified the sensitive variables, the QMS can be automatically computed from the source code of a countermeasure implementation. Our experiments, based on measurement on real devices, show that the QMS accurately reflects the side-channel resistance of the software implementation.
Keywords: safety-critical software; security of data; source code (software); statistical analysis; C source code static analysis; QMS; formal methods; power analysis-based side-channel attacks; quantitative masking strength; random masking; security design errors; side-channel information; side-channel leakage; software code power side-channel resistance; statistical dependency; Algorithm design and analysis; Analytical models; Cryptography; Random variables; Resistance; Software; Software algorithms; Countermeasure; Verification; countermeasure; differential power analysis; differential power analysis (DPA); quantitative masking strength; quantitative masking strength (QMS); satisfiability modulo theory (SMT) solver; security; verification (ID#: 15-7222)
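The paper computes QMS by SMT-based static analysis of C source; purely as an illustration of the property that metric quantifies, the following sketch shows first-order Boolean masking and checks empirically that a single share carries no statistical dependency on the secret (the trial counts and tolerances are arbitrary choices):

```python
import random

random.seed(1)

def masked_shares(secret_bit):
    """First-order Boolean masking: split the secret into two shares
    whose XOR recovers it; each share alone is uniformly random."""
    m = random.getrandbits(1)
    return m, secret_bit ^ m

def share_bias(secret_bit, trials=10000):
    """Empirical mean of the masked share for a fixed secret bit."""
    return sum(masked_shares(secret_bit)[1] for _ in range(trials)) / trials

# The masked share's distribution is (near) identical whether the
# secret is 0 or 1 — the dependency a power side channel would exploit
# is gone, which is what a high masking strength certifies.
b0, b1 = share_bias(0), share_bias(1)
print(b0, b1)  # both close to 0.5
```

A flawed countermeasure (e.g., reusing a mask) would reintroduce a measurable bias, which is the kind of design error the static QMS computation is meant to catch before silicon measurement.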


Douziech, P.-E.; Curtis, B., “Cross-Technology, Cross-Layer Defect Detection in IT Systems — Challenges and Achievements,” in Complex Faults and Failures in Large Software Systems (COUFLESS), 2015 IEEE/ACM 1st International Workshop on, vol., no., pp. 21–26, 23–23 May 2015. doi:10.1109/COUFLESS.2015.11
Abstract: Although critical for delivering resilient, secure, efficient, and easily changed IT systems, cross-technology, cross-layer quality defect detection in IT systems still faces hurdles. Two hurdles involve the absence of an absolute target architecture and the difficulty of apprehending multi-component anti-patterns. However, static analysis and measurement technologies are now able both to consume contextual input and to detect system-level anti-patterns. This paper provides several examples of the information required to detect system-level anti-patterns, using examples from the Common Weakness Enumeration repository maintained by the MITRE Corporation.
Keywords: program diagnostics; program testing; software architecture; software quality; IT systems; MITRE Corp; common weakness enumeration repository; cross-layer quality defect detection; cross-technology defect detection; measurement technologies; multicomponent antipatterns; static analysis; system-level antipattern detection; Computer architecture; Java; Organizations; Reliability; Security; Software; Software measurement; CWE; IT systems; software anti-patterns; software pattern detection; software quality measures; structural quality (ID#: 15-7223)


Monica Catherine S; George, Geogen, “S-Compiler: A Code Vulnerability Detection Method,” in Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, vol., no., pp. 1–4, 24–25 Jan. 2015. doi:10.1109/EESCO.2015.7254018
Abstract: Nowadays, security breaches are increasing greatly in number. This is one of the major threats faced by most organisations, and it usually leads to massive losses. The major cause of these breaches is potentially the vulnerabilities in software products. Many tools are available to detect such vulnerabilities, but detection and correction during the development phase would be more beneficial. Although there are many standard secure coding practices to follow during development, software developers fail to utilize them, leading to an unsecured end product. The difficulty of manually analyzing vulnerabilities in source code is what led to the evolution of automated analysis tools. Static and dynamic analyses are two complementary methods for detecting vulnerabilities during development. Static analysis scans the source code, eliminating the need to execute it, but produces many false positives and false negatives. Dynamic analysis, on the other hand, tests the code by running it against test cases. The proposed approach integrates static and dynamic analysis; this eliminates the false-positive and false-negative problems of existing practices and helps developers correct their code most efficiently. It deals with common buffer overflow vulnerabilities and vulnerabilities from the Common Weakness Enumeration (CWE). The whole scenario is implemented as a web interface.
Keywords: source coding; telecommunication security; S-compiler; automated analysis tools; code vulnerability detection method; common weakness enumeration; false negatives; false positives; source code; Buffer overflows; Buffer storage; Encoding; Forensics; Information security; Software; Buffer overflow; Dynamic analysis; Secure coding; Static analysis (ID#: 15-7224)
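A minimal sketch of the static-plus-dynamic idea follows, assuming a toy unsafe-call scanner and a hypothetical fixed-size-buffer check; this is not the S-Compiler implementation, only the division of labor it describes:

```python
import re

# Classic unsafe C calls associated with CWE-120 (buffer copy without
# checking size of input). The list is an illustrative subset.
UNSAFE_CALLS = {"strcpy", "gets", "sprintf"}

def static_scan(c_source):
    """Static pass: flag lines that call known-unsafe C functions.
    Fast, needs no execution, but may over-report (false positives)."""
    findings = []
    for lineno, line in enumerate(c_source.splitlines(), 1):
        for fn in UNSAFE_CALLS:
            if re.search(rf"\b{fn}\s*\(", line):
                findings.append((lineno, fn))
    return findings

def dynamic_check(buffer_size, test_inputs):
    """Dynamic pass: run the statically flagged site against test cases
    and keep only inputs that would actually overflow the buffer."""
    return [t for t in test_inputs if len(t) >= buffer_size]

src = 'char buf[8];\nstrcpy(buf, user_input);\n'
print(static_scan(src))                       # [(2, 'strcpy')]
print(dynamic_check(8, ["short", "x" * 32]))  # only the overflowing input
```

Confirming each static finding dynamically is what lets the combined approach discard false positives while still covering paths that testing alone would miss.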


Kaynar, K.; Sivrikaya, F., “Distributed Attack Graph Generation,” in Dependable and Secure Computing, IEEE Transactions on, vol. PP, no. 99, Apr. 2015, pp. 1-1. doi:10.1109/TDSC.2015.2423682
Abstract: Attack graphs show possible paths that an attacker can use to intrude into a target network and gain privileges through series of vulnerability exploits. The computation of attack graphs suffers from the state explosion problem occurring most notably when the number of vulnerabilities in the target network grows large. Parallel computation of attack graphs can be utilized to attenuate this problem. When employed in online network security evaluation, the computation of attack graphs can be triggered with the correlated intrusion alerts received from sensors scattered throughout the target network. In such cases, distributed computation of attack graphs becomes valuable. This article introduces a parallel and distributed memory-based algorithm that builds vulnerability-based attack graphs on a distributed multi-agent platform. A virtual shared memory abstraction is proposed to be used over such a platform, whose memory pages are initialized by partitioning the network reachability information. We demonstrate the feasibility of parallel distributed computation of attack graphs and show that even a small degree of parallelism can effectively speed up the generation process as the problem size grows. We also introduce a rich attack template and network model in order to form chains of vulnerability exploits in attack graphs more precisely.
Keywords: Buildings; Computational modeling; Databases; Explosions; Search problems; Security; Software; attack graph; distributed computing; exploit; reachability; vulnerability; weakness (ID#: 15-7225)
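The worklist-style exploration behind vulnerability-based attack graph generation can be sketched sequentially, without the paper's distributed shared-memory layer. The hosts, CVE names, and privilege levels below are hypothetical:

```python
from collections import deque

# Toy network model: reachability between hosts, and per-host
# vulnerabilities with the privilege level a successful exploit grants.
reachable = {"attacker": ["web"], "web": ["db"], "db": []}
vulns = {"web": [("CVE-A", "user")], "db": [("CVE-B", "root")]}

def generate_attack_graph(start="attacker"):
    """Worklist exploration: from every host the attacker controls,
    apply each exploit on a reachable host and record the edge."""
    owned, edges = {start}, []
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for target in reachable.get(host, []):
            for cve, priv in vulns.get(target, []):
                edges.append((host, cve, target, priv))
                if target not in owned:
                    owned.add(target)
                    queue.append(target)
    return edges

print(generate_attack_graph())
# [('attacker', 'CVE-A', 'web', 'user'), ('web', 'CVE-B', 'db', 'root')]
```

The state explosion the abstract mentions arises because real models track privilege combinations rather than bare hosts; partitioning the reachability data across agents, as the paper proposes, parallelizes exactly this worklist.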


Kumar, K.S.; Chanamala, R.; Sahoo, S.R.; Mahapatra, K.K., “An Improved AES Hardware Trojan Benchmark to Validate Trojan Detection Schemes in an ASIC Design Flow,” in VLSI Design and Test (VDAT), 2015 19th International Symposium on, vol., no., pp. 1–6, 26–29 June 2015. doi:10.1109/ISVDAT.2015.7208064
Abstract: The semiconductor design industry has globalized, and it is economical for chip makers to obtain design, manufacturing, and testing services from different geographies. Globalization, however, raises the question of trust in an integrated circuit. Every chip maker must ensure there is no malicious inclusion in the design, referred to as a hardware Trojan. A malicious inclusion can be made by an in-house adversary design engineer, in an Intellectual Property (IP) core supplied by a third-party vendor, or at an untrusted manufacturing foundry. Several researchers have proposed hardware Trojan detection schemes in recent years, and Trust-Hub provides Trojan benchmark circuits to verify the strength of these detection techniques. This work focuses on the Advanced Encryption Standard (AES) Trojan benchmarks, AES being the block cipher most vulnerable to Trojan attacks. All 21 benchmarks available in Trust-Hub are analyzed against standard coverage-driven verification practices, synthesis, DFT insertion, and ATPG simulations. The analysis reveals that 19 AES benchmarks are weak and that their Trojan inclusions can be detected using standard procedures in an ASIC design flow. Based on the observed weaknesses, design modifications are proposed to improve the quality of the Trojan benchmarks. The strength of the proposed Trojan benchmarks is better than that of the existing circuits, and their original features are preserved after the design modifications.
Keywords: application specific integrated circuits; cryptography; integrated circuit design; AES hardware Trojan benchmark; ASIC design flow; Trojan detection schemes; advanced encryption standard; intellectual property core; malicious inclusion; Benchmark testing; Discrete Fourier transforms; Hardware; Leakage currents; Logic gates; Shift registers; Trojan horses; AES; ASIC; Hardware Trojan; Security; Trust-Hub (ID#: 15-7226)


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.