Software Security, 2014 (IEEE), Part 2

SoS Newsletter- Advanced Book Block




This set of bibliographic references on software security research is drawn from conference publications posted in the IEEE Digital Library. More than 1,100 conference papers were presented on this topic in 2014; the set presented here represents those likely to be of most interest to the Science of Security community. They address issues related to measurement, scalability, reliability, and other hard problems. ACM papers are presented in a separate series.


Jun Cai; Shangfei Yang; Jinquan Men; Jun He, "Automatic Software Vulnerability Detection Based on Guided Deep Fuzzing," Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on, pp. 231-234, 27-29 June 2014. doi:10.1109/ICSESS.2014.6933551 Abstract: Software security has become a very important part of information security in recent years. Fuzzing has proven successful in finding software vulnerabilities, which are one major cause of information security incidents. However, the efficiency of traditional fuzz testing tools is usually very poor due to the blindness of test generation. In this paper, we present Sword, an automatic fuzzing system for software vulnerability detection, which combines fuzzing with symbolic execution and taint analysis techniques to tackle this problem. Sword first uses symbolic execution to collect program execution paths and their corresponding constraints, then uses taint analysis to check these paths; the most dangerous paths, those most likely to lead to vulnerabilities, are then fuzzed further and more deeply. Thus, with the guidance of symbolic execution and taint analysis, Sword generates test cases most likely to trigger potential vulnerabilities lying deep in applications.
Keywords: program diagnostics; program testing; security of data; Sword; automatic fuzzing system; automatic software vulnerability detection; guided deep fuzzing; information security; software security; symbolic execution; taint analysis technique; Databases; Engines; Information security; Monitoring; Software; Software testing; fuzzing; software vulnerability detection; symbolic execution; taint analysis (ID#: 15-4567)
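The guided-fuzzing strategy this abstract describes, ranking execution paths by a taint-derived danger score and concentrating the fuzzing budget on the most dangerous ones, can be sketched roughly as follows. The path records, scores, and budget-splitting policy are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch of a Sword-style guided-fuzzing pipeline: paths found
# by symbolic execution are ranked by how many tainted sinks they reach, and
# the fuzzing budget is spent only on the most dangerous paths.

def rank_paths_by_taint(paths):
    """Sort paths so those reaching the most tainted sinks come first."""
    return sorted(paths, key=lambda p: p["tainted_sinks"], reverse=True)

def fuzz_budget(paths, total_cases=1000, top_k=2):
    """Split the whole fuzzing budget evenly across the top-k dangerous paths."""
    chosen = rank_paths_by_taint(paths)[:top_k]
    per_path = total_cases // max(len(chosen), 1)
    return {p["id"]: per_path for p in chosen}

paths = [
    {"id": "p1", "tainted_sinks": 0},
    {"id": "p2", "tainted_sinks": 3},  # user input reaches several sinks
    {"id": "p3", "tainted_sinks": 1},
]
print(fuzz_budget(paths))  # {'p2': 500, 'p3': 500}
```

Real systems would weight paths by constraint solvability and sink severity rather than a raw sink count, but the triage shape is the same.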


Kulenovic, M.; Donko, D., "A Survey of Static Code Analysis Methods for Security Vulnerabilities Detection," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp. 1381-1386, 26-30 May 2014. doi:10.1109/MIPRO.2014.6859783 Abstract: Software security is becoming highly important for universal acceptance of applications for many kinds of transactions. Automated code analyzers can be utilized to detect security vulnerabilities during the development phase. This paper aims to provide a survey of static code analysis and how it can be used to detect security vulnerabilities. The most recent findings and publications are summarized and presented. The paper provides an overview of the gains, flaws, and algorithms of static code analyzers, and can be considered a stepping stone for further research in this domain.
Keywords: program diagnostics; security of data; software engineering; development phase; software security vulnerabilities detection; static code analysis methods; Access control; Analytical models; Java; Privacy; Software; security; static code analysis; survey; vulnerability (ID#: 15-4568)


Sparrow, R.D.; Adekunle, A.A.; Berry, R.J.; Farnish, R.J., "Simulating and Modelling The Impact of Security Constructs on Latency for Open Loop Control," Computer Science and Electronic Engineering Conference (CEEC), 2014 6th, pp. 85-90, 25-26 Sept. 2014. doi:10.1109/CEEC.2014.6958560 Abstract: Open loop control has commonly been used to conduct tasks for a range of Industrial Control Systems (ICS). ICS, however, are susceptible to security exploits. A possible countermeasure to the active and passive attacks on ICS is to use cryptography to thwart the attacker by providing confidentiality and integrity for data transmitted between nodes on the ICS network; a drawback of applying cryptographic algorithms to ICS, however, is the additional communication latency that they generate. The solution presented in this paper delivers a mathematical model suitable for predicting the latency and impact of software security constructs on ICS communications. The proposed model has been tested and validated against a software-simulated open loop control scenario; the results obtained indicate on average a 1.3 percent difference between the model and the simulation.
Keywords: control engineering computing; cryptography; data integrity; open loop systems; ICS communication latency; active attack; cryptographic algorithm; data confidentiality; data integrity; industrial control system; open loop control; passive attack; software security; Crystals; Frequency conversion; Mathematical model; Microcontrollers; Real-time systems; Security; Time-frequency analysis; Impact modelling; Industrial Control Systems; Real-Time communication (ID#: 15-4569)
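The 1.3 percent figure reported above is a model-versus-simulation discrepancy. Assuming it is measured relative to the simulated value (the paper may define it differently), the computation is simply:

```python
# Percentage difference between predicted and simulated latency, relative to
# the simulated value. The latency numbers below are invented for illustration.

def pct_difference(model_ms: float, simulated_ms: float) -> float:
    """Absolute percentage difference of the model from the simulation."""
    return 100.0 * abs(model_ms - simulated_ms) / simulated_ms

print(round(pct_difference(40.52, 40.0), 2))  # 1.3
```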


Priya, R.L.; Lifna, C.S.; Jagli, D.; Joy, A., "Rational Unified Treatment for Web application Vulnerability Assessment," Circuits, Systems, Communication and Information Technology Applications (CSCITA), 2014 International Conference on, pp. 336-340, 4-5 April 2014. doi:10.1109/CSCITA.2014.6839283 Abstract: Web applications are increasingly used to offer e-services such as online banking, online searching, and social networking over the Web. With the growth of web applications in the information society, web application software security becomes more and more important. With this advancement, attacks on web applications have also multiplied. The root causes of these vulnerabilities are a lack of security awareness, design flaws, and implementation bugs. Detecting and resolving vulnerabilities is an effective technique for enhancing Web security. Many vulnerability analysis techniques for web-based applications observe and report on different types of vulnerabilities; however, no particular technique provides generic, technology-independent handling of web-based vulnerabilities. In this paper, a new approach for Web application Vulnerability Assessment (WVA) based on the Rational Unified Process (RUP) framework, hereafter referred to as the Rational Unified WVA, is proposed and implemented, and its results are analysed.
Keywords: Internet; security of data; RUP framework; WVA; Web application software security; Web application vulnerability assessment;  design flaws; e-services; information society; online banking; online searching; rational unified process framework; rational unified treatment; security awareness; social networking; vulnerability analysis techniques; DH-HEMTs; Educational institutions; Information technology; Organizations; Security; Web servers; Rational Unified Process; The Open Web Application Security Project; Web application Vulnerability Assessment (ID#: 15-4570)


Agosta, G.; Barenghi, A.; Pelosi, G.; Scandale, M., "A Multiple Equivalent Execution Trace Approach to Secure Cryptographic Embedded Software," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, pp. 1-6, 1-5 June 2014. doi:10.1145/2593069.2593073 Abstract: We propose an efficient and effective method to secure software implementations of cryptographic primitives on low-end embedded systems, against passive side-channel attacks relying on the observation of power consumption or electro-magnetic emissions. The proposed approach exploits a modified LLVM compiler toolchain to automatically generate a secure binary characterized by a randomized execution flow. Also, we provide a new method to refresh the random values employed in the share splitting approaches to lookup table protection, addressing a currently open issue. We improve the current state-of-the-art in dynamic executable code countermeasures removing the requirement of a writeable code segment, and reducing the countermeasure overhead.
Keywords: cryptography; embedded systems; program compilers; table lookup; LLVM compiler toolchain; countermeasure overhead reduction; cryptographic embedded software security; cryptographic primitives; dynamic executable code countermeasures; electromagnetic emissions; lookup table protection; low-end embedded systems; multiple equivalent execution trace approach; passive side-channel attacks; power consumption observation; random values; randomized execution flow; share splitting approach; writeable code segment; Ciphers; Optimization; Power demand; Registers; Software; Power Analysis Attacks; Software Countermeasures; Static Analysis (ID#: 15-4571)


Gupta, M.K.; Govil, M.C.; Singh, G., "Static Analysis Approaches to Detect SQL Injection and Cross Site Scripting Vulnerabilities in Web Applications: A Survey," Recent Advances and Innovations in Engineering (ICRAIE), 2014, pp. 1-5, 9-11 May 2014. doi:10.1109/ICRAIE.2014.6909173 Abstract: Dependence on web applications has been increasing very rapidly in recent times for social communication, health care, financial transactions, and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various security vulnerabilities and cause the applications to fail. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) are the most dangerous security vulnerabilities exploited in popular web applications such as eBay, Google, Facebook, and Twitter. Research on defensive programming, vulnerability detection, and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines for developing secure applications, but developers often do not follow security guidelines and repeat the same types of programming mistakes in their code. Attack prevention techniques protect applications from attack during their execution in the actual environment, but accurate detection of SQLI and XSS vulnerabilities in the coding phase of the software development life cycle remains difficult. This paper proposes a classification of software security approaches used to develop secure software in the various phases of the software development life cycle. It also presents a survey of static analysis based approaches for detecting SQL injection and cross-site scripting vulnerabilities in the source code of web applications. The aim of these approaches is to identify weaknesses in source code before they are exploited in the actual environment.
The paper should help researchers identify future directions for securing legacy web applications in the early phases of the software development life cycle.
Keywords: Internet; SQL; program diagnostics; security of data; software maintenance; software reliability; source code (software); SQL injection; SQLI; Web applications; XSS; attack prevention; cross site scripting vulnerabilities; defensive programming; financial transaction; health problem; legacy Web applications; malicious users; programming mistakes; security vulnerabilities; security weaknesses; social communications; software development life cycle; source code; static analysis; vulnerability detection; Analytical models; Guidelines; Manuals; Programming; Servers; Software; Testing; SQL injection; cross site scripting; static analysis; vulnerabilities; web application (ID#: 15-4572)
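The static analysis approaches this survey covers look for dangerous patterns in source code before deployment. A toy example of such a check, flagging SQL statements built by string concatenation, is sketched below; the regex and the code lines under test are invented and far cruder than any tool the survey discusses:

```python
import re

# Toy pattern-based static check: flag source lines that appear to build a
# SQL statement via string concatenation, a classic SQL-injection smell.
SQL_CONCAT = re.compile(r"""(SELECT|INSERT|UPDATE|DELETE)\b.*["']\s*\+""",
                        re.IGNORECASE)

def flag_sqli_candidates(source_lines):
    """Return (line_number, line) pairs that look like concatenated SQL."""
    return [(n, ln) for n, ln in enumerate(source_lines, 1)
            if SQL_CONCAT.search(ln)]

code = [
    "query = \"SELECT * FROM users WHERE name='\" + user_input + \"'\"",
    'cursor.execute("SELECT * FROM users WHERE name=%s", (user_input,))',
]
print(flag_sqli_candidates(code))  # flags line 1, not the parameterized query
```

Real static analyzers track tainted data flows through the program rather than matching single lines, which is why they can distinguish sanitized input from raw input.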


Mell, P.; Harang, R.E., "Using Network Tainting to Bound the Scope of Network Ingress Attacks," Software Security and Reliability (SERE), 2014 Eighth International Conference on, pp. 206-215, June 30 2014-July 2 2014. doi:10.1109/SERE.2014.34 Abstract: This research describes a novel security metric, network taint, which is related to software taint analysis. We use it here to bound the possible malicious influence of a known compromised node through monitoring and evaluating network flows. The result is a dynamically changing defense-in-depth map that shows threat level indicators gleaned from monotonically decreasing threat chains. We augment this analysis with concepts from the complex networks research area in forming dynamically changing security perimeters and measuring the cardinality of the set of threatened nodes within them. In providing this, we hope to advance network incident response activities by providing a rapid automated initial triage service that can guide and prioritize investigative activities.
Keywords: network theory (graphs);security of data; defense-in-depth map; network flow evaluation; network flow monitoring; network incident response activities; network ingress attacks; network tainting metric; security metric; security perimeters; software taint analysis; threat level indicators; Algorithm design and analysis; Complex networks; Digital signal processing; Measurement; Monitoring; Security; Software; complex networks; network tainting; scale-free; security (ID#: 15-4573)
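The monotonically decreasing threat chains described above can be sketched as a graph traversal from the compromised node, with threat decaying at each hop and the security perimeter being the nodes that stay above a cutoff. The decay factor, cutoff, and topology below are invented for illustration, not taken from the paper:

```python
from collections import deque

# Sketch of network tainting: threat spreads along observed flows from a
# compromised node, decreasing monotonically with each hop; the dynamic
# security perimeter is the set of nodes whose threat exceeds a cutoff.

def taint_scores(flows, compromised, decay=0.5, cutoff=0.2):
    """Return {node: threat} for nodes reachable above the cutoff."""
    scores = {compromised: 1.0}
    queue = deque([compromised])
    while queue:
        node = queue.popleft()
        for dst in flows.get(node, ()):
            threat = scores[node] * decay  # monotonically decreasing chain
            if threat >= cutoff and threat > scores.get(dst, 0.0):
                scores[dst] = threat
                queue.append(dst)
    return scores

flows = {"A": ["B", "C"], "B": ["D"], "D": ["E"]}
perimeter = taint_scores(flows, "A")
print(perimeter)          # E falls below the cutoff and is excluded
print(len(perimeter))     # cardinality of the threatened set
```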


Siyuan Jiang; Santelices, R.; Haipeng Cai; Grechanik, M., "How Accurate is Dynamic Program Slicing? An Empirical Approach to Compute Accuracy Bounds," Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, pp. 3-4, June 30 2014-July 2 2014. doi:10.1109/SERE-C.2014.14 Abstract: Dynamic program slicing attempts to find runtime dependencies among statements to support security, reliability, and quality tasks such as information-flow analysis, testing, and debugging. However, it is not known how accurately dynamic slices identify statements that really affect each other. We propose a new approach to estimate the accuracy of dynamic slices. We use this approach to obtain bounds on the accuracy of multiple dynamic slices in Java software. Early results suggest that dynamic slices suffer from some imprecision and, more critically, can have a low recall whose upper bound we estimate to be 60% on average.
Keywords: Java; data flow analysis; program debugging; program slicing; program testing; Java software; dynamic program slicing; information-flow analysis; quality tasks; reliability; runtime dependencies; security; software debugging; software testing; Accuracy; Reliability; Runtime; Security; Semantics; Software; Upper bound; dynamic slicing; program slicing; semantic dependence; sensitivity analysis (ID#: 15-4574)
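The accuracy bounds the authors estimate are precision/recall-style comparisons between a computed dynamic slice and the set of statements that truly affect the slicing criterion. A minimal sketch, with made-up statement sets chosen to echo the 60% recall figure, is:

```python
# Compare a computed dynamic slice (statement numbers) against the statements
# that truly affect the slicing criterion. Both sets here are invented.

def slice_accuracy(computed, ground_truth):
    """Precision and recall of a computed slice vs. truly affecting statements."""
    hits = computed & ground_truth
    precision = len(hits) / len(computed)
    recall = len(hits) / len(ground_truth)
    return precision, recall

computed = {1, 2, 3, 4, 5}          # statements in the dynamic slice
truly_affecting = {2, 3, 5, 7, 8}   # statements that really affect the output
p, r = slice_accuracy(computed, truly_affecting)
print(p, r)  # 0.6 0.6
```

The hard part in practice, which the paper addresses empirically, is obtaining the ground-truth set at all, since "really affects" is a semantic rather than syntactic property.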


Riaz, M.; King, J.; Slankas, J.; Williams, L., "Hidden in Plain Sight: Automatically Identifying Security Requirements from Natural Language Artifacts," Requirements Engineering Conference (RE), 2014 IEEE 22nd International, pp. 183-192, 25-29 Aug. 2014. doi:10.1109/RE.2014.6912260 Abstract: Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from the healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall).
Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences.
Keywords: formal specification; learning (artificial intelligence); natural language processing; security of data; context-specific security requirements templates; context-specific templates; functional requirements; functional security requirements; healthcare domain; machine learning techniques; natural language artifacts; natural language requirements artifacts; requirements engineer; requirements specifications; security objectives; security-relevant sentences; software systems; tool-assisted process; Availability; Medical services; Natural languages; Object recognition; Security; Software systems; Text categorization; Security; access control; auditing; constraints; natural language parsing; objectives; requirements; templates; text classification (ID#: 15-4575)


Hesse, T.-M.; Gartner, S.; Roehm, T.; Paech, B.; Schneider, K.; Bruegge, B., "Semiautomatic Security Requirements Engineering and Evolution Using Decision Documentation, Heuristics, and User Monitoring," Evolving Security and Privacy Requirements Engineering (ESPRE), 2014 IEEE 1st Workshop on, pp. 1-6, 25 Aug. 2014. doi:10.1109/ESPRE.2014.6890520 Abstract: Security issues can have a significant negative impact on the business or reputation of an organization. In most cases they are not identified in requirements and are not continuously monitored during software evolution. Therefore, the inability of a system to conform to regulations or its endangerment by new vulnerabilities is not recognized. In consequence, decisions related to security might not be taken at all or become obsolete quickly. But to evaluate efficiently whether an issue is already addressed appropriately, software engineers need explicit decision documentation. Often, such documentation is not performed due to high overhead. To cope with this problem, we propose to document decisions made to address security requirements. To lower the manual effort, information from heuristic analysis and end user monitoring is incorporated. The heuristic assessment method is used to identify security issues in given requirements automatically. This helps to uncover security decisions needed to mitigate those issues. We describe how the corresponding security knowledge for each issue can be incorporated into the decision documentation semiautomatically. In addition, violations of security requirements at runtime are monitored. We show how decisions related to those security requirements can be identified through the documentation and updated manually. Overall, our approach improves the quality and completeness of security decision documentation to support the engineering and evolution of security requirements.
Keywords: formal specification; security of data; system documentation; end user monitoring; heuristic analysis; heuristic assessment method; organization; security decision documentation; security decisions; security issues; security knowledge; semiautomatic security requirements engineering; software engineers; software evolution; vulnerability; Context; Documentation; IEEE Potentials; Knowledge engineering; Monitoring; Security; Software; Security requirements engineering; decision documentation; decision knowledge; heuristic analysis; knowledge carrying software; software evolution; user monitoring (ID#: 15-4576)


Uzunov, A.V.; Falkner, K.; Fernandez, E.B., "A Comprehensive Pattern-Driven Security Methodology for Distributed Systems," Software Engineering Conference (ASWEC), 2014 23rd Australian, pp. 142-151, 7-10 April 2014. doi:10.1109/ASWEC.2014.14 Abstract: Incorporating security features is one of the most important and challenging tasks in designing distributed systems. Over the last decade, researchers and practitioners have come to recognize that the incorporation of security features should proceed by means of a systematic approach, combining principles from both software and security engineering. Such systematic approaches, particularly those implying some sort of process aligned with the development life-cycle, are termed security methodologies. One of the most important classes of such methodologies is based on the use of security patterns. While the literature presents a number of pattern-driven security methodologies, none of them are designed specifically for general distributed systems. Going further, there are also currently no methodologies with mixed specific applicability, e.g. for both general and peer-to-peer distributed systems. In this paper we aim to fill these gaps by presenting a comprehensive pattern-driven security methodology specifically designed for general distributed systems, which is also capable of taking into account the specifics of peer-to-peer systems. Our methodology takes the principle of encapsulation several steps further, by employing patterns not only for the incorporation of security features (via security solution frames), but also for the modeling of threats, and even as part of its process. We illustrate and evaluate the presented methodology via a realistic example -- the development of a distributed system for file sharing and collaborative editing. In both the presentation of the methodology and example our focus is on the early life-cycle phases (analysis and design).
Keywords: peer-to-peer computing; security of data; software engineering; collaborative editing; comprehensive pattern-driven security methodology; file sharing; life-cycle phase development; peer-to-peer distributed systems; security engineering; security patterns; software engineering; systematic approach; Analytical models; Computer architecture; Context; Object oriented modeling; Security; Software; Taxonomy; distributed systems security; secure software engineering; security methodologies; security patterns; security solution frames; threat patterns (ID#: 15-4577)


Azab, M., "Multidimensional Diversity Employment for Software Behavior Encryption," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1-5, March 30 2014-April 2 2014. doi:10.1109/NTMS.2014.6814033 Abstract: Modern cyber systems and their integration with infrastructure have an immense effect on productivity and quality of life. Their involvement in our daily lives elevates the need for means to ensure their resilience against attacks and failure. One major threat is software monoculture. Recent research has demonstrated the danger of software monoculture and presented diversity as a way to reduce the attack surface. In this paper, we propose ChameleonSoft, a multidimensional software diversity employment that, in effect, induces spatiotemporal software behavior encryption and a moving target defense. ChameleonSoft introduces a loosely coupled, online programmable software-execution foundation separating logic, state, and physical resources. The elastic construction of the foundation enables ChameleonSoft to define running software as a set of behaviorally-mutated, functionally-equivalent code variants. ChameleonSoft intelligently shuffles these variants at runtime while changing their physical location, inducing untraceable confusion and diffusion sufficient to encrypt the execution behavior of the running software. ChameleonSoft is also equipped with an autonomic failure recovery mechanism for enhanced resilience. In order to test the applicability of the proposed approach, we present a prototype of the ChameleonSoft Behavior Encryption (CBE) and recovery mechanisms. Further, using analysis and simulation, we study the performance and security aspects of the proposed system. This study aims to assess the provisioned level of security by measuring the avalanche effect percentage and the induced confusion and diffusion levels to evaluate the strength of the CBE mechanism.
Further, we compute the computational cost of security provisioning and enhancing system resilience.
Keywords: computational complexity; cryptography; multidimensional systems; software fault tolerance; system recovery; CBE mechanism; ChameleonSoft Behavior Encryption; ChameleonSoft recovery mechanisms; autonomic failure recovery mechanism; avalanche effect percentage; behaviorally-mutated functionally-equivalent code variants; computational cost; confusion levels; diffusion levels; moving target defense; multidimensional software diversity employment; online programmable software-execution foundation separating logic; security level; security provisioning; software monoculture; spatiotemporal software behavior encryption; system resilience; Employment; Encryption; Resilience; Runtime; Software; Spatiotemporal phenomena (ID#: 15-4578)
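The avalanche effect percentage used in the evaluation above is conventionally the fraction of output bits that flip when a single input bit changes; a strong encryption-like mechanism should push it toward 50%. A minimal way to compute it (the byte strings are arbitrary examples, not data from the paper):

```python
# Avalanche effect: percentage of differing bits between two equal-length
# outputs, e.g. the observable behaviors before and after a one-bit change.

def avalanche_percentage(out_a: bytes, out_b: bytes) -> float:
    """Percentage of bit positions at which the two outputs differ."""
    assert len(out_a) == len(out_b), "outputs must be the same length"
    flipped = sum(bin(x ^ y).count("1") for x, y in zip(out_a, out_b))
    return 100.0 * flipped / (8 * len(out_a))

print(avalanche_percentage(b"\x00\xff", b"\xff\xff"))  # 50.0
```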


Bansal, S.K.; Jolly, A., "An Encyclopedic Approach for Realization of Security Activities with Agile Methodologies," Confluence The Next Generation Information Technology Summit (Confluence), 2014 5th International Conference, pp. 767-772, 25-26 Sept. 2014. doi:10.1109/CONFLUENCE.2014.6949242 Abstract: Agility is a sought-after quality during the software development phase, as it boosts adaptive planning and incremental, evolutionary development, along with many other features that are lightweight in nature. Security is one of the major concerns in today's highly agile software development industry, and there is growing pressure to produce protected software so as to lessen the risk and damage it can cause. Evolving protected software with highly agile characteristics is always a difficult task because of the heavyweight quality of security activities. This paper submits an innovative approach by which security activities can be combined with agile activities, by calculating the mean agility value of both kinds of activities while keeping in mind aspects such as cost, time, recurrence, and benefits that affect the agility of an activity. Using a fuzzy value compatibility table (FVCT), the two sets of activities are reconciled with fuzzy values.
Keywords: security of data; software prototyping; adaptive planning; agile activities; agile methodologies; agile software development; encyclopedic approach; fuzzy value compatibility table; protected software; security activities; Encoding; Industries; Next generation networking; Planning; Security; Software; Testing; Agility Degree; Fuzzy Logics; Security Activities (ID#: 15-4579)


Kaur, R.; Singh, R.P., "Enhanced Cloud Computing Security and Integrity Verification via Novel Encryption Techniques," Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, pp. 1227-1233, 24-27 Sept. 2014. doi:10.1109/ICACCI.2014.6968328 Abstract: Cloud computing is a revolutionary movement in the IT industry that provides storage, computing power, network, and software as an abstraction and as a service, on demand over the internet, enabling clients to access these services remotely from anywhere, at any time, via any terminal equipment. Since the cloud has shifted data storage from personal computers to huge data centers, the security of data has become one of the major concerns for cloud developers. In this paper a security model is proposed and implemented in Cloud Analyst to tighten the level of cloud storage security, providing security based on different encryption algorithms together with an integrity verification scheme. We begin with a storage section selection phase divided into three sections: Private, Public, and Hybrid. Various encryption techniques are implemented in all three sections based on the security factors of authentication, confidentiality, security, privacy, non-repudiation, and integrity. A unique token generation mechanism implemented in the Private section helps ensure the authenticity of the user, the Hybrid section provides an on-demand two-tier security architecture, and the Public section provides faster encryption and decryption of data. Overall, data is wrapped in two folds of encryption and integrity verification in all three sections. A user who wants to access data is required to enter a login and password before being granted access to the encrypted data stored in the Private, Public, or Hybrid section, making it difficult for a hacker to gain access to the authorized environment.
Keywords: cloud computing; cryptography; IT industry; authentication factor; cloud analyst; cloud computing integrity verification; cloud computing security; confidentiality factor; data decryption; data encryption; data security; data storage; encryption algorithms; encryption techniques; hybrid storage selection; information technology; integrity factor; nonrepudiation factor; privacy factor; private storage selection; public storage selection; security factor; token generation mechanism; Authentication; Cloud computing; Computational modeling; Data models; Encryption; AES;Blowfish; IDEA; SAES; SHA-1; Token (ID#: 15-4580)
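The two folds the abstract describes, encryption plus integrity verification, follow the general encrypt-then-MAC pattern. The sketch below illustrates that pattern only: it uses a toy XOR stream cipher as a stand-in for the real algorithms the keywords list (AES, Blowfish, IDEA), and HMAC with SHA-1, which the keywords also mention; none of it should be read as the paper's actual scheme, and XOR "encryption" must never be used in production:

```python
import hmac
import hashlib
import os

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy keystream cipher, purely illustrative: XOR data with a repeated key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    """Encrypt, then tag the ciphertext so tampering is detectable."""
    ciphertext = xor_stream(enc_key, plaintext)
    tag = hmac.new(mac_key, ciphertext, hashlib.sha1).digest()
    return ciphertext, tag

def open_sealed(enc_key: bytes, mac_key: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    """Verify the tag in constant time before decrypting."""
    expected = hmac.new(mac_key, ciphertext, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return xor_stream(enc_key, ciphertext)

enc_key, mac_key = os.urandom(16), os.urandom(16)
ct, tag = seal(enc_key, mac_key, b"cloud record")
print(open_sealed(enc_key, mac_key, ct, tag))  # b'cloud record'
```

Verifying the MAC before decryption is the important ordering: a flipped ciphertext bit is rejected outright instead of yielding silently corrupted plaintext.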


He Sun; Lin Liu; Letong Feng; Yuan Xiang Gu, "Introducing Code Assets of a New White-Box Security Modeling Language," Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, pp. 116-121, 21-25 July 2014. doi:10.1109/COMPSACW.2014.24 Abstract: This paper presents a new conceptual modeling language for White-Box (WB) security analysis. In the WB security domain, an attacker may have access to the inner structure of an application or even the entire binary code. This makes it easy for attackers to inspect, reverse engineer, and tamper with the application using the information they steal. The basis of this paper is the 14 patterns developed by a leading provider of software protection technologies and solutions. We provide part of a new modeling language, named i-WBS (White-Box Security), to better describe WB security problems. The essence of the white-box security problem is code security, so we made the new modeling language focus on code more than ever before. In this way, developers who are not security experts can easily understand what they really need to protect.
Keywords: computer crime; data protection; source code (software); specification languages; WB security analysis; attacker; binary code; code assets; code security; i-WBS; reverse engineer; software protection solutions; software protection technologies; white-box security modeling language; Analytical models; Binary codes; Computational modeling; Conferences; Security; Software; Testing; Code security; Security modeling language; White-box security; i-WBS (ID#: 15-4581)


McCarthy, M.A.; Herger, L.M.; Khan, S.M., "A Compliance Aware Software Defined Infrastructure," Services Computing (SCC), 2014 IEEE International Conference on, pp.560,567, June 27 2014-July 2 2014. doi:10.1109/SCC.2014.79 Abstract: With cloud eclipsing the $100B mark, it is clear that the main driver is no longer strictly cost savings. The focus now is to exploit the cloud for innovation, utilizing the agility to expand resources to quickly build out new designs, products, simulations and analysis. As the cloud lowers the unit cost of IT and improves agility, the time to market for applications will improve significantly. Companies will use this agility and speed as competitive advantage. An example of the agility is the adoption by enterprises of the software-defined datacenter (SDDC)[3] model, which allows for the rapid build of environments with composable infrastructures. With adoption of the SDDC model, intelligent and automated management of the SDDC is an immediate priority, required to support the changing workloads and dynamic patterns of the enterprise. Often, security and compliance become an 'after thought', bolted on later when problems arise. In this paper, we will discuss our experience in developing and deploying a centralized management system for public, as well as an Openstack [4] based cloud platform in SoftLayer, with an innovative, analytics-driven 'security compliance as a service' that constantly adjusts to varying compliance requirements based on workload, security and compliance requirements. In this paper we will also focus on techniques we have developed for capturing and replaying the previous state of a failing client virtual machine (VM) image, roll back, and then re-execute to analyze failures related to security or compliance. 
This technique contributes to agility, since failing VMs with security issues can quickly be analyzed and brought back online; with conventional approaches to security problems, analysis and forensics can take several days or weeks.
Keywords: cloud computing; configuration management; security of data; Openstack; SDDC model; SoftLayer; centralized management system; cloud platform; compliance aware software defined infrastructure; security compliance; software-defined datacenter; virtual machine; Companies; Forensics; Monitoring; Process control; Security; Software; Compliance; Compliance Architecture; Compliance Remediation; Compliance as a Service (ID#: 15-4582)
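The capture, roll-back, and re-execute cycle described in this abstract can be illustrated with a minimal sketch. Everything below is invented for illustration: the "VM" is just a dictionary of state, `VmSandbox` and `risky_patch` are hypothetical names, and a real deployment would checkpoint an actual hypervisor image rather than a Python object.

```python
import copy

class VmSandbox:
    """Toy model of the snapshot / roll back / re-execute cycle.
    The 'VM' here is only a dict of state; a real system would
    checkpoint and restore a hypervisor image."""

    def __init__(self, state):
        self.state = state
        self.snapshots = []

    def snapshot(self):
        # Capture a deep copy of the current state for later restore.
        self.snapshots.append(copy.deepcopy(self.state))

    def rollback(self):
        # Restore the most recent snapshot, discarding failed changes.
        self.state = self.snapshots.pop()

    def run(self, step):
        step(self.state)

def risky_patch(state):
    # Hypothetical workload: fails a compliance check on bad config.
    state["patched"] = True
    if state.get("config") == "bad":
        raise RuntimeError("compliance check failed")

vm = VmSandbox({"config": "bad"})
vm.snapshot()                      # capture pre-failure image
try:
    vm.run(risky_patch)            # original execution fails
except RuntimeError:
    vm.rollback()                  # restore the captured state
    vm.state["config"] = "good"    # remediate, then re-execute
    vm.run(risky_patch)
print(vm.state)                    # -> {'config': 'good', 'patched': True}
```

The point of the sketch is the control flow: the snapshot taken before the failing run makes the failure repeatable and analyzable, and the remediated re-execution brings the workload back online quickly.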


Behl, D.; Handa, S.; Arora, A., "A Bug Mining Tool to Identify and Analyze Security Bugs Using Naive Bayes and TF-IDF," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp.294,299, 6-8 Feb. 2014. doi:10.1109/ICROIT.2014.6798341 Abstract: Bug reports play a vital role during software development; however, bug reports belong to different categories such as performance, usability, and security. This paper focuses on security bugs and presents a bug mining system for the identification of security and non-security bugs using term frequency-inverse document frequency (TF-IDF) weights and naïve Bayes. We performed experiments on bug report repositories of bug tracking systems such as Bugzilla and debugger. In the proposed approach we apply text mining methodology and TF-IDF on the existing historic bug report database, based on the bug's description, to predict the nature of the bug and to train a statistical model for manually mislabeled bug reports present in the database. The tool helps in deciding the priorities of incoming bugs depending on the category of the bug, i.e., whether it is a security bug report or a non-security bug report, using naïve Bayes. Our evaluation shows that our tool using TF-IDF gives better results than the naïve Bayes method alone.
Keywords: Bayes methods; data mining; security of data; statistical analysis; text analysis; Naive Bayes method; TF-IDF; bug mining tool; bug tracking systems; historic bug report database; nonsecurity bug identification; nonsecurity bug report; security bug report; security bugs identification; software development; statistical model; term frequency-inverse document frequency weights; text mining methodology; Computer bugs; Integrated circuit modeling; Vectors; Bug; Naïve Bayes; TF-IDF; mining; non-security bug report; security bug reports; text analysis (ID#: 15-4583)
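The classification idea in this abstract can be sketched as a minimal multinomial naïve Bayes over bug descriptions. The training examples and labels below are invented for illustration, and the sketch uses raw token counts with Laplace smoothing; the paper's tool additionally weights terms by TF-IDF.

```python
import math
from collections import Counter, defaultdict

# Hypothetical labeled bug descriptions (for illustration only).
TRAIN = [
    ("buffer overflow allows remote code execution", "security"),
    ("sql injection in login form bypasses authentication", "security"),
    ("cross site scripting in comment field", "security"),
    ("button label misaligned on settings page", "non-security"),
    ("slow page load when table has many rows", "non-security"),
    ("typo in help dialog text", "non-security"),
]

def train_nb(samples):
    """Collect per-label token counts, label priors, and the vocabulary."""
    word_counts = defaultdict(Counter)   # label -> token frequencies
    label_counts = Counter()
    vocab = set()
    for text, label in samples:
        tokens = text.split()
        word_counts[label].update(tokens)
        label_counts[label] += 1
        vocab.update(tokens)
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the label maximizing log P(label) + sum log P(token|label)."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        n_label = sum(word_counts[label].values())
        score = math.log(label_counts[label] / total)
        for tok in text.split():
            # Laplace smoothing over the shared vocabulary.
            score += math.log((word_counts[label][tok] + 1) /
                              (n_label + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

model = train_nb(TRAIN)
print(classify("heap overflow triggered by crafted packet", model))  # -> security
```

A tool like the one described would train on a historic bug database and use the predicted label to prioritize incoming reports, flagging probable security bugs for earlier triage.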


Bouaziz, R.; Kallel, S.; Coulette, B., "A Collaborative Process for Developing Secure Component Based Applications," WETICE Conference (WETICE), 2014 IEEE 23rd International, pp.306,311, 23-25 June 2014. doi:10.1109/WETICE.2014.82 Abstract: Security patterns describe security solutions that can be used in a particular context for recurring problems, in order to solve a security problem in a more structured and reusable way. Patterns in general, and security patterns in particular, have become important concepts in software engineering, and their integration is a widely accepted practice. In this paper, we propose a model-driven methodology for security pattern integration. This methodology consists of a collaborative engineering process, called the collaborative security pattern integration process (C-SCRIP), and a tool that supports the full life-cycle of the development of a secure system, from modeling to code.
Keywords: groupware; object-oriented programming; security of data; C-SCRIP process; collaborative engineering process; collaborative security pattern integration process; component based application security; model-driven methodology; security pattern integration; security solutions; software engineering; system development lifecycle; Analytical models; Collaboration; Context; Prototypes; Security; Software; Unified modeling language; CMSPEM; Collaborative process; Security patterns; based systems (ID#: 15-4584)


Yier Jin, "EDA Tools Trust Evaluation Through Security Property Proofs," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp.1,4, 24-28 March 2014. doi:10.7873/DATE.2014.260 Abstract: The security concerns of EDA tools have long been ignored because IC designers and integrators only focus on their functionality and performance. This lack of trusted EDA tools hampers hardware security researchers' efforts to design trusted integrated circuits. To address this concern, a novel EDA tools trust evaluation framework has been proposed to ensure the trustworthiness of EDA tools through their functional operation, rather than scrutinizing the software code. As a result, the newly proposed framework lowers the evaluation cost and is a better fit for hardware security researchers. To support the EDA tools evaluation framework, a new gate-level information assurance scheme is developed for security property checking on any gate-level netlist. Helped by the gate-level scheme, we expand the territory of proof-carrying based IP protection from RT-level designs to gate-level netlists, so that most of the commercially traded third-party IP cores are under the protection of proof-carrying based security properties. Using a sample AES encryption core, we successfully prove the trustworthiness of Synopsys Design Compiler in generating a synthesized netlist.
Keywords: cryptography; electronic design automation; integrated circuit design; AES encryption core; EDA tools trust evaluation; Synopsys design compiler; functional operation; gate-level information assurance scheme; gate-level netlist; hardware security researchers; proof-carrying based IP protection; security property proofs; software code; third-party IP cores; trusted integrated circuits; Hardware; IP networks; Integrated circuits; Logic gates; Sensitivity; Trojan horses (ID#: 15-4585)


Bozic, J.; Wotawa, F., "Security Testing Based on Attack Patterns," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp.4,11, March 31 2014-April 4 2014. doi:10.1109/ICSTW.2014.58 Abstract: Testing for security related issues is an important task of growing interest due to the vast amount of applications and services available over the internet. In practice, security testing is often performed manually, resulting in higher costs and no integration of security testing into today's agile software development processes. In order to bring security testing into practice, many different approaches have been suggested, including fuzz testing and model-based testing approaches. Most of these approaches rely on models of the system or the application domain. In this paper we suggest formalizing attack patterns from which test cases can be generated and even executed automatically. Hence, testing for known attacks can be easily integrated into software development processes where automated testing, e.g., for daily builds, is a requirement. The approach makes use of UML state charts. Besides discussing the approach, we illustrate it using a case study.
Keywords: Internet; Unified Modeling Language; program testing; security of data; software prototyping; Internet; UML state charts; agile software development processes; attack patterns; security testing; Adaptation models; Databases; HTML; Security; Software; Testing; Unified modeling language; Attack pattern; SQL injection; UML state machine; cross-site scripting; model-based testing; security testing (ID#: 15-4586)
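The idea of deriving executable test cases from a formalized attack pattern can be sketched as follows. The state-machine encoding, states, and event names below are invented for illustration (the paper itself uses UML state charts); the sketch simply enumerates event sequences from the start state to a terminal verdict, each sequence being one candidate test case.

```python
# Hypothetical attack-pattern state machine for an SQL-injection-style test:
# state -> list of (event, next_state) transitions.
ATTACK_PATTERN = {
    "start":        [("open_form", "form_loaded")],
    "form_loaded":  [("inject_payload", "payload_sent")],
    "payload_sent": [("observe_error", "vulnerable"),
                     ("observe_sanitized", "safe")],
}
FINAL = {"vulnerable", "safe"}

def generate_tests(fsm, state="start", path=()):
    """Depth-first enumeration of event sequences reaching a final state.
    Each yielded list is the event sequence plus the terminal verdict."""
    if state in FINAL:
        yield list(path) + [state]
        return
    for event, nxt in fsm.get(state, []):
        yield from generate_tests(fsm, nxt, path + (event,))

# Prints two event sequences, one ending in 'vulnerable', one in 'safe'.
for test in generate_tests(ATTACK_PATTERN):
    print(" -> ".join(test))
```

In a full implementation each event would be bound to a concrete action against the system under test (e.g., submitting a crafted payload), so that the enumerated paths execute automatically as part of a daily build.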


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.