
SoS Newsletter- Advanced Book Block



Compiler Security 2015

Much of software security focuses on applications, but compiler security should also be an area of concern. Compilers can “correct” secure coding in the name of efficient processing, silently undoing protections the source code was written to provide. The works cited here look at various approaches and issues in compiler security; all were presented in 2015.

D'Silva, V.; Payer, M.; Song, D., "The Correctness-Security Gap in Compiler Optimization," in Security and Privacy Workshops (SPW), 2015 IEEE, pp.73-87, 21-22 May 2015. doi: 10.1109/SPW.2015.33

Abstract: There is a significant body of work devoted to testing, verifying, and certifying the correctness of optimizing compilers. The focus of such work is to determine if source code and optimized code have the same functional semantics. In this paper, we introduce the correctness-security gap, which arises when a compiler optimization preserves the functionality of, but violates a security guarantee made by, source code. We show with concrete code examples that several standard optimizations, which have been formally proved correct, inhabit this correctness-security gap. We analyze this gap and conclude that it arises due to techniques that model the state of the program but not the state of the underlying machine. We propose a broad research programme whose goal is to identify, understand, and mitigate the impact of security errors introduced by compiler optimizations. Our proposal includes research in testing, program analysis, theorem proving, and the development of new, accurate machine models for reasoning about the impact of compiler optimizations on security.

Keywords: optimising compilers; program diagnostics; program testing; reasoning about programs; security of data; theorem proving; compiler optimization; correctness certification; correctness testing; correctness verification; correctness-security gap; functional semantics; machine model; optimized code; optimizing compiler; program analysis; program state; program testing; reasoning; security error; security guarantee; source code; theorem proving; Cryptography; Optimization; Optimizing compilers; Semantics; Standards; Syntactics; compiler optimization; formal correctness; security (ID#: 15-8829)
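The gap the authors describe is easy to reproduce in miniature. The sketch below is an illustration, not code from the paper: a toy dead-store-elimination pass that is functionally correct yet deletes the store that scrubs a secret key from memory.

```python
def dead_store_elimination(program):
    """Remove any store whose variable is never read afterwards."""
    optimized = []
    for i, (op, var) in enumerate(program):
        if op == "store":
            read_later = any(o == "read" and v == var for o, v in program[i + 1:])
            if not read_later:
                continue   # dead store: dropped, even if it scrubs a secret
        optimized.append((op, var))
    return optimized

# A routine that uses a secret key, then zeroes it before returning.
source = [
    ("store", "key"),   # load the secret key into a buffer
    ("read",  "key"),   # encrypt with it
    ("store", "key"),   # scrub: overwrite the key with zeros
]
optimized = dead_store_elimination(source)
```

Because nothing reads `key` after the final store, the optimizer removes the scrub; the program computes the same result, but the secret lingers in memory after the routine returns — correct by functional semantics, insecure on the actual machine.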



Agosta, G.; Barenghi, A.; Pelosi, G.; Scandale, M., "Information Leakage Chaff: Feeding Red Herrings to Side Channel Attackers," in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, pp. 1-6, 8-12 June 2015. doi: 10.1145/2744769.2744859

Abstract: A prominent threat to embedded systems security is represented by side-channel attacks: they have proven effective in breaching confidentiality, violating trust guarantees and IP protection schemes. State-of-the-art countermeasures reduce the leaked information to prevent the attacker from retrieving the secret key of the cipher. We propose an alternate defense strategy augmenting the regular information leakage with false targets, quite like chaff countermeasures against radars, hiding the correct secret key among a volley of chaff targets. This in turn feeds the attacker with a large amount of invalid keys, which can be used to trigger an alarm whenever the attack attempts a content forgery using them, thus providing a reactive security measure. We realized a LLVM compiler pass able to automatically apply the proposed countermeasure to software implementations of block ciphers. We provide effectiveness and efficiency results on an AES implementation running on an ARM Cortex-M4 showing performance overheads comparable with state-of-the-art countermeasures.

Keywords: cryptography; program compilers; trusted computing; AES implementation; ARM Cortex-M4; IP protection schemes; LLVM compiler; confidentiality breaching; content forgery; defense strategy; embedded system security; information leakage chaff; reactive security measure; side channel attackers; software implementations; trust guarantees; Ciphers; Correlation; Optimization; Software; Switches; Embedded Security; Side Channel Attacks; Software Countermeasures (ID#: 15-8830)
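The chaff idea can be caricatured in a few lines. This is a hedged sketch of the concept only — the paper implements it as an LLVM pass over block-cipher code — and `make_chaff_keys` and `forgery_alarm` are hypothetical names.

```python
import secrets

def make_chaff_keys(real_key, n_decoys=7):
    """Decoy keys a side-channel attacker will recover alongside the real one."""
    decoys = {secrets.token_bytes(16) for _ in range(n_decoys)}
    decoys.discard(real_key)
    return decoys

def forgery_alarm(used_key, decoys):
    """Reactive check: a chaff key can only have come from a side-channel attack."""
    return used_key in decoys

real_key = secrets.token_bytes(16)
decoys = make_chaff_keys(real_key)
```

Any attempted content forgery with one of the chaff keys trips the alarm, turning the leaked false targets into an intrusion signal.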


Prasad, T.S.; Kisore, N.R., "Application of Hidden Markov Model for Classifying Metamorphic Virus," in Advance Computing Conference (IACC), 2015 IEEE International, pp. 1201-1206, 12-13 June 2015. doi: 10.1109/IADCC.2015.7154893

Abstract: The computer virus is a rapidly evolving threat to the computing community. These viruses fall into different categories. It is generally believed that metamorphic viruses are extremely difficult to detect. Metamorphic virus generating kits are readily available, using which potentially dangerous viruses can be created with very little knowledge or skill. Classification of computer viruses is very important for effective detection of any malware using antivirus software. It is also necessary for building and applying the right software patch to overcome the security vulnerability. Recent research work on Hidden Markov Model (HMM) analysis has shown that it is a more effective tool than other techniques, such as machine learning, in detecting and classifying computer viruses. In this paper, we present a classification technique based on the Hidden Markov Model for computer virus classification. We trained multiple HMMs with 500 malware files belonging to different virus families as well as compilers. Once trained, the model was used to classify new malware of its kind efficiently.

Keywords: computer viruses; hidden Markov models; invasive software; pattern classification; HMM analysis; antivirus software; compilers; computer virus classification; hidden Markov model; malware files; metamorphic virus classification; security vulnerability; software patch; Computational modeling; Computers; Hidden Markov models; Malware; Software; Training; Viruses (medical); Hidden Markov Model; Malware Classification; Metamorphic Malware; N-gram (ID#: 15-8831)
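As a flavor of the approach — simplified here to a visible first-order Markov chain rather than a full HMM, with invented opcode data — per-family models can be trained on opcode sequences and a new sample assigned to the family with the highest log-likelihood:

```python
import math
from collections import defaultdict

def train_chain(sequences):
    """First-order Markov transition counts over opcode symbols."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def log_likelihood(chain, seq, vocab):
    """Add-one-smoothed log-likelihood of an opcode sequence under a model."""
    ll = 0.0
    for a, b in zip(seq, seq[1:]):
        row = chain[a]
        total = sum(row.values()) + len(vocab)
        ll += math.log((row[b] + 1) / total)
    return ll

def classify(models, seq, vocab):
    return max(models, key=lambda fam: log_likelihood(models[fam], seq, vocab))

# Invented toy opcode traces for two "families".
fam_a = [["mov", "xor", "mov", "xor", "jmp"]] * 3
fam_b = [["push", "call", "ret", "push", "call"]] * 3
vocab = {"mov", "xor", "jmp", "push", "call", "ret"}
models = {"A": train_chain(fam_a), "B": train_chain(fam_b)}
```

A real HMM adds hidden states and the forward algorithm, but the classification step — score against every family model, pick the maximum — has the same shape.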



Maldonado-Lopez, F.A.; Calle, E.; Donoso, Y., "Detection and Prevention of Firewall-Rule Conflicts on Software-Defined Networking," in Reliable Networks Design and Modeling (RNDM), 2015 7th International Workshop on, pp. 259-265, 5-7 Oct. 2015. doi: 10.1109/RNDM.2015.7325238

Abstract: Software-Defined Networking (SDN) is a different approach to managing a network by software. It can use well-defined software expressions and predicates to regulate network behavior. Current SDN controllers, such as Floodlight, offer a framework to develop, test and run applications that control the network operation, including the firewall function. However, they are not able to validate firewall policies, detect conflicts, or avoid contradictory configurations on network devices. Some compilers only detect conflicts over a subset of the language; hence, they cannot detect conflicts between contradictory rules and security controls. This paper presents FireWell, our framework based on Alloy. FireWell is able to model firewall policies as formal predicates to validate, detect and prevent conflicts in firewall policies. In addition, we present the implementation of FireWell and test it using the Floodlight controller and firewall application.

Keywords: computer network management; firewalls; floodlighting; software defined networking; FireWell; SDN; contradictory configuration avoidance; firewall-rule conflict detection; firewall-rule conflict prevention; floodlight controller; network management; security control; software defined networking; Metals; Network topology; Ports (Computers); Protocols; Semantics; Shadow mapping; Topology; Conflict detection; model checking; policy-based network management; protocol verification (ID#: 15-8832)
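A miniature version of the kind of conflict FireWell detects formally: the sketch below is not FireWell's Alloy model — the rule fields and names are invented — but it flags a later rule that is shadowed by an earlier rule with the opposite action.

```python
def covers(r1, r2):
    """Does r1's match condition include everything r2 matches?"""
    return (r1["proto"] in ("any", r2["proto"])
            and r1["port"] in ("any", r2["port"]))

def find_conflicts(rules):
    """Later rules shadowed by an earlier rule with a different action."""
    conflicts = []
    for i, earlier in enumerate(rules):
        for later in rules[i + 1:]:
            if covers(earlier, later) and earlier["action"] != later["action"]:
                conflicts.append((earlier["name"], later["name"]))
    return conflicts

rules = [
    {"name": "r1", "action": "deny",  "proto": "tcp", "port": "any"},
    {"name": "r2", "action": "allow", "proto": "tcp", "port": 80},  # never fires
    {"name": "r3", "action": "allow", "proto": "udp", "port": 53},
]
```

Encoding the same "covers" relation as Alloy predicates is what lets FireWell check every rule pair exhaustively rather than heuristically.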



Carrozza, G.; Cinque, M.; Giordano, U.; Pietrantuono, R.; Russo, S., "Prioritizing Correction of Static Analysis Infringements for Cost-Effective Code Sanitization," in Software Engineering Research and Industrial Practice (SER&IP), 2015 IEEE/ACM 2nd International Workshop on, pp. 25-31, 17-17 May 2015. doi: 10.1109/SERIP.2015.13

Abstract: Static analysis is a widely adopted technique in the industrial development of software systems. It allows code compliance with predefined programming rules to be checked automatically. When applied to large software systems, sanitizing the code in an efficient way requires careful guidance, as a high number of (more or less relevant) rule infringements can result from the analysis. We report the results of a static analysis study conducted on several industrial software systems developed by SELEX ES, a large manufacturer of software-intensive mission-critical systems. We analyzed results on a set of 156 software components developed at SELEX ES; based on them, we developed and experimented with an approach to prioritize the components and violated rules to correct for cost-effective code sanitization. Results highlight the benefits that can be achieved in terms of quality targets and incurred cost.

Keywords: program compilers; program diagnostics; program verification; safety-critical software; software development management; SELEX ES; code compliance; cost effective code sanitization; industrial software system development; prioritize components; software components development; software intensive mission-critical system; static analysis; Companies; Encoding; Programming; Resource management; Security; Software; Standards; critical systems; defect analysis; effort allocation; industrial study; static analysis (ID#: 15-8833)
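The prioritization idea can be sketched as a simple scoring rule. The weights, fields, and cost units below are invented for illustration and are much cruder than the paper's approach:

```python
# Hypothetical severity weights; a real scheme would calibrate these.
SEVERITY_WEIGHT = {"mandatory": 3.0, "advisory": 1.0}

def prioritize(infringements):
    """Rank (component, rule) findings by weighted violations per unit fix cost."""
    def score(v):
        return SEVERITY_WEIGHT[v["severity"]] * v["count"] / v["cost"]
    return sorted(infringements, key=score, reverse=True)

findings = [
    {"component": "nav", "rule": "no-unchecked-cast", "severity": "mandatory", "count": 40,  "cost": 10.0},
    {"component": "ui",  "rule": "naming-convention", "severity": "advisory",  "count": 200, "cost": 5.0},
    {"component": "io",  "rule": "no-strcpy",         "severity": "mandatory", "count": 12,  "cost": 2.0},
]
ranked = prioritize(findings)
```

Ranking by benefit-per-cost rather than raw violation count is the essence of spending a limited sanitization budget where it buys the most quality.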



Abdellaoui, Z.; Ben Mbarek, I.; Bouhouch, R.; Hasnaoui, S., "DDS Middleware on FlexRay Network: Simulink Blockset Implementation of Wheel's Sub-blocks and its Adaptation to DDS Concept," in Intelligent Signal Processing (WISP), 2015 IEEE 9th International Symposium on, pp. 1-6, 15-17 May 2015. doi: 10.1109/WISP.2015.7139166

Abstract: Due to the search for improved vehicle safety, security and reliability, the challenges in the automotive sector have continued to increase in order to confront these requirements. In this context, we have implemented a vehicle Simulink blockset model. The proposed blockset corresponds to the Society of Automotive Engineers (SAE) benchmark model, which is normally connected by the CAN bus; we extended it to the FlexRay bus. We chose the Embedded Matlab tool for implementing this blockset. It permits us to generate C code for the different blocks in order to validate the vehicle system design. In this paper, we focus our interest on the Simulink blockset implementation of the wheels model with its different sub-blocks. We then identify the DDS Data Readers and Data Writers adapted to this blockset using the FlexRay network.

Keywords: C language; automobiles; automotive engineering; controller area networks; embedded systems; field buses; middleware; program compilers; protocols; road safety; vehicular ad hoc networks; wheels; C code generation; CAN bus; DDS data readers; DDS middleware; FlexRay bus; FlexRay network; SAE benchmark model; Society of Automotive Engineers; automotive sector; data writers; embedded Matlab tool; vehicle Simulink blockset model; vehicle reliability; vehicle safety; vehicle security; vehicle system design; wheel subblocks; wheels model; Benchmark testing; Data models; Mathematical model; Software packages; Suspensions; Vehicles; Wheels; DDS; Embedded MATLAB; FlexRay; SAE Benchmark; Simulink Blockset; Wheels (ID#: 15-8834)



Chang Liu; Xiao Shaun Wang; Nayak, K.; Yan Huang; Shi, E., "ObliVM: A Programming Framework for Secure Computation," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 359-376, 17-21 May 2015. doi: 10.1109/SP.2015.29

Abstract: We design and develop ObliVM, a programming framework for secure computation. ObliVM offers a domain specific language designed for compilation of programs into efficient oblivious representations suitable for secure computation. ObliVM offers a powerful, expressive programming language and user-friendly oblivious programming abstractions. We develop various showcase applications such as data mining, streaming algorithms, graph algorithms, genomic data analysis, and data structures, and demonstrate the scalability of ObliVM to bigger data sizes. We also show how ObliVM significantly reduces development effort while retaining competitive performance for a wide range of applications in comparison with hand-crafted solutions. We are in the process of open-sourcing ObliVM and our rich libraries to the community, offering a reusable framework to implement and distribute new cryptographic algorithms.

Keywords: cryptography; programming; specification languages; ObliVM programming framework; cryptographic algorithms; domain specific language; program compilation; programming abstraction; secure computation; Cryptography; Libraries; Logic gates; Program processors; Programming; Protocols; Compiler; Oblivious Algorithms; Oblivious RAM; Programming Language; Secure Computation; Type System (ID#: 15-8835)
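A minimal sketch of the kind of code an oblivious compiler emits: a secret-dependent branch replaced by a data-independent 32-bit multiplexer. This is illustrative only — ObliVM's actual targets are circuits for secure-computation protocols backed by ORAM, not native branchless code.

```python
MASK = (1 << 32) - 1   # work in 32-bit words

def mux(cond_bit, a, b):
    """Branchless 32-bit select: a if cond_bit == 1 else b."""
    m = (-cond_bit) & MASK          # all-ones or all-zeros mask
    return (a & m) | (b & ~m & MASK)

def oblivious_max(x, y):
    # In a real oblivious backend the comparison itself is also secure;
    # here it only feeds the data-independent selection.
    gt = int(x > y)
    return mux(gt, x, y)
```

Both inputs flow through the same instructions regardless of the condition, which is the property that makes the computation's access pattern independent of the secret.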



Skalicky, S.; Lopez, S.; Lukowiak, M.; Schmidt, A.G., "A Parallelizing Matlab Compiler Framework and Run Time for Heterogeneous Systems," in 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), and 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), pp. 232-237, 24-26 Aug. 2015. doi: 10.1109/HPCC-CSS-ICESS.2015.51

Abstract: Compute-intensive applications place ever increasing data processing requirements on hardware systems. Many of these applications have only recently become feasible thanks to the increasing computing power of modern processors. The Matlab language is uniquely situated to support the description of these compute-intensive scientific applications, and consequently has been continuously improved to provide increasing computational support in the form of multithreading for CPUs and utilization of accelerators such as GPUs and FPGAs. Moreover, taking advantage of the computational support in these heterogeneous systems requires a wide breadth of knowledge and understanding, from the problem domain to the computer architecture. In this work, we present a framework for the development of compute-intensive scientific applications in Matlab using heterogeneous processor systems. We investigate systems containing CPUs, GPUs, and FPGAs. We leverage the capabilities of Matlab and supplement them by automating the mapping, scheduling, and parallel code generation. Our experimental results on a set of benchmarks achieved 20x to 60x speedups compared to the standard Matlab CPU environment with minimal effort required on the part of the user.

Keywords: graphics processing units; mathematics computing; multi-threading; parallel architectures; parallelising compilers; FPGA; GPU; Matlab compiler framework; Matlab language; compute-intensive scientific application; computer architecture; heterogeneous processor system; heterogeneous system; multithreading; parallel code generation; standard Matlab CPU environment; Data transfer; Field programmable gate arrays; Kernel; MATLAB; Message systems; Processor scheduling; Scheduling; Heterogeneous computing; Matlab; compiler (ID#: 15-8836)
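The mapping step can be caricatured as greedy earliest-finish-time placement. The kernels, devices, and profiled runtimes below are invented, and the paper's scheduler is considerably more sophisticated:

```python
def map_kernels(kernels, devices):
    """Greedy earliest-finish-time mapping.
    kernels: list of (name, {device: profiled runtime in ms})."""
    ready_at = {d: 0.0 for d in devices}   # when each device next becomes free
    placement = {}
    for name, runtimes in kernels:
        best = min(devices, key=lambda d: ready_at[d] + runtimes[d])
        placement[name] = best
        ready_at[best] += runtimes[best]
    return placement, max(ready_at.values())

# Hypothetical profiled runtimes per device.
kernels = [
    ("fft",    {"cpu": 9.0, "gpu": 2.0}),
    ("matmul", {"cpu": 8.0, "gpu": 1.5}),
    ("parse",  {"cpu": 1.0, "gpu": 6.0}),
]
placement, makespan = map_kernels(kernels, ["cpu", "gpu"])
```

Even this crude heuristic captures the key decision: a kernel goes to an accelerator only when the accelerator's queue plus its runtime still beats the CPU.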



Qining Lu; Farahani, M.; Jiesheng Wei; Thomas, A.; Pattabiraman, K., "LLFI: An Intermediate Code-Level Fault Injection Tool for Hardware Faults," in Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, pp. 11-16, 3-5 Aug. 2015. doi: 10.1109/QRS.2015.13

Abstract: Hardware errors are becoming more prominent with shrinking feature sizes; however, tolerating them exclusively in hardware is expensive. Researchers have explored software-based techniques for building error-resilient applications for hardware faults. However, software-based error resilience techniques need configurable and accurate fault injection techniques to evaluate their effectiveness. In this paper, we present LLFI, a fault injector that works at the LLVM compiler's intermediate representation (IR) level of the application. LLFI is highly configurable, and can be used to inject faults into selected targets in the program in a fine-grained manner. We demonstrate the utility of LLFI by using it to perform fault injection experiments on nine programs, and study the effect of different injection choices on their resilience, namely instruction type, register target and number of bits flipped. We find that these parameters have a marked effect on the evaluation of overall resilience.

Keywords: software fault tolerance; LLFI; error resilient applications; fault injection techniques; fine-grained manner; hardware errors; hardware faults; intermediate code-level fault injection tool; intermediate representation level; software based error resilience techniques; software-based techniques; Benchmark testing; Computer crashes; Hardware; Instruments; Registers; Resilience; Software (ID#: 15-8837)
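The essence of IR-level injection can be sketched on a toy "program" — this illustrates the fault model, not LLFI's API: flip one bit of one intermediate value and classify the outcome against a fault-free golden run.

```python
def run_program(data, fault_site=None, bit=0):
    """OR-reduce data; optionally flip one bit of one intermediate value."""
    acc = 0
    for i, x in enumerate(data):
        if i == fault_site:
            x ^= 1 << bit          # the injected single-bit fault
        acc |= x
    return acc

def outcome(data, fault_site, bit):
    """Classify one injection against the fault-free golden run."""
    golden = run_program(data)
    faulty = run_program(data, fault_site, bit)
    return "benign" if faulty == golden else "SDC"   # silent data corruption
```

With an OR-reduction some flips are masked (benign) while others silently corrupt the result, which is exactly the sensitivity to injection site and bit position that the paper measures.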



Hataba, M.; Elkhouly, R.; El-Mahdy, A., "Diversified Remote Code Execution Using Dynamic Obfuscation of Conditional Branches," in Distributed Computing Systems Workshops (ICDCSW), 2015 IEEE 35th International Conference on, pp. 120-127, June 29 2015-July 2 2015. doi: 10.1109/ICDCSW.2015.37

Abstract: Information leakage via timing side-channel attacks is one of the main threats targeting code executing on remote platforms such as the cloud computing environment. These attacks can be further leveraged to reverse-engineer or even tamper with the running code. In this paper, we propose a security obfuscation technique which helps make the generated code more resistant to these attacks by increasing logical complexity to hinder the formulation of a solid hypothesis about code behavior. More importantly, this software solution is portable and generic, and does not require special setup or hardware or software modifications. In particular, we consider mangling the control flow inside a program by converting a random set of conditional branches into linear code, using the if-conversion transformation. Moreover, our method exploits dynamic compilation technology to continually and randomly alter the branches. All of this mangling diversifies code execution, so it becomes difficult for an attacker to infer timing correlations through statistical analysis. We extend the LLVM JIT compiler to provide an initial investigation of this approach. This makes our system applicable to a wide variety of programming languages and hardware platforms. We have studied the system using a simple test program and selected benchmarks from the standard SPEC CPU 2006 suite with different input loads and experimental setups. Initial results show significant changes in the program's control flow, and hence data dependences, resulting in noticeably different execution times even for the same input data, thereby complicating such attacks. More notably, the performance penalty is within reasonable margins.

Keywords: cloud computing; program compilers; security of data; LLVM JIT compiler; cloud computing environment; conditional branches; diversified remote code execution; dynamic compilation technology; dynamic obfuscation; information leakage; security obfuscation technique; standard SPEC CPU 2006 suite; statistical analysis; timing side-channel attacks; Benchmark testing; Cloud computing; Hardware; Optimization; Program processors; Runtime; Security; If-; JIT Compilation; Obfuscation; Side-Channels (ID#: 15-8838)
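The core transformation can be sketched as follows (a toy stand-in for the LLVM pass): the same conditional assignment is re-emitted, at random, in one of several semantically equivalent branchless forms.

```python
import random

def emit_select(rng):
    """Pick one of several equivalent branchless encodings of `a if c else b`."""
    variants = [
        lambda c, a, b: b ^ ((a ^ b) & -int(c)),        # mask-and-xor form
        lambda c, a, b: a * int(c) + b * (1 - int(c)),  # arithmetic form
        lambda c, a, b: (a, b)[1 - int(c)],             # table-lookup form
    ]
    return rng.choice(variants)

def run(c, a, b, seed):
    select = emit_select(random.Random(seed))  # stands in for JIT re-emission
    return select(c, a, b)
```

Every variant computes the same result, but each has a different instruction mix and timing profile, so repeated measurements no longer correlate cleanly with the secret condition.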



Seuschek, H.; Rass, S., "Side-Channel Leakage Models for RISC Instruction Set Architectures from Empirical Data," in Digital System Design (DSD), 2015 Euromicro Conference on, pp. 423-430, 26-28 Aug. 2015. doi: 10.1109/DSD.2015.117

Abstract: Side-channel attacks are currently among the most serious threats for embedded systems. Popular countermeasures to mitigate the impact of such attacks are masking schemes, where secret intermediate values are split into two or more values by virtue of secret sharing. Processing the secret happens on separate execution paths, which are executed on the same central processing unit (CPU). In case of unwanted correlations between different registers inside the CPU, the shared secret may leak out through a side-channel. This problem is particularly evident on low-cost embedded systems, such as nodes for the Internet of Things (IoT), where cryptographic algorithms are often implemented in pure software on a reduced instruction set computer (RISC). On such an architecture, all data manipulation operations are carried out on the contents of the CPU's register file. This means that all intermediate values of the cryptographic algorithm at some stage pass through the register file. Towards avoiding unwanted correlations and leakages thereof, special care has to be taken in the mapping of the registers to intermediate values of the algorithm. In this work, we describe an empirical study that reveals effects of unintended unmasking of masked intermediate values and thus leaking of secret values. The observed phenomena are related to the leakage of masked hardware implementations caused by glitches in the combinatorial path of the circuit, but the effects are abstracted to the level of the instruction set architecture on a RISC CPU. Furthermore, we discuss countermeasures to have the compiler thwart such leakages.

Keywords: cryptography; embedded systems; program compilers; reduced instruction set computing; RISC CPU; RISC instruction set architectures; central processing unit; compiler; cryptographic algorithm; data manipulation operations; embedded systems; masked hardware implementations; masking schemes; secret sharing; side-channel attacks; side-channel leakage models; Central Processing Unit; Computer architecture; Correlation; Cryptography; Hamming distance; Reduced instruction set computing; Registers (ID#: 15-8839)
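The unintended-unmasking effect is easy to demonstrate in the standard Hamming-distance leakage model (a sketch with invented values, not the paper's measurements): if the register holding the mask is overwritten with the masked share, the transition leaks the Hamming weight of the unmasked secret.

```python
def hw(x):
    """Hamming weight (number of set bits)."""
    return bin(x).count("1")

def hd_leak(old, new):
    """Hamming-distance model: leakage when register value old is overwritten by new."""
    return hw(old ^ new)

secret = 0b10110101
mask   = 0b01101100
share  = secret ^ mask            # the masked intermediate value

# Careless register allocation: the register holding `mask` is reused to
# hold `share`; the observed transition leaks HW(mask ^ share) = HW(secret).
leak = hd_leak(mask, share)
```

Since mask ^ (secret ^ mask) = secret, the mask cancels out in the transition, which is precisely why register-to-value mapping needs the compiler's attention.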



Papadakis, M.; Yue Jia; Harman, M.; Le Traon, Y., "Trivial Compiler Equivalence: A Large Scale Empirical Study of a Simple, Fast and Effective Equivalent Mutant Detection Technique," in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 1, pp. 936-946, 16-24 May 2015. doi: 10.1109/ICSE.2015.103

Abstract: Identifying equivalent mutants remains the largest impediment to the widespread uptake of mutation testing. Despite being researched for more than three decades, the problem remains. We propose Trivial Compiler Equivalence (TCE), a technique that exploits readily available compiler technology to address this long-standing challenge. TCE is directly applicable to real-world programs and can imbue existing tools with the ability to detect equivalent mutants and a special form of useless mutants called duplicated mutants. We present a thorough empirical study using 6 large open source programs, several orders of magnitude larger than those used in previous work, and 18 benchmark programs with hand-analysis equivalent mutants. Our results reveal that, on large real-world programs, TCE can discard more than 7% and 21% of all the mutants as being equivalent and duplicated mutants respectively. A human-based equivalence verification reveals that TCE has the ability to detect approximately 30% of all the existing equivalent mutants.

Keywords: formal verification; program compilers; program testing; TCE technique; duplicated mutants; human-based equivalence verification; mutant detection technique; mutation testing; trivial compiler equivalence technology; Benchmark testing; Java; Optimization; Scalability; Syntactics (ID#: 15-8840)
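TCE's core trick — compile both mutants and compare the binaries — can be mimicked with CPython's own compiler, whose constant folder plays the role of the optimizing compiler (illustrative only; the paper uses real C and Java toolchains):

```python
def compiled_form(src):
    """'Compile' a mutant and return its optimized low-level form."""
    ns = {}
    exec(compile(src, "<mutant>", "exec"), ns)
    code = ns["f"].__code__
    return (code.co_code, code.co_consts)

def tce_equivalent(a, b):
    """Declare two mutants equivalent if their compiled forms are identical."""
    return compiled_form(a) == compiled_form(b)

orig    = "def f():\n    return 2 * 2\n"
mutant1 = "def f():\n    return 2 + 2\n"  # folds to the same constant: equivalent
mutant2 = "def f():\n    return 2 + 3\n"  # folds to a different constant
```

Both `2 * 2` and `2 + 2` are constant-folded to the same bytecode, so the syntactic mutation is revealed as semantically trivial, with no equivalence proof required.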



Husak, M.; Velan, P.; Vykopal, J., "Security Monitoring of HTTP Traffic Using Extended Flows," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 258-265, 24-27 Aug. 2015.  doi: 10.1109/ARES.2015.42

Abstract: In this paper, we present an analysis of HTTP traffic in a large-scale environment which uses network flow monitoring extended by parsing HTTP requests. In contrast to previously published analyses, we were the first to classify patterns of HTTP traffic which are relevant to network security. We described three classes of HTTP traffic which contain brute-force password attacks, connections to proxies, HTTP scanners, and web crawlers. Using the classification, we were able to detect up to 16 previously undetectable brute-force password attacks and 19 HTTP scans per day in our campus network. The activity of proxy servers and web crawlers was also observed. Symptoms of these attacks may be detected by other methods based on traditional flow monitoring, but detection using the analysis of HTTP requests is more straightforward. We, thus, confirm the added value of extended flow monitoring in comparison to the traditional method.

Keywords: computer network security; program compilers; telecommunication traffic; transport protocols; HTTP request parsing; HTTP traffic; brute-force password attack; network flow monitoring; network security monitoring; Crawlers; IP networks; Monitoring; Protocols; Security; Web servers (ID#: 15-8841)
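The three traffic classes can be approximated with simple per-source heuristics over parsed requests; the thresholds and record layout below are invented for illustration and are far cruder than the paper's classification:

```python
def classify_flows(flows):
    """flows: {source_ip: [(method, path, status), ...]} from extended flow records."""
    labels = {}
    for src, reqs in flows.items():
        paths = {p for _, p, _ in reqs}
        statuses = [s for _, _, s in reqs]
        logins = sum(1 for _, p, _ in reqs if "login" in p)
        if logins > 10 and statuses.count(401) > 10:
            labels[src] = "brute-force"          # repeated failed authentications
        elif len(paths) > 10 and statuses.count(404) / len(reqs) > 0.5:
            labels[src] = "scanner"              # probing many nonexistent paths
        elif any(p.endswith("robots.txt") for p in paths):
            labels[src] = "crawler"              # polite crawlers fetch robots.txt
        else:
            labels[src] = "benign"
    return labels

# Invented traffic for four sources.
flows = {
    "10.0.0.1": [("POST", "/login", 401)] * 12,
    "10.0.0.2": [("GET", "/probe%d" % i, 404) for i in range(12)],
    "10.0.0.3": [("GET", "/robots.txt", 200), ("GET", "/index.html", 200)],
    "10.0.0.4": [("GET", "/index.html", 200)] * 3,
}
labels = classify_flows(flows)
```

The point of the paper is that these fields (method, path, status) are only visible when flow records are extended with parsed HTTP data — plain NetFlow-style records cannot support such rules.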



Dewey, D.; Reaves, B.; Traynor, P., "Uncovering Use-After-Free Conditions in Compiled Code," in Availability, Reliability and Security (ARES), 2015 10th International Conference on, pp. 90-99, 24-27 Aug. 2015. doi: 10.1109/ARES.2015.61

Abstract: Use-after-free conditions occur when an execution path of a process accesses an incorrectly deallocated object. Such access is problematic because it may potentially allow for the execution of arbitrary code by an adversary. However, while increasingly common, such flaws are rarely detected by compilers in even the most obvious instances. In this paper, we design and implement a static analysis method for the detection of use-after-free conditions in binary code. Our new analysis is similar to available expression analysis and traverses all code paths to ensure that every object is defined before each use. Failure to achieve this property indicates that an object is improperly freed and potentially vulnerable to compromise. After discussing the details of our algorithm, we implement a tool and run it against a set of enterprise-grade, publicly available binaries. We show that our tool can not only catch textbook and recently released in-situ examples of this flaw, but that it has also identified 127 additional use-after-free conditions in a search of 652 compiled binaries in the Windows system32 directory. In so doing, we demonstrate not only the power of this approach in combating this increasingly common vulnerability, but also the ability to identify such problems in software for which the source code is not necessarily publicly available.

Keywords: software engineering; Windows system32 directory; binary code; compiled code; static analysis method; use-after-free conditions; Algorithm design and analysis; Binary codes; Object recognition; Runtime; Security; Software; Visualization; Binary Decompilation; Software Security; Static Analysis (ID#: 15-8842)
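The path-traversal analysis can be sketched over a toy op language (an illustration of the property checked, not the paper's binary-level algorithm): along every code path, each object must be allocated and not yet freed at every use.

```python
def find_use_after_free(paths):
    """paths: list of op sequences; each op is (verb, object).
    Returns (path_index, object) pairs where a freed object is used."""
    findings = []
    for n, path in enumerate(paths):
        live = set()                      # objects currently allocated
        for verb, obj in path:
            if verb == "alloc":
                live.add(obj)
            elif verb == "free":
                live.discard(obj)
            elif verb == "use" and obj not in live:
                findings.append((n, obj))
    return findings

paths = [
    [("alloc", "p"), ("use", "p"), ("free", "p")],                 # fine
    [("alloc", "p"), ("free", "p"), ("use", "p")],                 # use-after-free
    [("alloc", "q"), ("free", "q"), ("alloc", "q"), ("use", "q")], # fine: realloc'd
]
```

Enumerating all paths is what makes the binary-level version expensive; the per-path check itself is this simple liveness bookkeeping.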



Catherine S, M.; George, G., "S-Compiler: A Code Vulnerability Detection Method," in Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on,  pp. 1-4, 24-25 Jan. 2015. doi: 10.1109/EESCO.2015.7254018

Abstract: Nowadays, security breaches are greatly increasing in number. This is one of the major threats faced by most organisations, and it usually leads to massive losses. The major cause of these breaches could potentially be the vulnerabilities in software products. There are many tools available to detect such vulnerabilities, but detection and correction of vulnerabilities during the development phase would be more beneficial. Though there are many standard secure coding practices to be followed in the development phase, software developers fail to utilize them, and this leads to an unsecured end product. The difficulty of manual analysis of vulnerabilities in source code is what led to the evolution of automated analysis tools. Static and dynamic analyses are the two complementary methods used to detect vulnerabilities in the development phase. Static analysis scans the source code, which eliminates the need to execute the code, but it has many false positives and false negatives. On the other hand, dynamic analysis tests the code by running it along with the test cases. The proposed approach integrates static and dynamic analysis. This eliminates the false-positive and false-negative problem of the existing practices and helps developers correct their code in the most efficient way. It deals with common buffer overflow vulnerabilities and vulnerabilities from the Common Weakness Enumeration (CWE). The whole scenario is implemented as a web interface.

Keywords: source coding; telecommunication security; S-compiler; automated analysis tools; code vulnerability detection method; common weakness enumeration; false negatives; false positives; source code; Buffer overflows; Buffer storage; Encoding; Forensics; Information security; Software; Buffer overflow; Dynamic analysis; Secure coding; Static analysis (ID#: 15-8843)
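The static/dynamic integration can be sketched on a toy write-through-index program (hypothetical op format; the real tool targets C-level buffer overflows): the static pass over-approximates candidate sites, and the instrumented run keeps only the ones that actually overflow.

```python
def static_scan(program):
    """Static pass: flag every indexed write as a candidate overflow site."""
    return [i for i, (op, *_) in enumerate(program) if op == "write"]

def dynamic_check(program, size):
    """Dynamic pass: execute with bounds checking on a buffer of `size` cells."""
    confirmed = []
    for i, (op, *args) in enumerate(program):
        if op == "write":
            index, _value = args
            if not 0 <= index < size:
                confirmed.append(i)     # an out-of-bounds write really happened
    return confirmed

program = [("write", 0, "a"), ("write", 7, "b"), ("write", 12, "c")]
candidates = static_scan(program)                          # over-approximation
confirmed = [i for i in dynamic_check(program, size=8) if i in candidates]
```

Intersecting the two result sets is the mechanism that discards the static pass's false positives while the static pass scopes what the dynamic run must watch.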



Saito, T.; Miyazaki, H.; Baba, T.; Sumida, Y.; Hori, Y., "Study on Diffusion of Protection/Mitigation against Memory Corruption Attack in Linux Distributions," in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, pp. 525-530, 8-10 July 2015. doi: 10.1109/IMIS.2015.73

Abstract: Memory corruption attacks that exploit software vulnerabilities have become a serious problem on the Internet. Effective protection and/or mitigation technologies aimed at countering these attacks are currently provided with operating systems, compilers, and libraries. Unfortunately, the attacks continue. One reason for this state of affairs is the uneven diffusion of the latest (and thus most potent) protection and/or mitigation technologies, since attackers are likely to have found ways of circumventing the most well-known older versions, causing them to lose effectiveness. Therefore, in this paper, we explore the diffusion of relatively new technologies and analyze the results of a survey of Linux distributions.

Keywords: Linux; security of data; Internet; Linux distributions; memory corruption attack mitigation; memory corruption attack protection; software vulnerabilities; Buffer overflows; Geophysical measurement techniques; Ground penetrating radar; Kernel; Libraries; Linux; Anti-thread; Buffer Overflow; Diffusion of countermeasure techniques; Memory corruption attacks (ID#: 15-8844)



Crane, S.; Liebchen, C.; Homescu, A.; Davi, L.; Larsen, P.; Sadeghi, A.-R.; Brunthaler, S.; Franz, M., "Readactor: Practical Code Randomization Resilient to Memory Disclosure," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 763-780, 17-21 May 2015. doi: 10.1109/SP.2015.52

Abstract: Code-reuse attacks such as return-oriented programming (ROP) pose a severe threat to modern software. Designing practical and effective defenses against code-reuse attacks is highly challenging. One line of defense builds upon fine-grained code diversification to prevent the adversary from constructing a reliable code-reuse attack. However, all solutions proposed so far are either vulnerable to memory disclosure or are impractical for deployment on commodity systems. In this paper, we address the deficiencies of existing solutions and present the first practical, fine-grained code randomization defense, called Readactor, resilient to both static and dynamic ROP attacks. We distinguish between direct memory disclosure, where the attacker reads code pages, and indirect memory disclosure, where attackers use code pointers on data pages to infer the code layout without reading code pages. Unlike previous work, Readactor resists both types of memory disclosure. Moreover, our technique protects both statically and dynamically generated code. We use a new compiler-based code generation paradigm that uses hardware features provided by modern CPUs to enable execute-only memory and hide code pointers from leakage to the adversary. Finally, our extensive evaluation shows that our approach is practical -- we protect the entire Google Chromium browser and its V8 JIT compiler -- and efficient with an average SPEC CPU2006 performance overhead of only 6.4%.

Keywords: online front-ends; program compilers; Google Chromium browser; ROP; Readactor; V8 JIT compiler; code randomization; code-reuse attacks; compiler-based code generation paradigm; memory disclosure; return-oriented programming; Hardware; Layout; Operating systems; Program processors; Security; Virtual machine monitors (ID#: 15-8845)
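
The code-pointer hiding that the Readactor abstract describes can be illustrated at a toy level: readable data pages store only opaque trampoline indices, so a disclosed pointer reveals nothing about the randomized code layout. The sketch below is a minimal model under stated assumptions -- the `Module` class and its names are illustrative inventions, not the paper's implementation, which works at the compiler and MMU level with hardware-enforced execute-only pages.

```python
import secrets

# Toy model of Readactor-style code-pointer hiding (illustrative only;
# the real defense operates on machine code and page permissions).

class Module:
    def __init__(self, functions):
        self.names = list(functions)
        # Fine-grained diversification: each function gets a fresh random
        # "address" on every load of the module.
        self.layout = {n: secrets.randbits(32) for n in self.names}
        self.entry = {self.layout[n]: f for n, f in functions.items()}
        # Trampoline table, conceptually on an execute-only page: readable
        # memory never contains these randomized addresses.
        self.trampolines = [self.layout[n] for n in self.names]

    def make_pointer(self, name):
        # What gets stored on (readable) data pages: an opaque index.
        return self.names.index(name)

    def call(self, opaque_ptr, *args):
        # An indirect call dispatches through the trampoline; leaking
        # opaque_ptr tells the attacker nothing about the code layout.
        return self.entry[self.trampolines[opaque_ptr]](*args)
```

An attacker who reads `make_pointer`'s result off a data page learns only a table index, which is stable across diversified loads and therefore useless for constructing a ROP chain.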



Jeehong Kim; Young Ik Eom, "Fast and Space-Efficient Defense Against Jump-Oriented Programming Attacks," in Big Data and Smart Computing (BigComp), 2015 International Conference on, pp. 7-10, 9-11 Feb. 2015. doi: 10.1109/35021BIGCOMP.2015.7072839

Abstract: Recently, the Jump-oriented Programming (JOP) attack has become widespread across servers, desktops, and smart devices. A JOP attack rearranges existing code snippets in a program into gadget sequences and hijacks the program's control flow by chaining and executing those gadget sequences consecutively. Existing defense schemes, however, have limitations such as high execution overhead, large binary size increases, and low applicability. In this paper, to solve these problems, we introduce target shepherding, a fast and space-efficient defense against general JOP attacks. Our scheme generates monitoring code at compile time, placed just before each indirect jump instruction, to determine whether the jump target is legitimate, and thereby checks at run time whether control flow has been subverted by a JOP attack. We achieve very low run-time overhead with a very small increase in file size: in our experiments, the performance overhead is 2.36% and the file size overhead is 5.82% with secure execution.

Keywords: program compilers; security of data; JOP attack; code snippets; compile time; control flow; file size overhead; gadget sequences; indirect jump instruction; jump-oriented programming attack; monitoring code generation; performance overhead; program hijacks control flow; run-time overhead; space-efficient defense; target shepherding; Law; Monitoring; Programming; Registers; Security; Servers; Code Reuse Attack; Jump-oriented Programming; Return-oriented Programming; Software Security (ID#: 15-8846)
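
The target-shepherding idea in the abstract above -- a compile-time-generated check guarding each indirect jump -- can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the names `LEGIT_TARGETS` and `guarded_jump` are invented here, and the real scheme emits the check as machine code before each indirect jump instruction.

```python
# Toy sketch of target shepherding: the compiler knows the legitimate
# targets of a given indirect jump site and emits a check before the jump.

def handler_a():
    return "a"

def handler_b():
    return "b"

# Legitimate-target set, computed "at compile time" for this jump site.
LEGIT_TARGETS = {handler_a, handler_b}

def guarded_jump(target, *args):
    # Run-time check inserted before the indirect jump: if control flow
    # was redirected to a JOP gadget, the target is not in the set.
    if target not in LEGIT_TARGETS:
        raise RuntimeError("control-flow hijack detected")
    return target(*args)
```

Because the check is a constant-time set membership test emitted inline, the run-time and binary-size costs stay small, which matches the overheads the authors report.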



Chia-Nan Kao; I-Ju Liao; Yung-Cheng Chang; Che-Wei Lin; Nen-Fu Huang; Rong-Tai Liu; Hsien-Wei Hung, "A Retargetable Multiple String Matching Code Generation for Embedded Network Intrusion Detection Platforms," in Communication Software and Networks (ICCSN), 2015 IEEE International Conference on, pp. 93-99, 6-7 June 2015. doi: 10.1109/ICCSN.2015.7296134

Abstract: The common means of defense for network security systems is to block intrusions by matching signatures, which makes intrusion-signature matching the critical operation. However, small and medium-sized enterprise (SME) or Small Office Home Office (SOHO) network security systems may not have sufficient resources to maintain good matching performance with full rule sets. Code generation is a technique that converts data structures or instructions into other forms to obtain greater benefits within execution environments. This study analyzes intrusion detection system (IDS) signatures and finds that character occurrence in them is significantly uneven. Based on this property, the study designs a method that generates string-matching source code from the state table of the Aho-Corasick (AC) algorithm for embedded network intrusion detection platforms. The generated source code requires less memory and relies not only on table lookup but also on the capabilities of the processor. The method improves performance through compiler optimization and suits network processors and DSP-like platforms. In evaluation, the method uses only 20% of the memory of the original AC algorithm and achieves 86% of its performance on clean traffic.

Keywords: computer network security; digital signatures; program compilers; string matching; AC algorithm; DSP-like based platforms; character occurrence discovery; data structures; embedded network intrusion detection platforms; intrusion detection system signatures; intrusion-signature matching; network security systems; optimization compilation; processor ability; retargetable multiple string matching code generation; table lookup; Arrays; Intrusion detection; Memory management; Optimization; Switches; Table lookup; Thyristors; Code Generation; Intrusion Detection System; String Matching (ID#: 15-8847)
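
The AC state table that the paper compiles into source code is the standard Aho-Corasick automaton. As a minimal, self-contained sketch (the patterns below are placeholders, not real IDS signatures), the automaton can be built and queried as follows; the paper's contribution is to emit this table as branch-based code rather than interpret it via table lookup.

```python
from collections import deque

# Minimal Aho-Corasick multi-pattern matcher: goto/fail/output tables,
# i.e. the state table the paper turns into generated source code.

def build_ac(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                      # build the goto trie
        s = 0
        for c in p:
            if c not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][c] = len(goto) - 1
            s = goto[s][c]
        out[s].add(p)
    q = deque(goto[0].values())             # BFS to fill failure links
    while q:
        s = q.popleft()
        for c, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and c not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(c, 0)
            out[t] |= out[fail[t]]          # inherit matches via fail link
    return goto, fail, out

def match(text, tables):
    goto, fail, out = tables
    s, hits = 0, []
    for i, c in enumerate(text):
        while s and c not in goto[s]:       # follow failure links
            s = fail[s]
        s = goto[s].get(c, 0)
        for p in out[s]:
            hits.append((i - len(p) + 1, p))
    return hits
```

A code generator in the paper's style would walk `goto` and emit one labeled block with explicit branches per state, letting the compiler's optimizer exploit the uneven character distribution the authors observed.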



Costello, C.; Fournet, C.; Howell, J.; Kohlweiss, M.; Kreuter, B.; Naehrig, M.; Parno, B.; Zahur, S., "Geppetto: Versatile Verifiable Computation," in Security and Privacy (SP), 2015 IEEE Symposium on, pp. 253-270, 17-21 May 2015. doi: 10.1109/SP.2015.23

Abstract: Cloud computing sparked interest in Verifiable Computation protocols, which allow a weak client to securely outsource computations to remote parties. Recent work has dramatically reduced the client's cost to verify the correctness of their results, but the overhead to produce proofs remains largely impractical. Geppetto introduces complementary techniques for reducing prover overhead and increasing prover flexibility. With MultiQAPs, Geppetto reduces the cost of sharing state between computations (e.g., for MapReduce) or within a single computation by up to two orders of magnitude. Via a careful choice of cryptographic primitives, Geppetto's instantiation of bounded proof bootstrapping improves on prior bootstrapped systems by up to five orders of magnitude, albeit at some cost in universality. Geppetto also efficiently verifies the correct execution of proprietary (i.e., secret) algorithms. Finally, Geppetto's use of energy-saving circuits brings the prover's costs more in line with the program's actual (rather than worst-case) execution time. Geppetto is implemented in a full-fledged, scalable compiler and runtime that consume LLVM code generated from a variety of source C programs and cryptographic libraries.

Keywords: cloud computing; computer bootstrapping; cryptographic protocols; program compilers; program verification; Geppetto; LLVM code generation; QAPs; bootstrapped systems; bounded proof bootstrapping; cloud computing; compiler; correctness verification; cryptographic libraries; cryptographic primitives; energy-saving circuits; outsource computation security; prover flexibility; prover overhead reduction; source C programs; verifiable computation protocols; Cryptography; Generators; Libraries; Logic gates; Protocols; Random access memory; Schedules (ID#: 15-8848)
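
Setting the cryptographic machinery aside, QAP-based systems like Geppetto compile a program into an arithmetic constraint system that a prover's execution trace must satisfy. The sketch below shows only that underlying rank-1 constraint form, <a,w> * <b,w> = <c,w>; it is an illustrative toy, not Geppetto's compiler or proof protocol, and the example constraints are invented.

```python
# Toy rank-1 constraint system (R1CS) check: the arithmetic-circuit
# representation underlying QAP-based verifiable computation.

def dot(v, w):
    return sum(x * y for x, y in zip(v, w))

def satisfies(constraints, witness):
    # Each constraint is a triple (a, b, c) of coefficient vectors;
    # the witness must make <a,w> * <b,w> equal <c,w> for every triple.
    return all(dot(a, witness) * dot(b, witness) == dot(c, witness)
               for a, b, c in constraints)

# Example: prove knowledge of x with x*x = 9, witness layout [1, x, x^2].
constraints = [
    ([0, 1, 0], [0, 1, 0], [0, 0, 1]),  # x * x   = x^2
    ([0, 0, 1], [1, 0, 0], [9, 0, 0]),  # x^2 * 1 = 9
]
```

In a real system the prover never sends the witness; the QAP encoding lets the verifier check all constraints at once against a short cryptographic proof, which is where Geppetto's MultiQAP and bootstrapping savings apply.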



Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of the links or modifications to specific citations via email, and please include the ID# of the specific citation in your correspondence.