
SoS Newsletter - Advanced Book Block

Dynamic Execution

Dynamic execution is the subject of a new IEEE international standard draft. The draft document was published in April 2014: "IEEE Draft International Standard for Software and Systems Engineering--Software Testing--Part 4: Test Techniques," IEEE P29119-4/DIS2-Feb2014, pp. 1-139, Feb. 21, 2014. The articles cited below describe research on run-time task allocation, debugging state anomalies, deceptive virtual hosts for industrial control networks, and malware dynamic recompilation.

  • "IEEE Draft International Standard for Software and Systems Engineering--Software Testing--Part 4: Test Techniques," IEEE P29119-4/DIS2-Feb2014, pp. 1-139, Feb. 21, 2014. (ID#:14-1283) This part of ISO/IEC 29119 defines software testing techniques that can be used by any organization, project, or smaller testing activity. The test techniques in this International Standard are used to derive the test cases executed as part of the dynamic testing process specified in part two of this standard. This International Standard is applicable to testing in all software development lifecycle models. This document is intended for, but not limited to, testers, test managers, developers, and project managers, particularly those responsible for governing, managing, and implementing software testing.
  • "Techniques to Minimize State Transfer Costs for Dynamic Execution Offloading in Mobile Cloud Computing," Yang, S.; Kwon, D.; Yi, H.; Cho, Y.; Kwon, Y.; Paek, Y., Mobile Computing, IEEE Transactions on, vol. PP, no. 99, 2014. (ID#:14-1284) In order to meet the increasing demand for high performance in smartphones, recent studies have suggested mobile cloud computing techniques that aim to connect the phones to adjacent, powerful cloud servers and shift their computational burden to the servers. These techniques often employ execution offloading schemes that migrate a process between machines during its execution. In execution offloading, the code regions to be executed on the server are decided statically or dynamically, based on a complex analysis of the execution time and process state transfer costs of every region. Expectedly, the transfer cost is a deciding factor in the success of execution offloading. According to our analysis, it is dominated by the total size of heap objects transferred over the network, but previous work did not try to minimize this size. Thus, in this paper, we introduce novel techniques based on compiler code analysis that effectively reduce the transferred data size by transferring only the essential heap objects and the stack frames actually referenced on the server. The experiments show that the reduced size positively influences not only the transfer time itself but also the overall effectiveness of execution offloading, and ultimately improves the performance of our mobile cloud computing significantly in terms of execution time and energy consumption.
  • "Adjustable contiguity of run-time task allocation in networked many-core systems," Fattah, Mohammad; Liljeberg, Pasi; Plosila, Juha; Tenhunen, Hannu, Design Automation Conference (ASP-DAC), 2014 19th Asia and South Pacific, pp. 349-354, 20-23 Jan. 2014. (ID#:14-1285) The authors propose a run-time mapping algorithm, CASqA, for networked many-core systems. In this algorithm, the level of contiguousness of the allocated processors (a) can be adjusted in a fine-grained fashion. A strictly contiguous allocation (a = 0) decreases the latency and power dissipation of the network and improves the applications' execution time. However, it limits the achievable throughput and increases the turnaround time of the applications. As a result, recent works consider non-contiguous allocation (a = 1) to improve the throughput, traded off against applications' execution time and network metrics. In contrast, their experiments show that a higher throughput (by 3%) with improved network performance can be achieved when using intermediate a values. More precisely, up to a 35% drop in network costs can be gained by adjusting the level of contiguity compared to non-contiguous cases, while the achieved throughput is kept constant. Moreover, CASqA provides at least 32% energy saving in the network compared to other works.
  • "Follow the path: Debugging state anomalies along execution histories," Perscheid, Michael; Felgentreff, Tim; Hirschfeld, Robert, Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on, pp. 124-133, 3-6 Feb. 2014. (ID#:14-1286) To understand how observable failures come into being, back-in-time debuggers help developers by providing full access to past executions. However, such potentially large execution histories do not include any hints to failure causes. For that reason, developers are forced to ascertain unexpected state properties and wrong behavior completely on their own. Without deep program understanding, back-in-time debugging can end in countless and difficult questions about possible failure causes that consume a lot of time when following failures back to their root causes. In this paper, we present state navigation as a debugging guide that highlights unexpected state properties along execution histories. After deriving common object properties from the expected behavior of passing test cases, we generate likely invariants, compare them with the failing run, and map differences as state anomalies to the past execution. So, developers obtain a common thread through the large amount of run-time data, which helps them answer what causes the observable failure. We implement our completely automatic state navigation as part of our test-driven fault navigation and its Path tools framework. To evaluate our approach, we observe eight developers as they debug four non-trivial failures. As a result, we find that our state navigation is able to aid developers and to decrease the time required for localizing the root cause of a failure.
  • "Cyber-Physical System Security with Deceptive Virtual Hosts for Industrial Control Networks," Vollmer, D.; Manic, M., Industrial Informatics, IEEE Transactions on, vol. PP, no. 99, 2014. (ID#:14-1287) A challenge facing industrial control network administrators is protecting the typically large number of connected assets for which they are responsible. These cyber devices may be tightly coupled with the physical processes they control, and human-induced failures risk dire real-world consequences. Dynamic virtual honeypots are effective tools for observing and attracting network intruder activity. This paper presents a design and implementation for self-configuring honeypots that passively examine control system network traffic and actively adapt to the observed environment. In contrast to prior work in the field, six tools were analyzed for suitability for network entity information gathering. Ettercap, an established network security tool not commonly used in this capacity, outperformed the other tools and was chosen for implementation. Utilizing Ettercap XML output, a novel four-step algorithm was developed for autonomous creation and update of a Honeyd configuration. This algorithm was tested on an existing small campus grid and sensor network by executing a collaborative usage scenario. Automatically created virtual hosts were deployed in concert with an Anomaly Behavior (AB) system in an attack scenario. Virtual hosts were automatically configured with unique emulated network stack behaviors for 92% of the targeted devices. The AB system alerted on 100% of the monitored emulated devices.
  • "Malware Dynamic Recompilation," Josse, Sebastien, System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 5080-5089, 6-9 Jan. 2014. (ID#:14-1288) This paper addresses the increasing difficulty of analyzing and understanding protected malware code using traditional static and dynamic analysis tools. The concept of multi-targets is introduced as part of a proposed general, automatic rewriting tool for analyzing protected, malicious binary programs. In broad scope, the tool begins by observing the malicious program's execution environment, in order to subsequently glean and interpret its representation. This method follows the conventional methods of de-obfuscation and extraction.
  • "Potent and Stealthy Control Flow Obfuscation by Stack Based Self-Modifying Code," Balachandran, V.; Emmanuel, S., Information Forensics and Security, IEEE Transactions on, vol. 8, no. 4, pp. 669-681, April 2013. (ID#:14-1289) Software code released to the user carries the risk of reverse engineering attacks. Software obfuscation techniques can be employed to make the reverse engineering of software programs harder. In this paper, we propose a potent, stealthy, and cost-effective algorithm to obfuscate software programs. The main idea of the algorithm is to remove control flow information from the code area and hide it in the data area. At execution time, these instructions are reconstructed, thereby preserving the semantics of the program. Experimental results indicate that the algorithm performs well against static and dynamic attacks. The obfuscated program is also hard to distinguish from normal binary programs, demonstrating the obfuscation's good stealth.
  • "Security-enhanced 3D communication structure for dynamic 3D-MPSoCs protection," Sepulveda, J.; Gogniat, G.; Pires, R.; Wang Chau; Strum, M., Integrated Circuits and Systems Design (SBCCI), 2013 26th Symposium on, pp. 1-6, 2-6 Sept. 2013. (ID#:14-1290) This article addresses the security challenges accompanying the use of 3D Multiprocessor Systems-on-Chip (3D-MPSoCs). 3D communication structures (3D-HoCs), which combine buses and networks-on-chip, are considered by the authors an apt solution to current 3D-MPSoC vulnerabilities. The authors go further, suggesting the use of Quality of Security Service (QoSS), meaning agile and dynamic firewalls, in 3D-HoCs as a method of detecting and preventing exploitation.
  • "Binary-Level Testing of Embedded Programs," Bardin, S.; Baufreton, P.; Cornuet, N.; Herrmann, P.; Labbe, S., Quality Software (QSIC), 2013 13th International Conference on, pp. 11-20, 29-30 July 2013. (ID#:14-1291) This article details the application of Dynamic Symbolic Execution (DSE), already used for automated test data generation and vulnerability detection in desktop programs, to the testing of critical embedded systems. The authors also discuss novel characteristics of OSMOSE, their DSE tool.
  • "A Late Treatment of C Precondition in Dynamic Symbolic Execution," Delahaye, M.; Kosmatov, N., Software Testing, Verification and Validation Workshops (ICSTW), 2013 IEEE Sixth International Conference on, pp. 230-231, 18-22 March 2013. (ID#:14-1292) The relevance of automatically generated test cases depends on an appropriate definition of a test context, or precondition. This paper presents a novel method for handling a precondition in dynamic symbolic execution (DSE) testing tools. This method allows PathCrawler, a DSE tool for C programs, to accept a precondition defined as a C function. It provides a simple way to express a precondition, even for developers who are not familiar with specification formalisms, and has also proven useful when combining static and dynamic analysis.
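The core idea in the Yang et al. offloading paper (ID#:14-1284) is that only heap objects reachable from variables the server-side code region actually references need to cross the network. A minimal sketch of that reachability computation, with the heap modeled as a hypothetical adjacency map (the real work uses compiler analysis on the actual process heap):

```python
def essential_objects(heap, referenced_roots):
    """Collect only the heap objects reachable from the roots the offloaded
    code region references, rather than shipping the whole heap."""
    essential, stack = set(), list(referenced_roots)
    while stack:
        obj = stack.pop()
        if obj in essential:
            continue
        essential.add(obj)
        stack.extend(heap.get(obj, ()))  # follow outgoing references

    return essential

# Heap as an adjacency map: object id -> ids of objects it points to.
heap = {"a": ["b"], "b": [], "c": ["d"], "d": [], "e": []}
# Assume static analysis found that the offloaded region touches only "a".
print(sorted(essential_objects(heap, ["a"])))  # ['a', 'b'] -- c, d, e stay local
```

Here three of five objects never leave the device, which is exactly the saving the paper's transfer-time and energy numbers come from.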
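The contiguity parameter in CASqA (ID#:14-1285) can be pictured as a knob that mixes nearest-first placement with free placement. The following toy allocator is an invented illustration of that trade-off, not the paper's algorithm: with a = 0 every new node is the free node closest to the region already allocated, while with a = 1 picks are unconstrained.

```python
import random

def allocate(free_nodes, n, alpha, rng):
    """Place n tasks on mesh coordinates; alpha=0 keeps the region contiguous
    (nearest-first growth), alpha=1 picks freely, intermediate values mix."""
    free = set(free_nodes)
    placed = [min(free)]
    free.discard(placed[0])
    while len(placed) < n:
        if rng.random() < alpha:
            pick = rng.choice(sorted(free))  # non-contiguous pick
        else:
            # free node nearest to the allocated region (Manhattan distance)
            pick = min(free, key=lambda p: min(abs(p[0] - q[0]) + abs(p[1] - q[1])
                                               for q in placed))
        placed.append(pick)
        free.discard(pick)
    return placed

grid = [(x, y) for x in range(4) for y in range(4)]
tight = allocate(grid, 4, 0.0, random.Random(1))  # strictly contiguous region
loose = allocate(grid, 4, 1.0, random.Random(1))  # may scatter across the mesh
```

On a fully free mesh, every node in `tight` after the first sits at distance 1 from an earlier one, which is the low-latency, low-power regime the paper describes; intermediate alpha values trade some of that locality for throughput.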
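The state-navigation approach of Perscheid et al. (ID#:14-1286) derives likely invariants from passing runs and flags states in the failing run that violate them. A heavily simplified sketch, using only per-variable range invariants over hypothetical recorded states (the Path tools derive richer object properties):

```python
def derive_invariants(passing_states):
    """Likely invariants: per-variable (min, max) ranges seen in passing runs."""
    inv = {}
    for state in passing_states:
        for var, val in state.items():
            lo, hi = inv.get(var, (val, val))
            inv[var] = (min(lo, val), max(hi, val))
    return inv

def anomalies(invariants, failing_state):
    """Variables in the failing run that fall outside every passing range."""
    return [v for v, x in failing_state.items()
            if v in invariants and not invariants[v][0] <= x <= invariants[v][1]]

passing = [{"balance": 10, "retries": 0}, {"balance": 250, "retries": 2}]
inv = derive_invariants(passing)
print(anomalies(inv, {"balance": -40, "retries": 1}))  # ['balance']
```

Mapping such anomalies back onto the execution history is what gives the developer the "common thread" through the run-time data that the abstract describes.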
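The final step of the honeypot pipeline in Vollmer and Manic (ID#:14-1287) emits a Honeyd configuration for the observed hosts. As a hedged illustration only, this sketch turns already-parsed host records into Honeyd-style directives; the record fields and template names are invented, and the paper's actual four-step algorithm works from Ettercap XML output:

```python
def honeyd_config(hosts):
    """Emit Honeyd-style configuration text from observed host records.
    Each record (hypothetical schema): {'ip': ..., 'os': ..., 'tcp_open': [...]}"""
    lines = []
    for i, h in enumerate(hosts):
        tmpl = "host%d" % i
        lines.append("create %s" % tmpl)
        lines.append('set %s personality "%s"' % (tmpl, h["os"]))
        lines.append("set %s default tcp action reset" % tmpl)
        for port in h["tcp_open"]:
            lines.append("add %s tcp port %d open" % (tmpl, port))
        lines.append("bind %s %s" % (h["ip"], tmpl))  # decoy takes the host's IP
    return "\n".join(lines)

cfg = honeyd_config([{"ip": "10.0.0.5", "os": "Linux 2.4.20", "tcp_open": [80, 502]}])
print(cfg)
```

Regenerating this configuration as the passive scan observes new devices is what makes the decoys self-configuring, including the per-host emulated network stack personalities the paper measures.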
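The obfuscation idea in Balachandran and Emmanuel (ID#:14-1289), moving control flow information out of the code area and reconstructing it at run time, can be conveyed with a toy stack-machine interpreter. This is an invented miniature, not the paper's x86 algorithm: branch targets live only in a data table, so the "code" alone reveals no destinations.

```python
# "Code" with an opaque jump id instead of a branch target; a static reader
# of the code area alone cannot tell where jmp_id goes.
code = [
    ("push", 7), ("push", 5), ("add", None),
    ("jmp_id", 0),              # target resolved through the data area
    ("halt", None),             # decoy fall-through, never executed
    ("print", None), ("halt", None),
]
data_area = {0: 5}              # jump id -> real target address

def run(code, data_area):
    stack, pc, out = [], 0, []
    while True:
        op, arg = code[pc]
        if op == "push":
            stack.append(arg); pc += 1
        elif op == "add":
            b, a = stack.pop(), stack.pop(); stack.append(a + b); pc += 1
        elif op == "jmp_id":
            pc = data_area[arg]  # control flow reconstructed at execution time
        elif op == "print":
            out.append(stack.pop()); pc += 1
        elif op == "halt":
            return out

print(run(code, data_area))     # [12]
```

Semantics are preserved (7 + 5 is still printed), but a purely static disassembly sees a jump with no visible target, which is the "potency against static attacks" the abstract claims.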
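The interface idea in Delahaye and Kosmatov (ID#:14-1292), writing the precondition as an ordinary function in the language under test, is transposed here to Python for illustration. PathCrawler folds the C precondition into DSE path exploration; this naive rejection loop only conveys the developer-facing interface, and the predicate is invented for the example:

```python
def precondition(a, b):
    """Test context written as plain code, with no specification formalism."""
    return 0 <= a <= 100 and b != 0 and a % b == 0

def generate_inputs(candidates):
    """Keep only candidate inputs satisfying the precondition."""
    return [c for c in candidates if precondition(*c)]

cands = [(10, 2), (10, 3), (-5, 1), (100, 0), (9, 3)]
print(generate_inputs(cands))  # [(10, 2), (9, 3)]
```

The appeal the paper notes is exactly this: a developer states the test context in the language they already know, and the tool guarantees that every generated test case satisfies it.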


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.