Biblio

Found 1545 results

Filters: First Letter Of Title is S
1953
Pulvari, Charles F..  1953.  The Snapping Dipoles of Ferroelectrics As a Memory Element for Digital Computers. Proceedings of the February 4-6, 1953, Western Computer Conference. :140–159.

A brief review is given of the memory properties of non-linear ferroelectric materials in terms of the direction of polarization. A sensitive pulse method has been developed for obtaining static remanent polarization data of ferroelectric materials. This method has been applied to study the effect of pulse duration and amplitude, and the decay of polarization, on ferroelectric ceramic materials with fairly high crystalline orientation. These studies indicate that ferroelectric memory devices can be operated in the megacycle ranges. Attempts have been made to develop electrostatically induced memory devices using ferroelectric substances as a medium for storing information. As an illustration, a ferroelectric memory using a new type of switching matrix is presented, having a selection ratio of 50 or more.

2006
Sekine, Junko, Campos-Náñez, Enrique, Harrald, John R., Abeledo, Hernán.  2006.  A Simulation-Based Approach to Trade-off Analysis of Port Security. Proceedings of the 38th Conference on Winter Simulation. :521–528.

Motivated by the September 11 attacks, we address the problem of policy analysis for supply-chain security. The potential economic and operational impacts of inspection, together with the inherent difficulty of assigning a reasonable cost to an inspection failure, call for a policy analysis methodology in which stakeholders can understand the trade-offs between the diverse and potentially conflicting objectives. To obtain this information, we used a simulation-based methodology to characterize the set of Pareto optimal solutions with respect to the multiple objectives represented in the decision problem. Our methodology relies on simulation and the response surface method (RSM) to model the relationships between inspection policies and relevant stakeholder objectives in order to construct a set of Pareto optimal solutions. The approach is illustrated with an application to a real-world supply chain.
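
As a rough illustration of the trade-off analysis described here (not the authors' RSM-based methodology), the sketch below filters simulated policy outcomes down to their Pareto-optimal subset; the policy names and objective values are hypothetical.

```python
# Illustrative only: extract the Pareto-optimal inspection policies from
# simulated outcomes. Each outcome is (cost, expected delay, residual risk),
# all to be minimized; the policies and numbers below are made up.
from typing import Dict, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a is at least as good as b on every objective and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(outcomes: Dict[str, Tuple[float, ...]]) -> Dict[str, Tuple[float, ...]]:
    """Keep only the non-dominated policies."""
    return {name: obj for name, obj in outcomes.items()
            if not any(dominates(other, obj) for other in outcomes.values())}

simulated = {
    "inspect_5pct":           (1.0, 0.5, 0.90),
    "inspect_20pct":          (3.5, 1.8, 0.40),
    "inspect_20pct_targeted": (3.0, 1.2, 0.35),   # dominates inspect_20pct
    "inspect_50pct":          (8.0, 5.2, 0.15),
}
print(pareto_front(simulated))   # inspect_20pct drops out; the rest form the front
```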

2007
Ferreira, Pedro, Orvalho, Joao, Boavida, Fernando.  2007.  Security and privacy in a middleware for large scale mobile and pervasive augmented reality. 2007 15th International Conference on Software, Telecommunications and Computer Networks. :1–5.
Ubiquitous or pervasive computing is a new kind of computing, where specialized elements of hardware and software will have such a high level of deployment that their use will be fully integrated with the environment. Augmented reality extends reality with virtual elements but tries to place the computer in a relatively unobtrusive, assistive role. In this paper we propose, test and analyse a security and privacy architecture for a previously proposed middleware architecture for mobile and pervasive large scale augmented reality games; this architecture is the main contribution of the paper. The results show that the security features proposed in the scope of this work do not affect the overall performance of the system.

2010
Zadig, Sean M., Tejay, Gurvirender.  2010.  Securing IS assets through hacker deterrence: A case study. 2010 eCrime Researchers Summit. :1–7.
Computer crime is a topic prevalent in both the research literature and in industry, due to a number of recent high-profile cyber attacks on e-commerce organizations. While technical means for defending against internal and external hackers have been discussed at great length, researchers have shown a distinct preference towards understanding deterrence of the internal threat and have paid little attention to external deterrence. This paper uses the criminological thesis known as Broken Windows Theory to understand how external computer criminals might be deterred from attacking a particular organization. The theory's focus upon disorder as a precursor to crime is discussed, and the notion of decreasing public IS disorder to create the illusion of strong information systems security is examined. A case study of a victim e-commerce organization is reviewed in light of the theory and implications for research and practice are discussed.
Santiago Escobar, Universidad Politécnica de Valencia, Spain, Catherine Meadows, Naval Research Laboratory, Jose Meseguer, University of Illinois at Urbana-Champaign, Sonia Santiago, Universidad Politécnica de Valencia, Spain.  2010.  Sequential Protocol Composition in Maude-NPA. 15th European Conference on Research in Computer Security (ESORICS 2010).

Protocols do not work alone, but together, one protocol relying on another to provide needed services. Many of the problems in cryptographic protocols arise when such composition is done incorrectly or is not well understood. In this paper we discuss an extension to the Maude-NPA syntax and operational semantics to support dynamic sequential composition of protocols, so that protocols can be specified separately and composed when desired. This allows one to reason about many different compositions with minimal changes to the specification. Moreover, we show that, by a simple protocol transformation, we are able to analyze and verify this dynamic composition in the current Maude-NPA tool. We prove soundness and completeness of the protocol transformation with respect to the extended operational semantics, and illustrate our results on some examples.

Bau, J., Bursztein, E., Gupta, D., Mitchell, J..  2010.  State of the Art: Automated Black-Box Web Application Vulnerability Testing. Security and Privacy (SP), 2010 IEEE Symposium on. :332-345.

Black-box web application vulnerability scanners are automated tools that probe web applications for security vulnerabilities. In order to assess the current state of the art, we obtained access to eight leading tools and carried out a study of: (i) the class of vulnerabilities tested by these scanners, (ii) their effectiveness against target vulnerabilities, and (iii) the relevance of the target vulnerabilities to vulnerabilities found in the wild. To conduct our study we used a custom web application vulnerable to known and projected vulnerabilities, and previous versions of widely used web applications containing known vulnerabilities. Our results show the promise and effectiveness of automated tools, as a group, and also some limitations. In particular, "stored" forms of Cross Site Scripting (XSS) and SQL Injection (SQLI) vulnerabilities are not currently found by many tools. Because our goal is to assess the potential of future research, not to evaluate specific vendors, we do not report comparative data or make any recommendations about purchase of specific tools.

2012
Fulton, Nathan.  2012.  Security Through Extensible Type Systems. Proceedings of the 3rd Annual Conference on Systems, Programming, and Applications: Software for Humanity. :107–108.
Researchers interested in security often wish to introduce new primitives into a language. Extensible languages hold promise in such scenarios, but only if the extension mechanism is sufficiently safe and expressive. This paper describes several modifications to an extensible language motivated by end-to-end security concerns.
Farquharson, J., Wang, A., Howard, J..  2012.  Smart Grid Cyber Security and Substation Network Security. 2012 IEEE PES Innovative Smart Grid Technologies (ISGT). :1–5.

A successful Smart Grid system requires a purpose-built security architecture that is explicitly designed to protect customer data confidentiality. In addition to investment in electric power infrastructure for protecting the privacy of Smart Grid-related data, entities need to actively participate in the NIST interoperability framework process; establish policies and an oversight structure for the enforcement of cyber security controls on the data through the adoption of security best practices, personnel training, cyber vulnerability assessments, and consumer privacy audits.

Kloft, Marius, Laskov, Pavel.  2012.  Security Analysis of Online Centroid Anomaly Detection. J. Mach. Learn. Res.. 13:3681–3724.

Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). In such cases, learning algorithms may have to cope with manipulated data aimed at hampering decision making. Although some previous work addressed the issue of handling malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method–online centroid anomaly detection–in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, and analysis of attack efficiency and limitations. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: attacker's full or limited control over the traffic and bounded false positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (a strict upper bound on the attacker's gain) if external constraints are properly used. Our experimental evaluation, carried out on real traces of HTTP and exploit traffic, confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
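
For readers unfamiliar with the technique under analysis, the sketch below shows a bare-bones online centroid detector together with a naive poisoning loop in the unconstrained setting the abstract mentions; the radius, dimensions, and attack strategy are illustrative, not the paper's formal model or bounds.

```python
# Minimal sketch of online centroid anomaly detection plus a naive poisoning
# attempt (illustrative parameters; not the paper's formalization).
import numpy as np

class OnlineCentroid:
    def __init__(self, dim, radius):
        self.center = np.zeros(dim)   # centroid of the points accepted so far
        self.radius = radius          # anomaly threshold
        self.n = 0

    def is_anomalous(self, x):
        return np.linalg.norm(x - self.center) > self.radius

    def update(self, x):
        """Accept a point only if it lies inside the hypersphere (or is the
        first point), then move the centroid toward it (incremental mean)."""
        if self.n == 0 or not self.is_anomalous(x):
            self.n += 1
            self.center += (x - self.center) / self.n

# Unconstrained poisoning: the attacker repeatedly submits points just inside
# the boundary, dragging the centroid toward an initially anomalous target.
det = OnlineCentroid(dim=2, radius=1.0)
det.update(np.array([0.0, 0.0]))
target = np.array([5.0, 0.0])
steps = 0
while det.is_anomalous(target) and steps < 10_000:
    direction = (target - det.center) / np.linalg.norm(target - det.center)
    det.update(det.center + 0.99 * det.radius * direction)  # just inside the boundary
    steps += 1
print(f"target accepted after {steps} poisoned points")
```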

2013
Hui Lin, University of Illinois at Urbana-Champaign, Adam Slagell, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Peter W. Sauer, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2013.  Semantic Security Analysis of SCADA Networks to Detect Malicious Control Commands in Power Grids. First ACM Workshop on Smart Energy Grid Security.

In the current generation of SCADA (Supervisory Control And Data Acquisition) systems used in power grids, a sophisticated attacker can exploit system vulnerabilities and use a legitimate but maliciously crafted command to cause a wide range of system changes that traditional contingency analysis does not consider and remedial action schemes cannot handle. To detect such malicious commands, we propose a semantic analysis framework based on a distributed network of intrusion detection systems (IDSes). The framework combines system knowledge of both the cyber and physical infrastructure of the power grid to help the IDSes estimate the execution consequences of control commands and thus reveal an attacker's malicious intentions. We evaluated the approach on the IEEE 30-bus system. Our experiments demonstrate that: (i) by opening 3 transmission lines, an attacker can avoid detection by traditional contingency analysis and instantly put the tested 30-bus system into an insecure state, and (ii) the semantic analysis provides reliable detection of malicious commands with a small amount of analysis time.
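
The consequence-estimation idea can be pictured with a toy check like the one below, which is not the paper's framework: before relaying a batch of "open line" commands, a monitor tests their effect on a simple graph model of the grid. The topology, redundancy margin, and safety rule are hypothetical stand-ins for real contingency analysis.

```python
# Toy command-consequence check (illustrative only; not the paper's framework):
# block commands whose execution would island a bus or leave the grid with too
# little line redundancy. Bus and line data below are hypothetical.
import networkx as nx

def command_is_safe(grid: nx.Graph, lines_to_open, min_redundancy=1):
    """True if opening the given lines keeps every bus connected and more than
    `min_redundancy` further line outages would be needed to island any bus."""
    test = grid.copy()
    test.remove_edges_from(lines_to_open)
    if not nx.is_connected(test):
        return False
    # edge connectivity = minimum number of additional line losses that could
    # split the network after this command executes
    return nx.edge_connectivity(test) > min_redundancy

grid = nx.Graph()
grid.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4), (1, 3)])
print(command_is_safe(grid, [(1, 2)]))                   # one line: tolerable
print(command_is_safe(grid, [(1, 2), (2, 3), (2, 4)]))   # bus 2 islanded: unsafe
```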

Szekeres, L., Payer, M., Tao Wei, Song, D..  2013.  SoK: Eternal War in Memory. Security and Privacy (SP), 2013 IEEE Symposium on. :48-62.

Memory corruption bugs in software written in low-level languages like C or C++ are one of the oldest problems in computer security. The lack of safety in these languages allows attackers to alter the program's behavior or take full control over it by hijacking its control flow. This problem has existed for more than 30 years and a vast number of potential solutions have been proposed, yet memory corruption attacks continue to pose a serious threat. Real world exploits show that all currently deployed protections can be defeated. This paper sheds light on the primary reasons for this by describing attacks that succeed on today's systems. We systematize the current knowledge about various protection techniques by setting up a general model for memory corruption attacks. Using this model we show what policies can stop which attacks. The model identifies weaknesses of currently deployed techniques, as well as other proposed protections enforcing stricter policies. We analyze the reasons why protection mechanisms implementing stricter policies are not deployed. To achieve wide adoption, protection mechanisms must support a multitude of features and must satisfy a host of requirements. Especially important is performance, as experience shows that only solutions whose overhead is within reasonable bounds get deployed. A comparison of different enforceable policies helps designers of new protection mechanisms in finding the balance between effectiveness (security) and efficiency. We identify some open research problems, and provide suggestions on improving the adoption of newer techniques.

2014
Baek, J., Vu, Q., Liu, J., Huang, X., Xiang, Y..  2014.  A secure cloud computing based framework for big data information management of smart grid. Cloud Computing, IEEE Transactions on. PP:1-1.

Smart grid is a technological innovation that improves efficiency, reliability, economics, and sustainability of electricity services. It plays a crucial role in modern energy infrastructure. The main challenges of smart grids, however, are how to manage different types of front-end intelligent devices such as power assets and smart meters efficiently, and how to process the huge amount of data received from these devices. Cloud computing, a technology that provides computational resources on demand, is a good candidate to address these challenges since it has several good properties such as energy saving, cost saving, agility, scalability, and flexibility. In this paper, we propose a secure cloud computing based framework for big data information management in smart grids, which we call “Smart-Frame.” The main idea of our framework is to build a hierarchical structure of cloud computing centers to provide different types of computing services for information management and big data analysis. In addition to this structural framework, we present a security solution based on identity-based encryption, signature and proxy re-encryption to address critical security issues of the proposed framework.

Odelu, Vanga, Das, Ashok Kumar, Goswami, Adrijit.  2014.  A Secure Effective Key Management Scheme for Dynamic Access Control in a Large Leaf Class Hierarchy. Inf. Sci.. 269:270–285.

Lo et al. (2011) proposed an efficient key assignment scheme for access control in a large leaf class hierarchy where alterations in leaf classes are more frequent than in non-leaf classes. Their scheme is based on a public-key cryptosystem and hash function, where operations like modular exponentiations are very costly compared to symmetric-key encryptions and decryptions and hash computations. Their scheme performs better than previously proposed schemes. However, in this paper, we show that Lo et al.’s scheme fails to preserve the forward security property: a security class can still derive the secret keys of its successor classes even after it has been deleted from the hierarchy. We propose a new key management scheme for dynamic access control in a large leaf class hierarchy, which makes use of a symmetric-key cryptosystem and a one-way hash function. We show that our scheme requires significantly less storage and computational overhead than Lo et al.’s scheme and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against all possible attacks, including attacks on forward security. In addition, our scheme efficiently supports dynamic access control compared to Lo et al.’s scheme and other related schemes. Thus, higher security along with low storage and computational costs makes our scheme more suitable for practical applications than other schemes.
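
As background for the class-hierarchy setting, the sketch below shows a generic, textbook-style hash-based derivation (this is NOT Odelu et al.'s construction): an ancestor class can recompute any descendant's key, while descendants cannot invert the one-way hash to recover ancestor keys. Handling deletions and rekeying efficiently is exactly where the paper's scheme goes beyond such a naive derivation.

```python
# Generic hash-based key derivation for a class hierarchy (illustrative only).
# A parent derives each child's key as H(parent_key || child_id), so keys flow
# only downward through the hierarchy.
import hashlib
from typing import Iterable

def derive_key(parent_key: bytes, child_id: str) -> bytes:
    return hashlib.sha256(parent_key + child_id.encode()).digest()

def key_for_path(root_key: bytes, path: Iterable[str]) -> bytes:
    """Derive the key of the class reached by walking `path` down from the root."""
    key = root_key
    for class_id in path:
        key = derive_key(key, class_id)
    return key

root = b"\x00" * 32                                  # hypothetical root secret
k_staff = key_for_path(root, ["staff"])
k_ward = key_for_path(root, ["staff", "ward-7"])     # a leaf class under "staff"
assert k_ward == derive_key(k_staff, "ward-7")       # ancestors re-derive leaf keys
print(k_ward.hex()[:16])
```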

Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J..  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions, at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data [3]. This work presents a secure-by-construction approach [7] for the upper-end data streams with rates from 10 to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed specialized high-speed stream processing. This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions cannot attain the target rates [1]. GPUs and GPGPUs are also infeasible: they were not designed for I/O at 10–100 Gbps; they also have asymmetric resources for input and output and thus cannot be pipelined [4, 2], whereas custom chip-based solutions are costly and inflexible to changes, and FPGA-based solutions are historically hard to program [6]; (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly ones that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates; (3) There are potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery on the data source or one hop away; (4) There is a need for agile deployment in response to changing objectives; (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified, low-latency, low-power 10–100 Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on design of the two lowest layers, highlighted in the design detail in Figure 1. The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100 Gbps, capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components and residues to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing.
Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series [5]. Examples of tasks accomplished at the lowest layer are: (a) check to see if Field 3 of the packet is greater than 5, or (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest-latency, highest-throughput processing, analysis and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 consists of a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner consists of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine, and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of light-weight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit to the right, and relevant facts are passed upward to Layer 2. The output of the end of the chain in Layer 1 shown in Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent-pipe operations), or (c) passed to another identical processor, for extended processing on a given stream (scalability).
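
Purely as a software analogy for the Layer-1 "snap-together" filters just described (the actual design is FPGA-resident; the class and field names here are hypothetical), the sketch below chains filters that pass the packet stream along while emitting extracted facts upward, mirroring tasks like (a) and (b) above.

```python
# Software analogy of composable Layer-1 filters: each filter forwards the
# stream and passes extracted "facts" upward, as a Layer-2 combiner would see.
from dataclasses import dataclass, field
from typing import Callable, Iterable, List

@dataclass
class Packet:
    protocol: str
    fields: dict

@dataclass
class Filter:
    name: str
    emit_fact: Callable[[Packet], object]      # fact extracted per packet, or None
    facts: List[object] = field(default_factory=list)

    def process(self, stream: Iterable[Packet]) -> Iterable[Packet]:
        for pkt in stream:
            fact = self.emit_fact(pkt)
            if fact is not None:
                self.facts.append(fact)        # "passed upward" to the next layer
            yield pkt                          # stream continues to the right

def chain(stream, filters):
    for f in filters:
        stream = f.process(stream)
    return stream

# Example chain: count X.75 packets and flag packets whose field3 exceeds 5.
count_x75 = Filter("count_x75", lambda p: 1 if p.protocol == "X.75" else None)
big_field3 = Filter("field3_gt_5",
                    lambda p: p.fields.get("field3") if p.fields.get("field3", 0) > 5 else None)

packets = [Packet("X.75", {"field3": 7}), Packet("IP", {"field3": 2})]
list(chain(packets, [count_x75, big_field3]))   # drain the stream
print(len(count_x75.facts), big_field3.facts)   # -> 1 [7]
```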

Yu, Xianqing, Ning, Peng, Vouk, Mladen A..  2014.  Securing Hadoop in Cloud. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :26:1–26:2.

Hadoop is a map-reduce implementation that rapidly processes data in parallel. Cloud provides reliability, flexibility, scalability, elasticity and cost saving to customers. Moving Hadoop into Cloud can be beneficial to Hadoop users. However, Hadoop has two vulnerabilities that can dramatically impact its security in a Cloud. The vulnerabilities are its overloaded authentication key, and the lack of fine-grained access control at the data access level. We propose and develop a security enhancement for Cloud-based Hadoop.

Wei, Lifei, Zhu, Haojin, Cao, Zhenfu, Dong, Xiaolei, Jia, Weiwei, Chen, Yunlu, Vasilakos, Athanasios V..  2014.  Security and Privacy for Storage and Computation in Cloud Computing. Inf. Sci.. 258:371–386.

Cloud computing emerges as a new computing paradigm that aims to provide reliable, customized and quality-of-service-guaranteed computation environments for cloud users. Applications and databases are moved to the large centralized data centers, called the cloud. Due to resource virtualization, global replication and migration, and the physical absence of data and machine in the cloud, the stored data in the cloud and the computation results may not be well managed and fully trusted by the cloud users. Most of the previous work on cloud security focuses on storage security rather than taking computation security into consideration as well. In this paper, we propose a privacy cheating discouragement and secure computation auditing protocol, SecCloud, which is the first protocol to bridge secure storage and secure computation auditing in the cloud and to achieve privacy cheating discouragement by designated verifier signature, batch verification and probabilistic sampling techniques. A detailed analysis is given to obtain an optimal sampling size to minimize the cost. Another major contribution of this paper is that we build a practical secure-aware cloud computing experimental environment, SecHDFS, as a test bed to implement SecCloud. Further experimental results have demonstrated the effectiveness and efficiency of the proposed SecCloud.
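
The probabilistic-sampling ingredient can be illustrated with the standard detection-probability calculation below (a generic argument, not SecCloud's exact analysis or parameters): it shows why auditing only a few hundred randomly chosen blocks already catches a 1% corruption rate with high probability.

```python
# Standard sampling-size calculation (illustrative): if t of n stored blocks are
# corrupted and the auditor checks c blocks chosen uniformly at random, the
# probability of hitting at least one bad block is P = 1 - C(n - t, c) / C(n, c).
from math import comb

def detection_probability(n: int, t: int, c: int) -> float:
    return 1.0 - comb(n - t, c) / comb(n, c)

def min_sample_size(n: int, t: int, target: float) -> int:
    """Smallest sample size c whose detection probability reaches `target`."""
    for c in range(1, n + 1):
        if detection_probability(n, t, c) >= target:
            return c
    return n

# e.g. 10,000 blocks with 1% corrupted: roughly 450 random checks give 99% detection
print(min_sample_size(10_000, 100, 0.99))
```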

Seifi, Y., Suriadi, S., Foo, E., Boyd, C..  2014.  Security properties analysis in a TPM-based protocol. Int. J. of Security and Networks. 9(2):85–103.

Security protocols are designed to provide security properties (goals). They achieve their goals using cryptographic primitives such as key agreement or hash functions. Security analysis tools are used to verify whether a security protocol achieves its goals or not. The properties analysed by special-purpose tools are predefined ones such as secrecy (confidentiality), authentication or non-repudiation. There are also security goals defined by the user in systems with security requirements. Analysis of these properties is possible with general-purpose analysis tools such as coloured Petri nets (CPN). This research analyses two security properties defined in a protocol based on the trusted platform module (TPM). The analysed protocol, proposed by Delaune, uses TPM capabilities and secrets in order to open only one secret from two submitted secrets to a recipient.

Azimi, Mahdi, Sami, Ashkan, Khalili, Abdullah.  2014.  A Security Test-Bed for Industrial Control Systems. Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation. :26–31.

Industrial Control Systems (ICS) such as Supervisory Control And Data Acquisition (SCADA), Distributed Control Systems (DCS) and Distributed Automation Systems (DAS) control and monitor critical infrastructures. In recent years, the proliferation of cyber-attacks on ICS has revealed that a large number of security vulnerabilities exist in such systems. Many security solutions have been proposed to remove the vulnerabilities and improve the security of ICS. However, to the best of our knowledge, none of them has presented or developed a security test-bed, which is vital for evaluating the security of ICS tools and products. In this paper, a test-bed is proposed for evaluating the security of industrial applications by providing different metrics for static testing, dynamic testing and network testing in industrial settings. Using these metrics and the results of the three tests, industrial applications can be compared with each other from a security point of view. Experimental results on several real-world applications indicate that the proposed test-bed can be successfully employed to evaluate and compare the security level of industrial applications.

Macedonio, Damiano, Merro, Massimo.  2014.  A Semantic Analysis of Key Management Protocols for Wireless Sensor Networks. Sci. Comput. Program.. 81:53–78.

Gorrieri and Martinelli’s timed Generalized Non-Deducibility on Compositions (tGNDC) schema is a well-known general framework for the formal verification of security protocols in a concurrent scenario. We generalise the tGNDC schema to verify wireless network security protocols. Our generalisation relies on a simple timed broadcasting process calculus whose operational semantics is given in terms of a labelled transition system, which is used to derive a standard simulation theory. We apply our tGNDC framework to perform a security analysis of three well-known key management protocols for wireless sensor networks: μTESLA, LEAP+ and LiSP.

Yu, Tingting, Srisa-an, Witawas, Rothermel, Gregg.  2014.  SimRT: An Automated Framework to Support Regression Testing for Data Races. Proceedings of the 36th International Conference on Software Engineering. :48–59.

Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance.
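
To make the two ingredients concrete, here is a generic and much-simplified selection-plus-prioritization sketch in the spirit of SimRT; the data structures and names are hypothetical, and real race-aware selection works on far richer program information than a flat coverage map.

```python
# Generic regression-test selection and prioritization (illustrative only):
# keep tests that reach changed race-relevant program elements, then order them
# by how many such elements they cover.

def select_and_prioritize(coverage, changed_shared):
    """coverage: test name -> set of shared-memory elements it exercises.
    changed_shared: race-relevant elements touched by the code change."""
    selected = {t: cov & changed_shared
                for t, cov in coverage.items() if cov & changed_shared}
    return sorted(selected, key=lambda t: len(selected[t]), reverse=True)

coverage = {
    "test_cart":     {"Cart.total", "Cart.items", "Log.write"},
    "test_login":    {"Session.token"},
    "test_checkout": {"Cart.total", "Inventory.count", "Log.write"},
}
changed_shared = {"Cart.total", "Inventory.count"}      # fields touched by the diff
print(select_and_prioritize(coverage, changed_shared))  # ['test_checkout', 'test_cart']
```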

Craig Buchanan, University of Illinois at Urbana-Champaign.  2014.  Simulation Debugging and Visualization in the Mobius Modeling Framework. Department of Electrical and Computer Engineering. M.S.

Large and complex models can be difficult to analyze using static analysis results from current tools, including the Möbius modeling framework, which provides a powerful, formalism-independent, discrete-event simulator that outputs static results such as execution traces. The Möbius Simulation Debugger and Visualization (MSDV) feature adds user interaction to running simulations to provide a more transparent view into the dynamics of the models under consideration. This thesis discusses the details of the design and implementation of this feature in the Möbius modeling environment. Also, a case study is presented to demonstrate the new capabilities provided by the feature.

Chen, L.M., Hsiao, S.-W., Chen, M.C., Liao, W..  2014.  Slow-Paced Persistent Network Attacks Analysis and Detection Using Spectrum Analysis. Systems Journal, IEEE. PP:1-12.

A slow-paced persistent attack, such as a slow worm or bot, can bewilder the detection system by slowing down its pace. Detecting such attacks based on traditional anomaly detection techniques may yield high false alarm rates. In this paper, we frame our problem as detecting slow-paced persistent attacks from a time series obtained from a network trace. We focus on time series spectrum analysis to identify peculiar spectral patterns that may represent the occurrence of a persistent activity in the time domain. We propose a method to adaptively detect slow-paced persistent attacks in a time series and evaluate the proposed method by conducting experiments using both synthesized traffic and real-world traffic. The results show that the proposed method is capable of detecting slow-paced persistent attacks even in a noisy environment mixed with legitimate traffic.
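
A minimal numerical illustration of the spectral idea (parameters made up, and far simpler than the paper's adaptive method): a periodic low-rate activity that adds under two percent to total traffic volume still stands out as a sharp line in the power spectrum of the per-second packet counts.

```python
# Illustrative only: a small periodic burst hidden in Poisson background traffic
# produces a clear spectral line even though it barely changes total volume.
import numpy as np

t = np.arange(0, 3600)                           # one hour of per-second counts
rng = np.random.default_rng(0)
legit = rng.poisson(lam=50, size=t.size)         # legitimate background traffic
beacon = 25 * (t % 30 == 0).astype(int)          # short burst every 30 s (<2% of volume)
series = legit + beacon

spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
freqs = np.fft.rfftfreq(series.size, d=1.0)      # one sample per second

floor = np.median(spectrum[1:])                  # robust noise-floor estimate
suspicious = freqs[1:][spectrum[1:] > 20 * floor]
print(f"lowest suspicious frequency: {suspicious[0]:.4f} Hz "
      f"(~{1 / suspicious[0]:.0f} s period)")
```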

Keivanloo, Iman, Rilling, Juergen.  2014.  Software Trustworthiness 2.0-A Semantic Web Enabled Global Source Code Analysis Approach. J. Syst. Softw.. 89:33–50.

There has been an ongoing trend toward collaborative software development using open and shared source code published in large software repositories on the Internet. While traditional source code analysis techniques perform well in single-project contexts, new types of source code analysis techniques are emerging which focus on global source code analysis challenges. In this article, we discuss how the Semantic Web can become an enabling technology that provides standardized, formal, and semantically rich representations for modeling and analyzing large global source code corpora. Furthermore, inference services and other services provided by Semantic Web technologies can be used to support a variety of core source code analysis techniques, such as semantic code search, call graph construction, and clone detection. In this paper, we introduce SeCold, the first publicly available online linked-data source code dataset for software engineering researchers and practitioners. Along with its dataset, SeCold also provides some Semantic Web enabled core services to support the analysis of Internet-scale source code repositories. We illustrate through several examples how this linked data, combined with Semantic Web technologies, can be harvested for different source code analysis tasks to support software trustworthiness. For the case studies, we combine both our linked dataset and Semantic Web enabled source code analysis services with knowledge extracted from StackOverflow, a crowdsourcing website. Through these case studies, we demonstrate that our approach is not only capable of crawling, processing, and scaling to traditional types of structured data (e.g., source code), but also supports emerging non-structured data sources, such as crowdsourced information (e.g., StackOverflow.com), to support a global source code analysis context.

[Anonymous].  2014.  Solving Complex Path Conditions through Heuristic Search on Induced Polytopes. 22nd ACM SIGSOFT Symposium on Foundations of Software Engineering.

Test input generators using symbolic and concolic execution must solve path conditions to systematically explore a program and generate high-coverage tests. However, path conditions may contain complicated arithmetic constraints that are infeasible to solve: a solver may be unavailable, solving may be computationally intractable, or the constraints may be undecidable. Existing test generators either simplify such constraints with concrete values to make them decidable, or rely on strong but incomplete constraint solvers. Unfortunately, simplification yields coarse approximations whose solutions rarely satisfy the original constraint. Moreover, constraint solvers cannot handle calls to native library methods. We show how a simple combination of linear constraint solving and heuristic search can overcome these limitations. We call this technique Concolic Walk. On a corpus of 11 programs, an instance of our Concolic Walk algorithm using tabu search generates tests with two to three times higher coverage than simplification-based tools while being up to five times as efficient. Furthermore, our algorithm improves the coverage of two state-of-the-art test generators by 21% and 32%. Other concolic and symbolic testing tools could integrate our algorithm to solve complex path conditions without having to sacrifice any of their own capabilities, leading to higher overall coverage.
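
A heavily simplified sketch of the core idea follows (hypothetical constraints; a greedy random walk stands in for the tabu search used in the paper): the linear constraints of a path condition carve out a polytope, a point inside it seeds the search, and a fitness function measuring how badly the non-linear constraints are violated guides steps that stay inside the polytope.

```python
# Simplified Concolic-Walk-style search (illustrative only; greedy walk instead
# of tabu search, and the path condition below is hypothetical).
import random

# Hypothetical path condition:
#   linear part:      x + y <= 10,  x >= 0,  y >= 0     (defines the polytope)
#   non-linear part:  x * y == 21                        (beyond a linear solver)
linear = [lambda p: p[0] + p[1] <= 10, lambda p: p[0] >= 0, lambda p: p[1] >= 0]

def violation(p):                        # fitness: 0 iff the non-linear part holds
    return abs(p[0] * p[1] - 21)

def concolic_walk(start, steps=20_000, seed=1):
    """Random walk inside the polytope, accepting only fitness-improving steps."""
    random.seed(seed)
    point, cost = list(start), violation(start)
    for _ in range(steps):
        if cost == 0:
            break
        cand = [v + random.choice((-1, 0, 1)) for v in point]
        if all(c(cand) for c in linear) and violation(cand) < cost:
            point, cost = cand, violation(cand)
    return point, cost

solution, residual = concolic_walk(start=[1, 1])   # [1, 1] satisfies the linear part
print(solution, residual)                          # e.g. [3, 7] or [7, 3], residual 0
```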