Biblio

Found 4166 results

Filters: Keyword is Resiliency
1953
Pulvari, Charles F..  1953.  The Snapping Dipoles of Ferroelectrics As a Memory Element for Digital Computers. Proceedings of the February 4-6, 1953, Western Computer Conference. :140–159.

A brief review is given of the memory properties of non-linear ferroelectric materials in terms of the direction of polarization. A sensitive pulse method has been developed for obtaining static remanent polarization data of ferroelectric materials. This method has been applied to study the effect of pulse duration and amplitude and decay of polarization on ferroelectric ceramic materials with fairly high crystalline orientation. These studies indicate that ferroelectric memory devices can be operated in the megacycle ranges. Attempts have been made to develop electrostatically induced memory devices using ferroelectric substances as a medium for storing information. As an illustration, a ferroelectric memory using a new type of switching matrix is presented, having a selection ratio of 50 or more.

2004
Du, Xiaojiang.  2004.  Using k-nearest neighbor method to identify poison message failure. IEEE Global Telecommunications Conference, 2004. GLOBECOM '04. 4:2113–2117 Vol.4.

Poison message failure is a mechanism that has been responsible for large-scale failures in both telecommunications and IP networks. The poison message failure can propagate in the network and cause an unstable network. We apply a machine learning, data mining technique to the network fault management area. We use the k-nearest neighbor method to identify the poison message failure. We also propose a "probabilistic" k-nearest neighbor method which outputs a probability distribution over the poison message. Through extensive simulations, we show that the k-nearest neighbor method is very effective in identifying the responsible message type.
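
As a rough illustration of the "probabilistic" variant described above (not the authors' implementation; the feature vectors and message-type labels below are hypothetical), a k-nearest-neighbor classifier can return a distribution over candidate message types instead of a single label:

```python
# Toy sketch (not from the paper): a "probabilistic" k-nearest-neighbor
# classifier that returns a distribution over candidate message types
# instead of a single label, mirroring the idea described in the abstract.
import numpy as np

def knn_probability(train_X, train_y, query, k=5):
    """Return P(message type | query) estimated from the k nearest neighbors."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return dict(zip(labels, counts / k))

# Hypothetical feature vectors summarizing per-message-type network symptoms.
rng = np.random.default_rng(0)
train_X = rng.normal(size=(200, 4))
train_y = rng.integers(0, 3, size=200)          # 3 candidate message types
print(knn_probability(train_X, train_y, rng.normal(size=4)))
```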

2008
Liu, C., Feng, Y., Fan, M., Wang, G..  2008.  PKI Mesh Trust Model Based on Trusted Computing. 2008 The 9th International Conference for Young Computer Scientists. :1401–1405.
Different organizations or countries may adopt different PKI trust models in real applications. On a large scale, all certification authorities (CAs) and end entities construct a huge mesh network, so the PKI trust model as a whole exhibits an unstructured mesh network. However, the mesh trust model worsens the computational complexity of certification path processing as the number of PKI domains increases. This paper proposes an enhanced mesh trust model for PKI. Key generation and signing are performed in a Trusted Platform Module (TPM) for a higher security level. An algorithm is suggested to improve the performance of certification path processing in this model. This trust model is less complex but more efficient and robust than the existing PKI trust models.
Chan, Ellick M., Carlyle, Jeffrey C., David, Francis M., Farivar, Reza, Campbell, Roy H..  2008.  BootJacker: Compromising Computers Using Forced Restarts. Proceedings of the 15th ACM Conference on Computer and Communications Security. :555–564.

BootJacker is a proof-of-concept attack tool which demonstrates that authentication mechanisms employed by an operating system can be bypassed by obtaining physical access and simply forcing a restart. The key insight that enables this attack is that the contents of memory on some machines are fully preserved across a warm boot. Upon a reboot, BootJacker uses this residual memory state to revive the original host operating system environment and run malicious payloads. Using BootJacker, an attacker can break into a locked user session and gain access to open encrypted disks, web browser sessions or other secure network connections. BootJacker's non-persistent design makes it possible for an attacker to leave no traces on the victim machine.

Phillips, B. J., Schmidt, C. D., Kelly, D. R..  2008.  Recovering Data from USB Flash Memory Sticks That Have Been Damaged or Electronically Erased. Proceedings of the 1st International Conference on Forensic Applications and Techniques in Telecommunications, Information, and Multimedia and Workshop. :19:1–19:6.

In this paper we consider recovering data from USB Flash memory sticks after they have been damaged or electronically erased. We describe the physical structure and theory of operation of Flash memories; review the literature of Flash memory data recovery; and report results of new experiments in which we damage USB Flash memory sticks and attempt to recover their contents. The experiments include smashing and shooting memory sticks, incinerating them in petrol and cooking them in a microwave oven.

2009
Halderman, J. Alex, Schoen, Seth D., Heninger, Nadia, Clarkson, William, Paul, William, Calandrino, Joseph A., Feldman, Ariel J., Appelbaum, Jacob, Felten, Edward W..  2009.  Lest We Remember: Cold-boot Attacks on Encryption Keys. Commun. ACM. 52:91–98.

Contrary to widespread assumption, dynamic RAM (DRAM), the main memory in most modern computers, retains its contents for several seconds after power is lost, even at room temperature and even if removed from a motherboard. Although DRAM becomes less reliable when it is not refreshed, it is not immediately erased, and its contents persist sufficiently for malicious (or forensic) acquisition of usable full-system memory images. We show that this phenomenon limits the ability of an operating system to protect cryptographic key material from an attacker with physical access to a machine. It poses a particular threat to laptop users who rely on disk encryption: we demonstrate that it could be used to compromise several popular disk encryption products without the need for any special devices or materials. We experimentally characterize the extent and predictability of memory retention and report that remanence times can be increased dramatically with simple cooling techniques. We offer new algorithms for finding cryptographic keys in memory images and for correcting errors caused by bit decay. Though we discuss several strategies for mitigating these risks, we know of no simple remedy that would eliminate them.

Bianculli, Domenico, Binder, Walter, Drago, Mauro Luigi, Ghezzi, Carlo.  2009.  ReMan: A Pro-active Reputation Management Infrastructure for Composite Web Services. Proceedings of the 31st International Conference on Software Engineering. :623–626.

REMAN is a reputation management infrastructure for composite Web services. It supports the aggregation of client feedback on the perceived QoS of external services, using reputation mechanisms to build service rankings. Composite service clients are pro-actively notified of ranking changes to enable self-tuning properties in their execution.

2010
Chan, Ellick, Venkataraman, Shivaram, David, Francis, Chaugule, Amey, Campbell, Roy.  2010.  Forenscope: A Framework for Live Forensics. Proceedings of the 26th Annual Computer Security Applications Conference. :307–316.

Current post-mortem cyber-forensic techniques may cause significant disruption to the evidence gathering process by breaking active network connections and unmounting encrypted disks. Although newer live forensic analysis tools can preserve active state, they may taint evidence by leaving footprints in memory. To help address these concerns we present Forenscope, a framework that allows an investigator to examine the state of an active system without the effects of taint or forensic blurriness caused by analyzing a running system. We show how Forenscope can fit into accepted workflows to improve the evidence gathering process. Forenscope preserves the state of the running system and allows running processes, open files, encrypted filesystems and open network sockets to persist during the analysis process. Forenscope has been tested on live systems to show that it does not operationally disrupt critical processes and that it can perform an analysis in less than 15 seconds while using only 125 KB of memory. We show that Forenscope can detect stealth rootkits, neutralize threats and expedite the investigation process by finding evidence in memory.

Gil-Quijano, Javier, Sabouret, Nicolas.  2010.  Prediction of Humans' Activity for Learning the Behaviors of Electrical Appliances in an Intelligent Ambient Environment. Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 02. :283–286.

In this paper we propose a mechanism for predicting domestic human activity in a smart home context. We use these predictions to adapt the behavior of home appliances whose impact on the environment is delayed (for example, heating). The behaviors of appliances are built by a reinforcement learning mechanism. We compare the behavior built by the learning approach with both a merely reactive behavior and a state-remanent behavior.
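
As a generic illustration of the kind of reinforcement learning mechanism mentioned above (this is ordinary tabular Q-learning on a made-up thermostat environment, not the authors' system), an appliance agent with a delayed comfort reward might be trained as follows:

```python
# Minimal tabular Q-learning sketch (generic, not the authors' system):
# an appliance agent learns when to switch heating on, given a predicted
# occupancy state, with a delayed comfort reward.
import random

ACTIONS = ["off", "on"]
Q = {}  # (state, action) -> estimated value

def q(state, a):
    return Q.setdefault((state, a), 0.0)

def step(state, action):
    # Hypothetical environment: comfort reward if the room is already warm
    # while occupied (so heating pays off one step later); energy penalty
    # whenever the heating is on.
    occupied, warm = state
    reward = (1.0 if occupied and warm else 0.0) - (0.3 if action == "on" else 0.0)
    warm_next = action == "on"
    occupied_next = random.random() < 0.5          # stand-in for the activity predictor
    return (occupied_next, warm_next), reward

alpha, gamma, eps = 0.1, 0.9, 0.1
state = (False, False)
for _ in range(5000):
    action = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda a: q(state, a))
    nxt, r = step(state, action)
    best_next = max(q(nxt, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + alpha * (r + gamma * best_next - q(state, action))
    state = nxt

# Learned greedy action for each (occupied, warm) state.
print({s: max(ACTIONS, key=lambda a: q(s, a))
       for s in [(True, False), (True, True), (False, False), (False, True)]})
```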

2012
Kloft, Marius, Laskov, Pavel.  2012.  Security Analysis of Online Centroid Anomaly Detection. J. Mach. Learn. Res.. 13:3681–3724.

Security issues are crucial in a number of machine learning applications, especially in scenarios dealing with human activity rather than natural phenomena (e.g., information ranking, spam detection, malware detection, etc.). In such cases, learning algorithms may have to cope with manipulated data aimed at hampering decision making. Although some previous work addressed the issue of handling malicious data in the context of supervised learning, very little is known about the behavior of anomaly detection methods in such scenarios. In this contribution, we analyze the performance of a particular method–online centroid anomaly detection–in the presence of adversarial noise. Our analysis addresses the following security-related issues: formalization of learning and attack processes, derivation of an optimal attack, and analysis of attack efficiency and limitations. We derive bounds on the effectiveness of a poisoning attack against centroid anomaly detection under different conditions: attacker's full or limited control over the traffic and bounded false positive rate. Our bounds show that whereas a poisoning attack can be effectively staged in the unconstrained case, it can be made arbitrarily difficult (a strict upper bound on the attacker's gain) if external constraints are properly used. Our experimental evaluation, carried out on real traces of HTTP and exploit traffic, confirms the tightness of our theoretical bounds and the practicality of our protection mechanisms.
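
The following toy sketch (illustrative only; the radius, point counts, and attack strategy are assumptions, not the paper's exact model) shows the online centroid update and why an attacker restricted to accepted points can shift the centroid only gradually:

```python
# Toy sketch of online centroid anomaly detection (illustrative only):
# points inside radius r of the centroid are accepted and shift it slightly;
# an attacker who can only inject accepted points moves the centroid at most
# about r/n per injection, which is the flavor of bound analyzed in the paper.
import numpy as np

def run(centroid, r, stream, n_seen):
    for x in stream:
        if np.linalg.norm(x - centroid) <= r:        # accepted as normal
            n_seen += 1
            centroid = centroid + (x - centroid) / n_seen
    return centroid, n_seen

centroid, n = np.zeros(2), 100                       # state after 100 benign points
target = np.array([10.0, 0.0])                       # attacker's desired centroid

# Attacker greedily injects points on the acceptance boundary toward the target.
for _ in range(200):
    direction = (target - centroid) / np.linalg.norm(target - centroid)
    centroid, n = run(centroid, r=1.0, stream=[centroid + 1.0 * direction], n_seen=n)
print(centroid)   # drifts toward the target, but only ~r/n per injected point
```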

2013
Dietrich, Christian J., Rossow, Christian, Pohlmann, Norbert.  2013.  Exploiting Visual Appearance to Cluster and Detect Rogue Software. Proceedings of the 28th Annual ACM Symposium on Applied Computing. :1776–1783.
Rogue software, such as Fake A/V and ransomware, tricks users into paying without giving anything in return. We show that using a perceptual hash function and hierarchical clustering, more than 213,671 screenshots of executed malware samples can be grouped into subsets of structurally similar images, reflecting image clusters of one malware family or campaign. Based on the clustering results, we show that ransomware campaigns favor prepay payment methods such as ukash, paysafecard and moneypak, while Fake A/V campaigns use credit cards for payment. Furthermore, especially given the low A/V detection rates of current rogue software – sometimes even as low as 11% – our screenshot analysis approach could serve as a complementary last line of defense.
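
A minimal sketch of the general pipeline (not the authors' tooling; the synthetic "screenshots", hash size, and clustering cut-off below are illustrative assumptions) might combine an average-hash style perceptual hash with hierarchical clustering over Hamming distances:

```python
# Illustrative sketch only (not the authors' pipeline): an average-hash style
# perceptual hash plus single-linkage clustering over Hamming distances,
# applied here to synthetic "screenshots" represented as grayscale arrays.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def average_hash(img, size=8):
    """Downscale by block averaging, then threshold at the mean -> 64-bit vector."""
    h, w = img.shape
    img = img[: h - h % size, : w - w % size]
    blocks = img.reshape(size, img.shape[0] // size, size, img.shape[1] // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

rng = np.random.default_rng(0)
base_a, base_b = rng.random((64, 64)), rng.random((64, 64))
screens = [base_a + 0.05 * rng.random((64, 64)) for _ in range(5)] + \
          [base_b + 0.05 * rng.random((64, 64)) for _ in range(5)]

hashes = np.array([average_hash(s) for s in screens])
labels = fcluster(linkage(pdist(hashes, metric="hamming"), method="single"),
                  t=0.25, criterion="distance")
print(labels)   # the two synthetic "families" fall into separate clusters
```
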
Ivars, Eugene, Armands, Vadim.  2013.  Alias-free compressed signal digitizing and recording on the basis of Event Timer. 2013 21st Telecommunications Forum Telfor (TELFOR). :443–446.

Specifics of an alias-free digitizer application for compressed digitizing and recording of wideband signals are considered. Signal sampling in this case is performed on the basis of picosecond-resolution event timing; the digitizer is actually a subsystem of the Event Timer A033-ET, and the specific events that are detected and then timed are the signal and reference sine-wave crossings. The approach used to develop this subsystem is described and some results of experimental studies are given.

2014
Biggio, Battista, Rieck, Konrad, Ariu, Davide, Wressnegger, Christian, Corona, Igino, Giacinto, Giorgio, Roli, Fabio.  2014.  Poisoning Behavioral Malware Clustering. Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop. :27–36.
Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems. However, the suitability of clustering algorithms for security-sensitive settings has been recently questioned by showing that they can be significantly compromised if an attacker can exercise some control over the input data. In this paper, we revisit this problem by focusing on behavioral malware clustering approaches, and investigate whether and to what extent an attacker may be able to subvert these approaches through a careful injection of samples with poisoning behavior. To this end, we present a case study on Malheur, an open-source tool for behavioral malware clustering. Our experiments not only demonstrate that this tool is vulnerable to poisoning attacks, but also that it can be significantly compromised even if the attacker can only inject a very small percentage of attacks into the input data. As a remedy, we discuss possible countermeasures and highlight the need for more secure clustering algorithms.
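
As a toy illustration of this kind of poisoning (not Malheur or the paper's feature space; the 2-D points and the "bridge" strategy are assumptions), a few injected samples placed between two well-separated families can cause single-linkage clustering to merge them:

```python
# Toy illustration (not Malheur itself): with single-linkage clustering,
# a handful of injected "bridge" points between two malware families is
# enough to merge them into one cluster, which is the flavor of poisoning
# studied against behavioral clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
family_a = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
family_b = rng.normal(loc=5.0, scale=0.3, size=(50, 2))
bridge = np.linspace([0.0, 0.0], [5.0, 5.0], num=12)   # attacker-injected samples

def n_clusters(points, cut=1.0):
    """Number of flat clusters when the dendrogram is cut at distance `cut`."""
    return len(set(fcluster(linkage(points, method="single"), t=cut, criterion="distance")))

print(n_clusters(np.vstack([family_a, family_b])))           # two clusters
print(n_clusters(np.vstack([family_a, family_b, bridge])))   # collapses to one
```
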
Ananth, Prabhanjan, Gupta, Divya, Ishai, Yuval, Sahai, Amit.  2014.  Optimizing Obfuscation: Avoiding Barrington's Theorem. Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security. :646–658.

In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs – this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be O(ℓs^3.64), where s is the size of the Boolean formula to be obfuscated, and ℓ is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly ℓs, or only s when considering a keyed family of formulas, namely a class of functions of the form f_z(x) = φ(z, x), where φ is a formula of size s. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.

Shila, D.M., Venugopal, V..  2014.  Design, implementation and security analysis of Hardware Trojan Threats in FPGA. Communications (ICC), 2014 IEEE International Conference on. :719-724.

Hardware Trojan Threats (HTTs) are stealthy components embedded inside integrated circuits (ICs) with an intention to attack and cripple the IC, similar to viruses infecting the human body. Previous efforts have focused essentially on systems being compromised using HTTs and on the effectiveness of physical parameters including power consumption, timing variation and utilization for detecting HTTs. We propose a novel metric for hardware Trojan detection, coined the HTT detectability metric (HDM), that uses a weighted combination of normalized physical parameters. HTTs are identified by comparing the HDM with an optimal detection threshold; if the monitored HDM exceeds the estimated optimal detection threshold, the IC is tagged as malicious. As opposed to existing efforts, this work investigates both a system model from a designer perspective, aimed at increasing the security of the device, and an adversary model from an attacker perspective, exposing and exploiting the vulnerabilities in the device. Using existing Trojan implementations and Trojan taxonomy as a baseline, seven HTTs were designed and implemented on an FPGA testbed; these Trojans mount a variety of threats ranging from leaking sensitive information and denial of service to defeating the Root of Trust (RoT). Security analysis of the implemented Trojans showed that existing detection techniques based on physical characteristics such as power consumption, timing variation or utilization alone do not necessarily capture the existence of HTTs, and only a maximum of 57% of the designed HTTs were detected. On the other hand, 86% of the implemented Trojans were detected with HDM. We further carry out analytical studies to determine the optimal detection threshold that minimizes the sum of false alarm and missed detection probabilities.
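
A minimal sketch of the general HDM idea (the parameters, weights, and threshold below are illustrative assumptions, not values from the paper):

```python
# Hedged sketch of the general idea behind the HTT detectability metric (HDM):
# normalize each monitored physical parameter against its golden (Trojan-free)
# reference, combine with weights, and flag the design if the score exceeds a
# threshold. Weights, parameters, and threshold here are illustrative only.
def hdm(measured, golden, weights):
    """Weighted sum of normalized deviations from the golden reference."""
    return sum(w * abs(measured[k] - golden[k]) / golden[k]
               for k, w in weights.items())

golden   = {"power_mW": 120.0, "delay_ns": 4.0, "luts": 950}
measured = {"power_mW": 131.0, "delay_ns": 4.1, "luts": 990}
weights  = {"power_mW": 0.5, "delay_ns": 0.3, "luts": 0.2}

score = hdm(measured, golden, weights)
THRESHOLD = 0.04   # would be chosen to trade off false alarms vs. missed detections
print(score, "-> flagged as malicious" if score > THRESHOLD else "-> passes")
```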

Tunc, C., Fargo, F., Al-Nashif, Y., Hariri, S., Hughes, J..  2014.  Autonomic Resilient Cloud Management (ARCM) Design and Evaluation. Cloud and Autonomic Computing (ICCAC), 2014 International Conference on. :44-49.

Cloud computing is emerging as a new paradigm that aims to deliver computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that the security mechanisms are robust and resilient to faults and attacks. Securing cloud systems is extremely complex due to the many interdependent tasks such as application layer firewalls, alert monitoring and analysis, source code analysis, and user identity management. It is strongly believed that we cannot build cloud services that are immune to attacks. Resiliency to attacks is becoming an important approach to address cyber-attacks and mitigate their impacts; for mission-critical systems, even higher resiliency is demanded. In this paper, we present a methodology to develop an Autonomic Resilient Cloud Management (ARCM) based on moving target defense, cloud service Behavior Obfuscation (BO), and autonomic computing. By continuously and randomly changing the cloud execution environments and platform types, it becomes difficult, especially for insider attackers, to figure out the current execution environment and its existing vulnerabilities, thus allowing the system to evade attacks. We show how to apply ARCM to one class of applications, Map/Reduce, and evaluate its performance and overhead.

2015
Heckman, M. R., Schell, R. R., Reed, E. E..  2015.  A Multi-Level Secure File Sharing Server and Its Application to a Multi-Level Secure Cloud. MILCOM 2015 - 2015 IEEE Military Communications Conference. :1224–1229.
Contemporary cloud environments are built on low-assurance components, so they cannot provide a high level of assurance about the isolation and protection of information. A "multi-level" secure cloud environment thus typically consists of multiple, isolated clouds, each of which handles data of only one security level. Not only are such environments duplicative and costly, data "sharing" must be implemented by massive, wasteful copying of data from low-level domains to high-level domains. The requirements for certifiable, scalable, multi-level cloud security are threefold: 1) to have trusted, high-assurance components available for use in creating a multi-level secure cloud environment; 2) to design a cloud architecture that efficiently uses the high-assurance components in a scalable way; and 3) to compose the secure components within the scalable architecture while still verifiably maintaining the system security properties. This paper introduces a trusted, high-assurance file server and architecture that satisfies all three requirements. The file server is built on mature technology that was previously certified and deployed across domains from TS/SCI to Unclassified and that supports high-performance, low-to-high and high-to-low file sharing with verifiable security.
Haitzer, Thomas, Navarro, Elena, Zdun, Uwe.  2015.  Architecting for Decision Making About Code Evolution. Proceedings of the 2015 European Conference on Software Architecture Workshops. :52:1–52:7.

During software evolution, it is important to evolve not only the source code, but also its architecture, to prevent architecture drift and architecture erosion. This is a complex activity, especially for large software projects with multiple development teams that might be located in different countries or on different continents. To ease this kind of evolution, we have developed a domain-specific language for making decisions about the evolution. It supports the definition of architectural changes based on multiple implementation tasks that can have temporal dependencies among each other. Then, by means of a model-to-model transformation, we automatically create a constraint model that we use to generate, with the Alloy model analyzer, the possible alternative decisions for executing the implementation tasks. The tight integration with architecture abstractions enables architects to automatically check the changes related to an implementation task against the architecture description. This helps keep architecture and code in sync, avoiding drift and erosion.

Ahsan, Muhammad, Meter, Rodney Van, Kim, Jungsang.  2015.  Designing a Million-Qubit Quantum Computer Using a Resource Performance Simulator. J. Emerg. Technol. Comput. Syst.. 12:39:1–39:25.

The optimal design of a fault-tolerant quantum computer involves finding an appropriate balance between the burden of large-scale integration of noisy components and the load of improving the reliability of hardware technology. This balance can be evaluated by quantitatively modeling the execution of quantum logic operations on a realistic quantum hardware containing limited computational resources. In this work, we report a complete performance simulation software tool capable of (1) searching the hardware design space by varying resource architecture and technology parameters, (2) synthesizing and scheduling a fault-tolerant quantum algorithm within the hardware constraints, (3) quantifying the performance metrics such as the execution time and the failure probability of the algorithm, and (4) analyzing the breakdown of these metrics to highlight the performance bottlenecks and visualizing resource utilization to evaluate the adequacy of the chosen design. Using this tool, we investigate a vast design space for implementing key building blocks of Shor’s algorithm to factor a 1,024-bit number with a baseline budget of 1.5 million qubits. We show that a trapped-ion quantum computer designed with twice as many qubits and one-tenth of the baseline infidelity of the communication channel can factor a 2,048-bit integer in less than 5 months.

Lokesh, M. R., Kumaraswamy, Y. S..  2015.  Healing process towards resiliency in cyber-physical system: A modified danger theory based artificial immune recognization2 algorithm approach. 2015 IEEE International Conference on Computer Graphics, Vision and Information Security (CGVIS). :226–232.

The healing process plays a major role in developing resiliency in cyber-physical systems, where the environment is diverse in nature. The cyber-physical system is modelled with a multi-agent paradigm and a biologically inspired, Danger Theory based Artificial Immune Recognization2 Algorithm methodology for developing the healing process. The proposed methodology is implemented in a simulation environment, and convergence rates are reported to show the accuracy achieved by the healing process in restoring resiliency in the cyber-physical system environment.

Zhang, F., Chan, P. P. K., Tang, T. Q..  2015.  L-GEM based robust learning against poisoning attack. 2015 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR). :175–178.

A poisoning attack, in which an adversary misleads the learning process by manipulating its training set, significantly affects the performance of classifiers in security applications. This paper proposes a robust learning method which reduces the influence of attack samples on learning. The sensitivity, defined as the fluctuation of the output with small perturbation of the input, in the Localized Generalization Error Model (L-GEM) is measured for each training sample. The classifier's output on attack samples may be sensitive and inaccurate since these samples are different from other untainted samples. An importance score is assigned to each sample according to its localized generalization error bound. The classifier is then trained using a new training set obtained by resampling the samples according to their importance scores. An RBFNN is applied as the classifier in the experimental evaluation. The proposed model outperforms the traditional one under well-known label-flip poisoning attacks, including the nearest-first and farthest-first flip attacks.
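
A rough sketch of the resampling idea (this is not the paper's L-GEM formulation; the perturbation radius, stand-in classifier, and scoring rule are assumptions):

```python
# Rough sketch of the resampling idea (not the paper's L-GEM formulation):
# estimate each training sample's output sensitivity under small input
# perturbations, convert it into an importance score (sensitive samples are
# down-weighted as likely tainted), and resample the training set accordingly.
import numpy as np

def sensitivity(predict, x, n_perturb=20, radius=0.1, rng=None):
    """Mean output fluctuation of `predict` within a small ball around x."""
    rng = rng or np.random.default_rng()
    noise = rng.uniform(-radius, radius, size=(n_perturb, x.size))
    return np.mean(np.abs(predict(x + noise) - predict(x[None, :])))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
predict = lambda Z: np.tanh(Z @ np.array([1.0, -2.0, 0.5]))   # stand-in classifier

sens = np.array([sensitivity(predict, x, rng=rng) for x in X])
importance = 1.0 / (1e-6 + sens)                 # hypothetical scoring rule
probs = importance / importance.sum()
resampled_idx = rng.choice(len(X), size=len(X), replace=True, p=probs)
X_robust = X[resampled_idx]                      # the classifier would be retrained on this set
```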

Das, Subhasis, Aamodt, Tor M., Dally, William J..  2015.  Reuse Distance-Based Probabilistic Cache Replacement. ACM Trans. Archit. Code Optim.. 12:33:1–33:22.

This article proposes Probabilistic Replacement Policy (PRP), a novel replacement policy that evicts the line with minimum estimated hit probability under optimal replacement instead of the line with maximum expected reuse distance. The latter is optimal under the independent reference model of programs, which does not hold for last-level caches (LLC). PRP requires 7% and 2% metadata overheads in the cache and DRAM respectively. Using a sampling scheme makes DRAM overhead negligible, with minimal performance impact. Including detailed overhead modeling and equal cache areas, PRP outperforms SHiP, a state-of-the-art LLC replacement algorithm, by 4% for memory-intensive SPEC-CPU2006 benchmarks.
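
A much-simplified software sketch of the PRP idea (the reuse-distance histograms and hit-probability estimate below are hypothetical; the paper describes a hardware mechanism with sampled metadata):

```python
# Much-simplified sketch of the PRP idea (not the paper's hardware mechanism):
# each line's probability of a future hit is estimated from a reuse-distance
# distribution associated with the code that brought it in (hypothetical
# per-signature histograms below) and from how long the line has gone unused;
# on a miss, the victim is the line with the *lowest* estimated hit
# probability, rather than the one with the largest expected reuse distance.
import numpy as np

# Hypothetical reuse-distance histograms: "reused" lines tend to come back,
# "streaming" lines mostly never do. Index = reuse distance, value = P(distance).
REUSE_HIST = {
    "reused":    np.array([0.30, 0.25, 0.15, 0.10, 0.08, 0.05, 0.04, 0.03]),
    "streaming": np.array([0.10, 0.05, 0.03, 0.02, 0.00, 0.00, 0.00, 0.00]),
}

def hit_probability(signature, age):
    hist = REUSE_HIST[signature]
    remaining = hist[age:].sum() if age < len(hist) else 0.0
    never = 1.0 - hist.sum()                     # mass assigned to "never reused"
    return remaining / (remaining + never) if (remaining + never) > 0 else 0.0

class PRPSet:
    def __init__(self, ways=2):
        self.ways = ways
        self.lines = {}                          # tag -> (signature, age)

    def access(self, tag, signature):
        if tag in self.lines:
            self.lines[tag] = (signature, 0)     # hit: reset age
        else:
            if len(self.lines) >= self.ways:     # miss: evict min hit probability
                victim = min(self.lines,
                             key=lambda t: hit_probability(*self.lines[t]))
                del self.lines[victim]
            self.lines[tag] = (signature, 0)
        for t, (sig, age) in self.lines.items(): # remaining lines age by one access
            if t != tag:
                self.lines[t] = (sig, age + 1)

s = PRPSet(ways=2)
s.access("A", "reused")
s.access("B", "streaming")
s.access("C", "reused")   # evicts B: the streaming line has a lower hit probability
print(s.lines)
```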

Tan, Li, Chen, Zizhong, Song, Shuaiwen Leon.  2015.  Scalable Energy Efficiency with Resilience for High Performance Computing Systems: A Quantitative Methodology. ACM Trans. Archit. Code Optim.. 12:35:1–35:27.

The ever-growing performance of supercomputers brings demanding requirements for energy efficiency and resilience, due to the rapidly expanding size and duration in use of large-scale computing systems. Many application/architecture-dependent parameters that determine energy efficiency and resilience individually have causal effects on each other, which directly affect the trade-offs among performance, energy efficiency and resilience at scale. To enable high-efficiency management for large-scale High-Performance Computing (HPC) systems, quantitatively understanding the entangled effects among performance, energy efficiency, and resilience is thus required. While previous work focuses on exploring energy-saving and resilience-enhancing opportunities separately, little has been done to theoretically and empirically investigate the interplay between energy efficiency and resilience at scale. In this article, by extending Amdahl's Law and the Karp-Flatt Metric to take resilience into consideration, we quantitatively model the integrated energy efficiency in terms of performance per Watt and showcase the trade-offs among typical HPC parameters, such as number of cores, frequency/voltage, and failure rates. Experimental results for a wide spectrum of HPC benchmarks on two HPC systems show that the proposed models are accurate in extrapolating resilience-aware performance and energy efficiency, and capable of capturing the interplay among various energy-saving and resilience factors. Moreover, the models can help find the optimal HPC configuration for the highest integrated energy efficiency, in the presence of failures and applied resilience techniques.
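
A back-of-envelope sketch in the same spirit (a generic Amdahl-plus-resilience-plus-power model with made-up constants, not the paper's formulation) shows how such trade-offs can be explored:

```python
# Back-of-envelope sketch (a generic model, not the paper's exact formulation):
# combine an Amdahl-style speedup with a simple checkpoint/restart resilience
# overhead and a frequency-dependent power model to compare configurations by
# performance per Watt. All constants below are illustrative assumptions.
def perf_per_watt(cores, freq_ghz, parallel_frac=0.95,
                  fail_rate_per_hour=0.001, ckpt_overhead=0.05):
    speedup = 1.0 / ((1 - parallel_frac) + parallel_frac / cores)
    work_rate = speedup * freq_ghz
    # Resilience: failures scale with core count; each adds recovery work.
    resilience_penalty = 1.0 + ckpt_overhead + fail_rate_per_hour * cores * 0.01
    effective_rate = work_rate / resilience_penalty
    # Dynamic power ~ cores * f^3 (voltage scaled with frequency), plus static power.
    power_watts = cores * (0.5 * freq_ghz ** 3 + 0.2)
    return effective_rate / power_watts

for cores in (64, 256, 1024):
    for f in (1.2, 2.0, 2.8):
        print(cores, f, round(perf_per_watt(cores, f), 4))
```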

Mozaffari-Kermani, M., Sur-Kolay, S., Raghunathan, A., Jha, N. K..  2015.  Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare. IEEE Journal of Biomedical and Health Informatics. 19:1893–1905.

Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a diagnosis may have life-threatening consequences and could cause distrust. On the other hand, not only may a false diagnosis prompt users to distrust the machine-learning algorithm and even abandon the entire system but also such a false positive classification may cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.
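
As a toy illustration of the attack-and-countermeasure flavor described above (not the paper's algorithm-independent attack generator; the nearest-centroid classifier, flip fraction, and detection threshold are assumptions):

```python
# Illustrative toy only (not the paper's attack generator): flip the labels of
# class-0 training points lying closest to class 1 so that a simple
# nearest-centroid classifier over-predicts the targeted class, then apply the
# flavor of countermeasure the abstract describes: track per-class accuracy on
# a clean held-out set and flag a significant deviation.
import numpy as np

rng = np.random.default_rng(0)
Xtr = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
ytr = np.array([0] * 200 + [1] * 200)
Xva = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
yva = np.array([0] * 100 + [1] * 100)

def nearest_centroid_predict(X, y, Xq):
    c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return (np.linalg.norm(Xq - c1, axis=1) < np.linalg.norm(Xq - c0, axis=1)).astype(int)

def per_class_acc(y_true, y_pred):
    return [float((y_pred[y_true == c] == c).mean()) for c in (0, 1)]

clean_acc = per_class_acc(yva, nearest_centroid_predict(Xtr, ytr, Xva))

# Poison: flip the 40% of class-0 training points nearest the class-1 centroid.
c1 = Xtr[ytr == 1].mean(axis=0)
idx0 = np.where(ytr == 0)[0]
flip = idx0[np.argsort(np.linalg.norm(Xtr[idx0] - c1, axis=1))[:80]]
y_poisoned = ytr.copy()
y_poisoned[flip] = 1

poisoned_acc = per_class_acc(yva, nearest_centroid_predict(Xtr, y_poisoned, Xva))
print(clean_acc, poisoned_acc)
if max(a - b for a, b in zip(clean_acc, poisoned_acc)) > 0.05:
    print("accuracy deviation detected -> possible poisoning of the training set")
```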