Biblio

Found 4358 results

Filters: Keyword is Resiliency
2019-10-30
Loruenser, Thomas, Pöhls, Henrich C., Sell, Leon, Laenger, Thomas.  2018.  CryptSDLC: Embedding Cryptographic Engineering into Secure Software Development Lifecycle. Proceedings of the 13th International Conference on Availability, Reliability and Security. :4:1-4:9.

Application development for the cloud is already challenging because of the complexity caused by the ubiquitous, interconnected, and scalable nature of the cloud paradigm. But when modern secure and privacy-aware cloud applications require the integration of cryptographic algorithms, developers face additional challenges: an incorrect application of cryptography may not only lead to a loss of the intended strong security properties but may also open up additional loopholes for potential breaches in the near or far future. To avoid these pitfalls and to achieve dependable security and privacy by design, cryptography needs to be systematically designed into the software from the start. We present a system architecture providing a practical abstraction for the many specialists involved in such a development process, plus a suitable cryptographic software development life cycle methodology on top of the architecture. The methodology is complemented with additional tools supporting structured inter-domain communication and thus the generation of consistent results: cloud security and privacy patterns, and modelling of cloud service level agreements. We conclude with an assessment of the use of the Cryptographic Software Design Life Cycle (CryptSDLC) in an EU research project.

Jansen, Rob, Traudt, Matthew, Hopper, Nicholas.  2018.  Privacy-Preserving Dynamic Learning of Tor Network Traffic. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1944-1961.

Experimentation tools facilitate exploration of Tor performance and security research problems and allow researchers to safely and privately conduct Tor experiments without risking harm to real Tor users. However, researchers using these tools configure them to generate network traffic based on simplifying assumptions and outdated measurements and without understanding the efficacy of their configuration choices. In this work, we design a novel technique for dynamically learning Tor network traffic models using hidden Markov modeling and privacy-preserving measurement techniques. We conduct a safe but detailed measurement study of Tor using 17 relays (~2% of Tor bandwidth) over the course of 6 months, measuring general statistics and models that can be used to generate a sequence of streams and packets. We show how our measurement results and traffic models can be used to generate traffic flows in private Tor networks and how our models are more realistic than standard and alternative network traffic generation methods.
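
To illustrate the generate-from-a-learned-model idea, here is a minimal Python sketch that fits a first-order Markov chain over hypothetical packet-state labels and samples a synthetic sequence from it. The paper itself learns hidden Markov models from privacy-preserving measurements; this plain Markov chain and its toy state labels are stand-ins for exposition only.

    import random
    from collections import defaultdict

    def fit_markov_chain(sequence):
        # Estimate transition probabilities from one observed state sequence.
        counts = defaultdict(lambda: defaultdict(int))
        for cur, nxt in zip(sequence, sequence[1:]):
            counts[cur][nxt] += 1
        return {s: {t: c / sum(nxts.values()) for t, c in nxts.items()}
                for s, nxts in counts.items()}

    def sample_chain(transitions, start, length, seed=42):
        # Generate a synthetic state sequence by walking the fitted chain.
        rng = random.Random(seed)
        state, out = start, [start]
        for _ in range(length - 1):
            nxts = transitions.get(state)
            if not nxts:  # dead end: state only seen at the end of the trace
                break
            states, probs = zip(*nxts.items())
            state = rng.choices(states, weights=probs)[0]
            out.append(state)
        return out

    # Toy observation: direction/size classes from a hypothetical stream trace.
    observed = ["up_small", "down_large", "down_large", "up_small",
                "down_large", "down_small", "up_small", "down_large"]
    model = fit_markov_chain(observed)
    print(sample_chain(model, "up_small", length=10))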

Demoulin, Henri Maxime, Vaidya, Tavish, Pedisich, Isaac, DiMaiolo, Bob, Qian, Jingyu, Shah, Chirag, Zhang, Yuankai, Chen, Ang, Haeberlen, Andreas, Loo, Boon Thau et al..  2018.  DeDoS: Defusing DoS with Dispersion Oriented Software. Proceedings of the 34th Annual Computer Security Applications Conference. :712-722.

This paper presents DeDoS, a novel platform for mitigating asymmetric DoS attacks. These attacks are particularly challenging since even attackers with limited resources can exhaust the resources of well-provisioned servers. DeDoS offers a framework to deploy code in a highly modular fashion. If part of the application stack is experiencing a DoS attack, DeDoS can massively replicate only the affected component, potentially across many machines. This allows scaling of the impacted resource separately from the rest of the application stack, so that resources can be precisely added where needed to combat the attack. Our evaluation results show that DeDoS incurs reasonable overheads in normal operations, and that it significantly outperforms standard replication techniques when defending against a range of asymmetric attacks.

Dean, Andrew, Agyeman, Michael Opoku.  2018.  A Study of the Advances in IoT Security. Proceedings of the 2nd International Symposium on Computer Science and Intelligent Control. :15:1-15:5.

The Internet of Things (IoT) offers many benefits to our lives by removing menial tasks and improving the efficiency of everyday objects. However, users are trusting their personal data and device control to the manufacturers, and they may not be aware of how much risk they put their privacy at by sending their data over the internet. The IoT may not be as secure as one thinks: the devices used are constrained by many variables that attackers can exploit to gain access to data, devices, and anything connected to them, and because the IoT is all about connecting devices together, one weak point can be all it takes to gain full access. In this paper we look at the current advances in IoT security and the most efficient methods to protect IoT devices.

Belkin, Maxim, Haas, Roland, Arnold, Galen Wesley, Leong, Hon Wai, Huerta, Eliu A., Lesny, David, Neubauer, Mark.  2018.  Container Solutions for HPC Systems: A Case Study of Using Shifter on Blue Waters. Proceedings of the Practice and Experience on Advanced Research Computing. :43:1-43:8.

Software container solutions have revolutionized application development approaches by enabling lightweight platform abstractions within so-called "containers." Several solutions are being actively developed in attempts to bring the benefits of containers to high-performance computing systems, with their stringent security demands on the one hand and fundamental resource sharing requirements on the other. In this paper, we discuss the benefits and shortcomings of such solutions when deployed on real HPC systems and applied to production scientific applications. We highlight use cases that are either enabled by or significantly benefit from such solutions. We discuss the efforts of HPC system administrators and support staff to support users of these types of workloads on HPC systems not initially designed with them in mind, focusing on NCSA's Blue Waters system.

Madani, Pooria, Vlajic, Natalija.  2018.  Robustness of Deep Autoencoder in Intrusion Detection Under Adversarial Contamination. Proceedings of the 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security. :1:1-1:8.

The existing state-of-the-art in the field of intrusion detection systems (IDSs) generally involves some use of machine learning algorithms. However, the computer security community is growing increasingly aware that a sophisticated adversary could target the learning module of these IDSs in order to circumvent future detections. Consequently, going forward, robustness of machine-learning based IDSs against adversarial manipulation (i.e., poisoning) will be the key factor for the overall success of these systems in the real world. In our work, we focus on adaptive IDSs that use anomaly-based detection to identify malicious activities in an information system. To be able to evaluate the susceptibility of these IDSs to deliberate adversarial poisoning, we have developed a novel framework for their performance testing under adversarial contamination. We have also studied the viability of using deep autoencoders in the detection of anomalies in adaptive IDSs, as well as their overall robustness against adversarial poisoning. Our experimental results show that our proposed autoencoder-based IDS outperforms a generic PCA-based counterpart by more than 15% in terms of detection accuracy. The obtained results concerning the detection ability of the deep autoencoder IDS under adversarial contamination, compared to that of the PCA-based IDS, are also encouraging, with the deep autoencoder IDS maintaining more stable detection while limiting the contamination of its training dataset to just below 2%.
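
As a rough illustration of reconstruction-error-based anomaly detection (not the paper's deep autoencoder or its adversarial-contamination framework), the following sketch trains a small bottleneck network on synthetic "benign" feature vectors and flags inputs it reconstructs poorly; all data, network sizes, and thresholds here are made up.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    normal = rng.normal(0, 1, size=(500, 10))    # synthetic benign features
    anomalies = rng.normal(4, 1, size=(20, 10))  # synthetic deviating activity

    # "Autoencoder": a bottleneck MLP trained to reproduce its own input.
    ae = MLPRegressor(hidden_layer_sizes=(4,), activation="relu",
                      max_iter=2000, random_state=0)
    ae.fit(normal, normal)

    def reconstruction_error(model, X):
        # Mean squared error per sample between input and reconstruction.
        return np.mean((model.predict(X) - X) ** 2, axis=1)

    # Flag anything reconstructed worse than the 99th percentile of benign data.
    threshold = np.percentile(reconstruction_error(ae, normal), 99)
    flagged = np.sum(reconstruction_error(ae, anomalies) > threshold)
    print("flagged anomalies:", flagged, "/", len(anomalies))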

Bugeja, Joseph, Vogel, Bahtijar, Jacobsson, Andreas, Varshney, Rimpu.  2019.  IoTSM: An End-to-End Security Model for IoT Ecosystems. 2019 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). :267-272.

The Internet of Things (IoT) market is growing rapidly, allowing continuous evolution of new technologies. Alongside this development, most IoT devices are easy to compromise, as security is often not a prioritized characteristic. This paper proposes a novel IoT Security Model (IoTSM) that can be used by organizations to formulate and implement a strategy for developing end-to-end IoT security. IoTSM is grounded in the Software Assurance Maturity Model (SAMM) framework; however, it expands it with new security practices and empirical data gathered from IoT practitioners. Moreover, we generalize the model into a conceptual framework. This approach allows formal analysis of security in general and evaluation of an organization's security practices. Overall, our proposed approach can help researchers, practitioners, and IoT organizations discuss IoT security from an end-to-end perspective.

Jenkins, Ira Ray, Bratus, Sergey, Smith, Sean, Koo, Maxwell.  2018.  Reinventing the Privilege Drop: How Principled Preservation of Programmer Intent Would Prevent Security Bugs. Proceedings of the 5th Annual Symposium and Bootcamp on Hot Topics in the Science of Security. :3:1-3:9.

The principle of least privilege requires that components of a program have access to only those resources necessary for their proper function. Defining proper function is a difficult task. Existing methods of privilege separation, like Control Flow Integrity and Software Fault Isolation, attempt to infer proper function by bridging the gaps between language abstractions and hardware capabilities. However, it is programmer intent that defines proper function, as the programmer writes the code that becomes law. Codifying programmer intent into policy is a promising way to capture proper function; however, often onerous policy creation can unnecessarily delay development and adoption. In this paper, we demonstrate the use of our ELF-based access control (ELFbac), a novel technique for policy definition and enforcement. ELFbac leverages the common programmer's existing mental model of scope, and allows for policy definition at the Application Binary Interface (ABI) level. We consider the roaming vulnerability found in OpenSSH, and demonstrate how using ELFbac would have provided strong mitigation with minimal program modification. This serves to illustrate the effectiveness of ELFbac as a means of privilege separation in further applications, and the intuitive, yet robust nature of our general approach to policy creation.

Redmiles, Elissa M., Zhu, Ziyun, Kross, Sean, Kuchhal, Dhruv, Dumitras, Tudor, Mazurek, Michelle L..  2018.  Asking for a Friend: Evaluating Response Biases in Security User Studies. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1238-1255.

The security field relies on user studies, often including survey questions, to query end users' general security behavior and experiences, or hypothetical responses to new messages or tools. Self-report data has many benefits (ease of collection, control, and depth of understanding) but also many well-known biases stemming from people's difficulty remembering prior events or predicting how they might behave, as well as their tendency to shape their answers to a perceived audience. Prior work in fields like public health has focused on measuring these biases and developing effective mitigations; however, there is limited evidence as to whether and how these biases and mitigations apply specifically in a computer-security context. In this work, we systematically compare real-world measurement data to survey results, focusing on an exemplar, well-studied security behavior: software updating. We align field measurements about specific software updates (n=517,932) with survey results in which participants respond to the update messages that were used when those versions were released (n=2,092). This allows us to examine differences in self-reported and observed update speeds, as well as self-reported responses to particular message features that may correlate with these results. The results indicate that, for the most part, self-reported data varies consistently and systematically with measured data. However, this systematic relationship breaks down when survey respondents are required to notice and act on minor details of experimental manipulations. Our results suggest that many insights from self-report security data can, when used with care, translate to real-world environments; however, insights about specific variations in message texts or other details may be more difficult to assess with surveys.

2019-10-28
Kanewala, Thejaka Amila, Zalewski, Marcin, Lumsdaine, Andrew.  2018.  Distributed, Shared-Memory Parallel Triangle Counting. Proceedings of the Platform for Advanced Scientific Computing Conference. :5:1-5:12.

Triangles are the most basic non-trivial subgraphs. Triangle counting is used in a number of different applications, including social network mining, cyber security, and spam detection. In general, triangle counting algorithms are readily parallelizable, but when implemented in distributed, shared-memory environments, their performance is poor due to high communication, imbalance of work, and the difficulty of exploiting locality available in shared memory. In this paper, we discuss four different (but related) triangle counting algorithms and how their performance can be improved in distributed, shared-memory environments by reducing in-node load imbalance, improving cache utilization, minimizing network overhead, and minimizing algorithmic work. We generalize the four different triangle counting algorithms into a common framework and show that for all four algorithms the in-node load imbalance can be minimized while utilizing caches by partitioning work into blocks of vertices, the network overhead can be minimized by aggregation of blocks of work, and algorithmic work can be reduced by partitioning vertex neighbors by degree. We experimentally evaluate the weak and the strong scaling performance of the proposed algorithms with two types of synthetic graph inputs and three real-world graph inputs. We also compare the performance of our implementations with the distributed, shared-memory triangle counting algorithms available in PowerGraph-GraphLab and show that our proposed algorithms outperform those algorithms, both in terms of space and time.
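
For readers unfamiliar with the kernel being parallelized, here is a serial Python sketch of triangle counting via neighbor-set intersection with degree ordering, the same ordering the paper exploits when partitioning neighbors by degree; the distributed blocking and aggregation machinery is of course not shown.

    def count_triangles(edges):
        # Build undirected adjacency sets.
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        # Orient each edge from lower- to higher-(degree, id) endpoint so
        # every triangle is enumerated exactly once.
        rank = {v: (len(n), v) for v, n in adj.items()}
        out = {v: {w for w in n if rank[w] > rank[v]} for v, n in adj.items()}
        # A triangle is an oriented edge (u, v) plus a common out-neighbor.
        return sum(len(out[u] & out[v]) for u in out for v in out[u])

    # A 4-clique contains exactly 4 triangles.
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    print(count_triangles(edges))  # -> 4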

Zhai, Keke, Banerjee, Tania, Zwick, David, Hackl, Jason, Ranka, Sanjay.  2018.  Dynamic Load Balancing for Compressible Multiphase Turbulence. Proceedings of the 2018 International Conference on Supercomputing. :318-327.

CMT-nek is a new scientific application for performing high-fidelity predictive simulations of particle-laden, explosively dispersed turbulent flows. CMT-nek involves detailed simulations, is compute intensive, and is targeted to be deployed on exascale platforms. The moving particles are the main source of load imbalance as the application is executed on parallel processors. In a demonstration problem, all the particles are initially in a closed container until a detonation occurs and the particles move apart. If all processors get an equal share of the fluid domain, then only some of the processors get sections of the domain that are initially laden with particles, leading to disparate load on the processors. In order to eliminate load imbalance across processors and to shorten the makespan, we present different load balancing algorithms for CMT-nek on large-scale multicore platforms consisting of hundreds of thousands of cores. The load balancing algorithms are presented in detail, their performance is compared, and the associated overheads are analyzed. Evaluations of the application with and without load balancing show that load balancing speeds up the simulation by a factor of up to 9.97.
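
The abstract does not spell out the algorithms, so as a generic illustration of the makespan-reduction goal, here is a sketch of the classic longest-processing-time greedy heuristic, with hypothetical per-block particle counts standing in for CMT-nek's real load measure; the paper's actual algorithms are more elaborate.

    import heapq

    def lpt_assign(task_weights, n_procs):
        # Longest-processing-time greedy: hand each task, heaviest first,
        # to the currently least-loaded processor.
        heap = [(0.0, p) for p in range(n_procs)]  # (load, processor id)
        heapq.heapify(heap)
        assignment = {p: [] for p in range(n_procs)}
        for tid, w in sorted(enumerate(task_weights), key=lambda t: -t[1]):
            load, p = heapq.heappop(heap)
            assignment[p].append(tid)
            heapq.heappush(heap, (load + w, p))
        makespan = max(load for load, _ in heap)
        return assignment, makespan

    # Hypothetical particle counts per fluid block after the detonation.
    weights = [9, 7, 6, 5, 5, 4, 3, 1]
    assignment, makespan = lpt_assign(weights, n_procs=3)
    print(assignment, "makespan:", makespan)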

Withers, Alex, Bockelman, Brian, Weitzel, Derek, Brown, Duncan, Gaynor, Jeff, Basney, Jim, Tannenbaum, Todd, Miller, Zach.  2018.  SciTokens: Capability-Based Secure Access to Remote Scientific Data. Proceedings of the Practice and Experience on Advanced Research Computing. :24:1-24:8.

The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
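
The flavor of a capability-scoped token can be sketched with PyJWT. Note that real SciTokens are asymmetrically signed JWTs whose verification keys are discovered from the issuer; the shared-secret HS256 signing and all claim values below are illustrative only.

    import time
    import jwt  # PyJWT (pip install pyjwt)

    SECRET = "demo-signing-key"  # stand-in; real tokens use the issuer's key pair

    claims = {
        "iss": "https://demo.scitokens.org",          # hypothetical issuer
        "sub": "workflow-1234",
        "scope": "read:/ligo/frames write:/results",  # capabilities, not identity
        "exp": int(time.time()) + 600,                # short lifetime limits abuse
    }
    token = jwt.encode(claims, SECRET, algorithm="HS256")

    # A storage service verifies the signature (and expiry) and then checks
    # whether the requested operation is covered by the token's scopes.
    decoded = jwt.decode(token, SECRET, algorithms=["HS256"])
    assert "read:/ligo/frames" in decoded["scope"].split()
    print("granted scopes:", decoded["scope"])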

Blanquer, Ignacio, Meira, Wagner.  2018.  EUBra-BIGSEA, A Cloud-Centric Big Data Scientific Research Platform. 2018 48th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :47-48.

This paper describes the achievements of project EUBra-BIGSEA, which has delivered programming models and data analytics tools for the development of distributed Big Data applications. As framework components, multiple data models are supported (e.g. data streams, multidimensional data, etc.), along with efficient mechanisms to ensure privacy and security, on top of a QoS-aware layer for the smart and rapid provisioning of resources in a cloud-based environment.

Ocaña, Kary, Galheigo, Marcelo, Osthoff, Carla, Gadelha, Luiz, Gomes, Antônio Tadeu A., De Oliveira, Daniel, Porto, Fabio, Vasconcelos, Ana Tereza.  2019.  Towards a Science Gateway for Bioinformatics: Experiences in the Brazilian System of High Performance Computing. 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID). :638-647.

Science gateways bring out the possibility of reproducible science as they are integrated into reusable techniques, data and workflow management systems, security mechanisms, and high performance computing (HPC). We introduce BioinfoPortal, a science gateway that integrates a suite of different bioinformatics applications using HPC and data management resources provided by the Brazilian National HPC System (SINAPAD). BioinfoPortal follows the Software as a Service (SaaS) model and the web server is freely available for academic use. The goal of this paper is to describe the science gateway and its usage, addressing the challenges of designing a multiuser computational platform for parallel/distributed executions of large-scale bioinformatics applications using the Brazilian HPC resources. We also present a study of the performance and scalability of some bioinformatics applications executed in the HPC environments, and we perform machine learning analyses to predict HPC allocation/usage features that could improve the performance of the bioinformatics applications via BioinfoPortal.

Huang, Jingwei.  2018.  From Big Data to Knowledge: Issues of Provenance, Trust, and Scientific Computing Integrity. 2018 IEEE International Conference on Big Data (Big Data). :2197-2205.

This paper addresses the nature of data and knowledge, the relation between them, the variety of views as a characteristic of Big Data (data may come from many different sources and viewpoints), and the associated essential issues of data provenance, knowledge provenance, scientific computing integrity, and trust in the data science process. Moving towards data-intensive science and engineering, it is of paramount importance to ensure Scientific Computing Integrity (SCI). A failure of SCI may be caused by malicious attacks, natural environmental changes, faults of scientists, operational mistakes, faults of supporting systems, faults of processes, and errors in the data or theories on which a research relies. The complexity of scientific workflows and large provenance graphs, as well as the various causes of SCI failures, make ensuring SCI extremely difficult. Provenance and trust play a critical role in evaluating SCI. This paper reports our progress in building a model for provenance-based trust reasoning about SCI.

Trunov, Artem S., Voronova, Lilia I., Voronov, Vyacheslav I., Ayrapetov, Dmitriy P..  2018.  Container Cluster Model Development for Legacy Applications Integration in Scientific Software System. 2018 IEEE International Conference "Quality Management, Transport and Information Security, Information Technologies" (IT QM IS). :815-819.

A feature of modern scientific information systems is their integration with computing applications, providing distributed computer simulation and intelligent processing of Big Data using high-efficiency computing. Often these software systems include legacy applications in different programming languages, with non-standardized interfaces. To solve the problem of application integration, containerization systems are used that allow an environment to be configured in the shortest time to deploy the software system. However, there are no such systems for computer simulation systems with a large number of nodes. The article considers the actual task of combining containers into a cluster and integrating legacy applications to manage the distributed software system MD-SLAG-MELT v.14, which supports high-performance computing and visualization of computer experiment results. Testing results of the container cluster, including an automatic load-sharing module, for the MD-SLAG-MELT system v.14 are given.

2019-10-23
Ali, Abdullah Ahmed, Zamri Murah, Mohd.  2018.  Security Assessment of Libyan Government Websites. 2018 Cyber Resilience Conference (CRC). :1-4.

Many government organizations in Libya have started transferring traditional government services to e-government. These e-services will benefit a wide range of the public. However, deployment of e-government brings many new security issues. Attackers take advantage of vulnerabilities in these e-services and conduct cyber attacks that result in data loss, service interruptions, privacy loss, financial loss, and other significant losses. The number of vulnerabilities in e-services has increased due to the complexity of the e-services system, a lack of secure programming practices, misconfiguration of systems, web application vulnerabilities, or not staying up-to-date with security patches. Unfortunately, there is a lack of studies assessing the current security level of Libyan government websites. Therefore, this study aims to assess the current security of 16 Libyan government websites using a penetration testing framework. In this assessment, no exploits were committed or tried on the websites. The penetration testing framework (pen test) has five main phases: reconnaissance, scanning, enumeration, vulnerability assessment, and SSL encryption evaluation. The aim of a security assessment is to discover vulnerabilities that could be exploited by attackers. We also conducted a content analysis phase for all websites, in which we searched the government websites for information about the implementation of security and privacy policies; the aim is to determine whether the websites are aware of currently accepted standards for security and privacy. From our security assessment results of the 16 Libyan government websites, we compared the websites based on the number of vulnerabilities found and the level of security policies. We found only 9 websites with high and medium vulnerabilities. Many of these vulnerabilities are due to outdated software and systems, misconfiguration of systems, and not applying the latest security patches. These vulnerabilities could be used by cyber attackers to attack the systems and cause damage. We also found 5 websites that didn't implement any SSL encryption for data transactions. Lastly, only 2 websites had published security and privacy policies on their websites, which seems to indicate that the others were not concerned with current standards in security and privacy. Finally, we classify the 16 websites into 4 safety categories: highly unsafe, unsafe, somewhat unsafe, and safe. We found only 1 website with a highly unsafe ranking. Based on our findings, we conclude that the security level of the Libyan government websites is adequate but can be further improved. However, immediate actions need to be taken to mitigate possible cyber attacks by fixing the vulnerabilities and implementing SSL encryption. The websites also need to publish their security and privacy policies so that users can trust them.
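
The SSL-evaluation phase can be approximated without any exploitation, in the spirit of the study's passive assessment. A minimal sketch using only Python's standard library follows; the host name is a placeholder, and a production check would also verify protocol versions and cipher suites.

    import socket
    import ssl
    from datetime import datetime, timezone

    def check_tls(host, port=443, timeout=5):
        # Passive check: fetch the site's certificate via a normal TLS handshake.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        # notAfter has the documented form "Jun  1 12:00:00 2025 GMT".
        expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
        days_left = (expires.replace(tzinfo=timezone.utc)
                     - datetime.now(timezone.utc)).days
        issuer = dict(pair[0] for pair in cert["issuer"])
        return issuer.get("organizationName"), days_left

    print(check_tls("example.com"))  # placeholder host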

Madala, D S V, Jhanwar, Mahabir Prasad, Chattopadhyay, Anupam.  2018.  Certificate Transparency Using Blockchain. 2018 IEEE International Conference on Data Mining Workshops (ICDMW). :71-80.

The security of web communication via the SSL/TLS protocols relies on safe distribution of the public keys associated with web domains in the form of X.509 certificates. Certificate authorities (CAs) are trusted third parties that issue these certificates. However, the CA ecosystem is fragile and prone to compromises. Starting with Google's Certificate Transparency project, a number of research works have recently looked at adding transparency for better CA accountability, effectively through public logs of all certificates issued by certification authorities, to augment the current X.509 certificate validation process in SSL/TLS. In this paper, leveraging recent progress in blockchain technology, we propose a novel system, called CTB, that makes it impossible for a CA to issue a certificate for a domain without obtaining consent from the domain owner. We further equip CTB with a certificate revocation mechanism. We implement CTB using IBM's Hyperledger Fabric blockchain platform. CTB's smart contract, written in Go, is provided for complete reference.
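
As a toy model of the consent requirement (CTB's real contract is Go chaincode on Hyperledger Fabric, and real consent would be a public-key signature rather than the shared-secret HMAC used here), consider:

    import hashlib
    import hmac

    ledger = []  # append-only list standing in for the blockchain
    owner_keys = {"example.com": b"owner-secret"}  # registered at enrollment

    def owner_consent(domain, csr_fingerprint):
        # Domain owner authorizes one specific certificate request.
        return hmac.new(owner_keys[domain], csr_fingerprint.encode(),
                        hashlib.sha256).hexdigest()

    def issue_certificate(domain, csr_fingerprint, consent):
        # CA-side issuance is recorded only if the owner's consent verifies.
        expected = owner_consent(domain, csr_fingerprint)
        if not hmac.compare_digest(consent, expected):
            raise PermissionError("no valid consent from domain owner")
        ledger.append({"domain": domain, "csr": csr_fingerprint})
        return len(ledger) - 1  # position in the log

    tx = issue_certificate("example.com", "sha256:ab12cd34",
                           owner_consent("example.com", "sha256:ab12cd34"))
    print(ledger[tx])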

Szalachowski, Pawel.  2018.  (Short Paper) Towards More Reliable Bitcoin Timestamps. 2018 Crypto Valley Conference on Blockchain Technology (CVCBT). :101-104.

Bitcoin provides freshness properties by forming a blockchain where each block is associated with its timestamp and the previous block. Due to these properties, the Bitcoin protocol is being used as a decentralized, trusted, and secure timestamping service. Although Bitcoin participants that create new blocks cannot modify their order, they can manipulate timestamps almost undetected. This undermines the Bitcoin protocol as a reliable timestamping service. In particular, a newcomer that synchronizes the entire blockchain has little guarantee about the timestamps of all blocks. In this paper, we present a simple yet powerful mechanism that increases the reliability of Bitcoin timestamps. Our protocol can provide evidence that a block was created within a certain time range. The protocol is efficient, backward compatible, and, surprisingly, currently deployed SSL/TLS servers can act as reference time sources. The protocol has many applications and can be used for detecting various attacks against the Bitcoin protocol.
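
The slack the paper tightens comes from Bitcoin's own consensus rules, which only loosely bound a block's timestamp: it must exceed the median of the previous 11 blocks and not run more than two hours ahead of network-adjusted time. A sketch of that acceptance window (the epoch times are hypothetical):

    from statistics import median

    MAX_FUTURE_DRIFT = 2 * 60 * 60  # Bitcoin rejects blocks >2h ahead of network time

    def timestamp_is_plausible(block_time, prev_11_times, network_time):
        # Anything inside this window is consensus-valid, which is exactly
        # the manipulation room the paper's external attestations shrink.
        return median(prev_11_times) < block_time <= network_time + MAX_FUTURE_DRIFT

    prev = [1_500_000_000 + 600 * i for i in range(11)]  # hypothetical epoch times
    print(timestamp_is_plausible(prev[-1] + 300, prev, prev[-1] + 600))  # True
    print(timestamp_is_plausible(prev[0] - 600, prev, prev[-1] + 600))   # False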

Chen, Jing, Yao, Shixiong, Yuan, Quan, He, Kun, Ji, Shouling, Du, Ruiying.  2018.  CertChain: Public and Efficient Certificate Audit Based on Blockchain for TLS Connections. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications. :2060-2068.

In recent years, real-world attacks against PKI have taken place frequently. For example, malicious domains' certificates issued by compromised CAs are widespread, and revoked certificates are still trusted by clients. In spite of a lot of research on improving the security of SSL/TLS connections, some problems remain unsolved. On one hand, although log-based schemes provide a certificate audit service to quickly detect CAs' misbehavior, the security and data consistency of log servers are ignored. On the other hand, the checking of revoked certificates is neglected due to incomplete, insecure, and inefficient certificate revocation mechanisms. Further, existing revocation checking schemes are centralized, which brings safety bottlenecks. In this paper, we propose a blockchain-based public and efficient audit scheme for TLS connections, called CertChain. Specifically, we propose a dependability-rank-based consensus protocol for our blockchain system and a new data structure to support certificate forward traceability. Furthermore, we present a method that utilizes a dual counting Bloom filter (DCBF) to achieve economical space and efficient queries for certificate revocation checking while eliminating false positives. The security analysis and experimental results demonstrate that CertChain is suitable in practice with moderate overhead.
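
To give a concrete feel for the data structure behind the revocation check, here is a single counting Bloom filter in Python; CertChain's DCBF pairs two such filters so that false positives in one can be cancelled by the other, which this sketch does not reproduce.

    import hashlib

    class CountingBloomFilter:
        # Bloom filter with per-slot counters, so entries can also be removed,
        # e.g. when a revoked certificate expires and leaves the revocation set.
        def __init__(self, size=1024, hashes=4):
            self.size, self.hashes = size, hashes
            self.counts = [0] * size

        def _slots(self, item):
            # Derive `hashes` slot indices from salted SHA-256 digests.
            for i in range(self.hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item):
            for s in self._slots(item):
                self.counts[s] += 1

        def remove(self, item):
            for s in self._slots(item):
                self.counts[s] = max(0, self.counts[s] - 1)

        def might_contain(self, item):
            return all(self.counts[s] > 0 for s in self._slots(item))

    revoked = CountingBloomFilter()
    revoked.add("serial:0x1f2e3d")
    print(revoked.might_contain("serial:0x1f2e3d"))  # True
    print(revoked.might_contain("serial:0xabcdef"))  # False (almost surely)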

Lee, Hojoon, Song, Chihyun, Kang, Brent Byunghoon.  2018.  Lord of the X86 Rings: A Portable User Mode Privilege Separation Architecture on X86. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :1441-1454.

Modern applications often involve processing of sensitive information. However, the lack of privilege separation within the user space leaves sensitive application secrets, such as cryptographic keys, just as unprotected as a "hello world" string. Cutting-edge hardware-supported security features are being introduced. However, the features are often vendor-specific or lack compatibility with older generations of processors. The situation leaves developers with no portable solution for incorporating protection for sensitive application components. We propose LOTRx86, a fundamental and portable approach for user-space privilege separation. Our approach creates a more privileged user execution layer called PrivUser by harnessing the underused intermediate privilege levels on the x86 architecture. The PrivUser memory space, a set of pages within the process address space that are inaccessible to user mode, is a safe place for application secrets and the routines that access them. We implement the LOTRx86 ABI that exports the privcall interface to users to invoke secret-handling routines in PrivUser. This way, sensitive application operations that involve the secrets are performed in a strictly controlled manner. The memory access control in our architecture is privilege-based: accessing a protected application secret only requires a change in privilege level, eliminating the need for costly remote procedure calls or a change of address space. We evaluated our platform by developing a proof-of-concept LOTRx86-enabled web server that employs our architecture to securely access its private key during an SSL connection. We conducted a set of experiments, including performance measurements on the PoC on both Intel and AMD PCs, and confirmed that LOTRx86 incurs only a limited performance overhead.

Alshawish, Ali, Spielvogel, Korbinian, de Meer, Hermann.  2019.  A Model-Based Time-to-Compromise Estimator to Assess the Security Posture of Vulnerable Networks. 2019 International Conference on Networked Systems (NetSys). :1-3.

Several operational and economic factors impact the patching decisions of critical infrastructures. The constraints imposed by such factors could prevent organizations from fully remedying all of the vulnerabilities that expose their (critical) assets to risk. Therefore, an involved decision maker (e.g. security officer) has to strategically decide on the allocation of possible remediation efforts towards minimizing the inherent security risk. This, however, involves the use of comparative judgments to prioritize risks and remediation actions. Throughout this work, the security risk is quantified using the security metric Time-To-Compromise (TTC). Our main contribution is to provide a generic TTC estimator to comparatively assess the security posture of computer networks taking into account interdependencies between the network components, different adversary skill levels, and characteristics of (known and zero-day) vulnerabilities. The presented estimator relies on a stochastic TTC model and Monte Carlo simulation (MCS) techniques to account for the input data variability and inherent prediction uncertainties.
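
The MCS skeleton can be sketched in a few lines. Here the per-step compromise times are exponential with hypothetical means, whereas the paper's estimator conditions them on component interdependencies, adversary skill levels, and vulnerability characteristics.

    import random

    def simulate_ttc(attack_paths, n_runs=10_000, seed=1):
        # Monte Carlo estimate of expected time-to-compromise.
        # attack_paths: list of paths; each path is a list of mean step
        # durations (days). The attacker succeeds via whichever full
        # path completes first, so each run takes the minimum path time.
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_runs):
            total += min(sum(rng.expovariate(1.0 / m) for m in path)
                         for path in attack_paths)
        return total / n_runs

    # Hypothetical network: a short path through a weak DMZ host versus a
    # longer path through well-patched internal components.
    paths = [[5.0, 30.0], [15.0, 10.0, 20.0]]
    print(f"estimated TTC: {simulate_ttc(paths):.1f} days")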

Isaeva, N. A..  2018.  Choice of Control Parameters of Complex System on the Basis of Estimates of the Risks. 2018 Eleventh International Conference "Management of Large-Scale System Development" (MLSD). :1-4.

A method for choosing the control parameters of a complex system based on risk estimates is proposed. A procedure for calculating the risk estimates, intended for choosing rational control actions by allocating a group of controlling factors for a given criterion factor, is considered. The purpose of choosing the control parameters of the complex system is to minimize the estimated risk of the system's functioning by solving an extremum-search problem for a function of many variables. An example of choosing the controlling factors in the sphere of intangible assets is given.
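
Framed programmatically, the extremum search is ordinary bounded minimization. A sketch with a made-up quadratic risk functional standing in for the paper's calculated risk estimates:

    import numpy as np
    from scipy.optimize import minimize

    def risk_estimate(x):
        # Stand-in risk functional of two control parameters; in the paper's
        # setting this would be assembled from the calculated risk estimates.
        return (x[0] - 0.3) ** 2 + 2 * (x[1] - 0.7) ** 2 + 0.1 * x[0] * x[1]

    result = minimize(risk_estimate, x0=np.array([0.5, 0.5]),
                      bounds=[(0.0, 1.0), (0.0, 1.0)])
    print("risk-minimizing controls:", result.x, "risk:", result.fun)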

Zieger, Andrej, Freiling, Felix, Kossakowski, Klaus-Peter.  2018.  The β-Time-to-Compromise Metric for Practical Cyber Security Risk Estimation. 2018 11th International Conference on IT Security Incident Management & IT Forensics (IMF). :115-133.

To manage cybersecurity risks in practice, a simple yet effective method to assess such risks for individual systems is needed. With time-to-compromise (TTC), McQueen et al. (2005) introduced such a metric that measures the expected time that a system remains uncompromised given a specific threat landscape. Unlike other approaches that require complex system modeling to proceed, TTC combines simplicity with expressiveness and therefore has evolved into one of the most successful cybersecurity metrics in practice. We revisit TTC and identify several mathematical and methodological shortcomings, which we address by embedding all aspects of the metric in the continuous domain and by incorporating information about vulnerability characteristics and other cyber threat intelligence into the model. We propose β-TTC, a formal extension of TTC which includes information from CVSS vectors as well as a continuous attacker skill based on a β-distribution. We show that our new metric (1) remains simple enough for practical use and (2) gives more realistic predictions than the original TTC by using data from a modern and productively used vulnerability database of a national CERT.
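
A minimal sketch of the attacker-skill ingredient; the shape parameters and the skill-to-time mapping below are illustrative, not the paper's calibrated model.

    import numpy as np

    rng = np.random.default_rng(7)

    # Attacker skill s in [0, 1] drawn from a Beta distribution, as in β-TTC;
    # the shape parameters here are made up for illustration.
    skills = rng.beta(a=2.0, b=5.0, size=100_000)

    # Toy mapping: higher skill shrinks the expected compromise time, with the
    # time itself exponentially distributed (a stand-in for the real model).
    base_days = 40.0
    ttc = rng.exponential(base_days * (1.0 - skills))

    print(f"mean skill: {skills.mean():.2f}, mean TTC: {ttc.mean():.1f} days")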

Bahirat, Kanchan, Shah, Umang, Cardenas, Alvaro A., Prabhakaran, Balakrishnan.  2018.  ALERT: Adding a Secure Layer in Decision Support for Advanced Driver Assistance System (ADAS). Proceedings of the 26th ACM International Conference on Multimedia. :1984-1992.

With the ever-increasing popularity of LiDAR (Light Detection and Ranging) sensors, a wide range of applications such as vehicle automation and robot navigation are developed utilizing 3D LiDAR data. Many of these applications involve remote guidance of these vehicles and robots, either for safety or for task performance. Research studies have exposed vulnerabilities of using LiDAR data by considering different security attack scenarios. Considering the security risks associated with the improper behavior of these applications, it has become crucial to authenticate the 3D LiDAR data that highly influence the decision making in such applications. In this paper, we propose a framework, ALERT (Authentication, Localization, and Estimation of Risks and Threats), as a secure layer in the decision support system used in the navigation control of vehicles and robots. To start with, ALERT tamper-proofs 3D LiDAR data by employing an innovative mechanism for creating and extracting a dynamic watermark. Next, when tampering is detected (because of the inability to verify the dynamic watermark), ALERT carries out cross-modal authentication to localize the tampered region. Finally, ALERT estimates the level of risk and threat based on the temporal and spatial nature of the attacks on the LiDAR data. This estimation of risk and threats can then be incorporated into the decision support system used by an ADAS (Advanced Driver Assistance System). We carried out several experiments to evaluate the efficacy of ALERT for ADAS, and the experimental results demonstrate the effectiveness of the proposed approach.
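
ALERT's dynamic watermarking scheme is not spelled out in the abstract; as a stand-in, here is a simplified fragile watermark that hides bits in the parity of quantized x-coordinates, so any tampering that moves watermarked points is detectable on extraction. The quantization step and bit pattern are arbitrary choices for this sketch.

    import numpy as np

    def embed_watermark(points, bits, step=0.01):
        # Quantize each x-coordinate to `step` and force the parity of the
        # quantization index to encode one watermark bit.
        pts = points.copy()
        idx = np.round(pts[: len(bits), 0] / step).astype(np.int64)
        idx += (idx % 2) ^ bits          # adjust index so parity matches the bit
        pts[: len(bits), 0] = idx * step
        return pts

    def extract_watermark(points, n_bits, step=0.01):
        # Recover the bits from the parity of the quantization indices.
        idx = np.round(points[:n_bits, 0] / step).astype(np.int64)
        return (idx % 2).astype(np.int64)

    cloud = np.random.default_rng(3).uniform(-50, 50, size=(1000, 3))
    bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    marked = embed_watermark(cloud, bits)
    assert np.array_equal(extract_watermark(marked, len(bits)), bits)

    tampered = marked.copy()
    tampered[0, 0] += 0.013  # attacker nudges one point; parity breaks
    print(extract_watermark(tampered, len(bits)), "vs", bits)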