Biblio

Found 274 results

Filters: First Letter Of Title is L
Shen, Chao.  2020.  Laser-based high bit-rate visible light communications and underwater optical wireless network. 2020 Photonics North (PN). :1–1.
This talk presents an overview of the latest visible light communication (VLC) and underwater wireless optical communication (UWOC) research and development, from the device level to the system level. The utilization of laser-based devices and systems for LiFi and the underwater Internet of Things (IoT) is discussed.
Yao, Yuanshun, Li, Huiying, Zheng, Haitao, Zhao, Ben Y..  2019.  Latent Backdoor Attacks on Deep Neural Networks. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security. :2041–2055.

Recent work proposed the concept of backdoor attacks on deep neural networks (DNNs), where misclassification rules are hidden inside normal models, only to be triggered by very specific inputs. However, these "traditional" backdoors assume a context where users train their own models from scratch, which rarely occurs in practice. Instead, users typically customize "Teacher" models already pretrained by providers like Google, through a process called transfer learning. This customization process introduces significant changes to models and disrupts hidden backdoors, greatly reducing the actual impact of backdoors in practice. In this paper, we describe latent backdoors, a more powerful and stealthy variant of backdoor attacks that functions under transfer learning. Latent backdoors are incomplete backdoors embedded into a "Teacher" model and automatically inherited by multiple "Student" models through transfer learning. If any Student model includes the label targeted by the backdoor, its customization process completes the backdoor and makes it active. We show that latent backdoors can be quite effective in a variety of application contexts, and validate their practicality through real-world attacks against traffic sign recognition, iris identification of volunteers, and facial recognition of public figures (politicians). Finally, we evaluate 4 potential defenses, and find that only one is effective in disrupting latent backdoors, but it might incur a cost in classification accuracy as a tradeoff.
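
As a rough, hedged illustration of the general ingredient such attacks build on (not the authors' latent-backdoor embedding itself), the sketch below stamps a fixed trigger patch onto training images and relabels them toward a hypothetical target class; all names and parameters here are illustrative assumptions.

```python
# Toy backdoor-style data poisoning: stamp a trigger patch, relabel.
# Illustrative only; not the paper's latent-backdoor construction.
import numpy as np

def stamp_trigger(images, patch, x, y):
    """Overwrite a small region of every image with a fixed trigger patch."""
    poisoned = images.copy()
    ph, pw = patch.shape
    poisoned[:, y:y + ph, x:x + pw] = patch
    return poisoned

rng = np.random.default_rng(0)
images = rng.random((100, 32, 32))         # 100 hypothetical grayscale images
trigger = np.ones((4, 4))                  # a 4x4 white-square trigger
poisoned = stamp_trigger(images, trigger, x=28, y=28)
target_labels = np.full(len(poisoned), 7)  # hypothetical target class "7"
```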

McCulley, Shane, Roussev, Vassil.  2018.  Latent Typing Biometrics in Online Collaboration Services. Proceedings of the 34th Annual Computer Security Applications Conference. :66–76.

The use of typing biometrics—the characteristic typing patterns of individual keyboard users—has been studied extensively in the context of enhancing multi-factor authentication services. The key starting point for such work has been the collection of high-fidelity local timing data, and the key (implicit) security assumption has been that such biometrics could not be obtained by other means. We show that the latter assumption is false, and that it is entirely feasible to obtain useful typing biometric signatures from third-party timing logs. Specifically, we show that the logs produced by realtime collaboration services during their normal operation are of sufficient fidelity to successfully impersonate a user using remote data only. Since the logs are routinely shared as a byproduct of the services' operation, this creates an entirely new avenue of attack that few users would be aware of. As a proof of concept, we construct successful biometric attacks using only the log-based structure (complete editing history) of a shared Google Docs or Zoho Writer document, which is readily available to all contributing parties. Using the largest available public data set of typing biometrics, we are able to create successful forgeries 100% of the time against a commercial biometric service. Our results suggest that typing biometrics are not robust against practical forgeries, and should not be given the same weight as other authentication factors. Another important implication is that the routine collection of detailed timing logs by various online services also inherently (and implicitly) contains biometrics. This not only raises obvious privacy concerns, but may also undermine the effectiveness of network anonymization solutions, such as Tor, when used with existing services.
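
To make the attack surface concrete, here is a minimal sketch, under an assumed log format of (timestamp, character) insertions attributed to one user, of how per-user digraph latencies could be recovered from a collaboration service's edit history; it is not the authors' tooling.

```python
# Recover mean inter-keystroke (digraph) latencies from an edit log.
# Assumed log format: chronologically ordered (timestamp_ms, char) pairs.
from collections import defaultdict

def digraph_latencies(events):
    """Return the mean latency (ms) for each consecutive character pair."""
    sums, counts = defaultdict(float), defaultdict(int)
    for (t0, c0), (t1, c1) in zip(events, events[1:]):
        pair = c0 + c1
        sums[pair] += t1 - t0
        counts[pair] += 1
    return {pair: sums[pair] / counts[pair] for pair in sums}

log = [(0, 't'), (130, 'h'), (245, 'e'), (430, ' '), (555, 't'), (668, 'h')]
print(digraph_latencies(log))  # {'th': 121.5, 'he': 115.0, ...}
```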

Härtig, H., Roitzsch, M., Weinhold, C., Lackorzynski, A..  2017.  Lateral Thinking for Trustworthy Apps. 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS). :1890–1899.

The growing computerization of critical infrastructure as well as the pervasiveness of computing in everyday life has led to increased interest in secure application development. We observe a flurry of new security technologies like ARM TrustZone and Intel SGX, but a lack of a corresponding architectural vision. We are convinced that point solutions are not sufficient to address the overall challenge of secure system design. In this paper, we outline our take on a trusted component ecosystem of small individual building blocks with strong isolation. In our view, applications should no longer be designed as massive stacks of vertically layered frameworks, but instead as horizontal aggregates of mutually isolated components that collaborate across machine boundaries to provide a service. Lateral thinking is needed to build secure systems going forward.

Yan-Tao, Zhong.  2018.  Lattice Based Authenticated Key Exchange with Universally Composable Security. 2018 International Conference on Networking and Network Applications (NaNA). :86–90.

The Internet of Things (IoT) has experienced rapid development in recent years, while its security and privacy remain a major challenge. One of the main security goals for the IoT is to build secure and authenticated channels between IoT nodes. A common way to achieve this goal is to use an authenticated key exchange protocol. However, with the increasing progress of quantum computation, most authenticated key exchange protocols in use today are threatened by the rise of quantum computers. In this study, we address this problem by using a ring-SIS-based KEM and a hash function to construct an authenticated key exchange scheme, so that the scheme rests on lattice-based hard problems believed to be secure even against quantum attacks. We also prove the universally composable security of our scheme. The scheme can hence remain secure while running in complex environments.

Denning, Dorothy E..  1976.  A Lattice Model of Secure Information Flow. Commun. ACM. 19:236–243.

This paper investigates mechanisms that guarantee secure information flow in a computer system. These mechanisms are examined within a mathematical framework suitable for formulating the requirements of secure information flow among security classes. The central component of the model is a lattice structure derived from the security classes and justified by the semantics of information flow. The lattice properties permit concise formulations of the security requirements of different existing systems and facilitate the construction of mechanisms that enforce security. The model provides a unifying view of all systems that restrict information flow, enables a classification of them according to security objectives, and suggests some new approaches. It also leads to the construction of automatic program certification mechanisms for verifying the secure flow of information through a program.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper.

The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
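
The lattice model itself is easy to state concretely. Below is a small worked example: security classes are (level, category-set) pairs, information may flow from A to B only if A's level is at most B's and A's categories are a subset of B's, and the join (least upper bound) labels the result of combining two inputs. The class names are illustrative.

```python
# Denning-style lattice of security classes: flow check and join (lub).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def can_flow(a, b):
    """Information may flow a -> b iff a's class is dominated by b's."""
    (la, ca), (lb, cb) = a, b
    return LEVELS[la] <= LEVELS[lb] and ca <= cb

def join(a, b):
    """Least upper bound: labels data derived from both a and b."""
    (la, ca), (lb, cb) = a, b
    level = max(la, lb, key=LEVELS.get)
    return (level, ca | cb)

src = ("confidential", frozenset({"nuclear"}))
dst = ("secret", frozenset({"nuclear", "crypto"}))
print(can_flow(src, dst))                         # True
print(join(src, ("secret", frozenset({"crypto"}))))
# ('secret', frozenset({'nuclear', 'crypto'}))
```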

Howe, J., Moore, C., O'Neill, M., Regazzoni, F., Güneysu, T., Beeden, K..  2016.  Lattice-based Encryption Over Standard Lattices In Hardware. Proceedings of the 53rd Annual Design Automation Conference. :162:1–162:6.

Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum resilience, versatility, and relatively low key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus is a benchmark for future implementations. Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and is also the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising compared to hardware designs of the equivalent ring-LWE scheme; in addition to resting on stronger security assumptions, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
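
For orientation, the following toy sketch shows the standard-LWE encryption family the paper implements in hardware, in the style of Regev's scheme. It is only a sketch: the parameters are far too small to be secure, and a tiny uniform error stands in for the discrete Gaussian sampler the paper actually studies.

```python
# Toy standard-LWE bit encryption (Regev-style). Insecure toy parameters.
import numpy as np

q, n, m = 257, 8, 16
rng = np.random.default_rng(1)

A = rng.integers(0, q, (m, n))   # public matrix
s = rng.integers(0, q, n)        # secret key
e = rng.integers(-2, 3, m)       # small error (Gaussian in real schemes)
b = (A @ s + e) % q              # public key is (A, b)

def encrypt(bit):
    r = rng.integers(0, 2, m)    # random 0/1 combination of LWE samples
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    d = (v - u @ s) % q          # = r.e + bit*(q//2), with r.e small
    return int(min(d, q - d) > q // 4)

u, v = encrypt(1)
print(decrypt(u, v))             # 1
```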

del Pino, Rafael, Lyubashevsky, Vadim, Seiler, Gregor.  2018.  Lattice-Based Group Signatures and Zero-Knowledge Proofs of Automorphism Stability. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :574–591.

We present a group signature scheme, based on the hardness of lattice problems, whose outputs are more than an order of magnitude smaller than the currently most efficient schemes in the literature. Since lattice-based schemes are also usually non-trivial to efficiently implement, we additionally provide the first experimental implementation of lattice-based group signatures demonstrating that our construction is indeed practical – all operations take less than half a second on a standard laptop. A key component of our construction is a new zero-knowledge proof system for proving that a committed value belongs to a particular set of small size. The sets for which our proofs are applicable are exactly those that contain elements that remain stable under Galois automorphisms of the underlying cyclotomic number field of our lattice-based protocol. We believe that these proofs will find applications in other settings as well. The motivation of the new zero-knowledge proof in our construction is to allow the efficient use of the selectively-secure signature scheme (i.e. a signature scheme in which the adversary declares the forgery message before seeing the public key) of Agrawal et al. (Eurocrypt 2010) in constructions of lattice-based group signatures and other privacy protocols. For selectively-secure schemes to be meaningfully converted to standard signature schemes, it is crucial that the size of the message space is not too large. Using our zero-knowledge proofs, we can strategically pick small sets for which we can provide efficient zero-knowledge proofs of membership.

Gennaro, Rosario, Minelli, Michele, Nitulescu, Anca, Orrù, Michele.  2018.  Lattice-Based Zk-SNARKs from Square Span Programs. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :556–573.

Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short and efficiently verifiable proofs. They elegantly resolve the juxtaposition of individual privacy and public trust, by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. To this day, zk-SNARKs are being used for delegating computation, electronic cryptocurrencies, and anonymous credentials. However, all current SNARK implementations rely on pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades. In this work, we introduce the first designated-verifier zk-SNARK based on lattice assumptions, which are believed to be post-quantum secure. We provide a generalization in the spirit of Gennaro et al. (Eurocrypt'13) to the SNARK of Danezis et al. (Asiacrypt'14) that is based on Square Span Programs (SSPs) and relies on weaker computational assumptions. We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings. We provide a concrete choice of parameters as well as extensive benchmarks on a C implementation, showing that our construction is practically instantiable.

Lee, P., Tseng, C..  2019.  On the Layer Choice of the Image Style Transfer Using Convolutional Neural Networks. 2019 IEEE International Conference on Consumer Electronics - Taiwan (ICCE-TW). :1–2.

In this paper, the layer choices of the image style transfer method using the VGG-19 neural network are studied. The VGG-19 network is used to extract the feature maps, whose implicit meanings serve as a learning basis. If the layers for stylistic learning are not suitably chosen, the quality of the style-transferred image may suffer. Experiments show that color information is concentrated in the lower layers, from conv1-1 to conv2-2, and texture information is concentrated in the middle layers, from conv3-1 to conv4-4. The higher layers, from conv5-1 to conv5-4, appear to depict image content well. Based on these observations, methods for color transfer, texture transfer and style transfer are presented and compared with conventional methods.
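
A minimal PyTorch sketch of tapping the named layers is given below. The indices into torchvision's vgg19.features follow the standard VGG-19 layer ordering and a recent torchvision API; they are assumptions for illustration, not the authors' code.

```python
# Extract feature maps at named VGG-19 layers; Gram matrices give the
# style representation. Layer indices assume torchvision's ordering.
import torch
from torchvision import models

TAPS = {0: "conv1_1", 7: "conv2_2", 10: "conv3_1", 25: "conv4_4", 28: "conv5_1"}

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

def extract(img):
    """img: (1, 3, H, W) normalized tensor; returns feature maps at TAPS."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in TAPS:
            feats[TAPS[i]] = x
    return feats

def gram(feat):
    """Gram matrix of a (1, C, H, W) feature map, used for style."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

with torch.no_grad():
    feats = extract(torch.rand(1, 3, 224, 224))
style = {name: gram(f) for name, f in feats.items()}
```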

Ning, W., Zhi-Jun, L..  2018.  A Layer-Built Method to the Relevancy of Electronic Evidence. 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC). :416–420.

To combat cyber crimes, electronic evidence has played an increasing role, but in judicial practice electronic evidence has not been widely applied because of the natural contradiction between the epistemic uncertainty of electronic evidence and the principle of discretionary evidence of judges in court. In this paper, we put forward a layer-built method to analyze the relevancy of electronic evidence, and discuss the analytical process through a case study. Initial practice shows the model is feasible and has consulting value in analyzing the relevancy of electronic evidence.

Liu, C., Singhal, A., Wijesekera, D..  2017.  A Layered Graphical Model for Mission Attack Impact Analysis. 2017 IEEE Conference on Communications and Network Security (CNS). :602–609.

Business or military missions are supported by hardware and software systems. Unanticipated cyber activities occurring in supporting systems can impact such missions. In order to quantify such impact, we describe a layered graphical model as an extension of forensic investigation. Our model has three layers: the upper layer models the operational tasks that constitute the mission and their inter-dependencies; the middle layer reconstructs attack scenarios from available evidence and captures their inter-relationships; and, in cases where not all evidence is available, the lower layer reconstructs potentially missing attack steps. Using the three levels of graphs constructed in these steps, we present a method to compute the impacts of attack activities on missions. We use Common Vulnerability Scoring System (CVSS) scores from the NIST National Vulnerability Database (NVD), or forensic investigators' estimates, in our impact computations. We present a case study to show the utility of our model.
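
As a toy rendition of the layered computation (the aggregation rules here are illustrative assumptions, not the paper's exact formulas): attack steps carry CVSS base scores, systems aggregate the steps that touch them, and mission tasks aggregate the systems they depend on.

```python
# Toy impact propagation up a three-layer graph using CVSS base scores.
# All names, scores, and the max-aggregation rule are illustrative.
CVSS = {"exploit_web": 9.8, "pivot_db": 7.5}              # attack-step layer
SYSTEM_ATTACKS = {"webserver": ["exploit_web"], "db": ["pivot_db"]}
TASK_DEPS = {"process_orders": ["webserver", "db"]}        # mission layer

def system_impact(system):
    """A system is as compromised as its worst observed attack step."""
    steps = SYSTEM_ATTACKS.get(system, [])
    return max((CVSS[s] / 10.0 for s in steps), default=0.0)

def task_impact(task):
    """A task is impacted as much as its most-compromised dependency."""
    return max(system_impact(s) for s in TASK_DEPS[task])

print(task_impact("process_orders"))  # 0.98
```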

Esiner, Ertem, Datta, Anwitaman.  2016.  Layered Security for Storage at the Edge: On Decentralized Multi-factor Access Control. Proceedings of the 17th International Conference on Distributed Computing and Networking. :9:1–9:10.

In this paper we propose a protocol that allows end-users in a decentralized setup (without requiring any trusted third party) to protect data shipped to remote servers using two factors: knowledge (a password) and a portable possession factor (time-based one-time password generation for authentication). The protocol also supports revocation of a compromised possession factor and creation of a new one, provided the legitimate owner still has a copy of the old factor. Furthermore, akin to some other recent works, our approach naturally protects the outsourced data from the storage servers themselves, by application of encryption and dispersal of information across multiple servers. We also extend the basic protocol to demonstrate how collaboration can be supported even while the stored content is encrypted, with each collaborator still restrained from accessing the data except through a multi-factor access mechanism. Such techniques achieving layered security are crucial to (opportunistically) harness storage resources from untrusted entities.
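
One standard way to realize the portable possession factor such a protocol relies on is a time-based one-time password. The sketch below is a plain RFC 6238-style TOTP in Python, not the authors' protocol code.

```python
# Self-contained TOTP (RFC 6238 style): HMAC-SHA1 over the current time
# step, dynamically truncated to a short numeric code.
import hmac, hashlib, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"shared-seed"))  # e.g. '492039' (changes every 30 s)
```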

Zheng, Yuxin, Guo, Qi, Tung, Anthony K.H., Wu, Sai.  2016.  LazyLSH: Approximate Nearest Neighbor Search for Multiple Distance Functions with a Single Index. Proceedings of the 2016 International Conference on Management of Data. :2023–2037.

Due to the "curse of dimensionality" problem, it is very expensive to process the nearest neighbor (NN) query in high-dimensional spaces; hence, approximate approaches such as Locality-Sensitive Hashing (LSH) are widely used for their theoretical guarantees and empirical performance. Current LSH-based approaches target the L1 and L2 spaces, while, as shown in previous work, the fractional distance metrics (Lp metrics with 0 < p < 1) can provide more insightful results than the usual L1 and L2 metrics for data mining and multimedia applications. However, none of the existing work can support multiple fractional distance metrics using one index. In this paper, we propose LazyLSH, which answers approximate nearest neighbor queries for multiple Lp metrics with theoretical guarantees. Different from previous LSH approaches, which need to build one dedicated index for every query space, LazyLSH uses a single base index to support the computations in multiple Lp spaces, significantly reducing the maintenance overhead. Extensive experiments show that LazyLSH provides more accurate results for approximate kNN search under fractional distance metrics.
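
The p-stable projection hash underlying LSH approaches of this kind is compact enough to show directly. The sketch below gives the classic h(v) = ⌊(a·v + b)/w⌋ construction with a Gaussian (2-stable) projection, i.e. the L2 case; it is the building block, not LazyLSH's single-index reuse across Lp spaces.

```python
# Classic p-stable LSH hash function: nearby vectors tend to collide.
import numpy as np

class PStableLSH:
    def __init__(self, dim, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=dim)   # 2-stable (Gaussian) projection
        self.b = rng.uniform(0, w)      # random offset
        self.w = w                      # bucket width

    def hash(self, v):
        return int(np.floor((self.a @ v + self.b) / self.w))

h = PStableLSH(dim=16)
rng = np.random.default_rng(1)
v = rng.random(16)
print(h.hash(v), h.hash(v + 0.01))      # nearby points usually collide
```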

Tian, Dave Jing, Hernandez, Grant, Choi, Joseph I., Frost, Vanessa, Johnson, Peter C., Butler, Kevin R. B..  2019.  LBM: A Security Framework for Peripherals within the Linux Kernel. 2019 IEEE Symposium on Security and Privacy (SP). :967–984.

Modern computer peripherals are diverse in their capabilities and functionality, ranging from keyboards and printers to smartphones and external GPUs. In recent years, peripherals increasingly connect over a small number of standardized communication protocols, including USB, Bluetooth, and NFC. The host operating system is responsible for managing these devices; however, malicious peripherals can request additional functionality from the OS resulting in system compromise, or can craft data packets to exploit vulnerabilities within OS software stacks. Defenses against malicious peripherals to date only partially cover the peripheral attack surface and are limited to specific protocols (e.g., USB). In this paper, we propose Linux (e)BPF Modules (LBM), a general security framework that provides a unified API for enforcing protection against malicious peripherals within the Linux kernel. LBM leverages the eBPF packet filtering mechanism for performance and extensibility and we provide a high-level language to facilitate the development of powerful filtering functionality. We demonstrate how LBM can provide host protection against malicious USB, Bluetooth, and NFC devices; we also instantiate and unify existing defenses under the LBM framework. Our evaluation shows that the overhead introduced by LBM is within 1 μs per packet in most cases, application and system overhead is negligible, and LBM outperforms other state-of-the-art solutions. To our knowledge, LBM is the first security framework designed to provide comprehensive protection against malicious peripherals within the Linux kernel.

Inshi, S., Chowdhury, R., Elarbi, M., Ould-Slimane, H., Talhi, C..  2020.  LCA-ABE: Lightweight Context-Aware Encryption for Android Applications. 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.

Context-aware applications are becoming more readily available as a major driver of the growth of future connected smart, autonomous environments. However, with the increasing security risks in critical shared massive data capabilities and the increasing regulatory requirements on privacy, there is a significant need for new paradigms to manage security and privacy compliance. These challenges call for context-aware and fine-grained security policies to be enforced in such dynamic environments in order to achieve efficient real-time authorization between applications and connected devices. We propose in this work a novel solution that aims to provide a context-aware security model for Android applications. Specifically, our proposal provides an automated context-aware access control model and leverages Attribute-Based Encryption (ABE) to secure data communications. Thorough experiments have been performed, and the evaluation results demonstrate that the proposed solution provides an effective lightweight adaptable context-aware encryption model.

Pan, T., Xu, C., Lv, J., Shi, Q., Li, Q., Jia, C., Huang, T., Lin, X..  2019.  LD-ICN: Towards Latency Deterministic Information-Centric Networking. 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). :973–980.
Deterministic latency is the key challenge that must be addressed in numerous 5G applications such as AR/VR. However, it is difficult to make customized end-to-end resource reservations across multiple ISPs using IP-based QoS mechanisms. Information-Centric Networking (ICN) provides scalable and efficient content distribution at the Internet scale due to its in-network caching and native multicast capabilities, and deterministic latency can promisingly be guaranteed by caching the relevant content objects in appropriate locations. Existing proposals formulate the ICN cache placement problem as numerous theoretical models, but the underlying mechanisms to support such cache coordination are not discussed in detail. In particular, how can cache reservations be made efficiently, how can route oscillation be avoided when cached content is updated, and how can real-time latency measurement be conducted? In this work, we propose Latency Deterministic Information-Centric Networking (LD-ICN). LD-ICN relies on source routing-based latency telemetry and leverages an on-path caching technique to avoid frequent route oscillation while still achieving optimal cache placement under the SDN architecture. Extensive evaluation shows that under LD-ICN, 90.04% of the content requests are satisfied within the hard latency requirements.
Wang, Haiyan.  2019.  The LDPC Code and Rateless Code for Wireless Sensor Network. 2019 2nd International Conference on Safety Produce Informatization (IICSPI). :389–393.
This paper introduces the concept of a wireless sensor network and describes the encoding and decoding algorithms, along with the implementation, of LDPC codes and rateless codes. The performances of the two codes in a WSN environment are compared by simulation over a Rayleigh channel in MATLAB, and results and conclusions are derived from the simulation.
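
For readers unfamiliar with LDPC codes, the validity check at their core is simple to illustrate: a word is a valid codeword iff its syndrome H·cᵀ is zero mod 2. The toy parity-check matrix below is illustrative only, not the paper's code construction.

```python
# Minimal LDPC-style syndrome check over a toy (6, 3) parity-check matrix.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],      # each row is one parity constraint
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def syndrome(word):
    return (H @ word) % 2

codeword = np.array([1, 0, 1, 1, 1, 0])
print(syndrome(codeword))               # [0 0 0] -> valid codeword
received = codeword.copy()
received[2] ^= 1                        # flip one bit in the channel
print(syndrome(received))               # nonzero -> error detected
```
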
Abdessalem, Marwa Ben, Zribi, Amin, Matsumoto, Tadashi, Bouallègue, Ammar.  2018.  LDPC-based Joint Source-Channel-Network Coding for the Multiple Access Relay Channel. 2018 6th International Conference on Wireless Networks and Mobile Communications (WINCOM). :1–6.
In this work, we investigate the MARC (Multiple Access Relay Channel) setup, in which two Markov sources communicate to a single destination, aided by one relay, based on Joint Source-Channel-Network (JSCN) LDPC codes. The two source nodes compress their information sequences with an LDPC source code. The compressed symbols are directly transmitted to both the relay and the destination nodes in two transmission phases. The relay concatenates the received compressed sequences to obtain a recovered sequence, which is encoded with an LDPC channel code before being forwarded to the destination. At the receiver, we propose an iterative joint decoding algorithm that exploits the correlation between the two source-relay data streams and takes into account the errors occurring on the source-relay links to estimate the source data. We show, based on simulation results, that the JSCN coding and decoding scheme in a MARC setup achieves good performance, with a gain of about 5 dB compared to a conventional LDPC code.
Kwon, Yonghwi, Kim, Dohyeong, Sumner, William Nick, Kim, Kyungtae, Saltaformaggio, Brendan, Zhang, Xiangyu, Xu, Dongyan.  2016.  LDX: Causality Inference by Lightweight Dual Execution. Proceedings of the Twenty-First International Conference on Architectural Support for Programming Languages and Operating Systems. :503–515.

Causality inference, such as dynamic taint analysis, has many applications (e.g., information leak detection). It determines whether an event e is causally dependent on a preceding event c during execution. We develop a new causality inference engine, LDX. Given an execution, it spawns a slave execution, in which it mutates c and observes whether any change is induced at e. To preclude non-determinism, LDX couples the executions by sharing syscall outcomes. To handle path differences induced by the perturbation, we develop a novel on-the-fly execution alignment scheme that maintains a counter to reflect the progress of execution. The scheme relies on program analysis and compiler transformation. LDX can effectively detect information leaks and security attacks with an average overhead of 6.08% while running the master and the slave concurrently on separate CPUs, much lower than existing systems that require instruction-level monitoring. Furthermore, it has much better accuracy in causality inference.
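
LDX's core idea can be caricatured in a few lines: run a master and a mutated slave execution, and declare e causally dependent on c iff perturbing c changes what is observed at e. The sketch below omits all of LDX's system machinery (syscall coupling, on-the-fly alignment) and uses made-up event names.

```python
# Toy dual-execution causality inference: mutate c, compare observations at e.
def causally_depends(program, inputs, c_key, e_key, mutate):
    master = program(inputs)
    slave_inputs = dict(inputs, **{c_key: mutate(inputs[c_key])})
    slave = program(slave_inputs)
    return master[e_key] != slave[e_key]

def program(inp):
    # Hypothetical program: e1 depends on the secret; e2 does not.
    return {"e1": len(inp["secret"]) * 2, "e2": inp["public"] + 1}

inputs = {"secret": "hunter2", "public": 41}
print(causally_depends(program, inputs, "secret", "e1", lambda s: s + "x"))  # True
print(causally_depends(program, inputs, "secret", "e2", lambda s: s + "x"))  # False
```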

Zhang, H., Liu, H., Deng, L., Wang, P., Rong, X., Li, Y., Li, B., Wang, H..  2018.  Leader Recognition and Tracking for Quadruped Robots. 2018 IEEE International Conference on Information and Automation (ICIA). :1438–1443.

To meet the high requirements of human-machine interaction, quadruped robots with human recognition and tracking capability are studied in this paper. We first introduce a marker recognition system which uses a multi-thread laser scanner and retro-reflective markers to distinguish the robot's leader from other objects. When the robot follows the leader autonomously, a variant of the A* algorithm in which obstacle grids are virtually extended (EA*) is used to plan the path. If the robot instead needs to track and follow the leader's path as closely as possible, it trusts that the path the leader has traveled is safe enough and uses the incremental form of the EA* algorithm (IEA*) to reproduce the trajectory. The simulation and experiment results illustrate the feasibility and effectiveness of the proposed algorithms.
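
The EA* idea as described reduces to ordinary grid A* run over an obstacle map whose occupied cells are virtually extended by a safety margin before planning. The compact sketch below shows that structure; grid size, margin, and unit step costs are illustrative assumptions, not the paper's parameters.

```python
# Grid A* over virtually extended (inflated) obstacles, EA*-style.
import heapq

def inflate(obstacles, w, h, margin=1):
    """Virtually extend each obstacle cell by `margin` cells in all directions."""
    grown = set()
    for (x, y) in obstacles:
        for dx in range(-margin, margin + 1):
            for dy in range(-margin, margin + 1):
                if 0 <= x + dx < w and 0 <= y + dy < h:
                    grown.add((x + dx, y + dy))
    return grown

def astar(start, goal, obstacles, w, h):
    blocked = inflate(obstacles, w, h)
    frontier = [(0, start, [start])]            # (f-score, cell, path)
    seen = set()
    while frontier:
        _, cur, path = heapq.heappop(frontier)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        x, y = cur
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            nx, ny = nxt
            if 0 <= nx < w and 0 <= ny < h and nxt not in blocked:
                g = len(path)                   # unit step costs
                hcost = abs(nx - goal[0]) + abs(ny - goal[1])  # Manhattan
                heapq.heappush(frontier, (g + hcost, nxt, path + [nxt]))
    return None

print(astar((0, 0), (9, 9), {(4, y) for y in range(8)}, 10, 10))
```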

Shrishti, Burra, Manohar S., Maurya, Chanchal, Maity, Soumyadev.  2019.  Leakage Resilient Searchable Symmetric Encryption with Periodic Updation. 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI).

A searchable symmetric encryption (SSE) scheme allows a data owner to perform search queries over encrypted documents using symmetric cryptography. SSE schemes are useful in cloud storage and data outsourcing. Most of the SSE schemes in the existing literature have been shown to leak a substantial amount of information that can lead to an inference attack. This paper presents a novel leakage-resilient searchable symmetric encryption with periodic updation (LRSSEPU) scheme that minimizes extra information leakage and prevents an untrusted cloud server from performing document mapping attacks, query recovery attacks and other inference attacks. In particular, the size of the keyword vector is fixed, and the keywords are periodically permuted and updated to achieve minimum leakage. Furthermore, our proposed LRSSEPU scheme provides authentication of the query messages and restrains an adversary from performing replay attacks, forged query attacks and denial-of-service attacks. We employ a combination of identity-based cryptography (IBC) and symmetric key cryptography to reduce the computation cost and communication overhead. Our scheme is lightweight and easy to implement, with very little communication overhead.
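
For context, the basic SSE building block that schemes like this refine looks as follows: the index maps a keyed PRF of each keyword (the search token) to document ids, so the server can match tokens without learning the keywords. The sketch is generic, not LRSSEPU, and omits the paper's periodic permutation and query authentication.

```python
# Minimal SSE sketch: HMAC of a keyword acts as an opaque search token.
import hmac, hashlib

KEY = b"owner-secret-key"

def token(keyword: str) -> bytes:
    """Keyed PRF of a keyword; this is what the server sees, not the word."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

def build_index(docs):
    """docs: {doc_id: set of keywords}. The index is outsourced to the server."""
    index = {}
    for doc_id, words in docs.items():
        for word in words:
            index.setdefault(token(word), []).append(doc_id)
    return index

index = build_index({"d1": {"lattice", "kem"}, "d2": {"kem", "sse"}})
print(index.get(token("kem")))  # server-side lookup: ['d1', 'd2']
```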

Wang, Wenhao, Chen, Guoxing, Pan, Xiaorui, Zhang, Yinqian, Wang, XiaoFeng, Bindschaedler, Vincent, Tang, Haixu, Gunter, Carl A..  2017.  Leaky Cauldron on the Dark Land: Understanding Memory Side-Channel Hazards in SGX. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2421–2434.

Side-channel risks of Intel SGX have recently attracted great attention. Under the spotlight is the newly discovered page-fault attack, in which an OS-level adversary induces page faults to observe the page-level access patterns of a protected process running in an SGX enclave. With almost all proposed defenses focusing on this attack, little is known about whether such efforts indeed raise the bar for the adversary, or whether a simple variation of the attack renders all protection ineffective, not to mention an in-depth understanding of other attack surfaces in the SGX system. In this paper, we report the first step toward systematic analyses of side-channel threats that SGX faces, focusing on the risks associated with its memory management. Our research identifies 8 potential attack vectors, ranging from the TLB to DRAM modules. More importantly, we highlight the common misunderstandings about SGX memory side channels, demonstrating that highly frequent AEXs can be avoided when recovering an EdDSA secret key through a new page channel, and that fine-grained monitoring of enclave programs (at the level of 64B) can be done by combining both cache and cross-enclave DRAM channels. Our findings reveal the gap between the ongoing security research on SGX and its side-channel weaknesses, redefine the side-channel threat model for secure enclaves, and can provoke a discussion on when to use such a system and how to use it securely.

Koc, Ugur, Saadatpanah, Parsa, Foster, Jeffrey S., Porter, Adam A..  2017.  Learning a Classifier for False Positive Error Reports Emitted by Static Code Analysis Tools. Proceedings of the 1st ACM SIGPLAN International Workshop on Machine Learning and Programming Languages. :35–42.
The large scale and high complexity of modern software systems make perfectly precise static code analysis (SCA) infeasible. Therefore SCA tools often over-approximate, so as not to miss any real problems. This, however, comes at the expense of raising false alarms, which, in practice, reduces the usability of these tools. To partially address this problem, we propose a novel learning process whose goal is to discover program structures that cause a given SCA tool to emit false error reports, and then to use this information to predict whether a new error report is likely to be a false positive as well. To do this, we first preprocess code to isolate the locations that are related to the error report. Then, we apply machine learning techniques to the preprocessed code to discover correlations and to learn a classifier. We evaluated this approach in an initial case study of a widely-used SCA tool for Java. Our results showed that for our dataset we could accurately classify a large majority of false positive error reports. Moreover, we identified some common coding patterns that led to false positive errors. We believe that SCA developers may be able to redesign their methods to address these patterns and reduce false positive error reports.
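
A hedged sketch of such a pipeline: featurize the code surrounding each report and learn a classifier over labeled reports. The bag-of-tokens features and tiny made-up snippets below are stand-ins for the paper's program-structure features.

```python
# Learn to flag likely-false-positive static-analysis reports.
# Snippets, labels, and features are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

snippets = [
    "if (x != null) { x.close(); }",        # null-checked use
    "stream.read(buf); stream.close();",    # unchecked use
    "if (f != null) f.delete();",
    "conn.send(data); conn.close();",
]
labels = [1, 0, 1, 0]  # 1 = report on this code is a false positive

vec = CountVectorizer(token_pattern=r"[A-Za-z_]+|\S")  # keep punctuation tokens
X = vec.fit_transform(snippets)
clf = LogisticRegression().fit(X, labels)

report = "if (y != null) { y.close(); }"
print(clf.predict(vec.transform([report]))[0])  # likely 1 (false positive)
```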