Biblio
D
Dziembowski, Stefan, Faust, Sebastian, Standaert, François-Xavier.  2016.  Private Circuits III: Hardware Trojan-Resilience via Testing Amplification. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :142–153.

Security against hardware trojans is currently becoming an essential ingredient to ensure trust in information systems. A variety of solutions have been introduced to reach this goal, ranging from reactive (i.e., detection-based) to preventive (i.e., trying to make the insertion of a trojan more difficult for the adversary). In this paper, we show how testing (which is a typical detection tool) can be used to state concrete security guarantees for preventive approaches to trojan-resilience. For this purpose, we build on and formalize two important previous works which introduced "input scrambling" and "split manufacturing" as countermeasures to hardware trojans. Using these ingredients, we present a generic compiler that can transform any circuit into a trojan-resilient one, for which we can state quantitative security guarantees on the number of correct executions of the circuit thanks to a new tool denoted as "testing amplification". Compared to previous works, our threat model covers an extended range of hardware trojans while we stick with the goal of minimizing the number of honest elements in our transformed circuits. Since transformed circuits essentially correspond to redundant multiparty computations of the target functionality, they also allow reasonably efficient implementations, which can be further optimized if specialized to certain cryptographic primitives and security goals.

Dziembowski, Stefan, Eckey, Lisa, Faust, Sebastian.  2018.  FairSwap: How To Fairly Exchange Digital Goods. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :967-984.

We introduce FairSwap – an efficient protocol for fair exchange of digital goods using smart contracts. A fair exchange protocol allows a sender S to sell a digital commodity x for a fixed price p to a receiver R. The protocol is said to be secure if R only pays if he receives the correct x. Our solution guarantees fairness by relying on smart contracts executed over decentralized cryptocurrencies, where the contract takes the role of an external judge that completes the exchange in case of disagreement. While in the past there have been several proposals for building fair exchange protocols over cryptocurrencies, our solution has two distinctive features that make it particularly attractive when users deal with large commodities. These advantages are: (1) minimizing the cost for running the smart contract on the blockchain, and (2) avoiding expensive cryptographic tools such as zero-knowledge proofs. In addition to our new protocols, we provide formal security definitions for smart contract based fair exchange, and prove security of our construction. Finally, we illustrate several applications of our basic protocol and evaluate the practicality of our approach via a prototype implementation for fairly selling large files over the cryptocurrency Ethereum.
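As a rough illustration of the judge-contract idea described in the abstract, the Python sketch below simulates a minimal hash-locked exchange: the seller publishes a commitment to the decryption key, the receiver escrows the price, and payment is released only when a key matching the commitment is revealed. All class and method names and the simplified flow are invented for this illustration; FairSwap itself additionally supports compact proofs of misbehavior over the encrypted file, which this sketch omits.

```python
# Minimal simulation of the hash-lock core of a judge contract for fair
# exchange. Illustrative sketch only; not FairSwap's actual contract.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class JudgeContract:
    def __init__(self, key_commitment: bytes, price: int):
        self.key_commitment = key_commitment   # H(k), published by the seller
        self.price = price
        self.deposit = 0
        self.state = "created"

    def lock_payment(self, amount: int):
        # Receiver escrows the price after obtaining the encrypted file off-chain.
        assert amount == self.price and self.state == "created"
        self.deposit = amount
        self.state = "locked"

    def reveal_key(self, key: bytes) -> bool:
        # Seller reveals k; payment is released only if it matches the commitment.
        if self.state == "locked" and h(key) == self.key_commitment:
            self.state = "paid"
            return True   # deposit transferred to the seller
        return False

# Usage sketch
key = b"session-key"
contract = JudgeContract(key_commitment=h(key), price=10)
contract.lock_payment(10)
assert contract.reveal_key(key)
```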

Dyyak, Ivan, Horlatch, Vitaliy, Shynkarenko, Heorhiy.  2019.  Formulation and Numerical Analysis of Acoustics Problems in Coupled Thermohydroelastic Systems. 2019 XXIVth International Seminar/Workshop on Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED). :168–171.
The coupled thermohydroelastic processes of acoustic wave and heat propagation in weakly viscous fluids and elastic bodies form the basis of dissipative acoustics. The problems of dissipative acoustics have many applications in engineering practice, in particular in the development of appropriate medical equipment. This paper presents mathematical models for time- and frequency-domain problems in terms of unknown displacements and temperatures in both the fluid and the elastic body. The corresponding variational problems are formulated, and numerical schemes for their solution based on Galerkin approximations are constructed. A method for proving the well-posedness of the considered variational problems is proposed.
Wang, Dylan, Moh, Melody, Moh, Teng-Sheng.  2020.  Using Deep Learning to Solve Google reCAPTCHA v2’s Image Challenges.

The most popular CAPTCHA service in use today is Google reCAPTCHA v2, whose main offering is an image-based CAPTCHA challenge. This paper looks into the security measures used in reCAPTCHA v2's image challenges and proposes a deep learning-based solution that can be used to automatically solve them. The proposed method is tested with both a custom object-detection deep learning model and Google's own Cloud Vision API, in conjunction with human-mimicking mouse movements, to bypass the challenges. The paper also suggests some potential defense measures to increase overall security, as well as additional attack directions for reCAPTCHA v2.
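As a rough sketch of the tile-selection step such a solver performs, the snippet below assumes a caller-supplied `detector` returning label confidences per image tile (a placeholder for a custom model or a vision API) and picks the grid tiles whose score for the challenge keyword exceeds a threshold. The function names and threshold are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: `detector` stands in for any object-detection
# model or vision API returning {label: confidence} for one image tile.
from typing import Callable, Dict, List

def tiles_to_click(tiles: List[bytes], target_label: str,
                   detector: Callable[[bytes], Dict[str, float]],
                   threshold: float = 0.5) -> List[int]:
    """Return indices of grid tiles whose detected labels match the challenge keyword."""
    return [i for i, tile in enumerate(tiles)
            if detector(tile).get(target_label, 0.0) >= threshold]

# Usage with a dummy detector that "recognizes" a bus only in specific tiles
dummy = lambda tile: {"bus": 0.9} if tile == b"has-bus" else {"bus": 0.1}
print(tiles_to_click([b"empty", b"has-bus", b"empty", b"has-bus"], "bus", dummy))  # [1, 3]
```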

Dykstra, J..  2015.  Essential Cybersecurity Science: Build, Test, and Evaluate Secure Systems. :190.

If you’re involved in cybersecurity as a software developer, forensic investigator, or network administrator, this practical guide shows you how to apply the scientific method when assessing techniques for protecting your information systems. You’ll learn how to conduct scientific experiments on everyday tools and procedures, whether you’re evaluating corporate security systems, testing your own security product, or looking for bugs in a mobile game.

Once author Josiah Dykstra gets you up to speed on the scientific method, he helps you focus on standalone, domain-specific topics, such as cryptography, malware analysis, and system security engineering. The latter chapters include practical case studies that demonstrate how to use available tools to conduct domain-specific scientific experiments.

  • Learn the steps necessary to conduct scientific experiments in cybersecurity
  • Explore fuzzing to test how your software handles various inputs
  • Measure the performance of the Snort intrusion detection system
  • Locate malicious “needles in a haystack” in your network and IT environment
  • Evaluate cryptography design and application in IoT products
  • Conduct an experiment to identify relationships between similar malware binaries
  • Understand system-level security requirements for enterprise networks and web services
Dyer, K.P., Coull, S.E., Ristenpart, T., Shrimpton, T..  2012.  Peek-a-Boo, I Still See You: Why Efficient Traffic Analysis Countermeasures Fail. Security and Privacy (SP), 2012 IEEE Symposium on. :332-346.

We consider the setting of HTTP traffic over encrypted tunnels, as used to conceal the identity of websites visited by a user. It is well known that traffic analysis (TA) attacks can accurately identify the website a user visits despite the use of encryption, and previous work has looked at specific attack/countermeasure pairings. We provide the first comprehensive analysis of general-purpose TA countermeasures. We show that nine known countermeasures are vulnerable to simple attacks that exploit coarse features of traffic (e.g., total time and bandwidth). The considered countermeasures include ones like those standardized by TLS, SSH, and IPsec, and even more complex ones like the traffic morphing scheme of Wright et al. As just one of our results, we show that despite the use of traffic morphing, one can use only total upstream and downstream bandwidth to identify – with 98% accuracy – which of two websites was visited. One implication of what we find is that, in the context of website identification, it is unlikely that bandwidth-efficient, general-purpose TA countermeasures can ever provide the type of security targeted in prior work.
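A minimal sketch of the kind of coarse-feature attack described above: identify which of two websites was visited using only total upstream and downstream bandwidth, here with a nearest-centroid classifier over per-visit (up, down) byte counts. The training traces and the classifier choice are illustrative assumptions; this is not the authors' exact classifier.

```python
# Toy nearest-centroid website identification from total bandwidth only.
from statistics import mean

def centroid(traces):
    return (mean(t[0] for t in traces), mean(t[1] for t in traces))

def classify(trace, centroid_a, centroid_b):
    def sq_dist(c):
        return (trace[0] - c[0]) ** 2 + (trace[1] - c[1]) ** 2
    return "site A" if sq_dist(centroid_a) <= sq_dist(centroid_b) else "site B"

# Usage with toy numbers (bytes up, bytes down) per visit
site_a_train = [(12_000, 480_000), (11_500, 470_000)]
site_b_train = [(30_000, 1_200_000), (28_000, 1_150_000)]
print(classify((29_000, 1_180_000), centroid(site_a_train), centroid(site_b_train)))  # site B
```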

Dwork, Cynthia, Roth, Aaron.  2014.  The Algorithmic Foundations of Differential Privacy. Found. Trends Theor. Comput. Sci.. 9:211–407.

The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition. After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed. We then turn from fundamentals to applications other than query release, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams, is discussed. Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.
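As a worked example of one fundamental technique covered in the monograph, the sketch below implements the Laplace mechanism for a counting query: the query has sensitivity 1, so adding noise drawn from Laplace(1/epsilon) yields epsilon-differential privacy. The data set and epsilon value are illustrative only.

```python
# Laplace mechanism for a counting query (sensitivity 1).
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale): u uniform in (-1/2, 1/2).
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float) -> float:
    """Release a counting query with epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Usage: roughly how many records exceed 50, with epsilon = 0.1
data = [12, 55, 63, 40, 71, 90, 10]
print(private_count(data, lambda x: x > 50, epsilon=0.1))
```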

Dwivedi, A..  2018.  Implementing Cyber Resilient Designs through Graph Analytics Assisted Model Based Systems Engineering. 2018 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :607–616.
Model Based Systems Engineering (MBSE) adds efficiency during all phases of the design lifecycle. MBSE tools enforce design policies and rules to capture the design elements, inter-element relationships, and their attributes in a consistent manner. The system elements, and attributes are captured and stored in a centralized MBSE database for future retrieval. Systems that depend on computer networks can be designed using MBSE to meet cybersecurity and resilience requirements. At each step of a structured systems engineering methodology, decisions need to be made regarding the selection of architecture and designs that mitigate cyber risk and enhance cyber resilience. Detailed risk and decision analysis methods involve complex models and computations which are often characterized as a Big Data analytic problem. In this paper, we argue in favor of using graph analytic methods with model based systems engineering to support risk and decision analyses when engineering cyber resilient systems.
Duy, Phan The, Do Hoang, Hien, Thu Hien, Do Thi, Ba Khanh, Nguyen, Pham, Van-Hau.  2019.  SDNLog-Foren: Ensuring the Integrity and Tamper Resistance of Log Files for SDN Forensics using Blockchain. 2019 6th NAFOSTED Conference on Information and Computer Science (NICS). :416—421.

Despite bringing many benefits of global network configuration and control, Software Defined Networking (SDN) also presents potential challenges for both digital forensics and cybersecurity. In fact, there are various attacks targeting a range of vulnerabilities in vital elements of this paradigm, such as the controller and the Northbound and Southbound interfaces. In addition to security-enhancement solutions, it is important to build mechanisms for digital forensics in SDN which provide the ability to investigate and evaluate the security of the whole network system. Such mechanisms should support identifying, collecting and analyzing log files and detailed information about network devices and their traffic. However, upon penetrating a machine or device, hackers can edit, or even delete, log files to remove the evidence of their presence and actions in the system. In this case, securing log files with fine-grained access control in proper storage, without any modification, plays a crucial role in digital forensics and cybersecurity. This work proposes a blockchain-based approach to improve the security of log management in SDN for network forensics, called SDNLog-Foren. The model is evaluated with different experiments to show that it can help organizations keep sensitive log data of their network system secure even when some components of the SDN have been compromised.
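The core integrity idea, binding each log entry to a hash of its predecessor so that any later modification becomes detectable, can be illustrated with the minimal hash-chain sketch below. This is a generic simplification in Python, not SDNLog-Foren's actual blockchain design, access-control mechanism, or data format.

```python
# Tamper-evident logging via hash chaining (simplified illustration).
import hashlib
import json
import time

def entry_hash(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class HashChainedLog:
    def __init__(self):
        self.entries = []          # list of (record, hash) pairs
        self.prev_hash = "0" * 64  # genesis value

    def append(self, record: dict):
        h = entry_hash(self.prev_hash, record)
        self.entries.append((record, h))
        self.prev_hash = h

    def verify(self) -> bool:
        prev = "0" * 64
        for record, stored in self.entries:
            if entry_hash(prev, record) != stored:
                return False
            prev = stored
        return True

# Usage: append two events, then tamper with the first and re-verify
log = HashChainedLog()
log.append({"ts": time.time(), "event": "flow_mod", "switch": "s1"})
log.append({"ts": time.time(), "event": "packet_in", "switch": "s2"})
print(log.verify())                         # True
log.entries[0][0]["event"] = "tampered"
print(log.verify())                         # False
```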

Dutta, Raj Gautam, Yu, Feng, Zhang, Teng, Hu, Yaodan, Jin, Yier.  2018.  Security for Safety: A Path Toward Building Trusted Autonomous Vehicles. Proceedings of the International Conference on Computer-Aided Design. :92:1-92:6.

Automotive systems have always been designed with safety in mind. In this regard, the functional safety standard, ISO 26262, was drafted with the intention of minimizing risk due to random hardware faults or systematic failure in the design of electrical and electronic components of an automobile. However, the growing complexity of a modern car has added another potential point of failure in the form of cyber or sensor attacks. Recently, researchers have demonstrated that vulnerabilities in a vehicle's software or sensing units could enable them to remotely alter the intended operation of the vehicle. As such, in addition to safety, security should be considered an important design goal. However, designing security solutions without consideration of safety objectives could result in potential hazards. Consequently, in this paper we propose the notion of security for safety and show that by integrating safety conditions with our system-level security solution, which comprises a modified Kalman filter and a Chi-squared detector, we can prevent potential hazards that could occur due to violation of safety objectives during an attack. Furthermore, with the help of a car-following case study, where the follower car is equipped with an adaptive cruise control unit, we show that our proposed system-level security solution preserves the safety constraints and prevents collisions between vehicles while under sensor attack.
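The detection component mentioned above, a Chi-squared test on the residual between a sensor measurement and a filter's prediction, can be illustrated with the following sketch; the threshold and example numbers are placeholders, not values from the paper.

```python
# Chi-squared residual test on a single (covariance-normalized) innovation.
def chi_squared_alarm(measurement: float, predicted: float,
                      innovation_variance: float, threshold: float = 6.63) -> bool:
    """Return True if the measurement is flagged as anomalous.

    6.63 is roughly the 99th percentile of a chi-squared distribution
    with one degree of freedom.
    """
    residual = measurement - predicted
    statistic = residual * residual / innovation_variance
    return statistic > threshold

# Usage: a radar range reading far from the filter's prediction raises an alarm
print(chi_squared_alarm(measurement=42.0, predicted=30.0, innovation_variance=4.0))  # True
print(chi_squared_alarm(measurement=30.5, predicted=30.0, innovation_variance=4.0))  # False
```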

Dutta, R. G., Guo, Xiaolong, Zhang, Teng, Kwiat, K., Kamhoua, C., Njilla, L., Jin, Y..  2017.  Estimation of safe sensor measurements of autonomous system under attack. 2017 54th ACM/EDAC/IEEE Design Automation Conference (DAC). :1–6.
The introduction of automation in cyber-physical systems (CPS) has raised major safety and security concerns. One attack vector is the sensing unit, whose measurements can be manipulated by an adversary through attacks such as denial of service and delay injection. To secure an autonomous CPS from such attacks, we use a challenge response authentication (CRA) technique for detection of attacks on active sensor data and estimate safe measurements using the recursive least squares algorithm. To demonstrate the effectiveness of our proposed approach, a car-follower model is considered where the follower vehicle's radar sensor measurements are manipulated in an attempt to cause a collision.
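For illustration, a textbook scalar recursive least squares (RLS) update of the kind the abstract refers to might look like the sketch below; the forgetting factor, initial values, and sample data are assumptions for the example, not the authors' parameters.

```python
# Scalar recursive least squares estimator (textbook form, regressor = 1).
class ScalarRLS:
    def __init__(self, forgetting: float = 0.98, initial_estimate: float = 0.0,
                 initial_covariance: float = 1000.0):
        self.lam = forgetting
        self.x = initial_estimate    # current estimate
        self.p = initial_covariance  # estimation covariance

    def update(self, measurement: float) -> float:
        gain = self.p / (self.lam + self.p)
        self.x = self.x + gain * (measurement - self.x)
        self.p = (1.0 - gain) * self.p / self.lam
        return self.x

# Usage: smooth a noisy relative-distance signal containing one injected outlier
rls = ScalarRLS()
for z in [20.1, 19.8, 20.3, 19.9, 35.0, 20.2]:   # 35.0 stands in for an attacked sample
    print(round(rls.update(z), 2))
```
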
Dutt, Nikil, Jantsch, Axel, Sarma, Santanu.  2016.  Toward Smart Embedded Systems: A Self-aware System-on-Chip (SoC) Perspective. ACM Trans. Embed. Comput. Syst.. 15:22:1–22:27.

Embedded systems must address a multitude of potentially conflicting design constraints such as resiliency, energy, heat, cost, performance, security, etc., all in the face of highly dynamic operational behaviors and environmental conditions. By incorporating elements of intelligence, the hope is that the resulting “smart” embedded systems will function correctly and within desired constraints in spite of highly dynamic changes in the applications and the environment, as well as in the underlying software/hardware platforms. Since terms related to “smartness” (e.g., self-awareness, self-adaptivity, and autonomy) have been used loosely in many software and hardware computing contexts, we first present a taxonomy of “self-x” terms and use this taxonomy to relate major “smart” software and hardware computing efforts. A major attribute for smart embedded systems is the notion of self-awareness that enables an embedded system to monitor its own state and behavior, as well as the external environment, so as to adapt intelligently. Toward this end, we use a System-on-Chip perspective to show how the CyberPhysical System-on-Chip (CPSoC) exemplar platform achieves self-awareness through a combination of cross-layer sensing, actuation, self-aware adaptations, and online learning. We conclude with some thoughts on open challenges and research directions.

Dutson, Jonathan, Allen, Danny, Eggett, Dennis, Seamons, Kent.  2019.  Don't Punish all of us: Measuring User Attitudes about Two-Factor Authentication. 2019 IEEE European Symposium on Security and Privacy Workshops (EuroS PW). :119–128.
Two-factor authentication (2FA) defends against password compromise by a remote attacker. We surveyed 4,275 students, faculty, and staff at Brigham Young University to measure user sentiment about Duo 2FA one year after the university adopted it. The results were mixed. A majority of the participants felt more secure using Duo and felt it was easy to use. About half of all participants reported at least one instance of being locked out of their university account because of an inability to authenticate with Duo. We found that students and faculty generally had more negative perceptions of Duo than staff. The survey responses reveal some pain points for Duo users. In response, we offer recommendations that reduce the frequency of 2FA for users. We also suggest UI changes that draw more attention to 2FA methods that do not require WiFi, the "Remember Me" setting, and the help utility.
Duta, Ionut C., Ionescu, Bogdan, Aizawa, Kiyoharu, Sebe, Nicu.  2017.  Simple, Efficient and Effective Encodings of Local Deep Features for Video Action Recognition. Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval. :218–225.

For an action recognition system, a decisive component is the feature encoding part, which builds the final representation that serves as input to a classifier. One of the shortcomings of existing encoding approaches is that they are built around hand-crafted features and are not highly competitive at encoding the current deep features, which is necessary in many practical scenarios. In this work we propose two solutions specifically designed for encoding local deep features, taking advantage of the nature of deep networks and focusing on capturing the highest feature response of the convolutional maps. The proposed approaches for deep feature encoding provide a solution to encapsulate the features extracted with a convolutional neural network over the entire video. In terms of accuracy our encodings outperform by a large margin the most widely used and powerful current encoding approaches, while being extremely efficient in computational cost. Evaluated in the context of action recognition tasks, our pipeline obtains state-of-the-art results on three challenging datasets: HMDB51, UCF50 and UCF101.
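The idea of capturing the highest feature response of the convolutional maps can be illustrated by the simplified encoding below, which max-pools frame-level feature maps over space and time into one fixed-length video descriptor. The shapes and the exact pooling choice are illustrative assumptions rather than the paper's precise encodings.

```python
# Max-response encoding of frame-level CNN feature maps into a video descriptor.
import numpy as np

def max_response_encoding(frame_maps: np.ndarray) -> np.ndarray:
    """frame_maps: array of shape (num_frames, channels, height, width)."""
    per_frame = frame_maps.max(axis=(2, 3))   # strongest spatial response per channel
    return per_frame.max(axis=0)              # strongest response over time

# Usage with random data standing in for CNN feature maps
video_features = np.random.rand(30, 512, 7, 7)   # 30 frames of 512-channel 7x7 maps
descriptor = max_response_encoding(video_features)
print(descriptor.shape)   # (512,)
```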

Dürmuth, Markus, Oswald, David, Pastewka, Niklas.  2016.  Side-Channel Attacks on Fingerprint Matching Algorithms. Proceedings of the 6th International Workshop on Trustworthy Embedded Devices. :3–13.

Biometric authentication schemes are frequently used to establish the identity of a user. Often, a trusted hardware device is used to decide if a provided biometric feature is sufficiently close to the features stored by the legitimate user during enrollment. In this paper, we address the question whether the stored features can be extracted with side-channel attacks. We consider several models for types of leakage that are relevant specifically for fingerprint verification, and show results for attacks against the Bozorth3 and a custom matching algorithm. This work shows an interesting path for future research on the susceptibility of biometric algorithms towards side-channel attacks.

Durmus, Y., Langendoen, K..  2014.  Wifi authentication through social networks – A decentralized and context-aware approach. Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on. :532-538.

With the proliferation of WiFi-enabled devices, people expect to be able to use them everywhere, be it at work, while commuting, or when visiting friends. In the latter case, home owners are confronted with the burden of controlling the access to their WiFi router, and usually resort to simply sharing the password. Although convenient, this solution breaches basic security principles, and puts the burden on the friends who have to enter the password in each and every of their devices. The use of social networks, specifying the trust relations between people and devices, provides for a more secure and more friendly authentication mechanism. In this paper, we progress the state-of-the-art by abandoning the centralized solution to embed social networks in WiFi authentication; we introduce EAP-SocTLS, a decentralized approach for authentication and authorization of WiFi access points and other devices, exploiting the embedded trust relations. In particular, we address the (quadratic) search complexity when indirect trust relations, like the smartphone of a friend's kid, are involved. We show that the simple heuristic of limiting the search to friends and devices in physical proximity makes for a scalable solution. Our prototype implementation, which is based on WebID and EAP-TLS, uses WiFi probe requests to determine the pool of neighboring devices and was shown to reduce the search time from 1 minute for the naive policy down to 11 seconds in the case of granting access over an indirect friend.

Durgapu, Swetha, Kiran, L. Venkateshwara, Madhavi, Valli.  2019.  A Novel Approach on Mobile Devices Fast Authentication and Key Agreement. 2019 International Conference on Vision Towards Emerging Trends in Communication and Networking (ViTECoN). :1–4.
Device-to-device authentication and key agreement are commonly used for cell phones and the Internet of Things. Authentication and key agreement establish a trusted connection between two devices. Existing approaches often rely on a pre-built database and suffer from a low key generation rate. We present GeneWave, a fast device authentication and key agreement protocol for commodity mobile phones. GeneWave first achieves bidirectional initial authentication based on the physical response interval between two devices. To avoid the effect of timing uncertainty, we eliminate time variation on commodity devices through fast signal detection and redundancy time cancellation. We then derive the initial acoustic channel response for device authentication. We design a novel coding scheme for efficient key agreement while ensuring security. As a result, two devices can authenticate each other and securely agree on a symmetric key.
Durdi, Vinod B., Kulkarni, P. T., Sudha, K. L..  2016.  Cross Layer Approach Energy Efficient Transmission of Multimedia Data over Wireless Sensor Networks. Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. :85:1–85:6.

Multimedia transmission in wireless multimedia sensor networks is often energy constrained. In practice, the bit rates resulting from all multimedia digitization formats are substantially larger than the bit rates of the transmission channels available in the networks associated with these applications. For efficient storage and transmission of the content, the popular compression technique MPEG4/H.264 is used. To achieve better coding efficiency, MPEG4/H.264 video streaming employs several techniques that increase the computational complexity at the encoder, which is a concern for wireless sensor network devices with limited power. In this paper we propose a framework for reducing the energy consumed in transmission over wireless networks so that a well-balanced quality of service (QoS) in the multimedia network can be maintained. The experimental results demonstrate the effectiveness of the proposed approach in terms of energy efficiency in wireless sensor networks, where energy is the critical parameter.

Durbeck, Lisa J. K., Athanas, Peter M., Macias, Nicholas J..  2014.  Secure-by-construction Composable Componentry for Network Processing. Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. :27:1–27:2.

Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions, at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data[3]. This work presents a secure-by-construction approach [7] for the upper-end data streams with rates from 10- to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed specialized high-speed stream processing. This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions can't attain the target rates[1]. GPUs and GPGPUs are also infeasible: they were not designed for I/O at 10-100Gbps; they also have asymmetric resources for input and output and thus cannot be pipelined[4, 2], whereas custom chip-based solutions are costly and inflexible to changes, and FPGA-based solutions are historically hard to program[6]; (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly ones that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates; (3) There is potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery on the data source or one hop away; (4) There is a need for agile deployment in response to changing objectives; (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified processing for low-latency, low-power 10-100Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on design of the two lowest layers, highlighted in the design detail in Figure 1. The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100Gbps, capture both the low- and high frequency components of a signal or stream, analyze them directly, and pass the lower frequency components, residues to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing. 
Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series[5]. Examples of tasks accomplished at the lowest layer are: (a) check to see if Field 3 of the packet is greater than 5, or (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest latency, highest throughput processing, analysis and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 is comprised of a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner is comprised of state logic and soft-core microprocessors. Layer 3 runs in software on a host machine, and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of light-weight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit the right, and relevant facts are passed upward to Layer 2. The output of the end of the chain in Layer 1 shown in the Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent pipe operations), or (c) passed to another identical processor, for extended processing on a given stream (scalability).

Durante, L., Seno, L., Valenza, F., Valenzano, A..  2017.  A model for the analysis of security policies in service function chains. 2017 IEEE Conference on Network Softwarization (NetSoft). :1–6.

Two emerging architectural paradigms, i.e., Software Defined Networking (SDN) and Network Function Virtualization (NFV), enable the deployment and management of Service Function Chains (SFCs). An SFC is an ordered sequence of abstract Service Functions (SFs), e.g., firewalls, VPN gateways, traffic monitors, that packets have to traverse on the route from source to destination. While this appealing solution offers significant advantages in terms of flexibility, it also introduces new challenges such as the correct configuration and ordering of SFs in the chain to satisfy overall security requirements. This paper presents a formal model conceived to enable the verification of correct policy enforcement in SFCs. Software tools based on the model can then be designed to cope with unwanted network behaviors (e.g., security flaws) deriving from incorrect interactions of SFs of the same SFC.
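As a toy illustration of one class of properties such a model can check, the sketch below verifies that a chain's ordering of service functions satisfies a set of "X must precede Y" constraints. The constraint format and function names are invented for this example and are not drawn from the paper's formal model.

```python
# Check ordering constraints over a service function chain (toy example).
from typing import List, Tuple

def violated_ordering(chain: List[str], constraints: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Return the (before, after) constraints that the chain violates."""
    position = {sf: i for i, sf in enumerate(chain)}
    return [(before, after) for before, after in constraints
            if before in position and after in position
            and position[before] > position[after]]

# Usage
chain = ["traffic_monitor", "vpn_gateway", "firewall"]
constraints = [("firewall", "vpn_gateway")]   # the firewall must precede the VPN gateway
print(violated_ordering(chain, constraints))  # [('firewall', 'vpn_gateway')]
```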

Đ
Đuranec, A., Gruičić, S., Žagar, M..  2020.  Forensic analysis of Windows 10 Sandbox. 2020 43rd International Convention on Information, Communication and Electronic Technology (MIPRO). :1224—1229.

With each Windows operating system, Microsoft introduces new features to its users. Newly added features present a challenge to digital forensics examiners as they are not analyzed or tested enough. One of the latest features, introduced in Windows 10 version 1909, is Windows Sandbox: a lightweight, temporary environment for running untrusted applications. Because of the temporary nature of the Sandbox and insufficient documentation, digital forensic examiners face new challenges when examining this newly added feature, which can be used to hide illegal activities. Throughout this paper, the focus is on analyzing, with various tools, the different Windows artifacts and event logs left behind as a result of user interaction with the Sandbox feature in a clean virtual environment. Additionally, the setup of the testing environment is explained, the results of testing and the interpretation of the findings are presented, and the open-source tools used for the analysis are described.

D
Durand, Arnaud, Gremaud, Pascal, Pasquier, Jacques.  2017.  Decentralized Web of Trust and Authentication for the Internet of Things. Proceedings of the Seventh International Conference on the Internet of Things. :27:1–27:2.

As the Internet of Things (IoT) matures, a lot of concerns are being raised about security, privacy and interoperability. The Web of Things (WoT) model leverages web technologies to improve interoperability. Due to its distributed components, the web scaled well beyond initial expectations. Still, secure authentication and communication across organization boundaries rely on the Public Key Infrastructure (PKI), which is a non-transparent, centralized single point of failure. We can improve transparency and reduce the chain of trust, thus significantly improving IoT security, by employing blockchain technology and web security standards. In this paper, we build a scalable, decentralized IoT-centric PKI and discuss how we can combine it with the emerging web authentication and authorization framework for constrained environments.

Durak, F. Betül, DuBuisson, Thomas M., Cash, David.  2016.  What Else is Revealed by Order-Revealing Encryption? Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1155–1166.

The security of order-revealing encryption (ORE) has been unclear since its invention. Dataset characteristics for which ORE is especially insecure have been identified, such as small message spaces and low-entropy distributions. On the other hand, properties like one-wayness on uniformly-distributed datasets have been proved for ORE constructions. This work shows that more plaintext information can be extracted from ORE ciphertexts than was previously thought. We identify two issues: First, we show that when multiple columns of correlated data are encrypted with ORE, attacks can use the encrypted columns together to reveal more information than prior attacks could extract from the columns individually. Second, we apply known attacks, and develop new attacks, to show that the leakage of concrete ORE schemes on non-uniform data leads to more accurate plaintext recovery than is suggested by the security theorems which only dealt with uniform inputs.
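The basic intuition behind such leakage-abuse results can be illustrated with a simple rank-matching sketch: because ORE ciphertexts expose order, an attacker with auxiliary knowledge of the plaintext distribution can map each ciphertext's rank to a quantile of that distribution. This is a generic illustration of the idea, not one of the paper's specific attacks, and the data are toy values.

```python
# Rank-matching attack sketch against order-revealing encryption.
def rank_matching_attack(ciphertexts, auxiliary_sample):
    """ciphertexts: values comparable in plaintext order (as ORE permits).
    auxiliary_sample: plaintexts drawn from a similar population."""
    aux_sorted = sorted(auxiliary_sample)
    n, m = len(ciphertexts), len(aux_sorted)
    order = sorted(range(n), key=lambda i: ciphertexts[i])
    guesses = [None] * n
    for rank, idx in enumerate(order):
        quantile = rank / max(n - 1, 1)
        guesses[idx] = aux_sorted[round(quantile * (m - 1))]
    return guesses

# Usage: ages encrypted with an order-revealing scheme (the integers below
# stand in for opaque tokens whose order matches plaintext order)
ciphertexts = [305, 120, 290, 150]
auxiliary = [21, 24, 30, 33, 37, 41, 52]    # attacker's auxiliary age data
print(rank_matching_attack(ciphertexts, auxiliary))
```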

Duraisamy, Karthi, Lu, Hao, Pande, Partha Pratim, Kalyanaraman, Ananth.  2017.  Accelerating Graph Community Detection with Approximate Updates via an Energy-Efficient NoC. Proceedings of the 54th Annual Design Automation Conference 2017. :89:1–89:6.

Community detection is an advanced graph operation that is used to reveal tightly-knit groups of vertices (aka. communities) in real-world networks. Given the intractability of the problem, efficient heuristics are used in practice. Yet, even the best of these state-of-the-art heuristics can become computationally demanding over large inputs and can generate workloads that exhibit inherent irregularity in data movement on manycore platforms. In this paper, we posit that effective acceleration of the graph community detection operation can be achieved by reducing the cost of data movement through a combined innovation at both software and hardware levels. More specifically, we first propose an efficient software-level parallelization of community detection that uses approximate updates to cleverly exploit a diminishing returns property of the algorithm. Secondly, as a way to augment this innovation at the software layer, we design an efficient Wireless Network on Chip (WiNoC) architecture that is suited to handle the irregular on-chip data movements exhibited by the community detection algorithm under both unicast- and broadcast-heavy cache coherence protocols. Experimental results show that our resulting WiNoC-enabled manycore platform achieves on average 52% savings in execution time, without compromising on the quality of the outputs, when compared to a traditional manycore platform designed with a wireline mesh NoC and running community detection without employing approximate updates.
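As a conceptual illustration of a greedy sweep with approximate (lazy) updates, the sketch below performs a simple label-propagation-style pass in which only vertices adjacent to a recent community change are revisited in the next sweep. The actual algorithm in the paper is a parallel, modularity-based heuristic, so this should be read purely as an illustration of the "only rework what changed" idea, with invented names and a toy graph.

```python
# Greedy community sweep where only neighbors of changed vertices are revisited.
from collections import defaultdict

def greedy_pass(adjacency, community, active):
    """One sweep: move each active vertex to the most common community among
    its neighbors; return the set of vertices to revisit next sweep."""
    next_active = set()
    for v in active:
        counts = defaultdict(int)
        for u in adjacency[v]:
            counts[community[u]] += 1
        if not counts:
            continue
        best = max(counts, key=counts.get)
        if best != community[v]:
            community[v] = best
            next_active.update(adjacency[v])   # only neighbors need rework
    return next_active

# Usage on a tiny graph: two triangles joined by one edge
adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
community = {v: v for v in adjacency}          # start from singleton communities
active = set(adjacency)
for _ in range(5):
    active = greedy_pass(adjacency, community, active)
print(community)
```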

Duque, Alexis, Stanica, Razvan, Rivano, Herve, Desportes, Adrien.  2016.  Unleashing the Power of LED-to-camera Communications for IoT Devices. Proceedings of the 3rd Workshop on Visible Light Communication Systems.