Biblio

Found 7302 results

Book
Merzdovnik, G., Huber, M., Buhov, D., Nikiforakis, N., Neuner, S., Schmiedecker, M., Weippl, E..  2017.  Block Me If You Can: A Large-Scale Study of Tracker-Blocking Tools.

In this paper, we quantify the effectiveness of third-party tracker blockers on a large scale. First, we analyze the architecture of various state-of-the-art blocking solutions and discuss the advantages and disadvantages of each method. Second, we perform a two-part measurement study on the effectiveness of popular tracker-blocking tools. Our analysis quantifies the protection offered against trackers present on more than 100,000 popular websites and 10,000 popular Android applications. We provide novel insights into the ongoing arms race between trackers and developers of blocking tools as well as which tools achieve the best results under what circumstances. Among others, we discover that rule-based browser extensions outperform learning-based ones, trackers with smaller footprints are more successful at avoiding being blocked, and CDNs pose a major threat towards the future of tracker-blocking tools. Overall, the contributions of this paper advance the field of web privacy by providing not only the largest study to date on the effectiveness of tracker-blocking tools, but also by highlighting the most pressing challenges and privacy issues of third-party tracking.
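
The study's headline finding, that rule-based extensions outperform learning-based ones, is easy to picture with a toy filter-list matcher. The sketch below is illustrative only: the rule set and URLs are hypothetical, and real blockers use a far richer filter syntax (exception rules, resource types, element hiding).

```python
# Minimal sketch of rule-based request blocking. Rules and URLs are
# hypothetical; real tools consume filter lists such as EasyList.
from urllib.parse import urlparse

BLOCK_RULES = {"tracker.example", "ads.example", "analytics.example"}

def is_blocked(request_url: str) -> bool:
    """Block a request if its host or any parent domain is on the list."""
    host = urlparse(request_url).hostname or ""
    parts = host.split(".")
    # check the host itself and every parent domain, e.g. a.b.c -> a.b.c, b.c, c
    return any(".".join(parts[i:]) in BLOCK_RULES for i in range(len(parts)))

if __name__ == "__main__":
    for url in ["https://cdn.tracker.example/pixel.gif",
                "https://news.example/article"]:
        print(url, "->", "BLOCK" if is_blocked(url) else "allow")
```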

Robling Denning, Dorothy Elizabeth.  1982.  Cryptography and Data Security. :414.

Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data.

Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further.

Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

S. Petcy Carolin, M. Somasundaram.  2016.  Data loss protection and data security using agents for cloud environment.

Cyber infrastructures are highly vulnerable to intrusions and other threats. The main challenges in cloud computing are the failure of data centres, the recovery of lost data, and the provision of a data security system. This paper proposes a virtualization and data-recovery approach that creates a virtual environment, recovers lost data from data servers, and uses agents to provide data security in a cloud environment. A Cloud Manager is used to manage the virtualization and to handle faults. An erasure-code algorithm is used to recover the data: it first separates the data into n parts, then encrypts and stores them in data servers. Changes made to data stored in data centres, whether by a semi-trusted third party or by malware, can be identified by artificial-intelligence methods using agents. The Java Agent Development Framework (JADE) is used to develop the agents; it facilitates communication between agents and provides the computing services in the system. The framework is designed and implemented in Java as a gateway or firewall for recovering from data loss.
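
A minimal sketch of fragmentation with one redundant fragment is shown below: single-fragment XOR parity, with the encryption step omitted. It is a toy stand-in, not the paper's scheme; production systems use stronger codes such as Reed-Solomon and encrypt fragments before storage.

```python
# Toy (n+1)-fragment erasure sketch: n data fragments plus one XOR parity
# fragment, so any single lost fragment can be rebuilt. Encryption omitted.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, n: int):
    size = -(-len(data) // n)  # ceiling division; pad the tail fragment
    frags = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = reduce(xor_bytes, frags)  # tolerates the loss of one fragment
    return frags, parity

def recover(frags, parity, lost_index):
    others = [f for i, f in enumerate(frags) if i != lost_index]
    return reduce(xor_bytes, others + [parity])

if __name__ == "__main__":
    frags, parity = split_with_parity(b"cloud data to protect", 4)
    frags[2] = b"\0" * len(frags[0])  # simulate one data server failing
    print(recover(frags, parity, 2))  # rebuilt from the other servers
```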

Honggang, Zhao, Chen, Shi, Leyu, Zhai.  Submitted.  Design and Implementation of Lightweight 6LoWPAN Gateway Based on Contiki.

6LoWPAN technology realizes IPv6 packet transmission in IEEE 802.15.4-based WSNs, and is regarded as one of the ideal technologies for interconnecting WSNs with the Internet, which is key to building the IoT. Contiki is an open-source, highly portable multitasking operating system in which 6LoWPAN has been implemented. In Contiki, only a few kilobytes of code and a few hundred bytes of memory are required to provide a multitasking environment with built-in TCP/IP support, which makes it especially suitable for memory-constrained embedded platforms. In this paper, a lightweight 6LoWPAN gateway based on Contiki is designed, and its hardware and software designs are described. An experiment environment is presented in which the gateway's ability to access the Internet is verified, and its performance in terms of average network delay and jitter is analyzed. The experimental results show that the gateway designed in this paper not only realizes the interconnection between 6LoWPAN networks and the Internet, but also has good network adaptability and stability.

Dykstra, J..  2015.  Essential Cybersecurity Science: Build, Test, and Evaluate Secure Systems. :190.

If you’re involved in cybersecurity as a software developer, forensic investigator, or network administrator, this practical guide shows you how to apply the scientific method when assessing techniques for protecting your information systems. You’ll learn how to conduct scientific experiments on everyday tools and procedures, whether you’re evaluating corporate security systems, testing your own security product, or looking for bugs in a mobile game.

Once author Josiah Dykstra gets you up to speed on the scientific method, he helps you focus on standalone, domain-specific topics, such as cryptography, malware analysis, and system security engineering. The latter chapters include practical case studies that demonstrate how to use available tools to conduct domain-specific scientific experiments.

  • Learn the steps necessary to conduct scientific experiments in cybersecurity
  • Explore fuzzing to test how your software handles various inputs (a minimal sketch follows this list)
  • Measure the performance of the Snort intrusion detection system
  • Locate malicious “needles in a haystack” in your network and IT environment
  • Evaluate cryptography design and application in IoT products
  • Conduct an experiment to identify relationships between similar malware binaries
  • Understand system-level security requirements for enterprise networks and web services
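
To make the fuzzing bullet concrete, here is a minimal random-mutation fuzzing harness in the spirit of the book's experiments. The target parser and its planted bug are hypothetical; real campaigns would use coverage-guided tools such as AFL or libFuzzer.

```python
# Toy fuzzing harness: mutate a seed input and count crashes of a target.
import random

def target(data: bytes):
    # hypothetical parser with a planted bug
    if data.startswith(b"MAGIC") and len(data) > 8:
        raise ValueError("parser crash")

def mutate(seed: bytes) -> bytes:
    buf = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

random.seed(1)
crashes = 0
for _ in range(10_000):
    try:
        target(mutate(b"MAGIC-seed-input"))
    except Exception:
        crashes += 1
print("crashing inputs found:", crashes)
```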

Ibrahim, Rosziati, Omotunde, Habeeb.  2017.  A Hybrid Threat Model for Software Security Requirement Specification.

Security is often treated as a secondary or non-functional feature of software, which influences how vendors and developers describe their products, often in terms of what they can do (use cases) or offer customers. However, the tide is beginning to turn as more experienced customers demand more secure and reliable software, giving priority to confidentiality, integrity, and privacy while using these applications. This paper presents the MOTH (Modeling Threats with Hybrid Techniques) framework, designed to help organizations secure their software assets from attackers and prevent any instance of SQL Injection Attacks (SQLIAs). By focusing on the attack vectors and vulnerabilities exploited by attackers and brainstorming possible attacks, developers and security experts can better strategize and specify the security requirements needed to create software impervious to SQLIAs. A live web application was considered as a case study, and the results obtained from the hybrid models extensively expose vulnerabilities deep within the application and propose resolution plans for closing the security holes exploited by SQLIAs.
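
For readers unfamiliar with the attack class MOTH models, the sketch below shows a tautology-based SQLIA against a naively built query and the parameterized-query mitigation that a resulting security requirement would mandate. The schema and login check are hypothetical, not taken from the paper's case study.

```python
# Classic SQL injection vector and its parameterized-query mitigation.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # tautology-based SQLIA payload

# Vulnerable: attacker-controlled input is spliced into the query string.
q = f"SELECT * FROM users WHERE name = '{payload}' AND pw = '{payload}'"
print("vulnerable query returns:", db.execute(q).fetchall())  # leaks alice

# Mitigated: placeholders keep the payload as data, not SQL.
q = "SELECT * FROM users WHERE name = ? AND pw = ?"
print("parameterized returns:", db.execute(q, (payload, payload)).fetchall())  # []
```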

Mada, Bharat B., Banik, Manoj, Wu, Bo Chen, Bein, Doina.  2016.  Intrusion Tolerant Multi-cloud Storage.

Data generation and its use in important decision applications have been growing at an extremely fast pace, which has made data a valuable resource that needs to be rigorously protected from attackers. Cloud storage systems claim to offer the promise of secure and elastic data-storage services that can adapt to changing storage requirements. Despite diligent efforts to protect data, recent successful attacks highlight the need to go beyond the existing approaches centered on intrusion prevention, detection, and recovery mechanisms: most security mechanisms have a finite failure rate, and with intrusions becoming more sophisticated and stealthy, that rate appears to be rising. In this paper we propose the use of data fragmentation, followed by coding that introduces redundant fragments, and the dispersal of fragments to multiple independent cloud storage systems, with each cloud handling only a single fragment. The paper proposes a multi-cloud fragmented storage system architecture and a design of the related software. Probabilistic analysis is carried out to quantify its intrusion-tolerance abilities.
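
The flavour of such a probabilistic analysis can be reproduced with a simple binomial model: if fragments sit on n independent clouds, each compromised with probability p, an attacker who needs at least k fragments to reconstruct the data succeeds with a binomial tail probability. The values of n, k, and p below are illustrative, not the paper's.

```python
# Worked binomial example of fragmentation-based intrusion tolerance.
from math import comb

def p_breach(n: int, k: int, p: float) -> float:
    """P(attacker compromises at least k of n independent clouds)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.05  # per-cloud compromise probability (illustrative)
print(f"single cloud:          {p:.4f}")
print(f"need 3 of 5 fragments: {p_breach(5, 3, p):.6f}")
print(f"need 5 of 7 fragments: {p_breach(7, 5, p):.8f}")
```

Dispersal thus drives the breach probability down by orders of magnitude relative to a single cloud, at the cost of storage overhead from the redundant fragments.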

Parno, Bryan Jeffery.  2014.  Trust Extension As a Mechanism for Secure Code Execution on Commodity Computers.

From the Preface

As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981].

In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters.

Introduction. This chapter introduces the notion of bootstrapping trust from one device or service to another and gives an overview of how the subsequent chapters fit together.
Background and related work. This chapter focuses on existing techniques for bootstrapping trust in commodity computers, specifically by conveying information about a computer's current execution environment to an interested party. This would, for example, enable a user to verify that her computer is free of malware, or that a remote web server will handle her data responsibly. 
Bootstrapping trust in a commodity computer. At a high level, this chapter develops techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. While the problem is simply stated, finding a solution that is both secure and usable with existing hardware proves quite difficult.
On-demand secure code execution. Rather than entrusting a user's data to the mountain of buggy code likely running on her computer, in this chapter, we construct an on-demand secure execution environment which can perform security sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
Using trustworthy host data in the network. Having established an environment for secure code execution on an individual computer, this chapter shows how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network.
Secure code execution on untrusted hardware. Lastly, this chapter extends the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner.

Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers.


Book Chapter
Li, Bo, Vorobeychik, Yevgeniy.  2014.  Feature Cross-Substitution in Adversarial Classification. Advances in Neural Information Processing Systems 27. :2087–2095.

The success of machine learning, particularly in supervised settings, has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators, but make a deliberate effort to evade the classifiers deployed to detect them. We investigate both the problem of modeling the objectives of such adversaries and the algorithmic problem of accounting for rational, objective-driven adversaries. In particular, we demonstrate severe shortcomings of feature reduction in adversarial settings using several natural adversarial objective functions, an observation that is particularly pronounced when the adversary is able to substitute across similar features (for example, replace words with synonyms or replace letters in words). We offer a simple heuristic method for making learning more robust to feature cross-substitution attacks. We then present a more general approach based on mixed-integer linear programming with constraint generation, which implicitly trades off overfitting and feature selection in an adversarial setting using a sparse regularizer along with an evasion model. Our approach is the first method for combining an adversarial classification algorithm with a very general class of models of adversarial classifier evasion. We show that our algorithmic approach significantly outperforms state-of-the-art alternatives.
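
The cross-substitution effect is easy to demonstrate on a toy bag-of-words classifier, along with the equivalence-class heuristic of bucketing substitutable features. The weights, vocabulary, and synonym classes below are invented for illustration.

```python
# Synonym substitution evades a raw bag-of-words score; mapping
# cross-substitutable words to one equivalence class restores detection.
WEIGHTS = {"cheap": 2.0, "meds": 3.0, "winner": 2.5}        # spam indicators
EQUIV = {"cheap": "cheap", "inexpensive": "cheap",          # substitution classes
         "meds": "meds", "pharmaceuticals": "meds",
         "winner": "winner", "victor": "winner"}

def score(text: str, bucketed: bool = False) -> float:
    total = 0.0
    for w in text.lower().split():
        key = EQUIV.get(w, w) if bucketed else w
        total += WEIGHTS.get(key, 0.0)
    return total

spam    = "cheap meds for the winner"
evasion = "inexpensive pharmaceuticals for the victor"  # same meaning, new features

print(score(spam))           # 7.5 -> flagged
print(score(evasion))        # 0.0 -> evades the raw classifier
print(score(evasion, True))  # 7.5 -> equivalence classes restore detection
```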

Waseem Abbas, Aron Laszka, Koutsoukos, Xenofon.  2015.  Resilient Wireless Sensor Networks for Cyber-Physical Systems. Cyber-Physical System Design with Sensor Networking Technologies.

Due to their low deployment costs, wireless sensor networks (WSN) may act as a key enabling technology for a variety of spatially-distributed cyber-physical system (CPS) applications, ranging from intelligent traffic control to smart grids. However, besides providing tremendous benefits in terms of deployment costs, they also open up new possibilities for malicious attackers, who aim to cause financial losses or physical damage. Since perfectly securing these spatially-distributed systems is either impossible or financially unattainable, we need to design them to be resilient to attacks: even if some parts of the system are compromised or unavailable due to the actions of an attacker, the system as a whole must continue to operate with minimal losses. In a CPS, control decisions affecting the physical process depend on the observed data from the sensor network. Any malicious activity in the sensor network can therefore severely impact the physical process, and consequently the overall CPS operations. These factors necessitate a deeper probe into the domain of resilient WSN for CPS. In this chapter, we provide an overview of various dimensions in this field, including objectives of WSN in CPS, attack scenarios and vulnerabilities, notion of attack-resilience in WSN for CPS, and solution approaches towards attaining resilience. We also highlight major challenges, recent developments, and future directions in this area.

Ross Koppel, University of Pennsylvania, Sean W. Smith, Dartmouth College, Jim Blythe, University of Southern California, Vijay Kothari, Dartmouth College.  2015.  Workarounds to Computer Access in Healthcare Organizations: You Want My Password or a Dead Patient? Studies in Health Technology and Informatics: Driving Quality Informatics: Fulfilling the Promise. 208.

Workarounds to computer access in healthcare are sufficiently common that they often go unnoticed. Clinicians focus on patient care, not cybersecurity. We argue and demonstrate that understanding workarounds to healthcare workers' computer access requires not only analyses of computer rules, but also interviews and observations with clinicians. In addition, we illustrate the value of shadowing clinicians and conducting focus groups to understand their motivations and tradeoffs for circumvention. Ethnographic investigation of the medical workplace emerges as a critical method of research because, in the inevitable conflict between even well-intended people and the machines, it is the people who are the more creative, flexible, and motivated. We conducted interviews and observations with hundreds of medical workers and with 19 cybersecurity experts, CIOs, CMIOs, CTOs, and IT workers to obtain their perceptions of computer security. We also shadowed clinicians as they worked. We present dozens of ways workers ingeniously circumvent security rules. The clinicians we studied were not “black hat” hackers, but professionals seeking to accomplish their work despite the security technologies and regulations.

Conference Paper
Schrenk, B., Pacher, C..  2018.  1 Gb/s All-LED Visible Light Communication System. 2018 Optical Fiber Communications Conference and Exposition (OFC). :1–3.

We evaluate the use of LEDs intended for illumination as low-cost filtered optical detectors. An optical wireless system that is exclusively based on commercial off-the-shelf 5-mm R/G/B LEDs is experimentally demonstrated for Gb/s close-proximity transmission.

Doerr, Carola, Lengler, Johannes.  2016.  The (1+1) Elitist Black-Box Complexity of LeadingOnes. Proceedings of the Genetic and Evolutionary Computation Conference 2016. :1131–1138.

One important goal of black-box complexity theory is the development of complexity models that allow meaningful lower bounds to be derived for whole classes of randomized search heuristics. Complementing classical runtime analysis, black-box models help us understand how algorithmic choices such as the population size, the variation operators, or the selection rules influence the optimization time. One example of such a result is the Ω(n log n) lower bound for unary unbiased algorithms on functions with a unique global optimum [Lehre/Witt, GECCO 2010], which tells us that higher-arity operators or biased sampling strategies are needed to beat this bound. For lack of analysis techniques, almost no non-trivial bounds are known for other restricted models, and proving such bounds therefore remains one of the main challenges in black-box complexity theory. With this paper we contribute to the technical toolbox for lower-bound computations by proposing a new type of information-theoretic argument. We consider the permutation- and bit-invariant version of LeadingOnes and prove that its (1+1) elitist black-box complexity is Ω(n²), a bound that is matched by (1+1)-type evolutionary algorithms. The (1+1) elitist complexity of LeadingOnes is thus considerably larger than its unrestricted one, which is known to be of order n log log n [Afshani et al., 2013].
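
The algorithm that matches the paper's Ω(n²) lower bound is the classic (1+1) elitist EA with standard bit mutation, sketched below; its expected optimization time on LeadingOnes is Θ(n²).

```python
# (1+1) elitist EA on LeadingOnes: expected Theta(n^2) evaluations.
import random

def leading_ones(x):
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def one_plus_one_ea(n, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 1
    while leading_ones(x) < n:
        # standard bit mutation: flip each bit independently with prob 1/n
        y = [b ^ 1 if rng.random() < 1 / n else b for b in x]
        evals += 1
        if leading_ones(y) >= leading_ones(x):  # elitist: never accept worse
            x = y
    return evals

for n in (32, 64, 128):
    print(n, one_plus_one_ea(n), "evaluations (order n^2 expected)")
```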

Whatmough, P. N., Lee, S. K., Lee, H., Rama, S., Brooks, D., Wei, G. Y..  2017.  14.3 A 28nm SoC with a 1.2GHz 568nJ/prediction sparse deep-neural-network engine with >0.1 timing error rate tolerance for IoT applications. 2017 IEEE International Solid-State Circuits Conference (ISSCC). :242–243.

This paper presents a 28nm SoC with a programmable FC-DNN accelerator design that demonstrates: (1) HW support to exploit data sparsity by eliding unnecessary computations (4× energy reduction); (2) improved algorithmic error tolerance using sign-magnitude number format for weights and datapath computation; (3) improved circuit-level timing violation tolerance in datapath logic via time-borrowing; (4) combined circuit and algorithmic resilience with Razor timing violation detection to reduce energy via VDD scaling or increase throughput via FCLK scaling; and (5) high classification accuracy (98.36% for MNIST test set) while tolerating aggregate timing violation rates >10⁻¹. The accelerator achieves a minimum energy of 0.36μJ/pred at 667MHz, maximum throughput at 1.2GHz and 0.57μJ/pred, or a 10%-margined operating point at 1GHz and 0.58μJ/pred.

Tatarenko, T..  2015.  1-recall reinforcement learning leading to an optimal equilibrium in potential games with discrete and continuous actions. 2015 54th IEEE Conference on Decision and Control (CDC). :6749–6754.

Game theory serves as a powerful tool for distributed optimization in multiagent systems in different applications. In this paper we consider multiagent systems that can be modeled as a potential game whose potential function coincides with a global objective function to be maximized. This approach renders the agents the strategic decision makers and turns the corresponding optimization problem into the problem of learning an optimal equilibrium point in the designed game. In contrast to existing work on payoff-based learning, we deal here with systems where agents have neither memory nor the ability to communicate, and base their decisions only on the currently played action and the experienced payoff. Because of these restrictions, we use the methods of reinforcement learning, stochastic approximation, and learning automata extensively reviewed and analyzed in [3], [9]. These methods allow us to set up agent dynamics that move the game out of inefficient Nash equilibria and lead it close to an optimal one for both discrete and continuous action sets.
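
A minimal flavour of payoff-based learning is the linear reward-inaction automaton, sketched below on a two-player identical-interest (hence potential) game: each agent holds only a mixed strategy and reinforces its last action in proportion to the payoff it experienced. The game, step size, and update rule are illustrative; the paper's scheme is more refined in order to guarantee convergence to an optimal equilibrium.

```python
# Payoff-based learning via linear reward-inaction automata on a
# two-player common-payoff game; (1,1) is the optimal equilibrium.
import random

PAYOFF = [[0.3, 0.0],   # common payoff, normalized to [0, 1]
          [0.0, 1.0]]
STEP = 0.05             # automaton learning rate

def lri_update(p, action, reward):
    """Shift probability mass toward the rewarded action; sum stays 1."""
    return [q + STEP * reward * ((i == action) - q) for i, q in enumerate(p)]

random.seed(0)
p1, p2 = [0.5, 0.5], [0.5, 0.5]
for _ in range(5000):
    a1 = random.random() < p1[1]
    a2 = random.random() < p2[1]
    r = PAYOFF[a1][a2]          # each agent sees only its own payoff
    p1 = lri_update(p1, int(a1), r)
    p2 = lri_update(p2, int(a2), r)
# typically both probabilities end near 1.0, i.e. the optimal equilibrium,
# though LRI can lock onto the inferior equilibrium on unlucky runs
print("P(action 1):", round(p1[1], 3), round(p2[1], 3))
```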

Armin, J., Thompson, B., Ariu, D., Giacinto, G., Roli, F., Kijewski, P..  2015.  2020 Cybercrime Economic Costs: No Measure No Solution. 2015 10th International Conference on Availability, Reliability and Security. :701–710.

Governments need reliable data on crime in order both to devise adequate policies and to allocate the correct revenues so that the measures are cost-effective, i.e., the money spent on prevention, detection, and handling of security incidents is balanced by a decrease in losses from offences. Analysis of the actual scenario of government actions in cyber security shows that the availability of multiple contrasting figures on the impact of cyber-attacks is holding back the adoption of policies for cyber space, as their cost-effectiveness cannot be clearly assessed. The most relevant literature on the topic is reviewed to highlight the research gaps and to determine the related future research issues that need addressing to provide a solid ground for future legislative and regulatory actions at national and international levels.

Ahmadian, M. M., Shahriari, H. R..  2016.  2entFOX: A framework for high survivable ransomwares detection. 2016 13th International Iranian Society of Cryptology Conference on Information Security and Cryptology (ISCISC). :79–84.

Ransomwares have become a growing threat since 2012, and the situation continues to worsen. The lack of security mechanisms and of security awareness is pushing systems into the mire of ransomware attacks. In this paper, a new framework called 2entFOX is proposed to detect high-survivable ransomwares (HSR). To our knowledge this framework can be considered one of the first in ransomware detection, given the little publicly available research in this field. We analyzed the behaviour of Windows ransomwares and sought features that are particularly useful in detecting this type of malware with high detection accuracy and a low false-positive rate. After extensive experimental analysis we extracted 20 effective features, two of which are highly efficient, giving an appropriate set for HSR detection. After proposing an architecture based on a Bayesian belief network, the final evaluation is done on known and unknown ransomware samples under six different scenarios. The results of this evaluation show the high accuracy of 2entFOX in detecting HSRs.
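
As a simplified stand-in for the paper's Bayesian-belief-network detector, the sketch below scores a process with naive Bayes over boolean behavioural features. The feature names and probabilities are invented; the paper's 20 features and network structure differ.

```python
# Naive Bayes over hypothetical boolean ransomware-behaviour features.
from math import log

# (P(feature=1 | ransomware), P(feature=1 | benign)) -- invented numbers
FEATURES = {
    "mass_file_rewrite":     (0.95, 0.02),
    "deletes_shadow_copies": (0.80, 0.01),
    "drops_ransom_note":     (0.90, 0.001),
    "high_entropy_writes":   (0.85, 0.05),
}
PRIOR_RANSOM = 0.01

def log_posterior_odds(observed: dict) -> float:
    odds = log(PRIOR_RANSOM / (1 - PRIOR_RANSOM))
    for name, (p_r, p_b) in FEATURES.items():
        x = observed.get(name, 0)
        odds += log((p_r if x else 1 - p_r) / (p_b if x else 1 - p_b))
    return odds

sample = {"mass_file_rewrite": 1, "high_entropy_writes": 1,
          "deletes_shadow_copies": 1}
print("log posterior odds:", round(log_posterior_odds(sample), 2))  # > 0 -> flag
```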

Jun, Jaehoon, Rhee, Cyuyeol, Kim, Suhwan.  2016.  A 386-μW, 15.2-bit Programmable-Gain Embedded Delta-Sigma ADC for Sensor Applications. Proceedings of the 2016 International Symposium on Low Power Electronics and Design. :278–283.

A power-efficient Delta-Sigma (ΔΣ) analog-to-digital converter (ADC) with an embedded programmable-gain control function for various smart sensor applications is presented. It consists of a programmable-gain switched-capacitor ΔΣ modulator followed by a digital decimation filter for down-sampling. The programmable function is realized with programmable loop-filter coefficients using a capacitor array. The coefficient control keeps the poles of the noise transfer function in place, so the stability of the designed closed-loop transfer function is assured. The proposed gain-control method lets the ADC optimize its performance as the input-signal magnitude varies, and the gain controllability requires negligible additional energy-consuming or area-occupying circuitry. The power-efficient programmable-gain ADC (PGADC) is well suited to sensor devices. The gain can be set from 0 to 18 dB in 6 dB steps. Measurements show that the PGADC achieves 15.2-bit resolution and 12.4-bit noise-free resolution with 99.9% reliability. The chip operates from a 3.3 V analog supply and a 1.8 V digital supply, while consuming only 97 μA of analog current and 37 μA of digital current. The analog core area is 0.064 mm² in a standard 0.18-μm CMOS process.
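
A behavioural toy model of the idea, not the paper's switched-capacitor implementation, is sketched below: a first-order delta-sigma modulator whose input is scaled by a selectable gain of 0/6/12/18 dB ahead of the loop, then decimated by averaging.

```python
# Behavioural model of a programmable-gain first-order delta-sigma ADC.
import math

def delta_sigma(signal, gain_db=0, osr=256):
    gain = 10 ** (gain_db / 20)              # 0/6/12/18 dB -> x1/2/4/8
    integ, bits = 0.0, []
    for x in signal:
        v = max(-1.0, min(1.0, gain * x))    # gain stage, clipped to full scale
        fb = 1.0 if integ >= 0 else -1.0     # 1-bit DAC feedback
        integ += v - fb                      # loop-filter integrator
        bits.append(1.0 if integ >= 0 else -1.0)
    # crude decimation: average the bitstream over the oversampling ratio
    return [sum(bits[i:i + osr]) / osr for i in range(0, len(bits), osr)]

# a small sine at 1/16 of full scale; 18 dB of gain lifts it to ~1/2 full scale
sig = [0.0625 * math.sin(2 * math.pi * i / 4096) for i in range(8192)]
for g in (0, 6, 12, 18):
    print(f"{g:2d} dB gain -> recovered peak ~ {max(delta_sigma(sig, g)):.3f}")
```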

Zong, Fang, Yong, Ouyang, Gang, Liu.  2018.  3D Modeling Method Based on Deep Belief Networks (DBNs) and Interactive Evolutionary Algorithm (IEA). Proceedings of the 2018 International Conference on Big Data and Computing. :124–128.

3D modeling usually refers to the use of 3D software to build a model in virtual 3D space from 3D data. At present, most 3D modeling software, such as 3ds Max, FLAC3D, and Midas, requires either manual adjustment of models until a satisfactory result is reached or precise modeling through coding, which brings complicated steps, demands specialist expertise, and drives up modeling cost. Aiming at this problem, this paper presents a new 3D modeling method based on Deep Belief Networks (DBN) and an Interactive Evolutionary Algorithm (IEA). Following this method, first, characteristic vectors are extracted from the vertices, normals, and surfaces of the imported model samples. Second, an evolution strategy evolves the feature vectors stochastically, with manual grading controlling the direction of evolution and capturing the user's preferences in the process. Then, an evolution function matrix is used to establish a fitness-approximation evaluation model that simulates the user's subjective evaluation. Lastly, the user can intervene in the machine's simulated evaluation process at any time and obtain a satisfactory model. The experimental results show that the method in this paper is feasible.
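
A generic sketch of the interactive-evolution half of the pipeline appears below: candidates are feature vectors, occasional user grades steer the evolution, and a surrogate fitness learned from past grades fills in unrated candidates. The DBN feature extraction and the paper's evolution function matrix are out of scope; the simulated user and the nearest-neighbour surrogate are stand-ins.

```python
# Interactive evolution with a surrogate (fitness-approximation) model.
import random

random.seed(2)
DIM = 8
TASTE = [random.random() for _ in range(DIM)]  # stands in for user preference

def user_grade(vec):  # simulated interactive grading, higher is better
    return -sum((a - b) ** 2 for a, b in zip(vec, TASTE))

def surrogate(vec, rated):  # approximation: grade of the nearest rated vector
    _, grade = min(rated, key=lambda r: sum((a - b) ** 2 for a, b in zip(r[0], vec)))
    return grade

pop = [[random.random() for _ in range(DIM)] for _ in range(10)]
rated = []
for generation in range(30):
    for vec in random.sample(pop, 2):  # the user grades only a few candidates
        rated.append((vec, user_grade(vec)))
    pop.sort(key=lambda v: surrogate(v, rated), reverse=True)
    parents = pop[:5]                  # select on approximated fitness
    pop = parents + [[x + random.gauss(0, 0.05) for x in random.choice(parents)]
                     for _ in range(5)]
print("best simulated grade:", round(max(user_grade(v) for v in pop), 4))
```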

Vakilinia, I., Tosh, D. K., Sengupta, S..  2017.  3-Way game model for privacy-preserving cybersecurity information exchange framework. MILCOM 2017 - 2017 IEEE Military Communications Conference (MILCOM). :829–834.

With the growing number of cyberattack incidents, organizations are required to have proactive knowledge of the cybersecurity landscape to defend their resources efficiently. To achieve this, organizations must develop a culture of sharing their threat information with others for effectively assessing the associated risks. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data, so making the decision to share is a challenging task that requires resolving the trade-off between sharing advantages and privacy exposure. On the other hand, cybersecurity information exchange (CYBEX) management is crucial in stabilizing the system through setting the correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers involved in a sharing system as a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component in the sharing model. Using best-response analysis, the simulation results demonstrate the efficiency of our proposed framework.
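
The backbone of such an analysis is best-response reasoning, sketched below on a toy two-player game between an organization choosing a sharing level and an attacker choosing an effort level. The payoff matrices are invented; the paper's three-player dynamic game with CYBEX fees and privacy costs is considerably richer.

```python
# Best-response iteration on a toy organization-vs-attacker game.
# Row: organization's sharing level; column: attacker's effort level.
ORG = [[2, 1, 0],   # sharing helps defense but leaks private data
       [3, 2, 1],
       [4, 1, 0]]
ATK = [[0, 1, 2],   # attacker's payoff
       [0, 2, 1],
       [1, 3, 0]]

def br_row(atk_action):   # organization's best response to the attacker
    return max(range(3), key=lambda r: ORG[r][atk_action])

def br_col(org_action):   # attacker's best response to the organization
    return max(range(3), key=lambda c: ATK[org_action][c])

org, atk = 0, 0
for _ in range(20):       # iterate best responses; here it reaches a fixed point
    org = br_row(atk)
    atk = br_col(org)
print("pure-strategy equilibrium candidate:", (org, atk))  # (1, 1)
```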

Carmer, Brent, Malozemoff, Alex J., Raykova, Mariana.  2017.  5Gen-C: Multi-Input Functional Encryption and Program Obfuscation for Arithmetic Circuits. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :747–764.

Program obfuscation is a powerful security primitive with many applications. White-box cryptography studies a particular subset of program obfuscation targeting keyed pseudorandom functions (PRFs), a core component of systems such as mobile payment and digital rights management. Although the white-box obfuscators currently used in practice do not come with security proofs and are thus routinely broken, recent years have seen an explosion of cryptographic techniques for obfuscation, with the goal of avoiding this build-and-break cycle. In this work, we explore in detail cryptographic program obfuscation and the related primitive of multi-input functional encryption (MIFE). In particular, we extend the 5Gen framework (CCS 2016) to support circuit-based MIFE and program obfuscation, implementing both existing and new constructions. We then evaluate and compare the efficiency of these constructions in the context of PRF obfuscation. As part of this work we (1) introduce a novel instantiation of MIFE that works directly on functions represented as arithmetic circuits, (2) use a known transformation from MIFE to obfuscation to give us an obfuscator that performs better than all prior constructions, and (3) develop a compiler for generating circuits optimized for our schemes. Finally, we provide detailed experiments, demonstrating, among other things, the ability to obfuscate a PRF with a 64-bit key and 12 bits of input (containing 62k gates) in under 4 hours, with evaluation taking around 1 hour. This is by far the most complex function obfuscated to date.

Su, Jinshu, Chen, Shuhui, Han, Biao, Xu, Chengcheng, Wang, Xin.  2016.  A 60Gbps DPI Prototype Based on Memory-Centric FPGA. Proceedings of the 2016 ACM SIGCOMM Conference. :627–628.

Deep packet inspection (DPI) is widely used in content-aware network applications to detect string features. It is of vital importance to improve the DPI performance due to the ever-increasing link speed. In this demo, we propose a novel DPI architecture with a hierarchy memory structure and parallel matching engines based on memory-centric FPGA. The implemented DPI prototype is able to provide up to 60Gbps full-text string matching throughput and fast rules update speed.
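
The string-feature matching at the heart of DPI is multi-pattern search; the FPGA prototype parallelizes exactly this kind of automaton in hardware. Below is a compact software sketch of Aho-Corasick matching over a hypothetical signature set.

```python
# Aho-Corasick multi-pattern matching, the core operation of string DPI.
from collections import deque

def build(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:                     # build the trie
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())            # BFS to fill failure links
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]         # inherit matches ending here
    return goto, fail, out

def scan(payload, automaton):
    goto, fail, out = automaton
    s, hits = 0, []
    for i, ch in enumerate(payload):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits += [(i, p) for p in out[s]]
    return hits

ac = build(["attack", "tack", "exploit"])  # hypothetical signature set
print(scan("payload with attack string", ac))
```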