Biblio

Found 11883 results

Book
Merzdovnik, G., Huber, M., Buhov, D., Nikiforakis, N., Neuner, S., Schmiedecker, M., Weippl, E..  2017.  Block Me If You Can: A Large-Scale Study of Tracker-Blocking Tools.

In this paper, we quantify the effectiveness of third-party tracker blockers on a large scale. First, we analyze the architecture of various state-of-the-art blocking solutions and discuss the advantages and disadvantages of each method. Second, we perform a two-part measurement study on the effectiveness of popular tracker-blocking tools. Our analysis quantifies the protection offered against trackers present on more than 100,000 popular websites and 10,000 popular Android applications. We provide novel insights into the ongoing arms race between trackers and developers of blocking tools, as well as into which tools achieve the best results under which circumstances. Among other findings, we discover that rule-based browser extensions outperform learning-based ones, that trackers with smaller footprints are more successful at avoiding being blocked, and that CDNs pose a major threat to the future of tracker-blocking tools. Overall, the contributions of this paper advance the field of web privacy by providing not only the largest study to date on the effectiveness of tracker-blocking tools, but also by highlighting the most pressing challenges and privacy issues of third-party tracking.
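
As a rough illustration of the rule-based approach the study found most effective, the following minimal Python sketch matches request URLs against simplified EasyList-style filter rules. It is not code from the paper; the rules and matching semantics here are illustrative assumptions (real filter lists have a far richer syntax).

    from urllib.parse import urlparse

    # Simplified filter rules: "||domain^" blocks a host and its subdomains,
    # while a plain rule blocks any URL containing it as a substring.
    RULES = ["||tracker.example^", "/analytics.js", "pixel?event="]  # assumed examples

    def is_blocked(url: str) -> bool:
        host = urlparse(url).netloc.lower()
        for rule in RULES:
            if rule.startswith("||") and rule.endswith("^"):
                domain = rule[2:-1]
                # Match the domain itself and any of its subdomains.
                if host == domain or host.endswith("." + domain):
                    return True
            elif rule in url:
                return True
        return False

    print(is_blocked("https://cdn.tracker.example/lib.js"))  # True
    print(is_blocked("https://news.example/article"))        # False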

Robling Denning, Dorothy Elizabeth.  1982.  Cryptography and Data Security. :414.

Electronic computers have evolved from exiguous experimental enterprises in the 1940s to prolific practical data processing systems in the 1980s. As we have come to rely on these systems to process and store data, we have also come to wonder about their ability to protect valuable data.

Data security is the science and study of methods of protecting data in computer and communication systems from unauthorized disclosure and modification. The goal of this book is to introduce the mathematical principles of data security and to show how these principles apply to operating systems, database systems, and computer networks. The book is for students and professionals seeking an introduction to these principles. There are many references for those who would like to study specific topics further.

Data security has evolved rapidly since 1975. We have seen exciting developments in cryptography: public-key encryption, digital signatures, the Data Encryption Standard (DES), key safeguarding schemes, and key distribution protocols. We have developed techniques for verifying that programs do not leak confidential data, or transmit classified data to users with lower security clearances. We have found new controls for protecting data in statistical databases--and new methods of attacking these databases. We have come to a better understanding of the theoretical and practical limitations to security.

This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.

S. Petcy Carolin, M. Somasundaram.  2016.  Data loss protection and data security using agents for cloud environment.

Cyber infrastructures are highly vulnerable to intrusions and other threats. The main challenges in cloud computing are failure of data centres, recovery of lost data, and provision of data security. This paper proposes Virtualization and Data Recovery to create a virtual environment and recover lost data from data servers, with agents providing data security in a cloud environment. A Cloud Manager is used to manage the virtualization and to handle faults. An erasure-code algorithm is used to recover the data: it initially separates the data into n parts, which are then encrypted and stored in data servers. Changes made to data stored in data centres by a semi-trusted third party or by malware can be identified by artificial-intelligence methods using agents. The Java Agent Development Framework (JADE) is a tool for developing agents that facilitates communication between agents and enables the computing services in the system. The framework is designed and implemented in Java as a gateway or firewall to recover lost data.
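
As a rough sketch of the fragmentation-and-encryption pipeline the abstract describes (the paper's implementation is in Java with JADE agents), the following Python example splits data into n parts, adds a single XOR parity fragment as a stand-in for a real erasure code, and encrypts each fragment separately. The use of Fernet and the single-parity scheme are illustrative assumptions.

    import functools
    from cryptography.fernet import Fernet  # pip install cryptography

    def fragment_and_encrypt(data: bytes, n: int):
        """Split data into n parts, add one XOR parity fragment, and
        encrypt every fragment with its own key."""
        part_len = -(-len(data) // n)                      # ceiling division
        padded = data.ljust(part_len * n, b"\0")
        parts = [padded[i * part_len:(i + 1) * part_len] for i in range(n)]
        parity = functools.reduce(
            lambda x, y: bytes(a ^ b for a, b in zip(x, y)), parts)
        # Each (key, ciphertext) pair would be dispersed to a different data
        # server; in practice the keys would stay with the Cloud Manager.
        return [(k, Fernet(k).encrypt(p))
                for k, p in ((Fernet.generate_key(), p) for p in parts + [parity])]

    fragments = fragment_and_encrypt(b"patient record 42 ...", n=4)
    # Any single lost data part can be rebuilt by XOR-ing the remaining
    # parts with the parity fragment.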

Honggang, Zhao, Chen, Shi, Leyu, Zhai.  2018.  Design and Implementation of Lightweight 6LoWPAN Gateway Based on Contiki.

6LoWPAN technology realizes IPv6 packet transmission in IEEE 802.15.4-based WSNs, and 6LoWPAN is regarded as one of the ideal technologies for realizing the interconnection between WSNs and the Internet, which is key to building the IoT. Contiki is an open-source and highly portable multitasking operating system in which 6LoWPAN has been implemented. In Contiki, only a few kilobytes of code and a few hundred bytes of memory are required to provide a multitasking environment and built-in TCP/IP support, which makes it especially suitable for memory-constrained embedded platforms. In this paper, a lightweight 6LoWPAN gateway based on Contiki is designed, and its hardware and software designs are described. A complex experiment environment is presented in which the gateway's capability of accessing the Internet is verified, and its performance in terms of average network delay and jitter is analyzed. The experimental results show that the gateway designed in this paper not only realizes the interconnection between 6LoWPAN networks and the Internet, but also has good network adaptability and stability.

Tsuyoshi Arai, Yasuo Okabe, Yoshinori Matsumoto, Koji Kawamura.  2020.  Detection of Bots in CAPTCHA as a Cloud Service Utilizing Machine Learning.

In recent years, the damage caused by unauthorized access using bots has increased. Compared with attacks on conventional login screens, their success rate is higher and their detection is more difficult. CAPTCHA is commonly utilized as a technology for avoiding attacks by bots, but user experience declines as CAPTCHA difficulty is raised to keep pace with advancing bots. As a solution, adaptive difficulty setting of CAPTCHA combined with bot-detection technologies has been considered. In this research, we focus on Capy puzzle CAPTCHA, which is widely used in commercial services. We use a supervised machine learning approach to detect bots. As training data, we use access logs of several web services, flagged with attacks by bots detected in the past. We extract fields like the HTTP User-Agent and information derived from the IP address (e.g., geographical information) from the access logs, and investigate the dataset using supervised learning. Using XGBoost and LightGBM, we achieve a high ROC-AUC score of more than 0.90, and further detect suspicious accesses from some ISPs that carry no bot-discrimination flag.
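
A hedged sketch of the described pipeline in Python: categorical features derived from the User-Agent and IP address feed a gradient-boosted classifier, scored by ROC-AUC. The file name, column names, and label are assumptions; the paper's actual feature engineering is richer.

    import pandas as pd
    from lightgbm import LGBMClassifier                 # pip install lightgbm
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Hypothetical access-log table: one row per request, labeled with a
    # past-incident bot flag as described in the abstract.
    logs = pd.read_csv("access_logs.csv")
    X = pd.get_dummies(logs[["user_agent_family", "ip_country", "ip_asn"]])
    X_train, X_test, y_train, y_test = train_test_split(
        X, logs["is_bot"], test_size=0.2, stratify=logs["is_bot"])

    model = LGBMClassifier(n_estimators=300)
    model.fit(X_train, y_train)
    print("ROC-AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))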

Dykstra, J..  2015.  Essential Cybersecurity Science: Build, Test, and Evaluate Secure Systems. :190.

If you’re involved in cybersecurity as a software developer, forensic investigator, or network administrator, this practical guide shows you how to apply the scientific method when assessing techniques for protecting your information systems. You’ll learn how to conduct scientific experiments on everyday tools and procedures, whether you’re evaluating corporate security systems, testing your own security product, or looking for bugs in a mobile game.

Once author Josiah Dykstra gets you up to speed on the scientific method, he helps you focus on standalone, domain-specific topics, such as cryptography, malware analysis, and system security engineering. The latter chapters include practical case studies that demonstrate how to use available tools to conduct domain-specific scientific experiments.

  • Learn the steps necessary to conduct scientific experiments in cybersecurity
  • Explore fuzzing to test how your software handles various inputs
  • Measure the performance of the Snort intrusion detection system
  • Locate malicious “needles in a haystack” in your network and IT environment
  • Evaluate cryptography design and application in IoT products
  • Conduct an experiment to identify relationships between similar malware binaries
  • Understand system-level security requirements for enterprise networks and web services

Ibrahim, Rosziati, Omotunde, Habeeb.  2017.  A Hybrid Threat Model for Software Security Requirement Specification.

Security is often treated as a secondary or non-functional feature of software, which influences the approach of vendors and developers, who often describe their products in terms of what they can do (use cases) or offer customers. However, the tide is beginning to change as more experienced customers demand more secure and reliable software, giving priority to confidentiality, integrity, and privacy while using these applications. This paper presents the MOTH (Modeling Threats with Hybrid Techniques) framework, designed to help organizations secure their software assets from attackers in order to prevent any instance of SQL Injection Attacks (SQLIAs). By focusing on the attack vectors and vulnerabilities exploited by attackers and brainstorming over possible attacks, developers and security experts can better strategize and specify the security requirements needed to create secure software impervious to SQLIAs. A live web application was considered in this research as a case study; the results obtained from the hybrid models extensively expose the vulnerabilities deep within the application, and resolution plans are proposed for blocking the security holes exploited by SQLIAs.
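
The vulnerability class MOTH targets is easy to demonstrate. The following Python sketch (not from the paper) contrasts a string-concatenated query with the parameterized form that a security requirement derived from such threat modeling would mandate.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "alice' OR '1'='1"   # classic SQLIA payload

    # Vulnerable: attacker-controlled text is spliced into the query, so the
    # injected OR-clause matches every row.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
    print(len(rows))   # 1 -- the injected predicate matched the whole table

    # Mitigated: a parameterized query treats the payload as a literal value.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
    print(len(rows))   # 0 -- no user is literally named that string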

Mada, Bharat B., Banik, Manoj, Wu, Bo Chen, Bein, Doina.  2016.  Intrusion Tolerant Multi-cloud Storage.

Data generation, and its utilization in important decision applications, has been growing at an extremely fast pace, which has made data a valuable resource that needs to be rigorously protected from attackers. Cloud storage systems claim to offer the promise of secure and elastic data-storage services that can adapt to changing storage requirements. Despite diligent efforts being made to protect data, recent successful attacks highlight the need to go beyond the existing approaches centered on intrusion prevention, detection, and recovery mechanisms. Most security mechanisms have a finite rate of failure, and with intrusions becoming more sophisticated and stealthy, the failure rate appears to be rising. In this paper we propose the use of data fragmentation, followed by coding that introduces redundant fragments, and dispersal of the fragments to multiple independent cloud storage systems, with each cloud handling only a single fragment. The paper proposes a multi-cloud fragmented storage system architecture and the design of the related software. A probabilistic analysis is carried out to quantify its intrusion-tolerance abilities.
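
A back-of-the-envelope version of such a probabilistic analysis: if any k of the n fragments suffice to reconstruct the data, confidentiality is breached only when an attacker compromises at least k independent clouds. The independence assumption and the parameters below are illustrative, not the paper's model.

    from math import comb

    def p_breach(n: int, k: int, p: float) -> float:
        """Probability that at least k of n independent clouds, each
        compromised with probability p, fall to the attacker."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    # 5 clouds, any 3 fragments reconstruct, 5% per-cloud compromise rate:
    print(f"{p_breach(5, 3, 0.05):.6f}")   # ~0.001158, versus 0.05 for a single cloud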

[Anonymous].  Submitted.  Natural Language Processing Characterization of Recurring Calls in Public Security Services.

Extracting knowledge from unstructured data silos, a legacy of old applications, is mandatory for improving the governance of today's cities and fostering the creation of smart cities. Texts in natural language often compose such data. Nevertheless, the inference of useful information from a linguistic-computational analysis of natural language data is an open challenge. In this paper, we propose a clustering method to analyze textual data employing the unsupervised machine learning algorithms k-means and hierarchical clustering. We assess different vector representation methods for text, similarity metrics, and the number of clusters that best matches the data. We evaluate the methods using a real database of a public record service of security occurrences. The results show that the k-means algorithm using Euclidean distance extracts non-trivial knowledge, reaching up to 93% accuracy in a set of test samples while identifying the 12 most prevalent occurrence patterns.
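
A minimal sketch of the winning configuration in Python, assuming scikit-learn: TF-IDF vectors clustered with k-means (Euclidean distance is its default). The toy call records stand in for the real occurrence database, and the cluster count is reduced to fit the toy data; the paper settled on 12 clusters.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    calls = [
        "noise complaint loud music neighbor",
        "suspicious vehicle parked on the street",
        "noise complaint party late at night",
        "vehicle blocking driveway",
    ]   # assumed examples; real records come from the public security database

    X = TfidfVectorizer().fit_transform(calls)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)   # cluster id per call; recurring patterns emerge per cluster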

[Anonymous].  2019.  PhishFarm: A Scalable Framework for Measuring the Effectiveness of Evasion Techniques against Browser Phishing Blacklists.

Phishing attacks have reached record volumes in recent years. Simultaneously, modern phishing websites are growing in sophistication by employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously-unseen .com domains), each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and timeliness of native blacklisting in major web browsers to gauge the effectiveness of protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure, which allow some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks (including those based on geolocation, device type, or JavaScript) were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has also become more consistent between desktop and mobile platforms, but work remains to be done by anti-phishing entities to ensure users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.

Parno, Bryan Jeffery.  2014.  Trust Extension As a Mechanism for Secure Code Execution on Commodity Computers.

From the Preface

As society rushes to digitize sensitive information and services, it is imperative that we adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom adopted [Karger et al. 1991, Gold et al. 1984, Ames 1981].

In this book, a revised version of my doctoral dissertation, originally written while studying at Carnegie Mellon University, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. We support this premise over the course of the following chapters.

Introduction. This chapter introduces the notion of bootstrapping trust from one device or service to another and gives an overview of how the subsequent chapters fit together.
Background and related work. This chapter focuses on existing techniques for bootstrapping trust in commodity computers, specifically by conveying information about a computer's current execution environment to an interested party. This would, for example, enable a user to verify that her computer is free of malware, or that a remote web server will handle her data responsibly. 
Bootstrapping trust in a commodity computer. At a high level, this chapter develops techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. While the problem is simply stated, finding a solution that is both secure and usable with existing hardware proves quite difficult.
On-demand secure code execution. Rather than entrusting a user's data to the mountain of buggy code likely running on her computer, in this chapter, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
Using trustworthy host data in the network. Having established an environment for secure code execution on an individual computer, this chapter shows how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on end hosts and trust the results within the network.
Secure code execution on untrusted hardware. Lastly, this chapter extends the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner. 

Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers.

Dylan Wang, Melody Moh, Teng-Sheng Moh.  2020.  Using Deep Learning to Solve Google reCAPTCHA v2’s Image Challenges.

The most popular CAPTCHA service in use today is Google reCAPTCHA v2, whose main offering is an image-based CAPTCHA challenge. This paper looks into the security measures used in reCAPTCHA v2's image challenges and proposes a deep learning-based solution that can be used to automatically solve them. The proposed method is tested with both a custom object-detection deep learning model as well as Google's own Cloud Vision API, in conjunction with human-mimicking mouse movements to bypass the challenges. The paper also suggests some potential defense measures to increase overall security and other additional attack directions for reCAPTCHA v2.

Vaibhavi Deshmukh, Swarnima Deshmukh, Shivani Deosatwar, Reva Sarda, Lalit Kulkarni.  2020.  Versatile CAPTCHA Generation Using Machine Learning and Image Processing.

Due to the significant increase in the size of the internet and the number of users on this platform, there has been a tremendous increase in load on various websites and web-based applications. This load comes from the user end and causes unforeseen conditions that lead to unacceptable consequences such as crashes or data loss at the web-server end. Therefore, there is a need to reduce the load on the server as well as the chances of network attacks, which increase with the growing user base. The undue consequences such as data loss and server crashes are caused by two main factors: an overload of users, and an increased number of automatic programs or robots. This scenario can be countered by introducing a delay in the operation speed on the user end through the use of a CAPTCHA mechanism. Most classical approaches use a single method for generating the CAPTCHA; to overcome this, the proposed model uses a versatile image CAPTCHA generation mechanism. We have introduced a system that utilizes manual-based, face-detection-based, colour-based, and random-object-insertion techniques to generate four different random types of CAPTCHA. The proposed methodology implements regions of interest and convolutional neural networks to achieve effective CAPTCHA generation.
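
As a hedged illustration of one of the simpler generation techniques (random text with noise added through basic image processing), the following Python sketch uses Pillow; it is not the paper's pipeline, which additionally draws on face detection, colour analysis, and object insertion.

    import random, string
    from PIL import Image, ImageDraw, ImageFont   # pip install pillow

    def make_text_captcha(width=200, height=70):
        answer = "".join(random.choices(string.ascii_uppercase + string.digits, k=5))
        img = Image.new("RGB", (width, height), "white")
        draw = ImageDraw.Draw(img)
        # Draw each character with a small random vertical jitter.
        for i, ch in enumerate(answer):
            draw.text((15 + i * 35, 20 + random.randint(-8, 8)), ch,
                      fill="black", font=ImageFont.load_default())
        # Overlay random noise lines to hinder naive OCR.
        for _ in range(6):
            draw.line([(random.randint(0, width), random.randint(0, height)),
                       (random.randint(0, width), random.randint(0, height))],
                      fill="gray", width=2)
        return answer, img

    answer, image = make_text_captcha()
    image.save("captcha.png")   # serve the image; check the reply against `answer`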

Book Chapter
Chuchu Fan, Sayan Mitra.  2019.  Data-Driven Safety Verification of Complex Cyber-Physical Systems. Design Automation of Cyber-Physical Systems. :107–142.

Data-driven verification methods utilize execution data together with models for establishing safety requirements. These are often the only tools available for analyzing complex, nonlinear cyber-physical systems, for which purely model-based analysis is currently infeasible. In this chapter, we outline the key concepts and algorithmic approaches for data-driven verification and discuss the guarantees they provide. We introduce some of the software tools that embody these ideas and present several practical case studies demonstrating their application in safety analysis of autonomous vehicles, advanced driver assist systems (ADAS), satellite control, and engine control systems.
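
The core loop of a data-driven safety check can be sketched in a few lines: sample initial states, simulate, bloat each trace by a discrepancy bound, and test against an unsafe set. In the Python sketch below, the dynamics and the fixed bloat factor are toy assumptions; the tools surveyed in the chapter compute validated discrepancy bounds rather than assuming them.

    import numpy as np

    def simulate(x0, steps=100, dt=0.05):
        """Toy closed-loop model: the state decays toward the origin."""
        xs, x = [np.array(x0, float)], np.array(x0, float)
        for _ in range(steps):
            x = x + dt * (-0.8 * x)       # assumed stand-in for the real model
            xs.append(x.copy())
        return np.array(xs)

    def verify_safe(lo, hi, unsafe_threshold=3.0, samples=50, bloat=0.2):
        """Sample executions from the initial box [lo, hi], bloat each trace
        by an assumed discrepancy bound, and check the unsafe region."""
        for _ in range(samples):
            trace = simulate(np.random.uniform(lo, hi))
            if (np.abs(trace) + bloat >= unsafe_threshold).any():
                return False              # the bloated tube reaches the unsafe set
        return True

    print(verify_safe(np.array([1.0, 1.0]), np.array([2.0, 2.0])))   # True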

Li, Bo, Vorobeychik, Yevgeniy.  2014.  Feature Cross-Substitution in Adversarial Classification. Advances in Neural Information Processing Systems 27. :2087–2095.

The success of machine learning, particularly in supervised settings, has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators, but make a deliberate effort to evade the classifiers deployed to detect them. We investigate both the problem of modeling the objectives of such adversaries, as well as the algorithmic problem of accounting for rational, objective-driven adversaries. In particular, we demonstrate severe shortcomings of feature reduction in adversarial settings using several natural adversarial objective functions, an observation that is particularly pronounced when the adversary is able to substitute across similar features (for example, replace words with synonyms or replace letters in words). We offer a simple heuristic method for making learning more robust to feature cross-substitution attacks. We then present a more general approach based on mixed-integer linear programming with constraint generation, which implicitly trades off overfitting and feature selection in an adversarial setting using a sparse regularizer along with an evasion model. Our approach is the first method for combining an adversarial classification algorithm with a very general class of models of adversarial classifier evasion. We show that our algorithmic approach significantly outperforms state-of-the-art alternatives.

Waseem Abbas, Aron Laszka, Koutsoukos, Xenofon.  2015.  Resilient Wireless Sensor Networks for Cyber-Physical Systems. Cyber-Physical System Design with Sensor Networking Technologies.

Due to their low deployment costs, wireless sensor networks (WSN) may act as a key enabling technology for a variety of spatially-distributed cyber-physical system (CPS) applications, ranging from intelligent traffic control to smart grids. However, besides providing tremendous benefits in terms of deployment costs, they also open up new possibilities for malicious attackers, who aim to cause financial losses or physical damage. Since perfectly securing these spatially-distributed systems is either impossible or financially unattainable, we need to design them to be resilient to attacks: even if some parts of the system are compromised or unavailable due to the actions of an attacker, the system as a whole must continue to operate with minimal losses. In a CPS, control decisions affecting the physical process depend on the observed data from the sensor network. Any malicious activity in the sensor network can therefore severely impact the physical process, and consequently the overall CPS operations. These factors necessitate a deeper probe into the domain of resilient WSN for CPS. In this chapter, we provide an overview of various dimensions in this field, including objectives of WSN in CPS, attack scenarios and vulnerabilities, notion of attack-resilience in WSN for CPS, and solution approaches towards attaining resilience. We also highlight major challenges, recent developments, and future directions in this area.

Ross Koppel, University of Pennsylvania, Sean W. Smith, Dartmouth College, Jim Blythe, University of Southern California, Vijay Kothari, Dartmouth College.  2015.  Workarounds to Computer Access in Healthcare Organizations: You Want My Password or a Dead Patient? Studies in Health Technology and Informatics: Driving Quality Informatics: Fulfilling the Promise. 208.

Workarounds to computer access in healthcare are sufficiently common that they often go unnoticed. Clinicians focus on patient care, not cybersecurity. We argue and demonstrate that understanding workarounds to healthcare workers' computer access requires not only analyses of computer rules, but also interviews and observations with clinicians. In addition, we illustrate the value of shadowing clinicians and conducting focus groups to understand their motivations and tradeoffs for circumvention. Ethnographic investigation of the medical workplace emerges as a critical method of research because, in the inevitable conflict between even well-intentioned people and the machines, it is the people who are the more creative, flexible, and motivated. We conducted interviews and observations with hundreds of medical workers and with 19 cybersecurity experts, CIOs, CMIOs, CTOs, and IT workers to obtain their perceptions of computer security. We also shadowed clinicians as they worked. We present dozens of ways workers ingeniously circumvent security rules. The clinicians we studied were not "black hat" hackers, but just professionals seeking to accomplish their work despite the security technologies and regulations.

Conference Paper
Bost, Raphael.  2016.  Σoφoς: Forward Secure Searchable Encryption. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1143–1154.

Searchable Symmetric Encryption (SSE) aims to make it possible to search over an encrypted database stored on an untrusted server, while keeping both the queries and the data private, by allowing some small controlled leakage to the server. Recent work shows that dynamic schemes (in which the data is efficiently updatable) that leak some information on updated keywords are subject to devastating adaptive attacks breaking the privacy of the queries. The only way to thwart these attacks is to design forward-private schemes whose update procedure does not leak whether a newly inserted element matches previous search queries. This work proposes Sophos, a forward-private SSE scheme with performance similar to existing less secure schemes, which is conceptually simpler (and also more efficient) than previous forward-private constructions. In particular, it relies only on trapdoor permutations and does not use an ORAM-like construction. We also explain why Sophos is an optimal point on the security/performance tradeoff for SSE. Finally, an implementation and evaluation results demonstrate its practical efficiency.
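
The forward-privacy trick is concrete enough to sketch: tokens for a keyword form a chain walked backwards by the client with a trapdoor permutation (instantiated here with RSA), so the server, given the newest token at search time, can recover all older tokens but never future ones. The Python toy below is a heavily simplified illustration with insecure parameters, not Sophos itself; real constructions also mask the stored identifiers.

    import hashlib, random

    # Toy RSA trapdoor permutation -- tiny, insecure, for illustration only.
    p, q = 1009, 1013
    N, phi = p * q, (p - 1) * (q - 1)
    e = 65537
    d = pow(e, -1, phi)                         # modular inverse, Python 3.8+

    def pi(x):      return pow(x, e, N)         # public direction (server)
    def pi_inv(x):  return pow(x, d, N)         # trapdoor direction (client only)

    def h(*parts):
        return hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()

    index, state = {}, {}                       # server index; client state

    def client_add(w, doc_id):
        if w not in state:
            st, c = random.randrange(2, N), 0   # fresh chain head
        else:
            st, c = pi_inv(state[w][0]), state[w][1] + 1   # walk backwards
        state[w] = (st, c)
        index[h("addr", h("kw", w), st)] = doc_id

    def server_search(kw, st, count):
        results = []
        for _ in range(count + 1):
            results.append(index.get(h("addr", kw, st)))
            st = pi(st)                         # public step toward older tokens
        return results

    client_add("heart", "doc1"); client_add("heart", "doc7")
    st, c = state["heart"]
    print(server_search(h("kw", "heart"), st, c))   # ['doc7', 'doc1']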

Schrenk, B., Pacher, C..  2018.  1 Gb/s All-LED Visible Light Communication System. 2018 Optical Fiber Communications Conference and Exposition (OFC). :1–3.

We evaluate the use of LEDs intended for illumination as low-cost filtered optical detectors. An optical wireless system that is exclusively based on commercial off-the-shelf 5-mm R/G/B LEDs is experimentally demonstrated for Gb/s close-proximity transmission.
Doerr, Carola, Lengler, Johannes.  2016.  The (1+1) Elitist Black-Box Complexity of LeadingOnes. Proceedings of the Genetic and Evolutionary Computation Conference 2016. :1131–1138.

One important goal of black-box complexity theory is the development of complexity models that allow one to derive meaningful lower bounds for whole classes of randomized search heuristics. Complementing classical runtime analysis, black-box models help us understand how algorithmic choices such as the population size, the variation operators, or the selection rules influence the optimization time. One example of such a result is the Ω(n log n) lower bound for unary unbiased algorithms on functions with a unique global optimum [Lehre/Witt, GECCO 2010], which tells us that higher-arity operators or biased sampling strategies are needed when trying to beat this bound. For lack of analysis techniques, almost no non-trivial bounds are known for other restricted models. Proving such bounds therefore remains one of the main challenges in black-box complexity theory. With this paper we contribute to the technical toolbox for lower-bound computations by proposing a new type of information-theoretic argument. We regard the permutation- and bit-invariant version of LeadingOnes and prove that its (1+1) elitist black-box complexity is Ω(n²), a bound that is matched by (1+1)-type evolutionary algorithms. The (1+1) elitist complexity of LeadingOnes is thus considerably larger than its unrestricted one, which is known to be of order n log log n [Afshani et al., 2013].
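
The matching upper bound comes from the classic (1+1) EA, which is a (1+1) elitist algorithm. A minimal Python sketch on LeadingOnes, whose average optimization time indeed grows quadratically in n:

    import random

    def leading_ones(x):
        """Fitness: number of consecutive 1-bits from the left."""
        count = 0
        for bit in x:
            if bit == 0:
                break
            count += 1
        return count

    def one_plus_one_ea(n, seed=0):
        """Flip each bit independently with probability 1/n; keep the
        offspring iff it is at least as fit (elitist selection)."""
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        steps = 0
        while leading_ones(x) < n:
            y = [b ^ (rng.random() < 1 / n) for b in x]
            if leading_ones(y) >= leading_ones(x):
                x = y
            steps += 1
        return steps

    for n in (50, 100):
        print(n, one_plus_one_ea(n))   # steps roughly quadruple as n doubles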

Whatmough, P. N., Lee, S. K., Lee, H., Rama, S., Brooks, D., Wei, G. Y..  2017.  14.3 A 28nm SoC with a 1.2GHz 568nJ/prediction sparse deep-neural-network engine with >0.1 timing error rate tolerance for IoT applications. 2017 IEEE International Solid-State Circuits Conference (ISSCC). :242–243.

This paper presents a 28nm SoC with a programmable FC-DNN accelerator design that demonstrates: (1) HW support to exploit data sparsity by eliding unnecessary computations (4× energy reduction); (2) improved algorithmic error tolerance using a sign-magnitude number format for weights and datapath computation; (3) improved circuit-level timing-violation tolerance in datapath logic via time borrowing; (4) combined circuit and algorithmic resilience with Razor timing-violation detection to reduce energy via VDD scaling or increase throughput via FCLK scaling; and (5) high classification accuracy (98.36% on the MNIST test set) while tolerating aggregate timing-violation rates above 10⁻¹. The accelerator achieves a minimum energy of 0.36μJ/pred at 667MHz, maximum throughput at 1.2GHz and 0.57μJ/pred, or a 10%-margined operating point at 1GHz and 0.58μJ/pred.