S
Zhang, B., Ye, J., Feng, C., Tang, C..  2017.  S2F: Discover Hard-to-Reach Vulnerabilities by Semi-Symbolic Fuzz Testing. 2017 13th International Conference on Computational Intelligence and Security (CIS). :548–552.
Fuzz testing is a popular program testing technique. However, it struggles to find hard-to-reach vulnerabilities that are nested behind complex branches. In this paper, we propose semi-symbolic fuzz testing to discover such hard-to-reach vulnerabilities. Our method groups inputs into high-frequency and low-frequency ones; symbolic execution is then used to solve only the uncovered branches, which mitigates the path explosion problem. In particular, to retain the advantages of fuzz testing, our method locates the critical branch for each low-frequency input and corrects the generated test cases so that they satisfy the branch condition. We also implemented a prototype, S2F, and the experimental results show that S2F achieves a 17.70% gain in coverage and discovers more hard-to-reach vulnerabilities than other vulnerability detection tools on our benchmark.
Hoang, Thang, Ozkaptan, Ceyhun D., Yavuz, Attila A., Guajardo, Jorge, Nguyen, Tam.  2017.  S3ORAM: A Computation-Efficient and Constant Client Bandwidth Blowup ORAM with Shamir Secret Sharing. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :491–505.

Oblivious Random Access Machine (ORAM) enables a client to access her data without leaking her access patterns. Existing client-efficient ORAMs either achieve O(log N) client-server communication blowup without heavy computation, or O(1) blowup but with expensive homomorphic encryptions. It has been shown that O(log N) bandwidth blowup might not be practical for certain applications, while schemes with O(1) communication blowup incur even more delay due to costly homomorphic operations. In this paper, we propose a new distributed ORAM scheme referred to as Shamir Secret Sharing ORAM (S3ORAM), which achieves O(1) client-server bandwidth blowup and O(1) blocks of client storage without relying on costly partial homomorphic encryptions. S3ORAM harnesses Shamir Secret Sharing, tree-based ORAM structure and a secure multi-party multiplication protocol to eliminate costly homomorphic operations and, therefore, achieves O(1) client-server bandwidth blowup with a high computational efficiency. We conducted comprehensive experiments to assess the performance of S3ORAM and its counterparts on actual cloud environments, and showed that S3ORAM achieves three orders of magnitude lower end-to-end delay compared to alternatives with O(1) client communication blowup (Onion-ORAM), while it is one order of magnitude faster than Path-ORAM for a network with a moderate bandwidth quality. We have released the implementation of S3ORAM for further improvement and adaptation.
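
S3ORAM's core building block, Shamir Secret Sharing, splits a block into n shares so that any t+1 shares reconstruct it while t or fewer reveal nothing. The following minimal Java sketch (an illustrative implementation over a small prime field, not the authors' released code) shows sharing via a random degree-t polynomial and reconstruction via Lagrange interpolation at zero.

import java.math.BigInteger;
import java.security.SecureRandom;

// Minimal Shamir Secret Sharing sketch over GF(p); illustrative only, not S3ORAM's implementation.
public class ShamirSketch {
    static final BigInteger P = BigInteger.valueOf(2147483647L); // prime modulus 2^31 - 1
    static final SecureRandom RNG = new SecureRandom();

    // Split `secret` into n shares with threshold t: evaluate a random degree-t
    // polynomial whose constant term is the secret at the points x = 1..n.
    static BigInteger[] share(BigInteger secret, int n, int t) {
        BigInteger[] coeff = new BigInteger[t + 1];
        coeff[0] = secret.mod(P);
        for (int i = 1; i <= t; i++) coeff[i] = new BigInteger(P.bitLength() - 1, RNG);
        BigInteger[] shares = new BigInteger[n];
        for (int x = 1; x <= n; x++) {
            BigInteger y = BigInteger.ZERO;
            for (int i = t; i >= 0; i--) {                       // Horner evaluation
                y = y.multiply(BigInteger.valueOf(x)).add(coeff[i]).mod(P);
            }
            shares[x - 1] = y;
        }
        return shares;
    }

    // Reconstruct the secret from the first t+1 shares (held at x = 1..t+1)
    // by Lagrange interpolation of the polynomial at x = 0.
    static BigInteger reconstruct(BigInteger[] shares, int t) {
        BigInteger secret = BigInteger.ZERO;
        for (int i = 0; i <= t; i++) {
            BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
            for (int j = 0; j <= t; j++) {
                if (i == j) continue;
                num = num.multiply(BigInteger.valueOf(-(j + 1))).mod(P);  // (0 - x_j)
                den = den.multiply(BigInteger.valueOf(i - j)).mod(P);     // (x_i - x_j)
            }
            BigInteger basis = num.multiply(den.modInverse(P)).mod(P);
            secret = secret.add(shares[i].multiply(basis)).mod(P);
        }
        return secret;
    }

    public static void main(String[] args) {
        BigInteger secret = BigInteger.valueOf(123456789);
        BigInteger[] shares = share(secret, 5, 2);   // e.g., 5 servers, any 3 reconstruct
        System.out.println(reconstruct(shares, 2));  // prints 123456789
    }
}

The linear structure of such shares is what lets the servers evaluate retrieval queries with a secure multi-party multiplication protocol instead of costly homomorphic encryption.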

Vliegen, Jo, Rabbani, Md Masoom, Conti, Mauro, Mentens, Nele.  2019.  SACHa: Self-Attestation of Configurable Hardware. 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE). :746–751.
Device attestation is a procedure to verify whether an embedded device is running the intended application code, with the aim of protecting against both physical attacks and remote attacks on the embedded software. With the wide adoption of Field-Programmable Gate Arrays (FPGAs), hardware also became configurable, and hence susceptible to attacks, just like software. In addition, an upcoming trend for hardware-based attestation is the use of configurable FPGA hardware. Therefore, in order to attest a whole system that makes use of FPGAs, the status of both the software and the hardware needs to be verified, without the availability of a tamper-resistant hardware module. In this paper, we propose a solution in which a prover core on the FPGA performs an attestation of the entire FPGA, including a self-attestation. This way, the FPGA can be used as a tamper-resistant hardware module to perform hardware-based attestation of a processor, resulting in protection of the entire hardware/software system against malicious code updates.
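
The abstract does not detail SACHa's protocol; as a generic illustration of challenge-response attestation (not the paper's scheme), the sketch below shows a verifier issuing a fresh nonce and a prover returning an HMAC over its firmware/bitstream image and that nonce, assuming a symmetric key shared out of band.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

// Generic challenge-response attestation sketch (illustrative only).
public class AttestationSketch {
    private static final byte[] SHARED_KEY = "demo-key-not-for-production".getBytes(StandardCharsets.UTF_8);

    // Prover side: bind the current firmware/bitstream image to the verifier's nonce.
    static byte[] attest(byte[] firmwareImage, byte[] nonce) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SHARED_KEY, "HmacSHA256"));
        mac.update(nonce);              // freshness: prevents replay of old reports
        return mac.doFinal(firmwareImage);
    }

    // Verifier side: recompute the expected report from the known-good image.
    static boolean verify(byte[] goodImage, byte[] nonce, byte[] report) throws Exception {
        return MessageDigest.isEqual(attest(goodImage, nonce), report);
    }

    public static void main(String[] args) throws Exception {
        byte[] image = "bitstream-v1".getBytes(StandardCharsets.UTF_8);
        byte[] nonce = new byte[16];
        new SecureRandom().nextBytes(nonce);
        byte[] report = attest(image, nonce);
        System.out.println("attestation ok: " + verify(image, nonce, report));
    }
}
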
Procter, Sam, Vasserman, Eugene Y., Hatcliff, John.  2017.  SAFE and Secure: Deeply Integrating Security in a New Hazard Analysis. Proceedings of the 12th International Conference on Availability, Reliability and Security. :66:1–66:10.

Safety-critical system engineering and traditional safety analyses have for decades been focused on problems caused by natural or accidental phenomena. Security analyses, on the other hand, focus on preventing intentional, malicious acts that reduce system availability, degrade user privacy, or enable unauthorized access. In the context of safety-critical systems, safety and security are intertwined, e.g., injecting malicious control commands may lead to system actuation that causes harm. Despite this intertwining, safety and security concerns have traditionally been designed and analyzed independently of one another, and examined in very different ways. In this work we examine a new hazard analysis technique—Systematic Analysis of Faults and Errors (SAFE)—and its deep integration of safety and security concerns. This is achieved by explicitly incorporating a semantic framework of error "effects" that unifies an adversary model long used in security contexts with a fault/error categorization that aligns with previous approaches to hazard analysis. This categorization enables analysts to separate the immediate, component-level effects of errors from their cause or precise deviation from specification. This paper details SAFE's integrated handling of safety and security through a) a methodology grounded in—and adaptable to—different approaches from the literature, b) explicit documentation of system assumptions which are implicit in other analyses, and c) increasing the tractability of analyzing modern, complex, component-based software-driven systems. We then discuss how SAFE's approach supports the long-term goals of increased compositionality and formalization of safety/security analysis.

Khatchadourian, R., Tang, Y., Bagherzadeh, M., Ahmed, S..  2019.  Safe Automated Refactoring for Intelligent Parallelization of Java 8 Streams. 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). :619-630.

Streaming APIs are becoming more pervasive in mainstream Object-Oriented programming languages. For example, the Stream API introduced in Java 8 allows for functional-like, MapReduce-style operations in processing both finite and infinite data structures. However, using this API efficiently involves subtle considerations like determining when it is best for stream operations to run in parallel, when running operations in parallel can be less efficient, and when it is safe to run in parallel due to possible lambda expression side-effects. In this paper, we present an automated refactoring approach that assists developers in writing efficient stream code in a semantics-preserving fashion. The approach, based on a novel data ordering and typestate analysis, consists of preconditions for automatically determining when it is safe and possibly advantageous to convert sequential streams to parallel and unorder or de-parallelize already parallel streams. The approach was implemented as a plug-in to the Eclipse IDE, uses the WALA and SAFE analysis frameworks, and was evaluated on 11 Java projects consisting of approximately 642K lines of code. We found that 57 of 157 candidate streams (36.31%) were refactorable, and an average speedup of 3.49 on performance tests was observed. The results indicate that the approach is useful in optimizing stream code to its full potential.
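
As a concrete illustration of the kind of transformation such a refactoring performs (a hand-written Java 8 example, not output of the authors' Eclipse plug-in), the snippet below converts a sequential stream pipeline to a parallel one; the change is only safe here because the lambdas are stateless, free of side effects, and the terminal operation does not depend on encounter order.

import java.util.Arrays;
import java.util.List;

public class StreamRefactoringExample {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("alpha", "beta", "gamma", "delta", "epsilon");

        // Before: sequential stream.
        long shortWords = words.stream()
                .filter(w -> w.length() <= 5)        // stateless, side-effect-free predicate
                .count();

        // After: parallelized. Safe because the pipeline has no shared mutable state
        // and count() is order-insensitive; an order-sensitive terminal operation
        // would require unordering or would block the refactoring.
        long shortWordsParallel = words.parallelStream()
                .filter(w -> w.length() <= 5)
                .count();

        System.out.println(shortWords + " == " + shortWordsParallel);
    }
}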

van der Linden, Dirk, Rashid, Awais, Williams, Emma, Warinschi, Bogdan.  2018.  Safe Cryptography for All: Towards Visual Metaphor Driven Cryptography Building Blocks. Proceedings of the 1st International Workshop on Security Awareness from Design to Deployment. :41-44.

In this vision paper, we focus on a key aspect of the modern software developer's potential to write secure software: their (lack of) success in securely using cryptography APIs. In particular, we note that most ongoing research tends to focus on identifying concrete problems software developers experience, and providing workable solutions, but that such solutions still require developers to identify the appropriate API calls to make and, worse, to be familiar with and configure sometimes obscure parameters of such calls. In contrast, we envision identifying and employing targeted visual metaphors to allow developers to simply select the most appropriate cryptographic functionality they need.

Gang Han, Haibo Zeng, Yaping Li, Wenhua Dou.  2014.  SAFE: Security-Aware FlexRay Scheduling Engine. Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014. :1-4.

In this paper, we propose SAFE (Security-Aware FlexRay scheduling Engine), which provides a problem definition and a design framework for FlexRay static-segment scheduling that address the new challenge of security. From a high-level specification of the application, the architecture and communication middleware are synthesized to satisfy security requirements, in addition to extensibility, costs, and end-to-end latencies. The proposed design process is applied to two industrial case studies, consisting of a set of active safety functions and an X-by-wire system, respectively.

Sheff, Isaac, Magrino, Tom, Liu, Jed, Myers, Andrew C., van Renesse, Robbert.  2016.  Safe Serializable Secure Scheduling: Transactions and the Trade-Off Between Security and Consistency. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :229–241.

Modern applications often operate on data in multiple administrative domains. In this federated setting, participants may not fully trust each other. These distributed applications use transactions as a core mechanism for ensuring reliability and consistency with persistent data. However, the coordination mechanisms needed for transactions can both leak confidential information and allow unauthorized influence. By implementing a simple attack, we show these side channels can be exploited. However, our focus is on preventing such attacks. We explore secure scheduling of atomic, serializable transactions in a federated setting. While we prove that no protocol can guarantee security and liveness in all settings, we establish conditions for sets of transactions that can safely complete under secure scheduling. Based on these conditions, we introduce staged commit, a secure scheduling protocol for federated transactions. This protocol avoids insecure information channels by dividing transactions into distinct stages. We implement a compiler that statically checks code to ensure it meets our conditions, and a system that schedules these transactions using the staged commit protocol. Experiments on this implementation demonstrate that realistic federated transactions can be scheduled securely, atomically, and efficiently.

Stein, Benno, Clapp, Lazaro, Sridharan, Manu, Chang, Bor-Yuh Evan.  2018.  Safe Stream-Based Programming with Refinement Types. Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. :565-576.

In stream-based programming, data sources are abstracted as a stream of values that can be manipulated via callback functions. Stream-based programming is exploding in popularity, as it provides a powerful and expressive paradigm for handling asynchronous data sources in interactive software. However, high-level stream abstractions can also make it difficult for developers to reason about control- and data-flow relationships in their programs. This is particularly impactful when asynchronous stream-based code interacts with thread-limited features such as UI frameworks that restrict UI access to a single thread, since the threading behavior of streaming constructs is often non-intuitive and insufficiently documented. In this paper, we present a type-based approach that can statically prove the thread-safety of UI accesses in stream-based software. Our key insight is that the fluent APIs of stream-processing frameworks enable the tracking of threads via type-refinement, making it possible to reason automatically about what thread a piece of code runs on – a difficult problem in general. We implement the system as an annotation-based Java typechecker for Android programs built upon the popular ReactiveX framework and evaluate its efficacy by annotating and analyzing 8 open-source apps, where we find 33 instances of unsafe UI access while incurring an annotation burden of only one annotation per 186 source lines of code. We also report on our experience applying the typechecker to two much larger apps from the Uber Technologies, Inc. codebase, where it currently runs on every code change and blocks changes that introduce potential threading bugs.
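
To make the problem concrete (a generic RxJava 2 example, not code from the paper's evaluation), the snippet below shows the kind of pipeline such a typechecker reasons about: the observeOn operator determines which thread the subscriber's callback runs on, so a UI access in that callback is safe only if the chosen scheduler is the UI thread's. The typechecker's role is to propagate that scheduler information as a type refinement and reject the unsafe variant at compile time.

import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;

// Illustrative RxJava 2 pipeline (assumes the io.reactivex:rxjava:2.x dependency).
public class StreamThreadingExample {
    public static void main(String[] args) throws InterruptedException {
        Observable.fromCallable(StreamThreadingExample::loadData)
                .subscribeOn(Schedulers.io())        // producer work runs on an I/O thread
                .observeOn(Schedulers.single())      // downstream callbacks hop to one fixed thread;
                                                     // on Android this would be AndroidSchedulers.mainThread(),
                                                     // and only then would a UI update in subscribe() be safe
                .subscribe(data ->
                        System.out.println(Thread.currentThread().getName() + ": " + data));
        Thread.sleep(500); // keep the JVM alive long enough for the async pipeline to finish
    }

    static String loadData() {
        return "payload loaded on " + Thread.currentThread().getName();
    }
}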

Huang, Shiyou, Guo, Jianmei, Li, Sanhong, Li, Xiang, Qi, Yumin, Chow, Kingsum, Huang, Jeff.  2019.  SafeCheck: Safety Enhancement of Java Unsafe API. 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). :889–899.

Java is a safe programming language: it provides bytecode verification and enforces memory protection. For instance, programmers cannot directly access memory but have to use object references. Yet, the Java runtime provides an Unsafe API as a backdoor for developers to access low-level system code. Whereas the Unsafe API is designed to be used by the Java core library, a growing community of third-party libraries use it to achieve high performance. The Unsafe API is powerful but dangerous; if used improperly, it leads to data corruption, resource leaks and difficult-to-diagnose JVM crashes. In this work, we study the Unsafe crash patterns and propose a memory checker to enforce memory safety, thus avoiding the JVM crashes caused by misuse of the Unsafe API at the bytecode level. We evaluate our technique on real crash cases from the OpenJDK bug system and real-world applications from AJDK. Our tool reduces the effort for developers to diagnose Unsafe-related crashes from several days to a few minutes. We also evaluate the runtime overhead of our tool on projects using intensive Unsafe operations, and the results show that our tool causes negligible perturbation to the execution of the applications.
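
The kind of misuse the paper targets can be reproduced with sun.misc.Unsafe directly; the sketch below (a generic illustration, not SafeCheck itself) obtains the Unsafe instance via reflection and shows how an out-of-bounds off-heap write, which the JVM will not catch, can be prevented by an explicit bounds guard of the sort a memory checker would insert at the bytecode level.

import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Illustrative only: off-heap access through sun.misc.Unsafe with a manual bounds guard.
public class UnsafeSketch {
    public static void main(String[] args) throws Exception {
        // The Unsafe singleton is not public; third-party libraries typically obtain it via reflection.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long length = 4;                                   // four 8-byte slots
        long base = unsafe.allocateMemory(length * 8);     // raw, unmanaged off-heap memory
        try {
            for (long i = 0; i < length; i++) {
                checkedPutLong(unsafe, base, length, i, i * i);
            }
            // An index of 4 would write past the allocation; the JVM performs no check,
            // so the write could silently corrupt memory or crash the process later:
            // checkedPutLong(unsafe, base, length, 4, 42);   // throws instead of corrupting
            System.out.println(unsafe.getLong(base + 3 * 8)); // prints 9
        } finally {
            unsafe.freeMemory(base);                        // forgetting this leaks native memory
        }
    }

    // The explicit guard that a bytecode-level memory checker could insert automatically.
    static void checkedPutLong(Unsafe unsafe, long base, long length, long index, long value) {
        if (index < 0 || index >= length) {
            throw new IndexOutOfBoundsException("index " + index + " out of [0, " + length + ")");
        }
        unsafe.putLong(base + index * 8, value);
    }
}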

Multari, Nicholas J., Singhal, Anoop, Manz, David O..  2016.  SafeConfig'16: Testing and Evaluation for Active and Resilient Cyber Systems. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1871–1872.

The premise of this year's SafeConfig Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective for this workshop is the exploration and discussion of scientifically sound testing regimen(s) that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates at the very least modification, and potentially, wholesale new developments to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable and meaningful to both researchers and practitioners.

Multari, Nicholas J., Singhal, Anoop, Manz, David O., Cowles, Robert, Cuellar, Jorge, Oehmen, Christopher, Shannon, Gregory.  2016.  SafeConfig'16: Testing and Evaluation for Active & Resilient Cyber Systems Panel Verification of Active and Resilient Systems: Practical or Utopian? Proceedings of the 2016 ACM Workshop on Automated Decision Making for Active Cyber Defense. :53–53.

The premise of the SafeConfig'16 Workshop is that existing tools and methods for security assessments are necessary but insufficient for scientifically rigorous testing and evaluation of resilient and active cyber systems. The objective for this workshop is the exploration and discussion of scientifically sound testing regimen(s) that will continuously and dynamically probe, attack, and "test" the various resilient and active technologies. This adaptation and change in focus necessitates at the very least modification, and potentially, wholesale new developments to ensure that resilient- and agile-aware security testing is available to the research community. All testing, validation and experimentation must also be repeatable, reproducible, subject to scientific scrutiny, measurable and meaningful to both researchers and practitioners. The workshop will convene a panel of experts to explore this concept. The topic will be discussed from three different perspectives. One perspective is that of the practitioner. We will explore whether active and resilient technologies are deployed or planned for deployment, and whether the verification methodology affects that decision. The second perspective will be that of the research community. We will address the shortcomings of current approaches and the research directions needed to address the practitioner's concerns. The third perspective is that of the policy community. Specifically, we will explore the dynamics between technology, verification, and policy.

Ahmad, Muhammad Aminu, Woodhead, Steve, Gan, Diane.  2016.  A Safeguard Against Fast Self-propagating Malware. Proceedings of the 6th International Conference on Communication and Network Security. :65–69.

This paper presents a detection and containment mechanism for fast self-propagating network worm malware. The detection part of the mechanism uses two categories of network host activities to identify worm behaviour in a network. Upon an identified worm activity in a network, a data-link containment system is used to isolate the internal source of infection, and a network level containment system is used to block inbound worm datagrams. The mechanism has been demonstrated using a software prototype. A number of worm experiments have been conducted to evaluate the prototype. The empirical results show the effectiveness of the developed mechanism in containing fast network worm malware at an early stage with almost no false positives.
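
The abstract does not spell out the two host-activity categories used by the detector; as a generic illustration of rate-based worm detection (not the paper's exact mechanism), the sketch below flags a host once the number of distinct destinations it contacts within a time window exceeds a threshold, which is the point at which a containment action would be triggered.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Generic rate-based worm detection sketch; threshold and window handling are illustrative.
public class WormDetectorSketch {
    private static final int MAX_DISTINCT_DESTS = 20;       // per window, per host (assumed value)
    private final Map<String, Set<String>> destsPerHost = new HashMap<>();

    // Feed one observed connection attempt; returns true if the source looks worm-like.
    boolean observe(String srcHost, String dstHost) {
        Set<String> dests = destsPerHost.computeIfAbsent(srcHost, k -> new HashSet<>());
        dests.add(dstHost);
        return dests.size() > MAX_DISTINCT_DESTS;           // fast scanners fan out to many hosts
    }

    // Called at the end of each time window to reset the counters.
    void resetWindow() {
        destsPerHost.clear();
    }

    public static void main(String[] args) {
        WormDetectorSketch detector = new WormDetectorSketch();
        boolean flagged = false;
        for (int i = 0; i < 50 && !flagged; i++) {
            flagged = detector.observe("10.0.0.5", "10.0.1." + i);  // simulated scan
        }
        // A real system would now isolate 10.0.0.5 at the data link and block inbound worm datagrams.
        System.out.println("containment triggered: " + flagged);
    }
}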

Jansen, Rob, Johnson, Aaron.  2016.  Safely Measuring Tor. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1553–1567.

Tor is a popular network for anonymous communication. The usage and operation of Tor is not well-understood, however, because its privacy goals make common measurement approaches ineffective or risky. We present PrivCount, a system for measuring the Tor network designed with user privacy as a primary goal. PrivCount securely aggregates measurements across Tor relays and over time to produce differentially private outputs. PrivCount improves on prior approaches by enabling flexible exploration of many diverse kinds of Tor measurements while maintaining accuracy and privacy for each. We use PrivCount to perform a measurement study of Tor of sufficient breadth and depth to inform accurate models of Tor users and traffic. Our results indicate that Tor has 710,000 users connected but only 550,000 active at a given time, that Web traffic now constitutes 91% of data bytes on Tor, and that the strictness of relays' connection policies significantly affects the type of application data they forward.
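
PrivCount's differentially private outputs rest on noise addition; the snippet below (a minimal standalone sketch, not PrivCount's implementation, which additionally aggregates securely across relays and over time) shows the basic step of adding Laplace noise calibrated to a query's sensitivity and privacy budget epsilon.

import java.security.SecureRandom;

// Minimal Laplace-mechanism sketch for a single counter; illustrative, not PrivCount itself.
public class LaplaceNoiseSketch {
    private static final SecureRandom RNG = new SecureRandom();

    // Draw Laplace(0, scale) noise by inverse-CDF sampling.
    static double laplace(double scale) {
        double u;
        do {
            u = RNG.nextDouble() - 0.5;                  // uniform in (-0.5, 0.5), excluding -0.5
        } while (u == -0.5);
        return -scale * Math.signum(u) * Math.log(1 - 2 * Math.abs(u));
    }

    // Release a count with epsilon-differential privacy for the given sensitivity.
    static double privatize(long trueCount, double sensitivity, double epsilon) {
        return trueCount + laplace(sensitivity / epsilon);
    }

    public static void main(String[] args) {
        long observedConnections = 12345;                // e.g., an aggregated per-relay counter
        double sensitivity = 1.0;                        // assumed: one user changes the count by at most 1
        System.out.println(privatize(observedConnections, sensitivity, 0.1));
    }
}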

Rauscher, Julia, Bauer, Bernhard.  2018.  Safety and Security Architecture Analyses Framework for the Internet of Things of Medical Devices. 2018 IEEE 20th International Conference on e-Health Networking, Applications and Services (Healthcom). :1–3.
The Internet of Things (IoT) is spreading into ever more areas of application. Accordingly, IoT is also being deployed in health care, including ambient assisted living, telemedicine and medical smart homes. However, IoT also involves risks: alongside increased security issues, safety concerns arise. Deploying health care sensors and utilizing medical data creates a strong need for IoT architectures that are free of vulnerabilities, and for identifying weak points as early as possible. To address this, we are developing a safety and security analysis approach that includes a standardized meta model and an IoT safety and security framework comprising a customizable analysis language.
Hui Lin, University of Illinois at Urbana-Champaign, Homa Alemzadeh, IBM TJ Watson, Daniel Chen, University of Illinois at Urbana-Champaign, Zbigniew Kalbarczyk, University of Illinois at Urbana-Champaign, Ravishankar K. Iyer, University of Illinois at Urbana-Champaign.  2016.  Safety-critical Cyber-physical Attacks: Analysis, Detection, and Mitigation. Symposium and Bootcamp for the Science of Security (HotSoS 2016).

Today's cyber-physical systems (CPSs) can have very different characteristics in terms of control algorithms, configurations, underlying infrastructure, communication protocols, and real-time requirements. Despite these variations, they all face the threat of malicious attacks that exploit the vulnerabilities in the cyber domain as footholds to introduce safety violations in the physical processes. In this paper, we focus on a class of attacks that impact the physical processes without introducing anomalies in the cyber domain. We present the common challenges in detecting this type of attack in the contexts of two very different CPSs (i.e., power grids and surgical robots). In addition, we present a general principle for detecting such cyber-physical attacks, which combines the knowledge of both cyber and physical domains to estimate the adverse consequences of malicious activities in a timely manner.
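
As a toy illustration of that principle (not the detection logic from the paper), the sketch below gates each control command by estimating its physical consequence with a simple process model, raising an alert when the projected state would violate a safety limit even though the command itself looks legitimate in the cyber domain. The model, gain, and limits are all assumed values.

// Toy consequence-estimation gate for control commands; model and limits are illustrative.
public class CommandGateSketch {

    // Simple first-order process model: next = current + gain * command (assumed dynamics).
    static double projectNextState(double currentState, double command, double gain) {
        return currentState + gain * command;
    }

    // Accept the command only if the projected physical state stays within safe bounds.
    static boolean isSafe(double currentState, double command) {
        final double GAIN = 0.8;          // illustrative actuator gain
        final double SAFE_MIN = 0.0;      // e.g., minimum allowed pressure or voltage
        final double SAFE_MAX = 100.0;    // e.g., maximum allowed pressure or voltage
        double projected = projectNextState(currentState, command, GAIN);
        return projected >= SAFE_MIN && projected <= SAFE_MAX;
    }

    public static void main(String[] args) {
        double state = 90.0;
        double maliciousCommand = 25.0;   // well-formed in the cyber domain, harmful physically
        if (!isSafe(state, maliciousCommand)) {
            System.out.println("ALERT: command would drive the process outside its safe envelope");
        }
    }
}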

Srisopha, Kamonphop, Phonsom, Chukiat, Lin, Keng, Boehm, Barry.  2019.  Same App, Different Countries: A Preliminary User Reviews Study on Most Downloaded iOS Apps. 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME). :76—80.
Prior work on mobile app reviews has demonstrated that user reviews contain a wealth of information and are seen as a potential source of requirements. However, most of the studies done in this area mainly focused on mining and analyzing user reviews from the US App Store, leaving reviews of users from other countries unexplored. In this paper, we seek to understand if the perception of the same apps between users from other countries and that from the US differs through analyzing user reviews. We retrieve 300,643 user reviews of the 15 most downloaded iOS apps of 2018, published directly by Apple, from nine English-speaking countries over the course of 5 months. We manually classify 3,358 reviews into several software quality and improvement factors. We leverage a random forest based algorithm to identify factors that can be used to differentiate reviews between the US and other countries. Our preliminary results show that all countries have some factors that are proportionally inconsistent with the US.
Tran, D. T., Waris, M. A., Gabbouj, M., Iosifidis, A..  2017.  Sample-Based Regularization for Support Vector Machine Classification. 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA). :1–6.

In this paper, we propose a new regularization scheme for the well-known Support Vector Machine (SVM) classifier that operates on the training sample level. The proposed approach is motivated by the fact that Maximum Margin-based classification defines decision functions as a linear combination of the selected training data and, thus, the variations on training sample selection directly affect generalization performance. We show that the exploitation of the proposed regularization scheme is well motivated and intuitive. Experimental results show that the proposed regularization scheme outperforms standard SVM in human action recognition tasks as well as classical recognition problems.
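
For reference, the standard soft-margin SVM objective that any such scheme starts from is shown below; under a sample-level reading of the abstract, the single penalty parameter C would be replaced by per-sample weights C_i (the paper's exact formulation is not reproduced here).

\[
\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \;\; \tfrac{1}{2}\lVert \mathbf{w} \rVert^{2} + C \sum_{i=1}^{N} \xi_{i}
\quad \text{s.t.} \quad y_{i}\bigl(\mathbf{w}^{\top}\phi(\mathbf{x}_{i}) + b\bigr) \ge 1 - \xi_{i}, \;\; \xi_{i} \ge 0,
\]

with a sample-weighted regularizer replacing \(C \sum_{i} \xi_{i}\) by \(\sum_{i} C_{i}\, \xi_{i}\).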

Haah, Jeongwan, Harrow, Aram W., Ji, Zhengfeng, Wu, Xiaodi, Yu, Nengkun.  2016.  Sample-optimal Tomography of Quantum States. Proceedings of the Forty-eighth Annual ACM Symposium on Theory of Computing. :913–925.

It is a fundamental problem to decide how many copies of an unknown mixed quantum state are necessary and sufficient to determine the state. This is the quantum analogue of the problem of estimating a probability distribution given some number of samples. Previously, it was known only that estimating states to error ε in trace distance required O(dr²/ε²) copies for a d-dimensional density matrix of rank r. Here, we give a measurement scheme (POVM) that uses O((dr/δ) ln(d/δ)) copies to estimate ρ to error δ in infidelity. This implies that O((dr/ε²) ln(d/ε)) copies suffice to achieve error ε in trace distance. For fixed d, our measurement can be implemented on a quantum computer in time polynomial in n. We also use the Holevo bound from quantum information theory to prove a lower bound of Ω((dr/ε²)/log(d/rε)) copies needed to achieve error ε in trace distance. This implies a lower bound of Ω((dr/δ)/log(d/rδ)) for the estimation error δ in infidelity. These match our upper bounds up to log factors. Our techniques can also show an Ω(r²d/δ) lower bound for measurement strategies in which each copy is measured individually and then the outcomes are classically post-processed to produce an estimate. This matches the known achievability results and proves for the first time that such “product” measurements have asymptotically suboptimal scaling with d and r.
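
Collecting the copy-complexity bounds stated in the abstract in one place (d is the dimension, r the rank, δ the infidelity error, ε the trace-distance error):

\[
\text{Upper bounds:}\quad O\!\left(\frac{dr}{\delta}\ln\frac{d}{\delta}\right) \text{ copies (infidelity } \delta\text{)},
\qquad O\!\left(\frac{dr}{\epsilon^{2}}\ln\frac{d}{\epsilon}\right) \text{ copies (trace distance } \epsilon\text{)};
\]
\[
\text{Lower bounds:}\quad \Omega\!\left(\frac{dr/\epsilon^{2}}{\log(d/r\epsilon)}\right) \text{ (trace distance)},
\quad \Omega\!\left(\frac{dr/\delta}{\log(d/r\delta)}\right) \text{ (infidelity)},
\quad \Omega\!\left(\frac{r^{2}d}{\delta}\right) \text{ (product measurements)}.
\]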

Wang, Hui, Yan, Qiurong, Li, Bing, Yuan, Chenglong, Wang, Yuhao.  2019.  Sampling Time Adaptive Single-Photon Compressive Imaging. IEEE Photonics Journal. 11:1–10.
We propose a time-adaptive sampling method and demonstrate a sampling-time-adaptive single-photon compressive imaging system. In order to achieve self-adapting adjustment of the sampling time, a threshold on the accuracy of the light-intensity estimate is derived. Based on this threshold, a sampling control module is implemented on a field-programmable gate array. Finally, the advantage of the time-adaptive sampling method is demonstrated experimentally. Imaging experiments show that the time-adaptive sampling method automatically adjusts the sampling time to the light intensity of the imaged object, yielding better image quality and avoiding a speculative choice of sampling time.
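
The paper's exact threshold is not reproduced in the abstract; as an illustrative sketch only, assuming Poisson photon-counting statistics in which the relative error of an intensity estimate scales as 1/sqrt(N) for N detected photons, the loop below extends the sampling time for a pixel until that error estimate drops below a target threshold.

import java.util.Random;

// Illustrative adaptive-sampling-time loop under an assumed Poisson photon-count model.
public class AdaptiveSamplingSketch {
    static final Random RNG = new Random(42);

    // Simulated detector: expected photons = intensity (counts/ms) * dwell time (ms).
    static long detectPhotons(double intensityPerMs, double dwellMs) {
        double mean = intensityPerMs * dwellMs;
        // Simple inverse-CDF Poisson sampling (adequate for the small means used here).
        long k = 0;
        double p = Math.exp(-mean), cumulative = p, u = RNG.nextDouble();
        while (u > cumulative) {
            k++;
            p *= mean / k;
            cumulative += p;
        }
        return k;
    }

    // Keep sampling until the relative error 1/sqrt(N) falls below the target.
    static double adaptiveMeasure(double intensityPerMs, double targetRelError, double stepMs) {
        long totalCounts = 0;
        double totalTimeMs = 0;
        while (totalCounts == 0 || 1.0 / Math.sqrt(totalCounts) > targetRelError) {
            totalCounts += detectPhotons(intensityPerMs, stepMs);
            totalTimeMs += stepMs;
        }
        System.out.printf("intensity %.1f: %d counts in %.1f ms%n", intensityPerMs, totalCounts, totalTimeMs);
        return totalCounts / totalTimeMs; // estimated intensity (counts/ms)
    }

    public static void main(String[] args) {
        adaptiveMeasure(50.0, 0.05, 1.0);  // bright pixel: reaches the target quickly
        adaptiveMeasure(2.0, 0.05, 1.0);   // dim pixel: sampling time adapts upward automatically
    }
}
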
Ambrosin, Moreno, Conti, Mauro, Ibrahim, Ahmad, Neven, Gregory, Sadeghi, Ahmad-Reza, Schunter, Matthias.  2016.  SANA: Secure and Scalable Aggregate Network Attestation. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :731–742.

Large numbers of smart connected devices, also known as the Internet of Things (IoT), are permeating our environments (homes, factories, cars, and also our body - with wearable devices) to collect data and act on the insight derived. Ensuring software integrity (including OS, apps, and configurations) on such smart devices is then essential to guarantee both privacy and safety. A key mechanism to protect the software integrity of these devices is remote attestation: A process that allows a remote verifier to validate the integrity of the software of a device. This process usually makes use of a signed hash value of the actual device's software, generated by dedicated hardware. While individual device attestation is a well-established technique, to date integrity verification of a very large number of devices remains an open problem, due to scalability issues. In this paper, we present SANA, the first secure and scalable protocol for efficient attestation of large sets of devices that works under realistic assumptions. SANA relies on a novel signature scheme to allow anyone to publicly verify a collective attestation in constant time and space, for virtually an unlimited number of devices. We substantially improve existing swarm attestation schemes by supporting a realistic trust model where: (1) only the targeted devices are required to implement attestation; (2) compromising any device does not harm others; and (3) all aggregators can be untrusted. We implemented SANA and demonstrated its efficiency on tiny sensor devices. Furthermore, we simulated SANA at large scale, to assess its scalability. Our results show that SANA can provide efficient attestation of networks of 1,000,000 devices, in only 2.5 seconds.

Hassan, Galal, Rashwan, Abdulmonem M., Hassanein, Hossam S..  2019.  SandBoxer: A Self-Contained Sensor Architecture for Sandboxing the Industrial Internet of Things. 2019 IEEE International Conference on Communications Workshops (ICC Workshops). :1–6.
The Industrial Internet of Things (IIoT) has gained significant interest from both the research and industry communities. Such interest comes with a vision of enabling automation and intelligence for futuristic versions of our day-to-day devices. However, realizing this vision demands accelerated research and development of IIoT systems, in which sensor integration, due to the diversity of sensors, imposes a significant roadblock. This roadblock is embodied in both the cost and the time needed to develop an IIoT platform, and it limits the innovation of sensor manufacturers, who must maintain interface compatibility to ensure seamless integration and low development costs. In this paper, we propose an IIoT system architecture (SandBoxer) tailored for sensor integration that draws on a collaborative set of efforts from various technologies and research fields. The paper introduces the concept of "development-sandboxing" as a viable path towards building the foundation for true plug-and-play IIoT. We start by outlining the key characteristics desired to create an architecture that catalyzes IIoT research and development. We then present our vision of the architecture through the use of a sensor-hosted EEPROM and scripting to "sandbox" the sensors, which in turn accelerates sensor integration for developers and creates a broader innovation path for sensor manufacturers. We also discuss multiple design alternatives, challenges, and use cases in both research and industry.