Biblio

Found 164 results

Filters: Keyword is Standards
Á
Ádám, Norbert, Madoš, Branislav, Baláž, Anton, Pavlik, Tomáš.  2017.  Artificial Neural Network Based IDS. 2017 IEEE 15th International Symposium on Applied Machine Intelligence and Informatics (SAMI). :000159–000164.

Network Intrusion Detection Systems (NIDS) are either signature based or anomaly based. The NIDS presented in this paper is an anomaly-based Neural Network Intrusion Detection System (NNIDS). The proposed NNIDS is able to successfully recognize learned malicious activities in a network environment. It was tested against the SYN flood attack, the UDP flood attack and the nMap scanning attack, as well as non-malicious communication.

A
Afanasev, M. Y., Krylova, A. A., Shorokhov, S. A., Fedosov, Y. V., Sidorenko, A. S..  2018.  A Design of Cyber-Physical Production System Prototype Based on an Ethereum Private Network. 2018 22nd Conference of Open Innovations Association (FRUCT). :3–11.

The concept of cyber-physical production systems is widely discussed among researchers and industry experts; however, the implementation options for these systems still rely mainly on obsolete technologies. Although the blockchain is most often associated with cryptocurrency, it is fundamentally wrong to deny the universality of this technology and the prospects for its application in other industries, for example the insurance sector or a number of identity verification services. This article discusses the deployment of the CPPS backbone network based on the Ethereum private blockchain system. The structure of the network is described, as well as how its components interact through smart contracts based on the consumption of cryptocurrency for various operations.

Aksu, M. U., Dilek, M. H., Tatlı, E. İ, Bicakci, K., Dirik, H. İ, Demirezen, M. U., Aykır, T..  2017.  A Quantitative CVSS-Based Cyber Security Risk Assessment Methodology for IT Systems. 2017 International Carnahan Conference on Security Technology (ICCST). :1–8.

IT system risk assessments are indispensable due to increasing cyber threats within our ever-growing IT systems. Moreover, laws and regulations urge organizations to conduct risk assessments regularly. Even though several risk management frameworks and methodologies exist, they are in general high level and do not define the risk metrics, metric values and detailed risk assessment formulas for different risk views. To address this need, we define a novel risk assessment methodology specific to IT systems. Our model is quantitative, both asset and vulnerability centric, and defines low-level and high-level risk metrics. The high-level risk metrics fall into two general categories: base and attack graph-based. In this paper, we provide a detailed explanation of the formulations in each category and make our implemented software publicly available for those interested in applying the proposed methodology to their IT systems.
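
The paper's own metric definitions and formulas are not reproduced here; purely as a hypothetical illustration of an asset- and vulnerability-centric quantitative score (not the authors' formulation), a CVSS base score can be weighted by asset criticality:

# Hypothetical sketch only; the weighting below is NOT the formula from the paper.

def asset_risk(cvss_scores, asset_criticality):
    """Aggregate CVSS base scores (0-10) for one asset, weighted by criticality (0-1)."""
    if not cvss_scores:
        return 0.0
    normalised = [s / 10.0 for s in cvss_scores]
    # Worst case dominates; the average of all findings contributes the rest.
    exposure = 0.5 * max(normalised) + 0.5 * (sum(normalised) / len(normalised))
    return round(10.0 * exposure * asset_criticality, 2)

# Example: a database server (criticality 0.9) with three known vulnerabilities.
print(asset_risk([9.8, 5.3, 7.5], 0.9))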

Al-Haj, Ali, Aziz, Benjamin.  2019.  Enforcing Multilevel Security Policies in Database-Defined Networks using Row-Level Security. 2019 International Conference on Networked Systems (NetSys). :1–6.

Despite the wide range of research and technologies that deal with the problem of routing in computer networks, there remains a gap between the level of network hardware administration and the level of business requirements and constraints. Little has been accomplished in the literature towards direct enforcement of such requirements on the network. This paper presents a new solution for specifying and directly enforcing security policies to control the routing configuration in a software-defined network, using Row-Level Security checks which enable fine-grained security policies on individual rows of database tables. We show, as a first step, how a specific class of such policies, namely multilevel security policies, can be enforced on a database-defined network, which presents an abstraction of a network's configuration as a set of database tables. We show that such policies can be used to control the flow of data in the network in either an upward or downward manner.
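
As a rough sketch of the multilevel-security logic that such row-level policies encode (a model of the policy only, not of the database-defined network or its Row-Level Security implementation; labels and table contents are hypothetical):

# Minimal Python model of the "no read up" rule applied to flow-table rows.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def readable_rows(rows, subject_level):
    """A subject only sees flow-table rows at or below its own clearance level."""
    return [r for r in rows if LEVELS[r["label"]] <= LEVELS[subject_level]]

flows = [
    {"src": "10.0.0.1", "dst": "10.0.1.5", "label": "confidential"},
    {"src": "10.0.2.7", "dst": "10.0.3.9", "label": "top_secret"},
]
print(readable_rows(flows, "secret"))  # only the confidential flow is visible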

Al-Salhi, Y. E. A., Lu, S..  2017.  New Steganography Scheme to Conceal a Large Amount of Secret Messages Using an Improved-AMBTC Algorithm Based on Hybrid Adaptive Neural Networks. 2017 IEEE 3rd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). :112–121.

The term steganography refers to concealing a secret message inside another media file. In this paper, a novel image steganography scheme is proposed, based on adaptive neural networks combined with the Improved Absolute Moment Block Truncation Coding (I-AMBTC) algorithm, and employing the enhanced five edge detection operators with an optimal target for the ANNs. We propose a new image concealing scheme using hybrid adaptive neural networks based on the I-AMBTC method, with the help of two approaches: the relevant edge detection operators and image compression methods. Although many processing steps are involved in our scheme, the quality of the concealed image remains good according to the HVS and PVD measures. The final simulation results are discussed and compared with other related research works on image steganography systems.

Allawi, M. A. A., Hadi, A., Awajan, A..  2015.  MLDED: Multi-layer Data Exfiltration Detection System. 2015 Fourth International Conference on Cyber Security, Cyber Warfare, and Digital Forensic (CyberSec). :107–112.

Due to the growing advancement of crimeware services, computer and network security has become a crucial issue. Detecting sensitive data exfiltration is a principal component of every information protection strategy. In this research, a Multi-Level Data Exfiltration Detection (MLDED) system has been proposed, implemented and tested that can handle different types of insider data leakage threats with staircase difficulty levels and their implications for the organization environment. The proposed system detects exfiltration of data outside an organization's information system, where the main goal is to use the detection results of the MLDED system for digital forensic purposes. The MLDED system consists of three major levels: Hashing, Keyword Extraction and Labeling. However, it only considers certain types of documents, such as plain ASCII text and PDF files. In response to the challenging issue of identifying insider threats, a forensic-readiness data exfiltration system is designed that is capable of detecting and identifying sensitive information leaks. The results show that the proposed system has an overall detection accuracy of 98.93%.
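
As a loose illustration of what a hashing level could check (the sample contents and use of SHA-256 are assumptions, not details from the paper):

import hashlib

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# Fingerprints of documents known to be sensitive.
sensitive = {fingerprint(b"Q3 payroll ..."), fingerprint(b"Board minutes ...")}

def is_exfiltration_candidate(outgoing: bytes) -> bool:
    """Flag an outbound document whose fingerprint matches a known sensitive one."""
    return fingerprint(outgoing) in sensitive

print(is_exfiltration_candidate(b"Q3 payroll ..."))  # True -> raise an alert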

Aranha, Helder, Masi, Massimiliano, Pavleska, Tanja, Sellitto, Giovanni Paolo.  2019.  Enabling Security-by-Design in Smart Grids: An Architecture-Based Approach. 2019 15th European Dependable Computing Conference (EDCC). :177–179.
Energy Distribution Grids are considered critical infrastructure, hence the Distribution System Operators (DSOs) have developed sophisticated engineering practices to improve their resilience. Over the last years, due to the "Smart Grid" evolution, this infrastructure has become a distributed system where prosumers (consumers who produce and share surplus energy through the grid) can plug in distributed energy resources (DERs) and manage a bi-directional flow of data and power enabled by an advanced IT and control infrastructure. This introduces new challenges, as the prosumers possess neither the skills nor the knowledge to assess the risk or secure the environment against cyber-threats. We propose a simple and usable approach based on the Reference Model of Information Assurance & Security (RMIAS) to support the prosumers in the selection of cybersecurity measures. The purpose is to reduce the risk of being directly targeted and to establish collective responsibility among prosumers as grid gatekeepers. The framework moves from a simple risk analysis based on security goals to guidelines for users on adopting adequate security countermeasures. One of the greatest advantages of the approach is that it does not constrain the user to a specific threat model.
Arivazhagan, S., Jebarani, W. S. L., Kalyani, S. V., Abinaya, A. Deiva.  2017.  Mixed chaotic maps based encryption for high crypto secrecy. 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN). :1–6.

In recent years, chaos-based cryptographic algorithms have enabled new and efficient ways to develop secure image encryption techniques. In this paper, we propose a new approach to image encryption based on chaotic maps in order to meet the requirements of secure image encryption. The chaos-based image encryption technique uses simple chaotic maps which are very sensitive to initial conditions. Using mixed chaotic maps, which work through simple substitution and transposition techniques, to encrypt the original image yields better performance with lower computational complexity, which in turn gives high crypto-secrecy. The initial conditions assigned to the chaotic maps act as the seed, and only a receiver that knows this seed can decrypt the message. The results of the experiments, statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way to encrypt images.
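
A minimal sketch of the general idea, assuming a single logistic map and only the substitution step (the paper mixes several chaotic maps and also applies transposition; the parameters here are illustrative):

import numpy as np

def logistic_keystream(x0, n, r=3.99):
    """Iterate the logistic map, which is highly sensitive to the initial condition x0."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return (xs * 255).astype(np.uint8)  # quantise to byte values

def encrypt(pixels, x0):
    ks = logistic_keystream(x0, pixels.size)
    return pixels.flatten() ^ ks        # substitution by XOR; decryption is identical

image = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
cipher = encrypt(image, x0=0.3141592653)
plain = encrypt(cipher.reshape(4, 4), x0=0.3141592653).reshape(4, 4)
assert np.array_equal(plain, image)     # only the holder of the seed x0 recovers the image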

Ashfaq, Qirat, Khan, Rimsha, Farooq, Sehrish.  2019.  A Comparative Analysis of Static Code Analysis Tools That Check Java Code Adherence to Java Coding Standards. 2019 2nd International Conference on Communication, Computing and Digital Systems (C-CODE). :98–103.

The Java programming language is considered highly important due to its extensive use in the development of web, desktop and handheld device applications. Applying Java coding standards to Java code is of great importance, as it creates consistency and, as a result, better development and maintenance. Finding bugs and standards violations matters more at an early stage of software development than at a later stage, when change becomes impossible or too expensive. In this paper, some tools and research work on coding standard analyzers are reviewed. These tools are categorized based on the type of rules they check, namely: style, concurrency, exceptions, quality, security, dependency and general methods of static code analysis. Finally, a list of Java coding standard enforcement tools is analyzed against certain predefined parameters, limited by the scope of this study. This review provides the basis for selecting a static code analysis tool that enforces international Java coding standards such as the Rule of Ten and the JPL Coding Standards. Such tools are particularly important in the development of mission- and safety-critical systems. This work can be very useful for developers in selecting a good tool for Java code analysis, according to their requirements.

B
Boyadjis, B., Bergeron, C., Lecomte, S..  2015.  Auto-synchronized selective encryption of video contents for an improved transmission robustness over error-prone channels. 2015 IEEE International Conference on Image Processing (ICIP). :2969–2973.

Selective encryption designates a technique that scrambles a message's content while preserving its syntax. Such an approach allows encryption to be transparent towards middle-boxes and/or end-user devices, and to fit easily within existing pipelines. In this paper, we propose to apply this property to a real-time diffusion (broadcast) scenario over an RTP session. The main challenge in this setting is preserving the synchronization between encryption and decryption. Our solution is based on the Advanced Encryption Standard in counter mode, which has been modified to fit our auto-synchronization requirement. Setting up the proposed synchronization scheme does not induce any latency and requires no additional bandwidth in the RTP session (no additional information is sent). Moreover, its parallel structure allows decryption to start on any given frame of the video, while leaving considerable room for further optimization.
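
A minimal sketch of the auto-synchronization idea, assuming the counter block is derived from the frame index so that encryptor and decryptor stay aligned without extra signalling (an illustration in the spirit of the abstract, not the authors' exact construction):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)

def process_frame(frame_bytes, frame_index):
    # 16-byte initial counter block derived deterministically from the frame index.
    counter_block = frame_index.to_bytes(16, "big")
    ctx = Cipher(algorithms.AES(key), modes.CTR(counter_block)).encryptor()
    return ctx.update(frame_bytes) + ctx.finalize()  # in CTR mode the same call decrypts

scrambled = process_frame(b"frame payload", frame_index=42)
restored = process_frame(scrambled, frame_index=42)
assert restored == b"frame payload"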

Bakhtin, Vadim V., Isaeva, Ekaterina V..  2019.  New TSBuilder: Shifting towards Cognition. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :179–181.
The paper reviews a project on the automation of term system construction. TSBuilder (Term System Builder) was developed in 2014 as a multilayer Rosenblatt perceptron for supervised machine learning, namely the identification of one- to three-word terms in natural language texts and their rigid categorization. The program is being modified to reduce the rigidity of categorization, which will bring text mining more in line with human thinking. We are expanding the range of parameters (semantic, morphological and syntactic) used for categorization, removing the restriction of the term length to three words, using convolution on a continuous sequence of terms, and presenting the probabilities of a term falling into different categories. The neural network will not assign a single category to a term but give N answers (where N is the number of predefined classes), each of which is a value O ∈ [0, 1] giving the probability that the term belongs to the corresponding class.
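
A tiny sketch of the described output behaviour: one score in [0, 1] per predefined class instead of a single hard label (the per-class sigmoid is used here purely for illustration; the actual TSBuilder architecture may differ):

import numpy as np

def class_probabilities(logits):
    """Map raw network outputs to independent per-class scores in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-np.asarray(logits)))

print(class_probabilities([2.1, -0.4, 0.9]))  # e.g. three candidate term categories
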
Bardia, Vivek, Kumar, C.R.S..  2017.  Process trees & service chains can serve us to mitigate zero day attacks better. 2017 International Conference on Data Management, Analytics and Innovation (ICDMAI). :280–284.
With technology at our fingertips waiting to be exploited, the past decade saw a revolution in Human Computer Interaction. The ease with which a user could interact was the Unique Selling Proposition (USP) of a sales team. Human Computer Interaction has many underlying parameters, such as Data Visualization and Presentation, to deal with. With the race on for better and faster presentation, many frameworks evolved that are now widely used by software developers. As the need grew for user-friendly applications, more and more software professionals were drawn into the front-end sophistication domain. Application frameworks have evolved to such an extent that, with just a few clicks and by feeding in values as per requirements, we are able to produce a commercially usable application in a few minutes. These frameworks generate vast quantities of code in minutes, which leaves a contrail of bugs to be discovered in the future. We have also succumbed to benchmarking in Software Quality Metrics and have made ourselves comfortable with buggy software to be rectified in the future. The exponential evolution of the cyber domain has attracted attackers equally. Average human awareness and knowledge of the cyber domain has also improved due to prolonged exposure to technology for over three decades. As attack sophistication grows and zero-day attacks become more popular than ever, the suffering end users only receive remedial measures in spite of the latest Antivirus, Intrusion Detection and Protection Systems installed. We designed software to display the complete set of services and applications running in the user's Operating System in the most easily perceivable manner, aided by Computer Graphics and Data Visualization techniques. We further designed a study that empowers fence-sitting users with tools to actively participate in protecting themselves from threats. The designed threats drew on the complete threat canvas in some form or other, restricted to the system's functioning. Network threats and any sort of packet transfer to or from the system in the form of a threat were kept out of the scope of this experiment. We discovered that end users have a good idea of their working environment, which can be used to greatly enhance machine learning for zero-day threats and to segment the vast unmarked threat landscape faster for more reliable output.
Barrere, M., Steiner, R. V., Mohsen, R., Lupu, E. C..  2017.  Tracking the Bad Guys: An Efficient Forensic Methodology to Trace Multi-Step Attacks Using Core Attack Graphs. 2017 13th International Conference on Network and Service Management (CNSM). :1–7.

In this paper, we describe an efficient methodology to guide investigators during network forensic analysis. To this end, we introduce the concept of core attack graph, a compact representation of the main routes an attacker can take towards specific network targets. Such compactness allows forensic investigators to focus their efforts on critical nodes that are more likely to be part of attack paths, thus reducing the overall number of nodes (devices, network privileges) that need to be examined. Nevertheless, core graphs also allow investigators to hierarchically explore the graph in order to retrieve different levels of summarised information. We have evaluated our approach over different network topologies varying parameters such as network size, density, and forensic evaluation threshold. Our results demonstrate that we can achieve the same level of accuracy provided by standard logical attack graphs while significantly reducing the exploration rate of the network.
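
As a rough illustration of the pruning idea (not the authors' core-graph construction; nodes and edges below are hypothetical), an attack graph can be reduced to the nodes that can still reach a chosen target:

import networkx as nx

G = nx.DiGraph([
    ("internet", "web-server"), ("web-server", "app-server"),
    ("app-server", "database"), ("printer", "hr-laptop"),
])

target = "database"
core_nodes = {n for n in G if nx.has_path(G, n, target)}
print(core_nodes)  # the printer/hr-laptop branch is irrelevant to this target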

Ben, Yongming, Han, Yanni, Cai, Ning, An, Wei, Xu, Zhen.  2019.  An Online System Dependency Graph Anomaly Detection based on Extended Weisfeiler-Lehman Kernel. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :1–6.
Modern operating systems are typical multitasking systems, running multiple tasks at the same time. Therefore, a large number of system calls belonging to different processes are invoked at the same time. By associating these invocations, one can construct the system dependency graph. In rapidly evolving system dependency graphs, how to quickly find outliers is an urgent issue for intrusion detection. Clustering analysis based on graph similarity helps solve this problem. In this paper, an extended Weisfeiler-Lehman (WL) kernel is proposed. Firstly, an embedded vector with indefinite dimensions is constructed based on the original dependency graph. Then, the vector is compressed with Simhash to generate a fingerprint. Finally, anomaly detection based on clustering is carried out on these fingerprints. Our scheme achieves prominent detection with high efficiency. For validation, we choose StreamSpot, a relevant prior work, as the benchmark and use the same data set for evaluation. Experiments show that our scheme can achieve the highest detection precision of 98% while maintaining a perfect recall performance. Moreover, both quantitative and visual comparisons demonstrate that our scheme clusters better than StreamSpot.
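
A simplified sketch of the Simhash fingerprinting step over a graph's node labels (the label extraction, hash function and 64-bit width are assumptions, not details from the paper):

import hashlib

def simhash(labels, bits=64):
    """Compress a multiset of labels into one fixed-size fingerprint."""
    weights = [0] * bits
    for label in labels:
        h = int.from_bytes(hashlib.md5(label.encode()).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if weights[i] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")  # small distance suggests similar graphs / same cluster

fp1 = simhash(["open->read", "read->send", "send->close"])
fp2 = simhash(["open->read", "read->send", "send->exit"])
print(hamming(fp1, fp2))
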
Besson, Frédéric, Dang, Alexandre, Jensen, Thomas.  2019.  Information-Flow Preservation in Compiler Optimisations. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :230–23012.
Correct compilers perform program transformations preserving input/output behaviours of programs. Yet, correctness does not prevent program optimisations from introducing information-flow leaks that would make the target program more vulnerable to side-channel attacks than the source program. To tackle this problem, we propose a notion of Information-Flow Preserving (IFP) program transformation which ensures that a target program is no more vulnerable to passive side-channel attacks than a source program. To protect against a wide range of attacks, we model an attacker who is granted arbitrary memory accesses for a pre-defined set of observation points. We propose a compositional proof principle for proving that a transformation is IFP. Using this principle, we show how a translation validation technique can be used to automatically verify and even close information-flow leaks introduced by standard compiler passes such as dead-store elimination and register allocation. The technique has been experimentally validated on the CompCert C compiler.
Bhavani, Y., Puppala, Sai Srikar, Krishna, B.Jaya, Madarapu, Srija.  2019.  Modified AES using Dynamic S-Box and DNA Cryptography. 2019 Third International conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC). :164–168.
Today the frequency of technological transformations is very high. In order to cope with them, fast processing and secure algorithms for data exchange are in demand. In this paper, the Advanced Encryption Standard (AES) is modified using DNA cryptography for fast processing, and dynamic S-boxes are introduced to develop an attack-resistant algorithm. This is strengthened by combining symmetric and asymmetric algorithms. Diffie-Hellman key exchange is used for AES key generation and also to generate the secret number used to create the dynamic S-boxes. The proposed algorithm is fast in computation and can resist cryptographic attacks such as linear and differential cryptanalysis.
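
A hedged sketch of the key-agreement step described above: a Diffie-Hellman exchange yields a shared secret from which an AES key and the secret number for shuffling the S-box are derived (the toy group parameters and the derivation are illustrative only):

import hashlib, random, secrets

p, g = 4294967291, 5          # toy prime; real deployments use large standardized groups
a = secrets.randbelow(p - 2) + 2
b = secrets.randbelow(p - 2) + 2
A, B = pow(g, a, p), pow(g, b, p)   # public values exchanged between the parties
shared = pow(B, a, p)
assert shared == pow(A, b, p)       # both sides compute the same secret

digest = hashlib.sha256(str(shared).encode()).digest()
aes_key = digest[:16]                             # 128-bit AES key
sbox_seed = int.from_bytes(digest[16:24], "big")  # secret number for the dynamic S-box

sbox = list(range(256))
random.Random(sbox_seed).shuffle(sbox)            # key-dependent S-box permutation
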
Bi, Q., Huang, Y..  2018.  A Self-organized Shape Formation Method for Swarm Controlling. 2018 37th Chinese Control Conference (CCC). :7205–7209.
This paper presents a new approach for shape formation based on the artificial method. It builds on a basic concept in swarm intelligence: complex behaviors of the swarm can be formed with simple rules designed into the agents. In the framework, the distance image is used to generate not only an attraction field that keeps all the agents within the given shape, but also a repulsive force field among the agents that makes them distribute uniformly. Compared to traditional methods based on centralized control, the algorithm offers distributed and simple computation, convergence and robustness, which makes it very suitable for swarm robots in the real world, considering their limitations in communication, collision avoidance and computation. We also show that some initialization-sensitive methods can be improved in a similar way. The simulation results show that the proposed approach is suitable for convex, non-convex and line shapes.
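
A minimal sketch of the two local rules, assuming a circle stands in for the distance-image attraction field (gains and shape are illustrative, not from the paper):

import numpy as np

def step(agents, radius=1.0, k_attr=0.1, k_rep=0.02):
    new = agents.copy()
    for i, p in enumerate(agents):
        # Attraction: move down the gradient of the distance to the target shape.
        dist_to_shape = np.linalg.norm(p) - radius
        direction = p / (np.linalg.norm(p) + 1e-9)
        force = -k_attr * dist_to_shape * direction
        # Repulsion: push away from nearby agents so they spread out uniformly.
        for j, q in enumerate(agents):
            if i != j:
                d = p - q
                force += k_rep * d / (np.linalg.norm(d) ** 2 + 1e-9)
        new[i] = p + force
    return new

agents = np.random.randn(20, 2)
for _ in range(200):
    agents = step(agents)   # agents settle on the circle, roughly evenly spaced
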
Birnstill, P., Haas, C., Hassler, D., Beyerer, J..  2017.  Introducing Remote Attestation and Hardware-Based Cryptography to OPC UA. 2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). :1–8.

In this paper we investigate whether and how hardware-based roots of trust, namely Trusted Platform Modules (TPMs), can improve the security of the communication protocol OPC UA (Open Platform Communications Unified Architecture) under reasonable assumptions, i.e. the Dolev-Yao attacker model. Our analysis shows that TPMs may serve for generating (RNG) and securely storing cryptographic keys, as crypto-coprocessors for weak systems, and for remote attestation. We propose to include these TPM functions into OPC UA via so-called ConformanceUnits, which can serve as building blocks of profiles used by clients and servers to negotiate the parameters of a session. Finally, we present first results regarding the performance of a client-server communication that includes an additional OPC UA server providing remote attestation of other OPC UA servers.

Boyarinov, K., Hunter, A..  2017.  Security and trust for surveillance cameras. 2017 IEEE Conference on Communications and Network Security (CNS). :384–385.

We address security and trust in the context of a commercial IP camera. We take a hands-on approach, as we not only define abstract vulnerabilities, but we actually implement the attacks on a real camera. We then discuss the nature of the attacks and the root cause; we propose a formal model of trust that can be used to address the vulnerabilities by explicitly constraining compositionality for trust relationships.

Boykov, Y., Isack, H., Olsson, C., Ayed, I. B..  2015.  Volumetric Bias in Segmentation and Reconstruction: Secrets and Solutions. 2015 IEEE International Conference on Computer Vision (ICCV). :1769–1777.

Many standard optimization methods for segmentation and reconstruction compute ML model estimates for appearance or geometry of segments, e.g. Zhu-Yuille [23], Torr [20], Chan-Vese [6], GrabCut [18], Delong et al. [8]. We observe that the standard likelihood term in these formulations corresponds to a generalized probabilistic K-means energy. In learning it is well known that this energy has a strong bias to clusters of equal size [11], which we express as a penalty for KL divergence from a uniform distribution of cardinalities. However, this volumetric bias has been mostly ignored in computer vision. We demonstrate significant artifacts in standard segmentation and reconstruction methods due to this bias. Moreover, we propose binary and multi-label optimization techniques that either (a) remove this bias or (b) replace it by a KL divergence term for any given target volume distribution. Our general ideas apply to continuous or discrete energy formulations in segmentation, stereo, and other reconstruction problems.
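
As a purely mathematical note on the KL form of this bias (the exact decomposition used by the authors is in the paper), write v_k = |S_k|/|Omega| for the fraction of points in segment S_k and u for the uniform distribution over the K segments; then

\sum_{k=1}^{K} |S_k| \log\frac{|S_k|}{|\Omega|}
  \;=\; |\Omega| \sum_{k} v_k \log v_k
  \;=\; |\Omega| \left( \mathrm{KL}(v \,\|\, u) - \log K \right),

so a cardinality term of this form acts, up to an additive constant, as a penalty on the KL divergence of the segment volumes from the uniform distribution, i.e. a bias towards equal-size segments.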

Brito, J. P., López, D. R., Aguado, A., Abellán, C., López, V., Pastor-Perales, A., la Iglesia, F. de, Martín, V..  2019.  Quantum Services Architecture in Softwarized Infrastructures. 2019 21st International Conference on Transparent Optical Networks (ICTON). :1–4.
Quantum computing is posing new threats to our security infrastructure. This has triggered a new research field on quantum-safe methods, and those that rely on the application of quantum principles are commonly referred to as quantum cryptography. The most mature development in the field of quantum cryptography is Quantum Key Distribution (QKD). QKD is a key exchange primitive that can replace existing mechanisms that may become obsolete in the near future. Although QKD has reached a high level of maturity, there is still a long path to a mass-market implementation. QKD must overcome issues such as miniaturization, network integration and the reduction of production costs to make the technology affordable. In this direction, we foresee that QKD systems will evolve following the same path as other networking technologies, where systems will run on specific network cards integrable in commodity chassis. This work describes part of our activity in the EU H2020 project CiViQ, in which quantum technologies, such as QKD systems or quantum random number generators (QRNG), will become a single network element that we define as a Quantum Switch. This allows quantum resources (keys or random numbers) to be provided as a service, while the different components are integrated to cooperate in providing the most random and secure bit streams. Furthermore, with the purpose of bringing our proposal closer to current networking technology, this work also proposes an abstraction logic for making our Quantum Switch suitable to become part of software-defined networking (SDN) architectures. The model fits the SDN quantum node architecture that is currently being standardized by the European Telecommunications Standards Institute. It permits operating an entire quantum network using a logically centralized SDN controller, with quantum switches generating and forwarding key material and random numbers across the entire network. This scheme, demonstrated for the first time at the Madrid Quantum Network, will allow for a faster and more seamless integration of quantum technologies in the telecommunications infrastructure.
Brodeur, S., Rouat, J..  2017.  Optimality of inference in hierarchical coding for distributed object-based representations. 2017 15th Canadian Workshop on Information Theory (CWIT). :1–5.

Hierarchical approaches for representation learning have the ability to encode relevant features at multiple scales or levels of abstraction. However, most hierarchical approaches exploit only the last level in the hierarchy, or provide a multiscale representation that holds a significant amount of redundancy. We argue that removing redundancy across the multiple levels of abstraction is important for an efficient representation of compositionality in object-based representations. With the perspective of feature learning as a data compression operation, we propose a new greedy inference algorithm for hierarchical sparse coding. Convolutional matching pursuit with an L0-norm constraint was used to encode the input signal into compact and non-redundant codes distributed across the levels of the hierarchy. Simple and complex synthetic datasets of temporal signals were created to evaluate the encoding efficiency and compare with the theoretical lower bounds on the information rate for those signals. Empirical evidence shows that the algorithm is able to infer near-optimal codes for simple signals. However, it failed for complex signals with strong overlap between objects. We explain the inefficiency of convolutional matching pursuit that occurs in this case. This brings new insights into the NP-hard optimization problem of using an L0-norm constraint to infer optimally compact and distributed object-based representations.
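
A minimal, non-hierarchical sketch of greedy matching pursuit under an L0 budget (the convolutional and multi-level aspects of the paper are omitted; the dictionary and signal are synthetic):

import numpy as np

def matching_pursuit(signal, dictionary, k_max):
    """Select at most k_max atoms (L0 constraint), greedily reducing the residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(k_max):
        correlations = dictionary.T @ residual
        best = int(np.argmax(np.abs(correlations)))
        coeffs[best] += correlations[best]
        residual -= correlations[best] * dictionary[:, best]
    return coeffs, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
x = 0.8 * D[:, 3] - 0.5 * D[:, 47]          # sparse synthetic signal
codes, res = matching_pursuit(x, D, k_max=4)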

Brunner, M., Huber, M., Sauerwein, C., Breu, R..  2017.  Towards an Integrated Model for Safety and Security Requirements of Cyber-Physical Systems. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :334–340.

Increasing interest in cyber-physical systems with integrated computational and physical capabilities that can interact with humans can be identified in research and practice. Since these systems can be classified as safety- and security-critical, the need for safety and security assurance and certification will grow. Moreover, these systems are typically characterized by fragmentation, interconnectedness, heterogeneity, short release cycles, a cross-organizational nature and high interference between safety and security requirements. These properties, combined with the assurance of compliance to multiple standards, carrying out certification and re-certification, and the lack of an approach to model, document and integrate safety and security requirements, represent a major challenge. In order to address this gap, we developed a domain-agnostic approach to model security and safety requirements in an integrated view to support certification processes during the design and run-time phases of cyber-physical systems.

Brunner, M., Sillaber, C., Breu, R..  2017.  Towards Automation in Information Security Management Systems. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :160–167.

Establishing and operating an Information Security Management System (ISMS) to protect information values and information systems is in itself a challenge for larger enterprises and small and medium sized businesses alike. A high level of automation is required to reduce operational efforts to an acceptable level when implementing an ISMS. In this paper we present the ADAMANT framework to increase automation in information security management as a whole by establishing a continuous risk-driven and context-aware ISMS that not only automates security controls but considers all highly interconnected information security management tasks. We further illustrate how ADAMANT is suited to establish an ISO 27001 compliant ISMS for small and medium-sized enterprises and how not only the monitoring of security controls but a majority of ISMS related activities can be supported through automated process execution and workflow enactment.

Buda, A., Främling, K., Borgman, J., Madhikermi, M., Mirzaeifar, S., Kubler, S..  2015.  Data supply chain in Industrial Internet. 2015 IEEE World Conference on Factory Communication Systems (WFCS). :1–7.

The Industrial Internet promises to radically change and improve many industries' daily business activities, from simple data collection and processing to context-driven, intelligent and pro-active support of workers' everyday tasks and life. The present paper first provides insight into a typical Industrial Internet application architecture, then it highlights one fundamental arising contradiction: "Who owns the data is often not capable of analyzing it". This statement is explained by imagining a visionary data supply chain that would realize some of the Industrial Internet promises. To concretely implement such a system, recent standards published by The Open Group are presented, highlighting the characteristics that make them suitable for Industrial Internet applications. Finally, we discuss comparable solutions and conclude with new business use cases.