Biblio

Filters: Keyword is Policy Based Governance
2020-12-01
Kalyanaraman, A., Halappanavar, M..  2018.  Guest Editorial: Advances in Parallel Graph Processing: Algorithms, Architectures, and Application Frameworks. IEEE Transactions on Multi-Scale Computing Systems. 4:188—189.

The papers in this special section explore recent advancements in parallel graph processing. In the sphere of modern data science and data-driven applications, graph algorithms have achieved a pivotal place in advancing the state of scientific discovery and knowledge. Nearly three centuries of ideas have made graph theory and its applications a mature area in computational sciences. Yet, today we find ourselves at a crossroads between theory and application. Spurred by the digital revolution, data from a diverse range of high throughput channels and devices, from across internet-scale applications, are starting to mark a new era in data-driven computing and discovery. Building robust graph models and implementing scalable graph application frameworks in the context of this new era are proving to be significant challenges. Concomitant to the digital revolution, we have also experienced an explosion in computing architectures, with a broad range of multicores, manycores, heterogeneous platforms, and hardware accelerators (CPUs, GPUs) being actively developed and deployed within servers and multinode clusters. Recent advances have started to show that in more than one way, these two fields—graph theory and architectures—are capable of benefiting and in fact spurring new research directions in one another. This special section is aimed at introducing some of the new avenues of cutting-edge research happening at the intersection of graph algorithms and their implementation on advanced parallel architectures.

Xu, W., Peng, Y..  2018.  SharaBLE: A Software Framework for Shared Usage of BLE Devices over the Internet. 2018 IEEE 29th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC). :381—385.

With the development of the Internet of Things, numerous IoT devices have been brought into our daily lives. Bluetooth Low Energy (BLE), due to its low energy consumption and generic service stack, has become one of the most popular wireless communication technologies for IoT. However, because of the short communication range and exclusive connection pattern, a BLE-equipped device can only be used by a single user near the device. To fully explore the benefits of BLE and make BLE-equipped devices truly accessible over the Internet as IoT devices, in this paper, we propose a cloud-based software framework that can enable multiple users to interact with various BLE IoT devices over the Internet. This framework includes an agent program, a suite of services hosted in the cloud, and a set of RESTful APIs exposed to Internet users. Given the availability of this framework, access to BLE devices can be extended from the local to the Internet scale without any software or hardware changes to BLE devices, and more importantly, shared usage of remote BLE devices over the Internet is also made available.
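
As an aside for readers curious what such an interface might look like, below is a minimal Python sketch of reading a BLE characteristic through a cloud REST API in the spirit of the framework described above; the host name, URL paths, token handling, and JSON fields are hypothetical assumptions, since the abstract does not specify the actual SharaBLE API.

```python
# Hypothetical sketch of reading a BLE characteristic through a cloud REST API,
# in the spirit of the SharaBLE framework described above. The host, paths,
# and field names are illustrative assumptions, not the paper's actual API.
import requests

BASE = "https://example-sharable-cloud.test/api/v1"   # hypothetical service endpoint

def list_devices(user_token):
    # Ask the cloud service which BLE devices the agent programs have registered.
    r = requests.get(f"{BASE}/devices", headers={"Authorization": f"Bearer {user_token}"})
    r.raise_for_status()
    return r.json()

def read_characteristic(user_token, device_id, service_uuid, char_uuid):
    # Read a GATT characteristic of a remote BLE device over the Internet.
    url = f"{BASE}/devices/{device_id}/services/{service_uuid}/characteristics/{char_uuid}"
    r = requests.get(url, headers={"Authorization": f"Bearer {user_token}"})
    r.raise_for_status()
    return r.json()["value"]
```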

Li, W., Guo, D., Li, K., Qi, H., Zhang, J..  2018.  iDaaS: Inter-Datacenter Network as a Service. IEEE Transactions on Parallel and Distributed Systems. 29:1515—1529.

An increasing number of Internet-scale applications, such as video streaming, incur a huge amount of wide-area traffic. Such traffic over the unreliable Internet without bandwidth guarantees suffers unpredictable network performance. This result, however, is unappealing to the application providers. Fortunately, Internet giants like Google and Microsoft are increasingly deploying their private wide area networks (WANs) to connect their global datacenters. Such high-speed private WANs are reliable, and can provide predictable network performance. In this paper, we propose a new type of service, inter-datacenter network as a service (iDaaS), where traditional application providers can reserve bandwidth from those Internet giants to guarantee their wide area traffic. Specifically, we design a bandwidth trading market among multiple iDaaS providers and application providers, and concentrate on the essential bandwidth pricing problem. The involved challenging issue is that the bandwidth price of each iDaaS provider is not only influenced by other iDaaS providers, but also affected by the application providers. To address this issue, we characterize the interaction between iDaaS providers and application providers using a Stackelberg game model, and analyze the existence and uniqueness of the equilibrium. We further present an efficient bandwidth pricing algorithm by blending the advantage of a geometrical Nash bargaining solution and the demand segmentation method. For comparison, we present two bandwidth reservation algorithms, where each iDaaS provider's bandwidth is reserved in a weighted fair manner and a max-min fair manner, respectively. Finally, we conduct comprehensive trace-driven experiments. The evaluation results show that our proposed algorithms not only ensure the revenue of iDaaS providers, but also provide bandwidth guarantee for application providers with lower bandwidth price per unit.
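
The pricing algorithm itself relies on a Stackelberg game and Nash bargaining and is not reproduced here, but the two baseline reservation rules named in the abstract (weighted-fair and max-min fair sharing of a provider's bandwidth) can be sketched; the snippet below is an illustration under assumed inputs, not the paper's code.

```python
# Illustrative sketch of the two baseline reservation rules mentioned in the
# abstract: weighted-fair and max-min fair sharing of an iDaaS provider's
# bandwidth among application providers. Variable names and numbers are assumptions.

def weighted_fair(capacity, weights):
    """Split capacity proportionally to each application provider's weight."""
    total = sum(weights.values())
    return {app: capacity * w / total for app, w in weights.items()}

def max_min_fair(capacity, demands):
    """Progressive filling: repeatedly give every unsatisfied demand an equal
    share of the remaining capacity until demands are met or capacity is gone."""
    alloc = {app: 0.0 for app in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        share = remaining / len(active)
        for app in list(active):
            grant = min(share, demands[app] - alloc[app])
            alloc[app] += grant
            remaining -= grant
            if demands[app] - alloc[app] < 1e-9:
                active.discard(app)
    return alloc

# Example: 10 Gb/s split among three providers with demands 2, 4 and 8 Gb/s.
print(max_min_fair(10.0, {"A": 2.0, "B": 4.0, "C": 8.0}))  # A: 2, B: 4, C: 4 (up to float rounding)
```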

Yang, R., Ouyang, X., Chen, Y., Townend, P., Xu, J..  2018.  Intelligent Resource Scheduling at Scale: A Machine Learning Perspective. 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE). :132—141.

Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The exhibited heterogeneity of workload and server characteristics in Cloud-scale or Internet-scale systems is adding further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to improve further the efficiency of resource management in large-scale systems. In this paper we will describe and discuss how ML could be used to understand automatically both workloads and environments, and to help to cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating tailed stragglers. We will introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study that deals with performance-centric node classification and straggler mitigation. We believe that an ML-based method will help to achieve architectural optimization and efficiency improvement.
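
A minimal sketch of what "performance-centric node classification" for straggler mitigation could look like is given below, assuming invented utilisation features, labels, and a scikit-learn random forest; the paper's actual models and features may differ.

```python
# Minimal, hypothetical sketch of performance-centric node classification in
# the spirit of the case study: classify cluster nodes as likely stragglers
# from simple utilisation features, then avoid scheduling latency-sensitive
# tasks on them. Features, labels, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [cpu_util, mem_util, disk_io_wait, net_util]; label 1 = straggler-prone.
X_train = np.array([[0.20, 0.30, 0.05, 0.10],
                    [0.90, 0.80, 0.40, 0.70],
                    [0.50, 0.40, 0.10, 0.20],
                    [0.95, 0.90, 0.60, 0.80]])
y_train = np.array([0, 1, 0, 1])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def schedule(task, candidate_nodes, node_features):
    """Prefer nodes the classifier does not flag as straggler-prone."""
    scores = clf.predict_proba(node_features)[:, 1]   # P(straggler) per node
    order = np.argsort(scores)                        # least risky first
    return [candidate_nodes[i] for i in order]

print(schedule("job-42", ["n1", "n2"], np.array([[0.3, 0.2, 0.1, 0.1],
                                                 [0.9, 0.85, 0.5, 0.6]])))
```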

Garbo, A., Quer, S..  2018.  A Fast MPEG’s CDVS Implementation for GPU Featured in Mobile Devices. IEEE Access. 6:52027—52046.

The Moving Picture Experts Group's Compact Descriptors for Visual Search (MPEG's CDVS) intends to standardize technologies in order to enable an interoperable, efficient, and cross-platform solution for internet-scale visual search applications and services. Among the key technologies within CDVS, we recall the format of visual descriptors, the descriptor extraction process, and the algorithms for indexing and matching. Unfortunately, these steps require precision and computation accuracy. Moreover, they are very time-consuming, as they need running times in the order of seconds when implemented on the central processing unit (CPU) of modern mobile devices. In this paper, to reduce computation times and maintain precision and accuracy, we re-design, for many-cores embedded graphical processor units (GPUs), all main local descriptor extraction pipeline phases of the MPEG's CDVS standard. To reach this goal, we introduce new techniques to adapt the standard algorithm to parallel processing. Furthermore, to reduce memory accesses and efficiently distribute the kernel workload, we use new approaches to store and retrieve CDVS information on proper GPU data structures. We present a complete experimental analysis on a large and standard test set. Our experiments show that our GPU-based approach is remarkably faster than the CPU-based reference implementation of the standard, and it maintains a comparable precision in terms of true and false positive rates.
Zhang, Y., Deng, L., Chen, M., Wang, P..  2018.  Joint Bidding and Geographical Load Balancing for Datacenters: Is Uncertainty a Blessing or a Curse? IEEE/ACM Transactions on Networking. 26:1049—1062.

We consider the scenario where a cloud service provider (CSP) operates multiple geo-distributed datacenters to provide Internet-scale service. Our objective is to minimize the total electricity and bandwidth cost by jointly optimizing electricity procurement from wholesale markets and geographical load balancing (GLB), i.e., dynamically routing workloads to locations with cheaper electricity. Under the ideal setting where exact values of market prices and workloads are given, this problem reduces to a simple linear program and is easy to solve. However, under the realistic setting where only distributions of these variables are available, the problem unfolds into a non-convex infinite-dimensional one and is challenging to solve. One of our main contributions is to develop an algorithm that is proven to solve the challenging problem optimally, by exploring the full design space of strategic bidding. Trace-driven evaluations corroborate our theoretical results, demonstrate fast convergence of our algorithm, and show that it can reduce the cost for the CSP by up to 20% as compared with baseline alternatives. This paper highlights the intriguing role of uncertainty in workloads and market prices, measured by their variances. While uncertainty in workloads deteriorates the cost-saving performance of joint electricity procurement and GLB, counter-intuitively, uncertainty in market prices can be exploited to achieve a cost reduction even larger than the setting without price uncertainty.
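
The abstract notes that, under the ideal setting with known prices and workloads, joint procurement and geographical load balancing reduces to a simple linear program. A toy sketch of that ideal-setting LP with made-up numbers is shown below; it does not model the uncertain, distribution-only setting that the paper actually solves.

```python
# Sketch of the "ideal setting" mentioned in the abstract: with known prices and
# workload, geographical load balancing reduces to a small linear program.
# All numbers below are invented for illustration.
from scipy.optimize import linprog

elec_price = [30.0, 45.0, 25.0]      # $/MWh at each datacenter (assumed)
bw_price   = [2.0, 1.0, 3.0]         # $ per unit of workload routed there (assumed)
energy_per_unit = 0.01               # MWh consumed per unit of workload (assumed)
capacity   = [60.0, 80.0, 50.0]      # max workload units per datacenter
total_load = 120.0

# Decision variables x_i = workload routed to datacenter i.
cost = [elec_price[i] * energy_per_unit + bw_price[i] for i in range(3)]
res = linprog(c=cost,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[total_load],   # serve all workload
              bounds=[(0.0, cap) for cap in capacity])      # capacity limits
print(res.x, res.fun)   # optimal routing and minimum total cost
```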

Kathiravelu, P., Chiesa, M., Marcos, P., Canini, M., Veiga, L..  2018.  Moving Bits with a Fleet of Shared Virtual Routers. 2018 IFIP Networking Conference (IFIP Networking) and Workshops. :1—9.

The steady decline of IP transit prices in the past two decades has helped fuel the growth of traffic demands in the Internet ecosystem. Despite the declining unit pricing, bandwidth costs remain significant due to the ever-increasing scale and reach of the Internet, combined with the price disparity between the Internet's core hubs versus remote regions. In the meantime, cloud providers have been auctioning underutilized computing resources in their marketplace as spot instances for a much lower price, compared to their on-demand instances. This state of affairs has led the networking community to devote extensive efforts to cloud-assisted networks - the idea of offloading network functionality to cloud platforms, ultimately leading to more flexible and highly composable network service chains. We initiate a critical discussion on the economic and technological aspects of leveraging cloud-assisted networks for Internet-scale interconnections and data transfers. Namely, we investigate the prospect of constructing a large-scale virtualized network provider that does not own any fixed or dedicated resources and runs atop several spot instances. We construct a cloud-assisted overlay as a virtual network provider, by leveraging third-party cloud spot instances. We identify three use case scenarios where such an approach will not only be economically and technologically viable but also provide performance benefits compared to current commercial offerings of connectivity and transit providers.

Sunny, S. M. N. A., Liu, X., Shahriar, M. R..  2018.  Remote Monitoring and Online Testing of Machine Tools for Fault Diagnosis and Maintenance Using MTComm in a Cyber-Physical Manufacturing Cloud. 2018 IEEE 11th International Conference on Cloud Computing (CLOUD). :532—539.

Existing systems allow manufacturers to acquire factory floor data and perform analysis with cloud applications for machine health monitoring, product quality prediction, fault diagnosis and prognosis etc. However, they do not provide capabilities to perform testing of machine tools and associated components remotely, which is often crucial to identify causes of failure. This paper presents a fault diagnosis system in a cyber-physical manufacturing cloud (CPMC) that allows manufacturers to perform diagnosis and maintenance of manufacturing machine tools through remote monitoring and online testing using Machine Tool Communication (MTComm). MTComm is an Internet-scale communication method that enables both monitoring and operation of heterogeneous machine tools through RESTful web services over the Internet. It allows manufacturers to perform testing operations from cloud applications at both machine and component level for regular maintenance and fault diagnosis. This paper describes different components of the system and their functionalities in CPMC and techniques used for anomaly detection and remote online testing using MTComm. It also presents the development of a prototype of the proposed system in a CPMC testbed. Experiments were conducted to evaluate its ability to diagnose faults and test machine tools remotely during various manufacturing scenarios. The results demonstrated excellent feasibility to detect anomalies during manufacturing operations and perform testing operations remotely from cloud applications using MTComm.
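
As a rough illustration of monitoring over RESTful web services, a hypothetical polling-and-threshold sketch is given below; the endpoint layout, field names, and detection rule are assumptions, not the actual MTComm API or the paper's anomaly detection technique.

```python
# Hypothetical sketch of MTComm-style monitoring via RESTful web services:
# poll a machine-tool component reading from the cloud and flag anomalies with
# a simple threshold rule. Endpoint and field names are assumptions; the paper's
# actual MTComm API and detection technique may differ.
import statistics
import requests

MTCOMM = "https://cpmc.example.test/mtcomm/v1"   # hypothetical CPMC service

def read_sensor(machine_id, component, sensor):
    r = requests.get(f"{MTCOMM}/machines/{machine_id}/components/{component}/sensors/{sensor}")
    r.raise_for_status()
    return float(r.json()["value"])

def is_anomalous(history, value, k=3.0):
    """Flag a reading more than k standard deviations from the recent mean."""
    if len(history) < 10:
        return False
    mu, sigma = statistics.mean(history), statistics.pstdev(history)
    return sigma > 0 and abs(value - mu) > k * sigma
```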

Shahriar, M. R., Sunny, S. M. N. A., Liu, X., Leu, M. C., Hu, L., Nguyen, N..  2018.  MTComm Based Virtualization and Integration of Physical Machine Operations with Digital-Twins in Cyber-Physical Manufacturing Cloud. 2018 5th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2018 4th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). :46—51.

Digital-Twins simulate physical world objects by creating 'as-is' virtual images in a cyberspace. In order to create a well synchronized digital-twin simulator in manufacturing, information and activities of a physical machine need to be virtualized. Many existing digital-twins stream read-only data of machine sensors and do not incorporate operations of manufacturing machines through the Internet. In this paper, a new method of virtualization is proposed to integrate machining data and operations into the digital-twins using the Internet-scale machine tool communication method (MTComm). A fully functional digital-twin is implemented in a CPMC testbed using MTComm and several manufacturing application scenarios are developed to evaluate the proposed method and system. Performance analysis shows that it is capable of providing data-driven visual monitoring of a manufacturing process and performing manufacturing operations through digital twins over the Internet. Results of the experiments also show that the MTComm based digital twins have excellent efficiency.

Shaikh, F., Bou-Harb, E., Neshenko, N., Wright, A. P., Ghani, N..  2018.  Internet of Malicious Things: Correlating Active and Passive Measurements for Inferring and Characterizing Internet-Scale Unsolicited IoT Devices. IEEE Communications Magazine. 56:170—177.

Advancements in computing, communication, and sensing technologies are making it possible to embed, control, and gather vital information from tiny devices that are being deployed and utilized in practically every aspect of our modernized society. From smart home appliances to municipal water and electric industrial facilities to our everyday work environments, the next Internet frontier, dubbed IoT, is promising to revolutionize our lives and tackle some of our nations' most pressing challenges. While the seamless interconnection of IoT devices with the physical realm is envisioned to bring a plethora of critical improvements in many aspects and diverse domains, it will undoubtedly pave the way for attackers that will target and exploit such devices, threatening the integrity of their data and the reliability of critical infrastructure. Further, such compromised devices will undeniably be leveraged as the next generation of botnets, given their increased processing capabilities and abundant bandwidth. While several demonstrations exist in the literature describing the exploitation procedures of a number of IoT devices, the up-to-date inference, characterization, and analysis of unsolicited IoT devices that are currently deployed "in the wild" is still in its infancy. In this article, we address this imperative task by leveraging active and passive measurements to report on unsolicited Internet-scale IoT devices. This work describes a first step toward exploring the utilization of passive measurements in combination with the results of active measurements to shed light on the Internet-scale insecurities of the IoT paradigm. By correlating results of Internet-wide scanning with Internet background radiation traffic, we disclose close to 14,000 compromised IoT devices in diverse sectors, including critical infrastructure and smart home appliances. To this end, we also analyze their generated traffic to create effective mitigation signatures that could be deployed in local IoT realms. To support large-scale empirical data analytics in the context of IoT, we make available the inferred and extracted IoT malicious raw data through an authenticated front-end service. The outcomes of this work confirm the existence of such compromised devices on an Internet scale, while the generated inferences and insights are postulated to be employed for inferring other similarly compromised IoT devices, in addition to contributing to IoT cyber security situational awareness.
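
The correlation step described in the article can be pictured as a set intersection between hosts identified as IoT devices by active scanning and source addresses observed in darknet traffic; the sketch below assumes invented CSV formats and is only an illustration of that idea.

```python
# Illustrative sketch of the correlation idea in the article: intersect the set
# of IoT hosts found by Internet-wide active scanning with source addresses seen
# in darknet (Internet background radiation) traffic. Hosts that both expose IoT
# services and emit unsolicited traffic are candidates for "compromised".
# File formats and field names are assumptions for illustration only.
import csv

def load_iot_hosts(scan_csv):
    """Active-measurement side: IPs whose banners/ports identify them as IoT devices."""
    with open(scan_csv) as f:
        return {row["ip"] for row in csv.DictReader(f) if row["device_type"] != "unknown"}

def load_darknet_sources(darknet_csv):
    """Passive-measurement side: source IPs observed scanning a network telescope."""
    with open(darknet_csv) as f:
        return {row["src_ip"] for row in csv.DictReader(f)}

def suspected_compromised(scan_csv, darknet_csv):
    return load_iot_hosts(scan_csv) & load_darknet_sources(darknet_csv)
```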

2020-04-03
Zhao, Hui, Li, Zhihui, Wei, Hansheng, Shi, Jianqi, Huang, Yanhong.  2019.  SeqFuzzer: An Industrial Protocol Fuzzing Framework from a Deep Learning Perspective. 2019 12th IEEE Conference on Software Testing, Validation and Verification (ICST). :59—67.

Industrial networks are the cornerstone of modern industrial control systems. Performing security checks of industrial communication processes helps detect unknown risks and vulnerabilities. Fuzz testing is a widely used method for performing security checks that takes advantage of automation. However, there is a big challenge to carry out security checks on industrial networks due to the increasing variety and complexity of industrial communication protocols. In this case, existing approaches usually take a long time to model the protocol for generating test cases, which is labor-intensive and time-consuming. This becomes even worse when the target protocol is stateful. To help in addressing this problem, we employed a deep learning model to learn the structures of protocol frames and deal with the temporal features of stateful protocols. We propose a fuzzing framework named SeqFuzzer which automatically learns the protocol frame structures from communication traffic and generates fake but plausible messages as test cases. To prove the usability of our approach, we applied SeqFuzzer to widely-used Ethernet for Control Automation Technology (EtherCAT) devices and successfully detected several security vulnerabilities.
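
A minimal sketch of the general approach (learn a sequence model over captured frames, then sample new frames as test cases) is given below using a byte-level LSTM in TensorFlow/Keras; this is an assumption-laden illustration, not SeqFuzzer's actual architecture or its EtherCAT-specific handling.

```python
# Minimal, hypothetical sketch of the SeqFuzzer idea: train a sequence model on
# captured protocol frames (here, byte-level with an LSTM) and sample "fake but
# plausible" frames as fuzzing test cases. Illustration of the general approach only.
import numpy as np
import tensorflow as tf

VOCAB = 256 + 1                      # byte values plus an end-of-frame token
EOF = 256

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB, 64),
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dense(VOCAB, activation="softmax"),
    ])

def train(frames, seq_len=8):
    """frames: list of byte sequences captured from industrial traffic."""
    xs, ys = [], []
    for frame in frames:
        data = list(frame) + [EOF]
        for i in range(len(data)):
            ctx = data[max(0, i - seq_len):i]
            xs.append([0] * (seq_len - len(ctx)) + ctx)   # left-pad the context
            ys.append(data[i])
    model = make_model()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.fit(np.array(xs), np.array(ys), epochs=5, verbose=0)
    return model

def generate_frame(model, seq_len=8, max_len=64):
    ctx, out = [0] * seq_len, []
    for _ in range(max_len):
        probs = model.predict(np.array([ctx]), verbose=0)[0]
        nxt = int(np.random.choice(VOCAB, p=probs / probs.sum()))
        if nxt == EOF:
            break
        out.append(nxt)
        ctx = ctx[1:] + [nxt]
    return bytes(out)
```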

Lipp, Benjamin, Blanchet, Bruno, Bhargavan, Karthikeyan.  2019.  A Mechanised Cryptographic Proof of the WireGuard Virtual Private Network Protocol. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :231—246.

WireGuard is a free and open source Virtual Private Network (VPN) that aims to replace IPsec and OpenVPN. It is based on a new cryptographic protocol derived from the Noise Protocol Framework. This paper presents the first mechanised cryptographic proof of the protocol underlying WireGuard, using the CryptoVerif proof assistant. We analyse the entire WireGuard protocol as it is, including transport data messages, in an ACCE-style model. We contribute proofs for correctness, message secrecy, forward secrecy, mutual authentication, session uniqueness, and resistance against key compromise impersonation, identity mis-binding, and replay attacks. We also discuss the strength of the identity hiding provided by WireGuard. Our work also provides novel theoretical contributions that are reusable beyond WireGuard. First, we extend CryptoVerif to account for the absence of public key validation in popular Diffie-Hellman groups like Curve25519, which is used in many modern protocols including WireGuard. To our knowledge, this is the first mechanised cryptographic proof for any protocol employing such a precise model. Second, we prove several indifferentiability lemmas that are useful to simplify the proofs for sequences of key derivations.
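
The mechanised proof itself lives in CryptoVerif and cannot be condensed into a snippet, but the Curve25519 Diffie-Hellman primitive whose missing public-key validation the authors model can be illustrated with the Python `cryptography` library, as sketched below.

```python
# Illustration of the X25519 (Curve25519) Diffie-Hellman primitive discussed in
# the abstract, using the `cryptography` library. This only shows the key
# agreement step; it is not WireGuard's Noise-based handshake and says nothing
# about the CryptoVerif proof itself.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

initiator_priv = X25519PrivateKey.generate()
responder_priv = X25519PrivateKey.generate()

# Each side combines its private key with the peer's public key.
shared_i = initiator_priv.exchange(responder_priv.public_key())
shared_r = responder_priv.exchange(initiator_priv.public_key())
assert shared_i == shared_r   # both sides derive the same shared secret

# Note: X25519 accepts essentially any 32-byte string as a public key (no
# validation step), which is the modelling subtlety the authors extend
# CryptoVerif to handle.
```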

Aires Urquiza, Abraão, AlTurki, Musab A., Kanovich, Max, Ban Kirigin, Tajana, Nigam, Vivek, Scedrov, Andre, Talcott, Carolyn.  2019.  Resource-Bounded Intruders in Denial of Service Attacks. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :382—38214.

Denial of Service (DoS) attacks have been a serious security concern, as no service is, in principle, protected against them. Although a Dolev-Yao intruder with unlimited resources can trivially render any service unavailable, DoS attacks do not necessarily have to be carried out by such (extremely) powerful intruders. It is useful in practice and more challenging for formal protocol verification to determine whether a service is vulnerable even to resource-bounded intruders that cannot generate or intercept arbitrarily large volumes of traffic. This paper proposes a novel, more refined intruder model where the intruder can only consume at most some specified amount of resources in any given time window. Additionally, we propose protocol theories that may contain timeouts and specify service resource usage during protocol execution. In contrast to the existing resource-conscious protocol verification models, our model allows finer and more subtle analysis of DoS problems. We illustrate the power of our approach by representing a number of classes of DoS attacks, such as Slow, Asymmetric and Amplification DoS attacks, exhausting different types of resources of the target, such as the number of workers, processing power, memory, and network bandwidth. We show that the proposed DoS problem is undecidable in general and is PSPACE-complete for the class of resource-bounded, balanced systems. Finally, we implemented our formal verification model in the rewriting logic tool Maude and analyzed a number of DoS attacks in Maude using Rewriting Modulo SMT in an automated fashion.
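
The notion of an intruder bounded to a fixed resource budget per time window can be pictured with a simple sliding-window accounting check, sketched below in Python; this is only an intuition aid, not the paper's rewriting-logic (Maude) formalisation.

```python
# Toy sketch of the "resource-bounded intruder" idea: the attacker may consume at
# most a fixed budget of some resource within any time window. A sliding-window
# accounting check like the one below illustrates the constraint only.
from collections import deque

class WindowBudget:
    def __init__(self, budget, window_seconds):
        self.budget, self.window = budget, window_seconds
        self.events = deque()          # (timestamp, cost) of past intruder actions

    def can_spend(self, now, cost):
        # Drop actions that fell out of the current window, then check the budget.
        while self.events and now - self.events[0][0] >= self.window:
            self.events.popleft()
        if sum(c for _, c in self.events) + cost > self.budget:
            return False
        self.events.append((now, cost))
        return True

# An intruder limited to 100 "request units" per 60-second window:
intruder = WindowBudget(budget=100, window_seconds=60)
print(intruder.can_spend(now=0, cost=80))    # True
print(intruder.can_spend(now=10, cost=30))   # False: would exceed the window budget
```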

Nandi, Giann Spilere, Pereira, David, Vigil, Martín, Moraes, Ricardo, Morales, Analúcia Schiaffino, Araújo, Gustavo.  2019.  Security in Wireless Sensor Networks: A formal verification of protocols. 2019 IEEE 17th International Conference on Industrial Informatics (INDIN). 1:425—431.

The increasing digitalization taking place in various industrial domains is leading developers towards the design and implementation of more and more complex networked control systems (NCS) supported by Wireless Sensor Networks (WSN). This naturally raises new challenges for the current WSN technology, namely in what concerns improved guarantees of technical aspects such as real-time communications together with safe and secure transmissions. Notably, in what concerns security aspects, several cryptographic protocols have been proposed. Since the design of these protocols is usually error-prone, security breaches can still be exposed and maliciously exploited unless they are rigorously analyzed and verified. In this paper we formally verify, using ProVerif, three cryptographic protocols used in WSN, regarding the security properties of secrecy and authenticity. The security analysis performed in this paper is more robust than the ones performed in related work. Our contributions involve analyzing protocols that were modeled considering an unbounded number of participants and actions, and also the use of a hierarchical system to classify the authenticity results. Our verification shows that the three analyzed protocols guarantee secrecy, but can only provide authenticity in specific scenarios.

Fattahi, Jaouhar, Mejri, Mohamed, Pricop, Emil.  2019.  On the Security of Cryptographic Protocols Using the Little Theorem of Witness Functions. 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE). :1—5.

In this paper, we show how practical the little theorem of witness functions is in detecting security flaws in some categories of cryptographic protocols. We conduct a formal analysis of the Needham-Schroeder symmetric-key protocol in the theory of witness functions. We show how it helps to warn about a security vulnerability in a given step of this protocol where the value of security of a sensitive ticket in a sent message unexpectedly decreases compared with its value when received. This vulnerability may be exploited by an intruder to mount a replay attack as described by Denning and Sacco.

Künnemann, Robert, Esiyok, Ilkan, Backes, Michael.  2019.  Automated Verification of Accountability in Security Protocols. 2019 IEEE 32nd Computer Security Foundations Symposium (CSF). :397—39716.

Accountability is a recent paradigm in security protocol design which aims to eliminate traditional trust assumptions on parties and hold them accountable for their misbehavior. It is meant to establish trust in the first place and to recognize and react if this trust is violated. In this work, we discuss a protocol-agnostic definition of accountability: a protocol provides accountability (w.r.t. some security property) if it can identify all misbehaving parties, where misbehavior is defined as a deviation from the protocol that causes a security violation. We provide a mechanized method for the verification of accountability and demonstrate its use for verification and attack finding on various examples from the accountability and causality literature, including Certificate Transparency and Kroll's Accountable Algorithms protocol. We reach a high degree of automation by expressing accountability in terms of a set of trace properties and show their soundness and completeness.

Pothong, Kruakae, Pschetz, Larissa, Watson, Jeremy, Gbadamosi, James, Asaturyan, Andre.  2019.  Making IoT security policies relevant, inclusive and practical for people: A multi-dimensional method. Living in the Internet of Things (IoT 2019). :1—11.

Growing amounts of research on IoT and its implications for security, privacy, economy and society have been carried out to inform policies and design. However, ordinary people who are citizens and users of these emerging technologies have rarely been involved in the processes that inform these policies, governance mechanisms and design due to the institutionalised processes that prioritise objective knowledge over subjective ones. People's subjective experiences are often discarded. This priority is likely to further widen the gap between people, technology policies and design as technologies advance towards delegated human agencies, which decreases human interfaces in technology-mediated relationships with objects, systems, services, trade and other (often) unknown third-party beneficiaries. Such a disconnection can have serious implications for policy implementation, especially when it involves human limitations. To address this disconnection, we argue that a space needs to be constructed for people to meaningfully contribute their subjective knowledge (experience) to complex technology policies that, in turn, shape their experience and well-being. To this end, our paper contributes the design and pilot implementation of a method to reconnect and involve people in IoT security policymaking and development.

Ayache, Meryeme, Khoumsi, Ahmed, Erradi, Mohammed.  2019.  Managing Security Policies within Cloud Environments Using Aspect-Oriented State Machines. 2019 International Conference on Advanced Communication Technologies and Networking (CommNet). :1—10.

Cloud Computing is the most suitable environment for the collaboration of multiple organizations via its multi-tenancy architecture. However, due to the distributed management of policies within these collaborations, they may contain several anomalies, such as conflicts and redundancies, which may lead to both safety and availability problems. On the other hand, current cloud computing solutions do not offer verification tools to manage access control policies. In this paper, we propose a cloud policy verification service (CPVS) that facilitates users' management of their own security policies within the OpenStack cloud environment. Specifically, the proposed cloud service offers a policy verification approach to dynamically choose the adequate policy using Aspect-Oriented Finite State Machines (AO-FSM), where pointcuts and advices are used to adopt Domain-Specific Language (DSL) state machine artifacts. The pointcuts define states' patterns representing anomalies (e.g., conflicts) that may occur in a security policy, while the advices define the actions applied at the selected pointcuts to remove the anomalies. In order to demonstrate the efficiency of our approach, we provide time and space complexities. The approach was implemented as a middleware service within the OpenStack cloud environment. The implementation results show that the middleware can detect and resolve different policy anomalies in an efficient manner.

Gerking, Christopher, Schubert, David.  2019.  Component-Based Refinement and Verification of Information-Flow Security Policies for Cyber-Physical Microservice Architectures. 2019 IEEE International Conference on Software Architecture (ICSA). :61—70.

Since cyber-physical systems are inherently vulnerable to information leaks, software architects need to reason about security policies to define desired and undesired information flow through a system. The microservice architectural style requires the architects to refine a macro-level security policy into micro-level policies for individual microservices. However, when policies are refined in an ill-formed way, information leaks can emerge on composition of microservices. Related approaches to prevent such leaks do not take into account characteristics of cyber-physical systems like real-time behavior or message passing communication. In this paper, we enable the refinement and verification of information-flow security policies for cyber-physical microservice architectures. We provide architects with a set of well-formedness rules for refining a macro-level policy in a way that enforces its security restrictions. Based on the resulting micro-level policies, we present a verification technique to check if the real-time message passing of microservices is secure. In combination, our contributions prevent information leaks from emerging on composition. We evaluate the accuracy of our approach using an extension of the CoCoME case study.
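
One intuition behind the well-formedness rules can be sketched as a check that every microservice-level flow is also permitted by the macro-level policy on the corresponding security domains; the Python sketch below is a simplification that ignores the real-time and composition aspects the paper handles.

```python
# Sketch of one idea behind the paper: a refinement of a macro-level
# information-flow policy is acceptable only if every flow allowed between
# microservices is also allowed between the security domains those services
# belong to. This simplified rule set is an illustration, not the paper's method.
def refinement_is_well_formed(macro_allowed, service_domain, micro_allowed):
    """
    macro_allowed:  set of (source_domain, target_domain) pairs allowed to flow
    service_domain: mapping microservice -> security domain
    micro_allowed:  set of (source_service, target_service) pairs in micro policies
    """
    for src, dst in micro_allowed:
        if (service_domain[src], service_domain[dst]) not in macro_allowed:
            return False, (src, dst)      # this micro-level flow leaks information
    return True, None

macro = {("public", "public"), ("public", "secret"), ("secret", "secret")}
domains = {"sensor": "public", "planner": "secret", "display": "public"}
ok, leak = refinement_is_well_formed(macro, domains, {("sensor", "planner"),
                                                      ("planner", "display")})
print(ok, leak)   # False ('planner', 'display'): a secret-to-public flow is not allowed
```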

Belim, S.V., S.Yu, Belim.  2019.  Implementation of Discretionary Security Policy in the Distributed Systems Based on the Secret Sharing Scheme. 2019 International Multi-Conference on Industrial Engineering and Modern Technologies (FarEastCon). :1—4.

In this article, the combination of secret sharing schemes and the requirements of a discretionary security policy is considered. The secret sharing schemes of Shamir and Blakley are investigated. Conditions on the schemes' parameters that give rise to forbidden information channels are obtained. Ways to conceal the forbidden channels are suggested. Three modifications of Shamir's scheme and two modifications of Blakley's scheme are proposed. A transition from polynomials to exponential functions for forming the parts of the secret is carried out. The problem of masking the presence of the forbidden information channels is solved. Several approaches with complete and partial concealment are suggested.
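
For reference, a standard (t, n) Shamir scheme over a prime field is sketched below; the article's modifications (for example, replacing polynomials with exponential functions to mask forbidden channels) are not reproduced.

```python
# Standard (t, n) Shamir secret sharing over a prime field, shown for reference
# since the abstract builds on Shamir's and Blakley's schemes.
import random

P = 2**127 - 1   # a Mersenne prime large enough for the toy secret below

def make_shares(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(123456789, t=3, n=5)
print(reconstruct(shares[:3]) == 123456789)   # any 3 of the 5 shares suffice
```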

Kozlov, Aleksandr, Noga, Nikolai.  2019.  The Method of Assessing the Level of Compliance of Divisions of the Complex Network for the Corporate Information Security Policy Indicators. 2019 Twelfth International Conference "Management of large-scale system development" (MLSD). :1—5.

A method for assessing the degree of compliance of the divisions of a complex distributed corporate information system with a number of information security indicators is offered. As a result of applying the methodology, a comparative assessment of each division's level of compliance with the corporate information security policy requirements may be given. This assessment may be used for further decision-making by the management of the corporation on measures to minimize the risks arising from possible threats to information security.

Al-Haj, Ali, Aziz, Benjamin.  2019.  Enforcing Multilevel Security Policies in Database-Defined Networks using Row-Level Security. 2019 International Conference on Networked Systems (NetSys). :1-6.

Despite the wide range of research and technologies that deal with the problem of routing in computer networks, there remains a gap between the level of network hardware administration and the level of business requirements and constraints. Not much has been accomplished in the literature to enforce such requirements directly on the network. This paper presents a new solution in specifying and directly enforcing security policies to control the routing configuration in a software-defined network by using Row-Level Security checks which enable fine-grained security policies on individual rows in database tables. We show, as a first step, how a specific class of such policies, namely multilevel security policies, can be enforced on a database-defined network, which presents an abstraction of a network's configuration as a set of database tables. We show that such policies can be used to control the flow of data in the network either in an upward or downward manner.
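
For readers unfamiliar with the mechanism, the PostgreSQL row-level security feature the paper builds on can be illustrated as below; the table layout and clearance labels are invented for the example and are not the paper's database-defined network schema.

```python
# Illustration of the row-level security (RLS) mechanism the paper builds on,
# using PostgreSQL syntax embedded in Python strings. The table layout and the
# multilevel labels are invented for the example; the paper applies RLS to the
# tables of a database-defined network, which are not reproduced here.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def rls_policy_sql(table, user_level):
    """Generate SQL that hides routing rows above the session user's clearance
    (a 'no read up' rule in the multilevel-security sense)."""
    return f"""
        ALTER TABLE {table} ENABLE ROW LEVEL SECURITY;
        CREATE POLICY no_read_up ON {table}
            FOR SELECT
            USING (row_level <= {LEVELS[user_level]});
    """

print(rls_policy_sql("routing_rules", "confidential"))
```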

Luo, Xueting, Lu, Yueming.  2019.  A Method of Conflict Detection for Security Policy Based on B+ Tree. 2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC). :466-472.

Security policies are widely used in network management systems to ensure network security. It is necessary to detect and resolve conflicts in security policies. This paper analyzes the shortcomings of existing security policy conflict detection methods and proposes a B+ tree-based security policy conflict detection method. First, the security policy is dimensioned so that each attribute corresponds to one dimension. Then, a layer of B+ tree index is constructed at each dimension level. Each rule will be uniquely mapped by multiple layers of nested indexes. This method can greatly improve the efficiency of conflict detection. The experimental results show that the method has very stable performance, can effectively prevent conflicts, and can detect the type of policy conflict quickly and accurately.
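
The core idea (index each rule attribute so that only rules overlapping in every dimension are checked against each other) can be sketched as below, where a per-dimension sorted list stands in for the paper's nested B+ tree indexes; the rule layout and conflict criterion are illustrative assumptions.

```python
# Sketch of dimension-indexed conflict detection: each attribute of a security
# rule gets its own index, and a new rule is compared only against rules whose
# ranges overlap it in every dimension. A sorted list per dimension stands in
# here for the nested B+ tree layers described in the abstract.
from bisect import insort

class RuleIndex:
    def __init__(self, dimensions):
        self.dimensions = dimensions
        self.rules = {}                            # rule_id -> {"ranges": ..., "action": ...}
        self.index = {d: [] for d in dimensions}   # dim -> sorted (low, high, rule_id)

    def _overlapping(self, dim, low, high):
        """Rule ids whose range on `dim` overlaps [low, high] (linear scan over the
        sorted index; a real B+ tree would answer this with a range query)."""
        return {rid for (lo, hi, rid) in self.index[dim] if lo <= high and low <= hi}

    def add(self, rule_id, ranges, action):
        # A new rule conflicts with existing rules that overlap it on *every*
        # dimension yet prescribe a different action.
        candidates = None
        for d in self.dimensions:
            hits = self._overlapping(d, *ranges[d])
            candidates = hits if candidates is None else candidates & hits
        conflicts = [r for r in (candidates or set())
                     if self.rules[r]["action"] != action]
        self.rules[rule_id] = {"ranges": ranges, "action": action}
        for d in self.dimensions:
            insort(self.index[d], (*ranges[d], rule_id))
        return conflicts

idx = RuleIndex(["src_ip", "dst_port"])
print(idx.add("r1", {"src_ip": (10, 20), "dst_port": (80, 80)}, "allow"))   # []
print(idx.add("r2", {"src_ip": (15, 30), "dst_port": (80, 443)}, "deny"))   # ['r1']
```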

Singi, Kapil, Kaulgud, Vikrant, Bose, R.P. Jagadeesh Chandra, Podder, Sanjay.  2019.  CAG: Compliance Adherence and Governance in Software Delivery Using Blockchain. 2019 IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB). :32—39.

The software development life cycle (SDLC) starts with business and functional specifications signed with a client. In addition to this, the specifications also capture policy / procedure / contractual / regulatory / legislation / standard compliances with respect to a given client industry. The SDLC must adhere to service level agreements (SLAs) while being compliant to development activities, processes, tools, frameworks, and reuse of open-source software components. In today's world, global software development happens across geographically distributed (autonomous) teams consuming extraordinary amounts of open source components drawn from a variety of disparate sources. Although this is helping organizations deal with technical and economic challenges, it is also increasing unintended risks, e.g., the use of software with a non-compliant license might lead to copyright issues and litigation, the use of a library with vulnerabilities poses security risks, etc. Mitigation of such risks and remedial measures is a challenge due to lack of visibility and transparency of activities across these distributed teams as they mostly operate in silos. We believe a unified model that non-invasively monitors and analyzes the activities of distributed teams will help a long way in building software that adheres to various compliances. In this paper, we propose a decentralized CAG - Compliance Adherence and Governance framework using blockchain technologies. Our framework (i) enables the capturing of required data points based on compliance specifications, (ii) analyzes the events for non-conformant behavior through smart contracts, (iii) provides real-time alerts, and (iv) records and maintains an immutable audit trail of various activities.

Mishra, Menaka, Upadhyay, A.K..  2019.  Need of Private and Public Sector Information Security. 2019 9th International Conference on Cloud Computing, Data Science Engineering (Confluence). :168—173.

In this research paper, the author surveys the need for data protection from intelligent systems in the private and public sectors. For this, she identifies that Smart Information Security Intel processes need to be the suggestive key policy for both sectors of governance, whether public or private. Information is very sensitive for any organization. Where government offices are concerned, information needs to be abstracted and encapsulated so that there is no information stealing. For this purpose, the right skill set and new optimized technology need to be stationed. The author identifies that digital bar-coded, airport-like security using conveyor belts and digital bar-coded conveyor boxes to scan switched-on articles such as Internet of Things devices needs to be put in place. Otherwise, there can potentially be stealing of data, articles or information from operational sites where access is unauthorized. Such activities shall need to be scrutinized minutely. Biometric updates such as fingerprint, iris, voice and face recognition patterns in the virtual data tables must be taken to keep the data entry-exit log up to date. The information technicians of the sentinel systems must help catch anomalies during professional working time in the private and public sectors if there is a red flag as an indicator. The author discusses in detail what we shall station, how we shall station it, and what measures we might need to undertake to safeguard against the stealing of sensitive information from organizations such as administration buildings, government buildings, schools, hospitals, courts, private buildings, banks and all other offices nationwide. The TO-BE processes shall make the AS-IS office system more information-secure and data-protected, and personnel security stronger.