Biblio

Found 154 results

Filters: Keyword is Computing Theory
2021-07-28
Vinzamuri, Bhanukiran, Khabiri, Elham, Bhamidipaty, Anuradha, Mckim, Gregory, Gandhi, Biren.  2020.  An End-to-End Context Aware Anomaly Detection System. 2020 IEEE International Conference on Big Data (Big Data). :1689—1698.
Anomaly detection (AD) is very important across several real-world problems in the heavy industries and Internet-of-Things (IoT) domains. Traditional methods so far have categorized anomaly detection into (a) unsupervised, (b) semi-supervised and (c) supervised techniques. A relatively unexplored direction is the development of context-aware anomaly detection systems which can build on top of any of these three techniques by using side information. Context can be captured from a different modality such as semantic graphs encoding the grouping of sensors governed by the physics of the asset. Process flow diagrams of an operational plant depicting causal relationships between sensors can also provide useful context for ML algorithms. Capturing such semantics is itself challenging; however, our paper mainly focuses on (a) designing and implementing effective anomaly detection pipelines using sparse Gaussian Graphical Models with various statistical distance metrics, and (b) differentiating these pipelines by embedding contextual semantics inferred from graphs so as to obtain better KPIs in practice. The motivation for the latter has been explained above, while the former is well motivated by the relatively mediocre performance of highly parametric deep learning methods on small tabular datasets (compared to images) such as IoT sensor data. In contrast to such traditional automated deep learning (AutoAI) techniques, our anomaly detection system is based on developing semantics-driven, industry-specific ML pipelines which perform scalable computation, evaluating several models to identify the best one. We benchmark our AD method against state-of-the-art AD techniques on publicly available UCI datasets. We also conduct a case study on IoT sensor and semantic data procured from a large thermal energy asset to evaluate the importance of semantics in enhancing our pipelines. In addition, we provide explainable insights for our model, giving a reliability engineer a complete perspective.
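As an illustration of the kind of pipeline the abstract describes (not the authors' implementation), the following minimal Python sketch fits a sparse Gaussian Graphical Model on healthy sensor windows and flags test rows by a Mahalanobis-style statistical distance; the synthetic data, the alpha value and the 99th-percentile threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))                       # "healthy" sensor windows (synthetic)
X_test = np.vstack([rng.normal(size=(50, 8)),
                    rng.normal(loc=3.0, size=(5, 8))])    # last rows are anomalous

ggm = GraphicalLasso(alpha=0.1).fit(X_train)              # sparse precision via L1 penalty
mu = X_train.mean(axis=0)
P = ggm.precision_

def mahalanobis_score(x):
    d = x - mu
    return float(d @ P @ d)                               # larger value, more anomalous

train_scores = np.array([mahalanobis_score(x) for x in X_train])
threshold = np.quantile(train_scores, 0.99)               # illustrative cut-off
test_scores = np.array([mahalanobis_score(x) for x in X_test])
print("flagged rows:", np.where(test_scores > threshold)[0])
```

In the setting described above, the contextual semantics would additionally constrain which groups of sensors share a graphical model before fitting.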
Grimsman, David, Hespanha, João P., Marden, Jason R..  2020.  Stackelberg Equilibria for Two-Player Network Routing Games on Parallel Networks. 2020 American Control Conference (ACC). :5364—5369.
We consider a two-player zero-sum network routing game in which a router wants to maximize the amount of legitimate traffic that flows from a given source node to a destination node and an attacker wants to block as much legitimate traffic as possible by flooding the network with malicious traffic. We address scenarios with asymmetric information, in which the router must reveal its policy before the attacker decides how to distribute the malicious traffic among the network links, which is naturally modeled by the notion of Stackelberg equilibria. The paper focuses on parallel networks, and includes three main contributions: we show that computing the optimal attack policy against a given routing policy is an NP-hard problem; we establish conditions under which the Stackelberg equilibria lead to no regret; and we provide a metric that can be used to quantify how uncertainty about the attacker's capabilities limits the router's performance.
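The following toy sketch (a hypothetical parallel-link model, not the paper's exact formulation) illustrates the Stackelberg structure: the router commits to a flow split, after which the attacker brute-forces its best response; the exponential search is consistent with the paper's NP-hardness result for the general attack-computation problem. Capacities, routing and budget values are made up.

```python
from itertools import product

capacity = [4.0, 3.0, 2.0]        # per-link capacities (assumed)
routing  = [3.0, 2.0, 1.0]        # router's committed legitimate flow per link
budget   = 4                      # attacker budget, split in unit increments

def delivered(routing, attack):
    # legitimate traffic squeezed out by malicious traffic sharing the link
    return sum(min(r, max(0.0, c - a))
               for r, c, a in zip(routing, capacity, attack))

def best_attack(routing):
    # brute-force attacker best response (exponential, fine for toy sizes)
    best = None
    for alloc in product(range(budget + 1), repeat=len(capacity)):
        if sum(alloc) != budget:
            continue
        val = delivered(routing, alloc)
        if best is None or val < best[0]:
            best = (val, alloc)
    return best

print(best_attack(routing))       # worst-case delivered flow and attack split
```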
Alsmadi, Izzat, Zarrad, Anis, Yassine, Abdulrahmane.  2020.  Mutation Testing to Validate Networks Protocols. 2020 IEEE International Systems Conference (SysCon). :1—8.
As networks continue to grow in complexity using wired and wireless technologies, efficient testing solutions should accommodate such changes and growth. Network simulators provide a network-independent environment for different types of network testing. This paper is motivated by the observation that, in many cases in the literature, the success of developed network protocols is very sensitive to the initial conditions and assumptions of the testing scenarios. Network services are deployed in complex environments; results of testing and simulation can vary from one environment to another and sometimes in the same environment at different times. Our goal is to propose mutation-based integration testing that can be deployed with network protocols and serve as Built-in Tests (BiT). This paper proposes an integrated mutation testing framework to achieve systematic generation of test cases for different scenario types. The scenario description and variable settings should be consistent with the protocol specification and the simulation environment. We focused on creating test cases for critical scenarios rather than preliminary or simplified scenarios. This will help users to report confident simulation results and provide credible protocol analysis. Criticality is defined as a combination of network performance metrics and coverage of critical functions. The proposed solution is experimentally shown to obtain accurate evaluation results with less testing effort by generating high-quality testing scenarios. Generated test scenarios will serve as BiTs for the network simulator. The quality of the test scenarios is evaluated from three perspectives: (i) code coverage, (ii) mutation score and (iii) testing effort. In this work, we implemented the testing framework in NS2, but it can be extended to any other simulation environment.
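A minimal sketch of the mutation idea in Python (illustrative only, not the paper's NS2 framework): scenario variables are perturbed to produce mutant scenarios, and a mutation score is computed from how many mutants the test oracle detects. The scenario fields and the stand-in oracle are hypothetical.

```python
import random

base_scenario = {"nodes": 20, "packet_rate": 50, "loss_prob": 0.01}   # hypothetical fields

def mutate(scenario, rng):
    m = dict(scenario)
    key = rng.choice(list(m))
    if isinstance(m[key], int):
        m[key] = max(1, m[key] + rng.choice([-20, -5, 5, 20]))        # perturb integer variables
    else:
        m[key] = min(1.0, max(0.0, m[key] * rng.choice([0.1, 10.0]))) # perturb probabilities
    return m

def test_suite_detects(mutant):
    # stand-in oracle: a real BiT would run the simulator and check KPI thresholds
    return mutant["loss_prob"] > 0.05 or mutant["packet_rate"] > 60

rng = random.Random(1)
mutants = [mutate(base_scenario, rng) for _ in range(100)]
killed = sum(test_suite_detects(m) for m in mutants)
print(f"mutation score: {killed / len(mutants):.2f}")
```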
ISSN: 2472-9647
Mell, Peter, Gueye, Assane.  2020.  A Suite of Metrics for Calculating the Most Significant Security Relevant Software Flaw Types. 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). :511—516.
The Common Weakness Enumeration (CWE) is a prominent list of software weakness types. This list is used by vulnerability databases to describe the underlying security flaws within analyzed vulnerabilities. This linkage opens the possibility of using the analysis of software vulnerabilities to identify the most significant weaknesses that enable those vulnerabilities. We accomplish this through creating mashup views combining CWE weakness taxonomies with vulnerability analysis data. The resulting graphs have CWEs as nodes, edges derived from multiple CWE taxonomies, and nodes adorned with vulnerability analysis information (propagated from children to parents). Using these graphs, we develop a suite of metrics to identify the most significant weakness types (using the perspectives of frequency, impact, exploitability, and overall severity).
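A minimal sketch of the mashup idea (illustrative, not the exact metric suite): CWEs become nodes of a directed graph, taxonomy edges point from child to parent, and per-CWE vulnerability counts are propagated from children to parents before ranking. The tiny taxonomy slice and counts below are hypothetical; networkx is assumed.

```python
import networkx as nx

g = nx.DiGraph()                                  # edge direction: child -> parent
g.add_edges_from([("CWE-79", "CWE-74"), ("CWE-89", "CWE-74"),
                  ("CWE-74", "CWE-707")])         # hypothetical taxonomy slice
counts = {"CWE-79": 120, "CWE-89": 80}            # per-CWE vulnerability frequencies (assumed)

score = {n: counts.get(n, 0) for n in g}
for n in nx.topological_sort(g):                  # children are processed before parents
    for parent in g.successors(n):
        score[parent] += score[n]                 # propagate counts upward

print(sorted(score.items(), key=lambda kv: -kv[1]))   # most significant CWEs first
```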
Wang, Wenhui, Chen, Liandong, Han, Longxi, Zhou, Zhihong, Xia, Zhengmin, Chen, Xiuzhen.  2020.  Vulnerability Assessment for ICS system Based on Zero-day Attack Graph. 2020 International Conference on Intelligent Computing, Automation and Systems (ICICAS). :1—5.
The numerous attacks on ICS systems pose severe threats to critical infrastructure. Extensive studies have focused on risk assessment through discovering vulnerabilities. However, identifying zero-day vulnerabilities is challenging because they are unknown to defenders. Here we seek to measure ICS system zero-day risk by building an enhanced attack graph for expected attack paths that exploit zero-day vulnerabilities. In this study, we define security metrics for zero-day vulnerabilities in an ICS. We then create a zero-day attack graph that guides how to harden the system by measuring the attack paths exploiting zero-day vulnerabilities. Our study provides a vulnerability assessment method for ICS systems that accounts for zero-day vulnerabilities via the zero-day attack graph. By assessing the risk of unknown vulnerabilities, our work helps to close the imbalance between attackers and defenders and is thus essential to ICS system security.
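A small sketch of one way such a metric can be computed (an assumed model in the spirit of k-zero-day-style metrics, not necessarily the paper's exact definition): edges of the attack graph are labelled with the zero-day vulnerabilities they would require, and the metric is the minimum number of distinct zero-days on any path to the critical asset. Node and vulnerability names are hypothetical.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge("internet", "hmi", zero_days={"zd_hmi_rce"})
g.add_edge("hmi", "plc", zero_days={"zd_plc_proto"})
g.add_edge("internet", "historian", zero_days={"zd_db"})
g.add_edge("historian", "plc", zero_days={"zd_plc_proto"})

def min_zero_days(graph, src, dst):
    # enumerate simple paths and keep the smallest set of distinct zero-days needed
    best = None
    for path in nx.all_simple_paths(graph, src, dst):
        needed = set()
        for u, v in zip(path, path[1:]):
            needed |= graph[u][v]["zero_days"]
        if best is None or len(needed) < len(best):
            best = needed
    return best

print(min_zero_days(g, "internet", "plc"))   # a smaller set means weaker resilience
```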
Aigner, Andreas, Khelil, Abdelmajid.  2020.  A Semantic Model-Based Security Engineering Framework for Cyber-Physical Systems. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1826—1833.
The coupling of safety-relevant embedded- and cyber-space components to build Cyber-Physical Systems (CPS) extends the functionality and quality in many business domains, while also creating new ones. Prime examples like the Internet of Things and Industry 4.0 enable new technologies and extend the service capabilities of physical entities by building a universe of connected devices. In addition to higher complexity, the coupling of these heterogeneous systems results in many new challenges, which should be addressed by engineers and administrators. Here, security represents a major challenge, which may be well addressed in cyber-space engineering, but less so in embedded-system or CPS design. Although model-based engineering provides significant benefits for system architects, like reducing complexity and enabling automated analysis, and is considered a standard methodology in embedded systems design, the aspect of security may not have had a major role in traditional engineering concepts. The characteristics of CPS in particular, as well as the coupling of safety-relevant (physical) components with highly scalable entities of the cyber-space domain, have an enormous impact on the overall level of security, owing to the side effects and uncertainties they introduce. Therefore, we aim to define a model-based security-engineering framework which is tailored to the needs of CPS engineers. In doing so, we focus on the actual modeling process, the evaluation of security, as well as quantitatively expressing the security of a deployed CPS. Overall, and in contrast to other approaches, we shift the engineering concepts to a semantic level, which allows the identified CPS challenges to be addressed in the most efficient way.
Aigner, Andreas, Khelil, Abdelmajid.  2020.  A Scoring System to Efficiently Measure Security in Cyber-Physical Systems. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1141—1145.
The importance of Cyber-Physical Systems (CPS) gains more and more weight in our daily business and private life. Although CPS build the backbone for major trends, like Industry 4.0 and connected vehicles, they also pose many new challenges. One major challenge can be found in achieving a high level of security within such highly connected environments, in which an unpredictable number of heterogeneous systems with often-distinctive characteristics interact with each other. In order to develop high-level security solutions, system designers must eventually know the current level of security of their specification. To this end, security metrics and scoring frameworks are essential, as they quantitatively express the security of a given design or system. However, existing solutions may not be able to handle the proposed challenges of CPS, as they mainly focus on one particular system and one specific attack. Therefore, we aim to elaborate a security scoring mechanism which can efficiently be used in CPS, while considering all essential information. We break down each system within the CPS into its core functional blocks and analyze a variety of attacks in terms of exploitability, scalability of attacks, as well as potential harm to targeted assets. With this approach, we get an overall assessment of security for the whole CPS, as it integrates the security state of all interacting systems. This allows handling the presented complexity in CPS in a more efficient way than existing solutions do.
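An illustrative aggregation sketch (assumed weights and scores, not the authors' formula): each functional block is scored per attack by exploitability, attack scalability and potential harm, the worst case is taken per block, and block scores are averaged over the CPS.

```python
# Hypothetical functional blocks and per-attack ratings on a 0..1 scale.
blocks = {
    "sensor_io": [{"exploitability": 0.7, "scalability": 0.4, "harm": 0.9}],
    "gateway":   [{"exploitability": 0.5, "scalability": 0.8, "harm": 0.6}],
    "cloud_api": [{"exploitability": 0.6, "scalability": 0.9, "harm": 0.5}],
}
weights = {"exploitability": 0.4, "scalability": 0.3, "harm": 0.3}   # assumed weights

def block_score(attacks):
    # worst-case (maximum) weighted attack score for the block
    return max(sum(weights[k] * a[k] for k in weights) for a in attacks)

per_block = {b: block_score(atks) for b, atks in blocks.items()}
cps_score = sum(per_block.values()) / len(per_block)   # simple average over blocks
print(per_block, round(cps_score, 3))
```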
2021-07-27
Shabbir, Mudassir, Li, Jiani, Abbas, Waseem, Koutsoukos, Xenofon.  2020.  Resilient Vector Consensus in Multi-Agent Networks Using Centerpoints. 2020 American Control Conference (ACC). :4387–4392.
In this paper, we study the resilient vector consensus problem in multi-agent networks and improve the resilience guarantees of existing algorithms. In resilient vector consensus, agents update their states, which are vectors in ℝ^d, by locally interacting with other agents, some of which might be adversarial. The main objective is to ensure that normal (non-adversarial) agents converge to a common state that lies in the convex hull of their initial states. Currently, resilient vector consensus algorithms, such as approximate distributed robust convergence (ADRC), are based on the idea that to update states in each time step, every normal node needs to compute a point that lies in the convex hull of its normal neighbors' states. To compute such a point, the idea of Tverberg partition is typically used, which is computationally hard. Approximation algorithms for Tverberg partition negatively impact the resilience guarantees of the consensus algorithm. To deal with this issue, we propose to use the idea of the centerpoint, which is an extension of the median to higher dimensions, instead of Tverberg partition. We show that the resilience of such algorithms to adversarial nodes is improved if we use the notion of centerpoint. Furthermore, using the centerpoint provides a better characterization of the necessary and sufficient conditions guaranteeing resilient vector consensus. We analyze these conditions in two, three, and higher dimensions separately. We also numerically evaluate the performance of our approach.
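A deliberately simplified sketch of the update idea: here the coordinate-wise median serves as a crude stand-in for a true centerpoint (which the paper computes properly in ℝ^d), and each normal agent moves toward that robust interior point despite one adversarial agent sending arbitrary values. The network, step size and adversary model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
states = rng.uniform(0, 1, size=(7, 2))      # 7 agents with 2-D states
adversarial = {6}                            # agent 6 sends arbitrary values

for step in range(50):
    new_states = states.copy()
    for i in range(len(states)):
        if i in adversarial:
            new_states[i] = rng.uniform(-10, 10, size=2)     # arbitrary misbehavior
            continue
        neighbor_states = states                             # complete graph assumed
        robust_point = np.median(neighbor_states, axis=0)    # centerpoint surrogate
        new_states[i] = 0.5 * states[i] + 0.5 * robust_point
    states = new_states

print(states[:6].round(3))   # normal agents cluster near a common interior point
```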
Ruiz-Martin, Cristina, Wainer, Gabriel, Lopez-Paredes, Adolfo.  2020.  Studying Communications Resiliency in Emergency Plans. 2020 Spring Simulation Conference (SpringSim). :1–12.
Recent disasters have shown that hazards can be unpredictable and can have catastrophic consequences. Emergency plans are key to dealing with these situations and communications play a key role in emergency management. In this paper, we provide a formalism to design resilient emergency plans in terms of communications. We exemplify how to use the formalism using a case study of a Nuclear Emergency Plan.
Sinha, Ayush, Chakrabarti, Sourin, Vyas, O.P..  2020.  Distributed Grid restoration based on graph theory. 2020 IEEE International Symposium on Sustainable Energy, Signal Processing and Cyber Security (iSSSC). :1–6.
With the emergence of smart grids as the primary means of distribution across wide areas, the importance of improving their resilience to faults and mishaps is increasing. The reliability of a distribution system depends upon its tolerance to attacks and the efficiency of restoration after an attack occurs. This paper proposes a unique graph-theoretic approach to the restoration of smart grids that are under attack by impostors or affected by natural calamities: the grid, with its primary generators and distributed generators (DGs), is optimally islanded into sub-grids so as to minimize the amount of load that must be shed while simultaneously minimizing the number of switching operations. The minimum load that needs to be shed is computed in the first stage, followed by selecting the nodes whose load is to be shed to achieve such a configuration, and finally deriving the sequence of switching operations required to reach that configuration. The proposed method is tested on the standard IEEE 37-bus system and a 1069-bus grid system; the minimum load shed, the sequencing steps to the optimal configuration, and the time to achieve that configuration are presented, which demonstrates the effectiveness of the method compared to existing methods in the field. Moreover, the proposed algorithm can be easily modified to incorporate any other constraints which might arise due to any operational configuration of the grid.
Yang, Chien-Sheng, Avestimehr, A. Salman.  2020.  Coded Computing for Boolean Functions. 2020 International Symposium on Information Theory and Its Applications (ISITA). :141–145.
The growing size of modern datasets necessitates splitting a large-scale computation into smaller computations and operating in a distributed manner for improving overall performance. However, adversarial servers in a distributed computing system may deliberately send erroneous data in order to affect the computation for their benefit. Computing Boolean functions is the key component of many applications of interest, e.g., classification problems, verification functions in the blockchain, and the design of cryptographic algorithms. In this paper, we consider the problem of computing a Boolean function in which the computation is carried out distributively across several workers, with particular focus on security against Byzantine workers. We note that any Boolean function can be modeled as a multivariate polynomial, which can have high degree in general. Hence, the recently proposed Lagrange Coded Computing (LCC) can be used to simultaneously provide resiliency, security, and privacy. However, the security threshold (i.e., the maximum number of adversarial workers that can be tolerated) provided by LCC can be extremely low if the degree of the polynomial is high. Our goal is to design an efficient coding scheme which achieves the optimal security threshold. We propose two novel schemes called coded Algebraic Normal Form (ANF) and coded Disjunctive Normal Form (DNF). Instead of modeling the Boolean function as a general polynomial, the key idea of the proposed schemes is to model it as the concatenation of some linear functions and threshold functions. The proposed coded ANF and coded DNF outperform LCC by providing a security threshold that is independent of the polynomial's degree.
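A small sketch of the modeling step behind coded ANF: any Boolean function given as a truth table can be rewritten in algebraic normal form (an XOR of AND monomials) via the binary Moebius transform. The three-input XOR function below is just an example; the coding and distribution layers of the scheme are not shown.

```python
def anf_coefficients(truth_table):
    # truth_table[i] = f(bits of i); length must be a power of two
    n = len(truth_table).bit_length() - 1
    coeff = list(truth_table)
    for i in range(n):                      # in-place binary Moebius transform over GF(2)
        step = 1 << i
        for j in range(len(coeff)):
            if j & step:
                coeff[j] ^= coeff[j ^ step]
    return coeff                            # coeff[m] = 1 iff monomial m appears in the ANF

f = [0, 1, 1, 0, 1, 0, 0, 1]                # truth table of x0 XOR x1 XOR x2 (example)
print(anf_coefficients(f))                  # -> [0, 1, 1, 0, 1, 0, 0, 0]: x0 ^ x1 ^ x2
```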
Nweke, Livinus Obiora, Wolthusen, Stephen D..  2020.  Resilience Analysis of Software-Defined Networks Using Queueing Networks. 2020 International Conference on Computing, Networking and Communications (ICNC). :536–542.
Software-Defined Networks (SDN) are being adopted widely and are also likely to be deployed as the infrastructure of systems with critical real-time properties such as Industrial Control Systems (ICS). This raises the question of what security and performance guarantees can be given for the data plane of such critical systems and whether any control plane actions will adversely affect these guarantees, particularly for quality of service in real-time systems. In this paper we study the existing literature on the analysis of SDN using queueing networks and show ways in which models need to be extended to study attacks that are based on arrival rates and service time distributions of flows in SDN.
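A back-of-the-envelope illustration of the kind of effect such models capture (a single M/M/1 queue, far simpler than the queueing networks discussed in the paper): as an attacker drives the flow-arrival rate toward the controller's service rate, the mean sojourn time blows up. The service-rate figure is an assumption.

```python
def mm1_mean_sojourn(arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        return float("inf")                 # unstable: the queue grows without bound
    return 1.0 / (service_rate - arrival_rate)

service_rate = 1000.0                       # controller requests/s (assumed)
for arrival_rate in (200.0, 800.0, 950.0, 999.0):
    t = mm1_mean_sojourn(arrival_rate, service_rate)
    print(f"lambda={arrival_rate:7.1f}  mean sojourn time={t * 1e3:8.2f} ms")
```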
Loreti, Daniela, Artioli, Marcello, Ciampolini, Anna.  2020.  Solving Linear Systems on High Performance Hardware with Resilience to Multiple Hard Faults. 2020 International Symposium on Reliable Distributed Systems (SRDS). :266–275.
As large-scale linear equation systems are pervasive in many scientific fields, great efforts have been made over the last decade to realize efficient techniques for solving such systems, possibly relying on High Performance Computing (HPC) infrastructures to boost performance. In this framework, the ever-growing scale of supercomputers inevitably increases the frequency of faults, making fault tolerance a crucial issue of HPC application development. A previous study [1] investigated the possibility of enhancing the Inhibition Method (IMe), a linear systems solver for dense unstructured matrices, with fault tolerance to single hard errors, i.e. failures causing one computing processor to stop. This article extends [1] by proposing an efficient technique to obtain fault tolerance to multiple hard errors, which may occur concurrently on different processors belonging to the same or different machines. An improved parallel implementation is also proposed, which is particularly suitable for HPC environments and moves towards complete decentralization. The theoretical analysis suggests that the technique (which requires neither checkpointing nor rollback) is able to provide tolerance to multiple faults at the price of a small overhead and a limited number of additional processors to store the checksums. Experimental results on an HPC architecture validate the theoretical study, showing promising performance improvements w.r.t. a popular fault-tolerant solving technique.
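A toy sketch of the general checksum idea that underlies such rollback-free fault tolerance (not the IMe-specific scheme, which uses additional checksum processors and handles multiple concurrent faults): one extra checksum row is enough to rebuild a single lost block row exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))                 # data distributed as 4 block rows
checksum = A.sum(axis=0)                    # an extra "processor" stores the column sums

lost = 2                                    # the processor holding row 2 fails
surviving = np.delete(A, lost, axis=0)
recovered_row = checksum - surviving.sum(axis=0)

print(np.allclose(recovered_row, A[lost]))  # True: the row is reconstructed exactly
```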
Beyza, Jesus, Bravo, Victor M., Garcia-Paricio, Eduardo, Yusta, Jose M., Artal-Sevil, Jesus S..  2020.  Vulnerability and Resilience Assessment of Power Systems: From Deterioration to Recovery via a Topological Model based on Graph Theory. 2020 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC). 4:1–6.
Traditionally, vulnerability is the level of degradation caused by failures or disturbances, and resilience is the ability to recover after a high-impact event. This paper presents a topological procedure based on graph theory to evaluate the vulnerability and resilience of power grids. A cascading failures model is developed by eliminating lines both deliberately and randomly, and four restoration strategies inspired by the network approach are proposed. In both cases, the degradation and recovery of the electrical infrastructure are quantified through four centrality measures. Here, an index called flow-capacity is proposed to measure the level of network overload during the iterative processes. The developed sequential framework was tested on a graph of 600 nodes and 1196 edges built from the 400 kV high-voltage power system in Spain. The conclusions obtained show that the statistical graph indices measure different topological aspects of the network, so it is essential to combine the results to obtain a broader view of the structural behaviour of the infrastructure.
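A minimal sketch of the deterioration side of such an analysis (illustrative only; a small synthetic graph stands in for the Spanish 400 kV network): lines with the highest edge betweenness are removed one by one and the size of the largest connected component is tracked.

```python
import networkx as nx

g = nx.watts_strogatz_graph(60, 4, 0.1, seed=4)     # small stand-in for a grid topology

def largest_component_fraction(graph):
    biggest = max(len(c) for c in nx.connected_components(graph))
    return biggest / graph.number_of_nodes()

for step in range(10):
    betweenness = nx.edge_betweenness_centrality(g)
    target = max(betweenness, key=betweenness.get)  # deliberate (targeted) line removal
    g.remove_edge(*target)
    print(step, round(largest_component_fraction(g), 3))
```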
MacDermott, Áine, Carr, John, Shi, Qi, Baharon, Mohd Rizuan, Lee, Gyu Myoung.  2020.  Privacy Preserving Issues in the Dynamic Internet of Things (IoT). 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.
The convergence of critical infrastructure and data, including government and enterprise, to the dynamic Internet of Things (IoT) environment and future digital ecosystems presents significant challenges for privacy and identity in these interconnected domains. An increasing variety of devices and technologies is being introduced, rendering existing security tools inadequate to deal with the dynamic scale and varying actors. The IoT is increasingly data-driven, with user sovereignty being essential and with actors in varying scenarios including the user/customer, device, manufacturer, third-party processor, etc. Therefore, flexible frameworks and diverse security requirements for such sensitive environments are needed to secure identities and authenticate IoT devices and their data, protecting privacy and integrity. In this paper we present a review of the principles, techniques and algorithms that can be adapted from other distributed computing paradigms. This review informs the development of a collaborative decision-making framework for heterogeneous entities in a distributed domain, whilst simultaneously highlighting privacy-preserving issues in the IoT. In addition, we present our trust-based privacy-preserving schema using the Dempster-Shafer theory of evidence. While still in its infancy, this application could help maintain a level of privacy and non-repudiation in collaborative environments such as the IoT.
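A small sketch of Dempster's rule of combination, the evidence-fusion step that a trust-based schema of this kind relies on, over a two-element frame {trustworthy, untrustworthy}; the mass assignments are illustrative.

```python
def combine(m1, m2):
    frame = {"T", "U", "TU"}                 # "TU" denotes the full ignorance set
    intersect = {("T", "T"): "T", ("U", "U"): "U",
                 ("T", "TU"): "T", ("TU", "T"): "T",
                 ("U", "TU"): "U", ("TU", "U"): "U",
                 ("TU", "TU"): "TU"}
    combined, conflict = {k: 0.0 for k in frame}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            key = intersect.get((a, b))
            if key is None:                  # T vs U: empty intersection, mass is conflict
                conflict += pa * pb
            else:
                combined[key] += pa * pb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

m_device = {"T": 0.6, "U": 0.1, "TU": 0.3}   # evidence from one IoT observer (assumed)
m_peer   = {"T": 0.5, "U": 0.2, "TU": 0.3}   # evidence from another observer (assumed)
print(combine(m_device, m_peer))
```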
Lu, Tao, Xu, Hongyun, Tian, Kai, Tian, Cenxi, Jiang, Rui.  2020.  Semantic Location Privacy Protection Algorithm Based on Edge Cluster Graph. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1304–1309.
With the development of positioning technology and the popularity of mobile devices, location-based services have been widely deployed. To use these services, users must provide the server with accurate location information, from which an attacker can infer sensitive information by intercepting queries. In this paper, we model the road network as an edge cluster graph with its location semantics considered. Then, we propose the Circle First Structure Optimization (CFSO) algorithm, which generates an anonymous set by adding optimal adjacent locations. Furthermore, we introduce controllable randomness and propose the Attack-Resilient (AR) algorithm to enhance the anti-attack ability. Meanwhile, to reduce the system overhead, our algorithms build the anonymous set quickly and take the structure of the anonymous set into account. Finally, we conduct experiments on a real map and the results demonstrate a higher anonymity success rate and a stronger anti-attack capability with less system overhead.
Van Vu, Thi, Luong, The Dung, Hoang, Van Quan.  2020.  An Elliptic Curve-based Protocol for Privacy Preserving Frequency Computation in 2-Part Fully Distributed Setting. 2020 12th International Conference on Knowledge and Systems Engineering (KSE). :91–96.
Privacy-preserving frequency computation is critical to privacy-preserving data mining in the 2-Part Fully Distributed Setting (such as association rule analysis, clustering, and classification analysis) and has been investigated in many studies. However, existing solutions are based on the ElGamal cryptosystem, which makes their computation and communication efficiency low. Therefore, this paper proposes an improved protocol using an Elliptic Curve Cryptosystem. The theoretical and experimental analysis shows that the proposed method is more efficient in both computation and communication than other methods.
Bentafat, Elmahdi, Rathore, M. Mazhar, Bakiras, Spiridon.  2020.  Privacy-Preserving Traffic Flow Estimation for Road Networks. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–6.
Future intelligent transportation systems necessitate a fine-grained and accurate estimation of vehicular traffic flows across critical paths of the underlying road network. This task is relatively trivial if we are able to collect detailed trajectories from every moving vehicle throughout the day. Nevertheless, this approach compromises the location privacy of the vehicles and may be used to build accurate profiles of the corresponding individuals. To this end, this work introduces a privacy-preserving protocol that leverages roadside units (RSUs) to communicate with the passing vehicles, in order to construct encrypted Bloom filters stemming from the vehicle IDs. The aggregate Bloom filters are encrypted with a threshold cryptosystem and can only be decrypted by the transportation authority in collaboration with multiple trusted entities. As a result, the individual communications between the vehicles and the RSUs remain secret. The decrypted Bloom filters reveal the aggregate traffic information at each RSU, but may also serve as a means to compute an approximation of the traffic flow between any pair of RSUs, by simply estimating the number of common vehicles in their respective Bloom filters. We performed extensive simulation experiments with various configuration parameters and demonstrate that our protocol reduces the estimation error considerably when compared to the current state-of-the-art approaches. Furthermore, our implementation of the underlying cryptographic primitives illustrates the feasibility, practicality, and scalability of the system.
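A sketch of the flow-estimation step using standard Bloom-filter algebra (assumed here to approximate what the decrypted aggregate filters permit): the number of vehicles common to two RSUs is estimated by inclusion-exclusion over cardinalities inverted from bit counts. Filter size, hash count and bit counts are made up.

```python
import math

def estimate_cardinality(bits_set, m, k):
    # classic inversion of the expected fill ratio of a Bloom filter
    return -(m / k) * math.log(1.0 - bits_set / m)

def estimate_common(t_a, t_b, t_union, m, k):
    # inclusion-exclusion on estimated cardinalities; the union filter is the bitwise OR
    return (estimate_cardinality(t_a, m, k) + estimate_cardinality(t_b, m, k)
            - estimate_cardinality(t_union, m, k))

m, k = 1 << 16, 4                      # filter size in bits and hash count (assumed)
t_a, t_b, t_union = 3100, 2800, 5500   # observed bit counts at RSU A, RSU B, and A|B
print(round(estimate_common(t_a, t_b, t_union, m, k)))
```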
Sengupta, Poushali, Paul, Sudipta, Mishra, Subhankar.  2020.  BUDS: Balancing Utility and Differential Privacy by Shuffling. 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–7.
Balancing utility and differential privacy by shuffling, or BUDS, is an approach for crowd-sourced statistical databases that achieves a strong privacy-utility balance using differential privacy theory. A novel algorithm is proposed that uses one-hot encoding and iterative shuffling together with loss estimation and risk minimization techniques to balance both utility and privacy. In this work, after one-hot encoded data is collected from different sources and clients, a novel attribute-shuffling step using iterative shuffling (based on the query asked by the analyst), together with loss estimation via an update function and risk minimization, produces a utility- and privacy-balanced differentially private report. In empirical tests of balanced utility and privacy, BUDS achieves ε = 0.02, which is a very promising result. Our algorithm maintains a privacy bound of ε = ln[t/((n1-1)S)] and a loss bound of c'|e^(ln[t/((n1-1)S)]) - 1|.
Jiao, Rui, Zhang, Lan, Li, Anran.  2020.  IEye: Personalized Image Privacy Detection. 2020 6th International Conference on Big Data Computing and Communications (BIGCOM). :91–95.
Massive numbers of images are being shared in a variety of ways, such as via social networking. The rich content of images raises serious privacy concerns. A great number of efforts have been devoted to designing mechanisms for privacy protection based on the assumption that privacy is well defined. However, in practice, given a collection of images it is usually nontrivial to decide which parts of images should be protected, since the sensitivity of objects is context-dependent and user-dependent. To meet the personalized privacy requirements of different users, we propose a system, IEye, to automatically detect private parts of images based on both common knowledge and personal knowledge. Specifically, for each user's images, multi-layered semantic graphs are constructed as feature representations of his/her images and a rule set is learned from those graphs, which describes his/her personalized privacy. In addition, an optimization algorithm is proposed to protect the user's privacy as well as minimize the loss of utility. We conduct experiments on two datasets, and the results verify the effectiveness of our design in detecting and protecting personalized image privacy.
Zheng, Zhihao, Cao, Zhenfu, Shen, Jiachen.  2020.  Practical and Secure Circular Range Search on Private Spatial Data. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :639–645.
With location-based services (LBS) booming, the volume of spatial data inevitably explodes. In order to reduce local storage and computational overhead, users tend to outsource data and initiate queries to the cloud. However, sensitive data or queries may be compromised if the cloud server has access to raw data and plaintext tokens. To cope with this problem, searchable encryption for geometric ranges is applied. Geometric range search has wide applications in many scenarios, especially circular range search. In this paper, a practical and secure circular range search scheme (PSCS) is proposed to support searching for spatial data in a circular range. With our scheme, a semi-honest cloud server will return data for a given circular range correctly without uncovering index privacy or query privacy. We propose a polynomial split algorithm which can decompose the inner product calculation neatly. Then, we define the security of our PSCS formally and prove that it is secure under same-closeness-pattern chosen-plaintext attacks (CLS-CPA) in theory. In addition, we demonstrate the efficiency and accuracy through analysis and experiments compared with existing schemes.
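A sketch of the well-known encoding that allows a circular range predicate to be evaluated as an inner product, the kind of algebraic structure that schemes like PSCS build their encrypted indexes on (the actual polynomial split and encryption layers are not shown).

```python
def point_vector(x, y):
    # vector derived from a data point (x, y)
    return [x * x + y * y, x, y, 1.0]

def query_vector(a, b, r):
    # inside the circle centred (a, b) with radius r  <=>  inner product <= 0
    return [1.0, -2.0 * a, -2.0 * b, a * a + b * b - r * r]

def inside(px, py, a, b, r):
    p, q = point_vector(px, py), query_vector(a, b, r)
    return sum(u * v for u, v in zip(p, q)) <= 0.0   # equals (px-a)^2 + (py-b)^2 - r^2

print(inside(1.0, 1.0, 0.0, 0.0, 2.0))   # True: the point lies in the circle
print(inside(3.0, 0.0, 0.0, 0.0, 2.0))   # False
```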
Driss, Maha, Aljehani, Amani, Boulila, Wadii, Ghandorh, Hamza, Al-Sarem, Mohammed.  2020.  Servicing Your Requirements: An FCA and RCA-Driven Approach for Semantic Web Services Composition. IEEE Access. 8:59326—59339.
The evolution of Service-Oriented Computing (SOC) provides more efficient software development methods for building and engineering new value-added service-based applications. SOC is a computing paradigm that relies on Web services as fundamental elements. Research and technical advancements in Web services composition have been considered as an effective opportunity to develop new service-based applications satisfying complex requirements rapidly and efficiently. In this paper, we present a novel approach enhancing the composition of semantic Web services. The novelty of our approach, as compared to others reported in the literature, rests on: i) mapping users'/organizations' requirements with Business Process Modeling Notation (BPMN) and semantic descriptions using ontologies, ii) considering functional requirements and also different types of non-functional requirements, such as quality of service (QoS), quality of experience (QoE), and quality of business (QoBiz), iii) using the Formal Concept Analysis (FCA) technique to select the optimal set of Web services, iv) considering composability levels between sequential Web services using the Relational Concept Analysis (RCA) technique to decrease the required adaptation efforts, and finally, v) validating the obtained service-based applications by performing an analytical technique, which is monitoring. The approach was evaluated on an extended version of the OWLS-TC dataset, which includes more than 10,830 Web service descriptions from various domains. The obtained results demonstrate that our approach successfully and effectively composes Web services satisfying different types of users' functional and non-functional requirements.
Kim, Hyeji, Jiang, Yihan, Kannan, Sreeram, Oh, Sewoong, Viswanath, Pramod.  2020.  Deepcode: Feedback Codes via Deep Learning. IEEE Journal on Selected Areas in Information Theory. 1:194—206.
The design of codes for communicating reliably over a statistically well defined channel is an important endeavor involving deep mathematical research and wide-ranging practical applications. In this work, we present the first family of codes obtained via deep learning, which significantly outperforms state-of-the-art codes designed over several decades of research. The communication channel under consideration is the Gaussian noise channel with feedback, whose study was initiated by Shannon; feedback is known theoretically to improve reliability of communication, but no practical codes that do so have ever been successfully constructed. We break this logjam by integrating information theoretic insights harmoniously with recurrent-neural-network based encoders and decoders to create novel codes that outperform known codes by 3 orders of magnitude in reliability and achieve a 3dB gain in terms of SNR. We also demonstrate several desirable properties of the codes: (a) generalization to larger block lengths, (b) composability with known codes, and (c) adaptation to practical constraints. This result also has broader ramifications for coding theory: even when the channel has a clear mathematical model, deep learning methodologies, when combined with channel-specific information-theoretic insights, can potentially beat state-of-the-art codes constructed over decades of mathematical research.
Basu, Prithwish, Salonidis, Theodoros, Kraczek, Brent, Saghaian, Sayed M., Sydney, Ali, Ko, Bongjun, La Porta, Tom, Chan, Kevin.  2020.  Decentralized placement of data and analytics in wireless networks for energy-efficient execution. IEEE INFOCOM 2020 - IEEE Conference on Computer Communications. :486—495.
We address energy-efficient placement of data and analytics components of composite analytics services on a wireless network to minimize execution-time energy consumption (computation and communication) subject to compute, storage and network resource constraints. We introduce an expressive analytics service hypergraph model for representing k-ary composability relationships (k ≥ 2) between various analytics and data components and leverage binary quadratic programming (BQP) to minimize the total energy consumption of a given placement of the analytics hypergraph nodes on the network subject to resource availability constraints. Then, after defining a potential energy functional Φ(·) to model the affinities of analytics components and network resources using analogs of attractive and repulsive forces in physics, we propose a decentralized Metropolis Monte Carlo (MMC) sampling method which seeks to minimize Φ by moving analytics and data on the network. Although Φ is non-convex, using a potential game formulation, we identify conditions under which the algorithm provably converges to a local minimum energy equilibrium placement configuration. Trace-based simulations of the placement of a deep-neural-network analytics service on a realistic wireless network show that for smaller problem instances our MMC algorithm yields placements with total energy within a small factor of BQP and more balanced workload distributions; for larger problems, it yields low-energy configurations while the BQP approach fails.
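A toy sketch of the Metropolis step only (with a made-up communication-cost energy standing in for the paper's potential functional Φ and no resource constraints): a component is proposed to move to another node and the move is accepted with the usual Metropolis probability.

```python
import math
import random

rng = random.Random(5)
nodes = ["n0", "n1", "n2", "n3"]
placement = {"detector": "n0", "tracker": "n0", "fusion": "n1"}      # hypothetical components
link_cost = {("n0", "n1"): 3.0, ("n0", "n2"): 4.0, ("n0", "n3"): 2.0,
             ("n1", "n2"): 1.0, ("n1", "n3"): 2.5, ("n2", "n3"): 1.5}

def energy(pl):
    # surrogate potential: communication cost between coupled components
    pairs = [("detector", "fusion"), ("tracker", "fusion")]
    total = 0.0
    for a, b in pairs:
        u, v = sorted((pl[a], pl[b]))
        total += 0.0 if u == v else link_cost[(u, v)]
    return total

temperature = 1.0
for _ in range(2000):
    comp = rng.choice(list(placement))
    proposal = dict(placement, **{comp: rng.choice(nodes)})          # move one component
    delta = energy(proposal) - energy(placement)
    if delta <= 0 or rng.random() < math.exp(-delta / temperature):  # Metropolis acceptance
        placement = proposal

print(placement, energy(placement))
```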
2021-06-02
Bychkov, Igor, Feoktistov, Alexander, Gorsky, Sergey, Edelev, Alexei, Sidorov, Ivan, Kostromin, Roman, Fereferov, Evgeniy, Fedorov, Roman.  2020.  Supercomputer Engineering for Supporting Decision-making on Energy Systems Resilience. 2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT). :1—6.
We propose a new approach to creating a subject-oriented distributed computing environment. Such an environment is used to support decision-making when solving relevant problems of ensuring energy system resilience. The proposed approach is based on the idea of advancing and integrating the following important capabilities in supercomputer engineering: continuous integration, delivery, and deployment of the system and applied software; high-performance computing in heterogeneous environments; multi-agent intelligent computation planning and resource allocation; big data processing and geo-information servicing for subject information, including weakly structured data; and decision-making support. This combination of capabilities and their advancement is unique to the subject domain under consideration, which concerns the combinatorial study of critical objects of energy systems. Evaluation of decision-making alternatives is carried out by applying combinatorial modeling and multi-criteria selection rules. The Orlando Tools framework is used as the basis for an integrated software environment. It implements a flexible modular approach to the development of scientific applications (distributed applied software packages).