Biblio

Found 152 results

Filters: Keyword is Robustness
2021-10-12
Gouk, Henry, Hospedales, Timothy M..  2020.  Optimising Network Architectures for Provable Adversarial Robustness. 2020 Sensor Signal Processing for Defence Conference (SSPD). :1–5.
Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any Lp norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.
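As a hedged illustration of the general idea behind Lipschitz-based certification (not the paper's specific Lp bound), a network whose Lipschitz constant is upper-bounded by the product of per-layer constants cannot change its prediction within a radius proportional to the logit margin. The layer constants and the conservative margin/(2L) radius below are illustrative assumptions:

```python
def layer_lipschitz_product(layer_constants):
    """Upper-bound the network's Lipschitz constant by the product of
    per-layer Lipschitz constants (a standard, loose composition bound)."""
    L = 1.0
    for c in layer_constants:
        L *= c
    return L

def certified_radius(logits, layer_constants):
    """Conservative certified radius: since each logit moves by at most
    L * ||dx||, the top-2 margin cannot be closed within margin / (2L)."""
    s = sorted(logits, reverse=True)
    margin = s[0] - s[1]
    L = layer_lipschitz_product(layer_constants)
    return margin / (2.0 * L)
```

For logits [3.0, 1.0, 0.0] and layer constants [2.0, 1.0], the margin is 2 and L is 2, so the certified radius is 0.5.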
Muller, Tim, Wang, Dongxia, Sun, Jun.  2020.  Provably Robust Decisions based on Potentially Malicious Sources of Information. 2020 IEEE 33rd Computer Security Foundations Symposium (CSF). :411–424.
Sometimes a security-critical decision must be made using information provided by peers. Think of routing messages, user reports, sensor data, navigational information, blockchain updates. Attackers manifest as peers that strategically report fake information. Trust models use the provided information, and attempt to suggest the correct decision. A model that appears accurate by empirical evaluation of attacks may still be susceptible to manipulation. For a security-critical decision, it is important to take the entire attack space into account. Therefore, we define the property of robustness: the probability of deciding correctly, regardless of what information attackers provide. We introduce the notion of realisations of honesty, which allow us to bypass reasoning about specific feedback. We present two schemes that are optimally robust under the right assumptions. The “majority-rule” principle is a special case of the other scheme which is more general, named “most plausible realisations”.
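The robustness property for the majority rule can be made concrete with a small calculation. The sketch below is an assumption-laden simplification of the paper's setting: a attackers always report the wrong option, each honest peer is independently correct with probability p, and ties count as failures:

```python
from math import comb

def majority_rule_robustness(n, a, p):
    """Worst-case probability of a correct majority decision with n peers,
    of which a are adversarial (always voting wrong) and the n - a honest
    peers are each correct with probability p. Ties count as incorrect."""
    h = n - a                      # number of honest peers
    need = n // 2 + 1              # votes needed for a strict majority
    return sum(comb(h, k) * p**k * (1 - p)**(h - k) for k in range(need, h + 1))
```

With 5 peers, 1 attacker, and honest accuracy 0.8, the decision is correct with probability 0.8192; with 2 of 3 peers adversarial, no honest accuracy can save the vote.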
Zhong, Zhenyu, Hu, Zhisheng, Chen, Xiaowei.  2020.  Quantifying DNN Model Robustness to the Real-World Threats. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :150–157.
DNN models have suffered from adversarial example attacks, which lead to inconsistent prediction results. As opposed to gradient-based attacks, which assume white-box access to the model by the attacker, we focus on more realistic input perturbations from the real world and their actual impact on model robustness, without any attacker being present. In this work, we promote a standardized framework to quantify robustness against real-world threats. It is composed of a set of safety properties associated with common violations, a group of metrics to measure the minimal perturbation that causes a violation, and various criteria that reflect different aspects of model robustness. By revealing comparison results through this framework among 13 pre-trained ImageNet classifiers, three state-of-the-art object detectors, and three cloud-based content moderators, we deliver the status quo of real-world model robustness. Beyond that, we provide robustness benchmarking datasets for the community.
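One common way to realize a "minimal perturbation that causes a violation" metric, of the kind this framework measures, is a binary search over the perturbation magnitude. The classifier, perturbation function, and search range below are placeholders, not the paper's implementation:

```python
def minimal_perturbation(classifier, x, perturb, lo=0.0, hi=1.0, iters=30):
    """Binary-search the smallest magnitude m at which perturb(x, m) changes
    the classifier's prediction. Assumes the prediction flips somewhere in
    (lo, hi] and stays flipped for all larger magnitudes (monotonicity)."""
    base = classifier(x)
    if classifier(perturb(x, hi)) == base:
        return None  # never flips within the search range
    for _ in range(iters):
        mid = (lo + hi) / 2
        if classifier(perturb(x, mid)) == base:
            lo = mid   # still the original prediction: flip point is above mid
        else:
            hi = mid   # prediction already flipped: flip point is at or below mid
    return hi
```

For a toy threshold classifier that flips once the input exceeds 0.6, the search recovers the flip point to within 2^-30.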
Musleh, Ahmed S., Chen, Guo, Dong, Zhao Yang, Wang, Chen, Chen, Shiping.  2020.  Statistical Techniques-Based Characterization of FDIA in Smart Grids Considering Grid Contingencies. 2020 International Conference on Smart Grids and Energy Systems (SGES). :83–88.
False data injection attack (FDIA) is a real threat to smart grids due to its wide range of vulnerabilities and impacts. Designing a proper detection scheme for FDIA is the first critical step in defending against the attack in smart grids. In this paper, we investigate two main statistical techniques-based approaches in this regard. The first is based on principal component analysis (PCA), and the second is based on canonical correlation analysis (CCA). The test cases illustrate a better characterization performance of FDIA using CCA compared to PCA. Further, CCA provides a better differentiation of FDIA from normal grid contingencies. On the other hand, PCA provides a significantly reduced false alarm rate.
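The PCA side of such a detector can be sketched in a few lines: project a new measurement vector onto the principal subspace learned from historical data and flag a large off-subspace residual as a possible injection. The 2-D data, pure-Python power iteration, and single principal component below are simplifications for illustration, not the paper's method:

```python
def pca_residual(history, x, iters=100):
    """Residual of measurement x off the top principal component of the
    historical measurements (2-D sketch; real grids use many components).
    A hand-rolled power iteration stands in for a PCA library."""
    n = len(history)
    mean = [sum(r[j] for r in history) / n for j in range(2)]
    centered = [[r[0] - mean[0], r[1] - mean[1]] for r in history]
    # 2x2 sample covariance
    c00 = sum(r[0] * r[0] for r in centered) / n
    c01 = sum(r[0] * r[1] for r in centered) / n
    c11 = sum(r[1] * r[1] for r in centered) / n
    v = [1.0, 1.0]
    for _ in range(iters):  # power iteration toward the top eigenvector
        w = [c00 * v[0] + c01 * v[1], c01 * v[0] + c11 * v[1]]
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / norm, w[1] / norm]
    xc = [x[0] - mean[0], x[1] - mean[1]]
    coef = xc[0] * v[0] + xc[1] * v[1]          # projection onto the PC
    resid = [xc[0] - coef * v[0], xc[1] - coef * v[1]]
    return (resid[0] ** 2 + resid[1] ** 2) ** 0.5
```

A measurement consistent with the historical correlation structure yields a near-zero residual; a crude injection that breaks the correlation stands out.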
2021-09-30
Mezzah, Ibrahim, Kermia, Omar, Chemali, Hamimi.  2020.  Extensive Fault Emulation on RFID Tags. 2020 15th Design Technology of Integrated Systems in Nanoscale Era (DTIS). :1–2.
Radio frequency identification (RFID) is widespread and remains essential in many important applications. However, in various significant cases, the use of this technology faces multiple security issues that must be addressed. This is mainly related to the use of RFID tags (transponders), which are electronic components communicating wirelessly and hence vulnerable to multiple attacks through several means. In this work, an extensive fault analysis is performed on a tag architecture in order to evaluate its hardness. Tens of millions of single-bit upset (SBU) and multiple-bit upset (MBU) faults are emulated at random on this tag architecture using an FPGA-based emulation platform. The emulated faults are classified into five groups according to their effect on the tag's behaviour. The obtained results show how the fault effects vary as a function of the number of bits affected by an MBU. Interpreting this variation allows the tag's robustness to be evaluated. The proposed approach is an efficient means of studying tag architectures at the design level and evaluating their robustness and vulnerability to fault attacks.
dos Santos Dourado, Leonardo, Ishikawa, Edison.  2020.  Graphical Semantic Authentication. 2020 15th Iberian Conference on Information Systems and Technologies (CISTI). :1–6.
Authenticating to a system using only a username and password is not enough to ensure an acceptable level of information security for a critical system. Multi-factor authentication has been used to increase information security during the authentication process. However, factors of the "what you have" kind inconvenience users, because during authentication the user must always possess a device that complements the process. The biometric factor, on the other hand, may change over time, requires an auxiliary device that increases costs, and may depend on environmental conditions to work appropriately. To avoid these problems of multi-factor authentication, this work proposes authentication through the semantic representation, as OWL (Web Ontology Language) tuples, of concepts recognized in images, as a way to increase the security of the authentication process. A proof of concept was modeled and implemented, demonstrating that the robustness of this authentication system depends on the complexity of the relationships in the semantic base (ontology) and on the simplicity of the relationships identified in the images.
2021-09-21
Brezinski, Kenneth, Ferens, Ken.  2020.  Complexity-Based Convolutional Neural Network for Malware Classification. 2020 International Conference on Computational Science and Computational Intelligence (CSCI). :1–9.
Malware classification remains at the forefront of ongoing research as the prevalence of metamorphic malware introduces new challenges to anti-virus vendors and firms alike. One approach to malware classification is Static Analysis - a form of analysis which does not require malware to be executed before classification can be performed. For this reason, a lightweight classifier based on the features of a malware binary is preferred, with relatively low computational overhead. In this work a modified convolutional neural network (CNN) architecture was deployed which integrated a complexity-based evaluation based on box-counting. This was implemented by setting up max-pooling layers in parallel, and then extracting the fractal dimension using a polyscalar relationship based on the resolution of the measurement scale and the number of elements of a malware image covered in the measurement under consideration. To test the robustness and efficacy of our approach we trained and tested on over 9300 malware binaries from 25 unique malware families. This work was compared to other award-winning image recognition models, and results showed categorical accuracy in excess of 96.54%.
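The box-counting estimate of fractal dimension that underpins this complexity-based evaluation can be sketched directly: count the boxes covering the image at several scales and fit the slope of log N(s) against log(1/s). The scales and the least-squares fit below are illustrative, not the paper's network-internal computation:

```python
import math

def box_counting_dimension(pixels, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a set of occupied
    pixel coordinates: at each scale s, count the boxes N(s) that contain at
    least one pixel, then fit the slope of log N(s) vs. log(1/s)."""
    logs, logN = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for (x, y) in pixels}  # occupied s-by-s boxes
        logs.append(math.log(1.0 / s))
        logN.append(math.log(len(boxes)))
    # ordinary least-squares slope
    n = len(sizes)
    mx, my = sum(logs) / n, sum(logN) / n
    return sum((a - mx) * (b - my) for a, b in zip(logs, logN)) / \
           sum((a - mx) ** 2 for a in logs)
```

A solid 16x16 square fills the plane at every scale, so its estimated dimension is 2; sparser, more irregular binary images (such as malware byte-plots) land between 1 and 2.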
2021-09-16
Guo, Minghao, Yang, Yuzhe, Xu, Rui, Liu, Ziwei, Lin, Dahua.  2020.  When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :628–637.
Recent advances in adversarial attacks uncover the intrinsic vulnerability of modern deep neural networks. Since then, extensive efforts have been devoted to enhancing the robustness of deep networks via specialized learning algorithms and loss functions. In this work, we take an architectural perspective and investigate the patterns of network architectures that are resilient to adversarial attacks. To obtain the large number of networks needed for this study, we adopt one-shot neural architecture search, training a large network once and then finetuning the sub-networks sampled from it. The sampled architectures together with the accuracies they achieve provide a rich basis for our study. Our ''robust architecture Odyssey'' reveals several valuable observations: 1) densely connected patterns result in improved robustness; 2) under a computational budget, adding convolution operations to direct connection edges is effective; 3) the flow of solution procedure (FSP) matrix is a good indicator of network robustness. Based on these observations, we discover a family of robust architectures (RobNets). On various datasets, including CIFAR, SVHN, Tiny-ImageNet, and ImageNet, RobNets exhibit superior robustness performance to other widely used architectures. Notably, RobNets substantially improve the robust accuracy (∼5% absolute gains) under both white-box and black-box attacks, even with fewer parameters. Code is available at https://github.com/gmh14/RobNets.
2021-08-31
Wang, Jia, Gao, Min, Wang, Zongwei, Wang, Runsheng, Wen, Junhao.  2020.  Robustness Analysis of Triangle Relations Attack in Social Recommender Systems. 2020 IEEE 13th International Conference on Cloud Computing (CLOUD). :557–565.
Cloud computing is applied in various domains, among which social recommender systems are well-received because of their effectiveness in providing suggestions for users. Social recommender systems perform well in alleviating the cold start problem, but they suffer from shilling attacks due to their natural openness. A shilling attack is an injection attack acting mainly on the training process of machine learning, which aims to advance or suppress the recommendation ranking of target items. Some researchers have studied the influence of shilling attacks from two perspectives simultaneously: user-item ratings and user-user relations. However, they give more consideration to user-item ratings, and up to now the construction of user-user relations has not been explored in depth. To explore shilling attacks with complex relations, in this paper we propose two novel attack models based on triangle relations in social networks. Furthermore, we explore the influence of these models on five social recommendation algorithms. The experimental results on three datasets show that recommendation can be affected by triangle relation attacks. The attack model combined with triangle relations has a better attack effect than the model based only on rating injection and the model combined with random relations. Besides, we compare the functions of triangle relations in friend recommendation and product recommendation.
Ge, Chonghui, Sun, Jian, Sun, Yuxin, Di, Yunlong, Zhu, Yongjin, Xie, Linfeng, Zhang, Yingzhou.  2020.  Reversible Database Watermarking Based on Random Forest and Genetic Algorithm. 2020 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC). :239—247.
Advancing information technology plays an increasingly important role in data mining of relational databases. The transfer and sharing of databases cause copyright-related security threats. Database watermarking technology can effectively solve the problems of copyright protection and traceability, and has been attracting researchers' attention. In this paper, we propose a novel, robust and reversible database watermarking technique, named histogram shifting watermarking based on random forest and genetic algorithm (RF-GAHCSW). It greatly improves the watermark capacity by means of histogram width reduction and eliminates the impact of the prediction error attack. Meanwhile, a random forest algorithm is used to select important attributes for watermark embedding, and a genetic algorithm is employed to find the optimal secret key for database grouping and to determine the position of watermark embedding, improving the watermark capacity and reducing data distortion. The experimental results show that the robustness of RF-GAHCSW is greatly improved compared with the original HSW, and the distortion has little effect on the usability of the database.
2021-08-02
Bouniot, Quentin, Audigier, Romaric, Loesch, Angélique.  2020.  Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :3450—3459.
Person re-identification (re-ID) is a key problem in smart supervision of camera networks. Over the past years, models using deep learning have become state of the art. However, it has been shown that deep neural networks are flawed with adversarial examples, i.e. human-imperceptible perturbations. Extensively studied for the task of closed-set image classification, this problem can also appear in the case of open-set retrieval tasks. Indeed, recent work has shown that we can also generate adversarial examples for metric learning systems such as re-ID ones. These models remain vulnerable: when faced with adversarial examples, they fail to correctly recognize a person, which represents a security breach. These attacks are all the more dangerous as they are impossible to detect for a human operator. Attacking a metric consists in altering the distances between the feature of an attacked image and those of reference images, i.e. guides. In this article, we investigate different possible attacks depending on the number and type of guides available. From this metric attack family, two particularly effective attacks stand out. The first one, called Self Metric Attack, is a strong attack that does not need any image apart from the attacked image. The second one, called Furthest-Negative Attack, makes full use of a set of images. Attacks are evaluated on commonly used datasets: Market-1501 and DukeMTMC. Finally, we propose an efficient extension of the adversarial training protocol adapted to metric learning as a defense that increases the robustness of re-ID models.
Peng, Ye, Fu, Guobin, Luo, Yingguang, Yu, Qi, Li, Bin, Hu, Jia.  2020.  A Two-Layer Moving Target Defense for Image Classification in Adversarial Environment. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :410—414.
Deep learning plays an increasingly important role in various fields due to its superior performance, and it also achieves advanced recognition performance in the field of image classification. However, the vulnerability of deep learning in adversarial environments cannot be ignored, and the prediction result of a model is likely to be affected by small perturbations added to the samples by an adversary. In this paper, we propose a two-layer dynamic defense method based on a pool of defensive techniques and a pool of retrained branch models. First, we randomly select defense methods from the defense pool to process the input. The perturbation ability of adversarial samples changes after preprocessing by different defense methods, which produces different classification results. In addition, we conduct adversarial training based on the original model and dynamically generate multiple branch models. The classification results of these branch models for the same adversarial sample are inconsistent. We can detect adversarial samples by using the inconsistencies in the output results of the two layers. The experimental results show that the two-layer dynamic defense method we designed achieves a good defense effect.
2021-07-27
Shabbir, Mudassir, Li, Jiani, Abbas, Waseem, Koutsoukos, Xenofon.  2020.  Resilient Vector Consensus in Multi-Agent Networks Using Centerpoints. 2020 American Control Conference (ACC). :4387–4392.
In this paper, we study the resilient vector consensus problem in multi-agent networks and improve resilience guarantees of existing algorithms. In resilient vector consensus, agents update their states, which are vectors in ℝ^d, by locally interacting with other agents, some of which might be adversarial. The main objective is to ensure that normal (non-adversarial) agents converge to a common state that lies in the convex hull of their initial states. Currently, resilient vector consensus algorithms, such as approximate distributed robust convergence (ADRC), are based on the idea that, to update states in each time step, every normal node needs to compute a point that lies in the convex hull of its normal neighbors' states. To compute such a point, the idea of Tverberg partition is typically used, which is computationally hard. Approximation algorithms for Tverberg partition negatively impact the resilience guarantees of the consensus algorithm. To deal with this issue, we propose to use the idea of centerpoint, which is an extension of the median in higher dimensions, instead of Tverberg partition. We show that the resilience of such algorithms to adversarial nodes is improved if we use the notion of centerpoint. Furthermore, using centerpoint provides a better characterization of the necessary and sufficient conditions guaranteeing resilient vector consensus. We analyze these conditions in two, three, and higher dimensions separately. We also numerically evaluate the performance of our approach.
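In one dimension the centerpoint is simply the median, so the update rule can be sketched in a few lines: each normal node replaces its state with the median of its neighborhood, which stays inside the normal states' hull as long as adversaries are a minority of the neighborhood. The complete-graph topology and always-lying adversary below are illustrative assumptions:

```python
import statistics

def resilient_consensus_step(states, neighbors, normal):
    """One synchronous update: each normal node moves to the median of its
    own state and its neighbors' states (the 1-D centerpoint). Adversarial
    nodes may report arbitrary values; the median filters out a minority
    of outliers, keeping updates inside the normal states' range."""
    new = dict(states)
    for i in normal:
        vals = [states[j] for j in neighbors[i]] + [states[i]]
        new[i] = statistics.median(vals)
    return new
```

With three normal nodes starting in [0, 1] and one adversary reporting 100, a single step already brings all normal states to the same point inside [0, 1].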
2021-07-08
Alamsyah, Zaenal, Mantoro, Teddy, Adityawarman, Umar, Ayu, Media Anugerah.  2020.  Combination RSA with One Time Pad for Enhanced Scheme of Two-Factor Authentication. 2020 6th International Conference on Computing Engineering and Design (ICCED). :1—5.
RSA is a popular asymmetric-key algorithm with a two-key scheme: a public key for encryption and a private key for decryption. RSA has drawbacks in the encryption and decryption of data, notably slowness, because it relies on generating and operating on large numbers; this same cost is the reason the RSA algorithm works well and resists attacks such as brute-force and statistical attacks. In this paper, we aim to strengthen the scheme by combining RSA with the One Time Pad algorithm, yielding a new design that enhances security in two-factor authentication. The contribution of this paper is a new combined scheme that enhances RSA; One Time Pad and RSA combine well.
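The general shape of such a hybrid can be sketched as follows: a random one-time pad encrypts the message with XOR, and RSA protects only the short pad. This is a toy, not the paper's scheme: the primes are tiny, there is no padding (real RSA needs OAEP and ≥2048-bit moduli), and byte-wise RSA is insecure:

```python
import secrets

# Toy textbook-RSA parameters -- for illustration only.
P, Q = 61, 53
N = P * Q          # modulus 3233
E = 17             # public exponent
D = 2753           # private exponent: inverse of E mod (P-1)*(Q-1) = 3120

def hybrid_encrypt(message: bytes):
    pad = secrets.token_bytes(len(message))            # fresh one-time pad
    cipher = bytes(m ^ k for m, k in zip(message, pad))  # OTP layer: XOR
    enc_pad = [pow(b, E, N) for b in pad]              # RSA layer: encrypt pad byte-wise
    return cipher, enc_pad

def hybrid_decrypt(cipher, enc_pad):
    pad = bytes(pow(c, D, N) for c in enc_pad)         # RSA-decrypt the pad
    return bytes(m ^ k for m, k in zip(cipher, pad))   # undo the XOR
```

The round trip always recovers the message because each pad byte b < N satisfies pow(pow(b, E, N), D, N) == b.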
2021-06-30
Wang, Chenguang, Pan, Kaikai, Tindemans, Simon, Palensky, Peter.  2020.  Training Strategies for Autoencoder-based Detection of False Data Injection Attacks. 2020 IEEE PES Innovative Smart Grid Technologies Europe (ISGT-Europe). :1—5.
The security of energy supply in a power grid critically depends on the ability to accurately estimate the state of the system. However, manipulated power flow measurements can potentially hide overloads and bypass the bad data detection scheme, interfering with the validity of the estimated states. In this paper, we use an autoencoder neural network to detect anomalous system states and investigate the impact of hyperparameters on the detection performance for false data injection attacks that target power flows. Experimental results on the IEEE 118-bus system indicate that the proposed mechanism has the ability to achieve satisfactory learning efficiency and detection accuracy.
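Whatever the autoencoder's architecture, the detection decision typically reduces to thresholding the reconstruction error. The quantile-based threshold below is one common convention, offered as an assumption rather than the paper's exact rule:

```python
def fit_threshold(errors, q=0.95):
    """Pick the detection threshold as the q-quantile of reconstruction
    errors measured on clean (attack-free) validation states."""
    s = sorted(errors)
    return s[min(int(q * len(s)), len(s) - 1)]

def is_attack(recon_error, threshold):
    """Flag a state as a suspected false data injection when the
    autoencoder reconstructs it worse than the clean-data threshold."""
    return recon_error > threshold
```

States the autoencoder has learned to reconstruct well fall below the threshold; injected states that break the learned correlations exceed it.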
2021-06-28
Sarabia-Lopez, Jaime, Nuñez-Ramirez, Diana, Mata-Mendoza, David, Fragoso-Navarro, Eduardo, Cedillo-Hernandez, Manuel, Nakano-Miyatake, Mariko.  2020.  Visible-Imperceptible Image Watermarking based on Reversible Data Hiding with Contrast Enhancement. 2020 International Conference on Mechatronics, Electronics and Automotive Engineering (ICMEAE). :29–34.
Currently, the use and production of multimedia data such as digital images have increased due to their wide use within smart devices and open networks. Although this has some advantages, it has generated several issues related to the infringement of intellectual property. Digital image watermarking is a promising solution to these issues. Considering the need to develop mechanisms that improve information security and protect the intellectual property of digital images, in this paper we propose a novel visible-imperceptible watermarking scheme based on reversible data hiding with contrast enhancement. A watermark logo is embedded imperceptibly in the spatial domain of the original image, so that the logo is revealed by applying reversible data hiding, which increases the contrast of the watermarked image while concealing a large number of data bits; these bits are then extracted and the watermarked image restored to its original condition using the reversible functionality. Experimental results show the effectiveness of the proposed algorithm. A performance comparison with the current state of the art is provided.
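The reversible data hiding primitive behind such schemes can be sketched with classic histogram shifting: pixel values strictly between a peak bin and an empty "zero" bin shift up by one, and each peak-valued pixel then carries one payload bit. This is a generic textbook sketch (assuming zero > peak and an empty zero bin), not this paper's full contrast-enhancement scheme:

```python
def hs_embed(pixels, bits, peak, zero):
    """Histogram-shifting embed: shift values in (peak, zero) up by 1,
    then encode each payload bit in a peak-valued pixel
    (peak -> peak for bit 0, peak -> peak+1 for bit 1)."""
    out, i = [], 0
    for v in pixels:
        if peak < v < zero:
            out.append(v + 1)
        elif v == peak and i < len(bits):
            out.append(peak + bits[i]); i += 1
        else:
            out.append(v)
    return out

def hs_extract(pixels, peak, zero, n_bits):
    """Recover the payload bits and restore the original pixel values
    exactly (the 'reversible' property)."""
    bits, orig = [], []
    for v in pixels:
        if v == peak and len(bits) < n_bits:
            bits.append(0); orig.append(peak)
        elif v == peak + 1 and len(bits) < n_bits:
            bits.append(1); orig.append(peak)
        elif peak + 1 < v <= zero:
            orig.append(v - 1)       # undo the histogram shift
        else:
            orig.append(v)
    return bits, orig
```

Embedding then extracting returns both the exact payload and the exact original pixels, which is the property reversible watermarking relies on.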
2021-06-24
Javaheripi, Mojan, Chen, Huili, Koushanfar, Farinaz.  2020.  Unified Architectural Support for Secure and Robust Deep Learning. 2020 57th ACM/IEEE Design Automation Conference (DAC). :1—6.
Recent advances in Deep Learning (DL) have enabled a paradigm shift to include machine intelligence in a wide range of autonomous tasks. As a result, a largely unexplored surface has opened up for attacks jeopardizing the integrity of DL models and hindering the success of autonomous systems. To enable ubiquitous deployment of DL approaches across various intelligent applications, we propose to develop architectural support for hardware implementation of secure and robust DL. Towards this goal, we leverage hardware/software co-design to develop a DL execution engine that supports algorithms specifically designed to defend against various attacks. The proposed framework is enhanced with two real-time defense mechanisms, securing both DL training and execution stages. In particular, we enable model-level Trojan detection to mitigate backdoor attacks and malicious behaviors induced on the DL model during training. We further realize real-time adversarial attack detection to avert malicious behavior during execution. The proposed execution engine is equipped with hardware-level IP protection and usage control mechanism to attest the legitimacy of the DL model mapped to the device. Our design is modular and can be tuned to task-specific demands, e.g., power, throughput, and memory bandwidth, by means of a customized hardware compiler. We further provide an accompanying API to reduce the nonrecurring engineering cost and ensure automated adaptation to various domains and applications.
2021-06-02
Sun, Weiqi, Li, Yuanlong, Shi, Liangren.  2020.  The Performance Evaluation and Resilience Analysis of Supply Chain Based on Logistics Network. 2020 39th Chinese Control Conference (CCC). :5772—5777.
With the development of globalization, more and more enterprises are involved in supply chain networks with increasingly complex structures. In this paper, enterprises and relations in the logistics network are abstracted as nodes and edges of a complex network. A graph model for a supply chain network in a specified industry is constructed, and the Neo4j graph database is employed to store the graph data. This paper uses the theoretical research tools of complex networks to model and analyze the supply chain, and designs a supply chain network evaluation system that includes static and dynamic measurement indexes according to the statistical characteristics of complex networks. Both the static and dynamic resilience characteristics of the constructed supply chain network are evaluated from the perspective of complex networks. Numerical experimental simulations are conducted for validation. This research has practical and theoretical significance for enterprises making strategies to improve the anti-risk capability of supply chain networks based on logistics network information.
Guerrero-Bonilla, Luis, Saldaña, David, Kumar, Vijay.  2020.  Dense r-robust formations on lattices. 2020 IEEE International Conference on Robotics and Automation (ICRA). :6633—6639.
Robot networks are susceptible to fail under the presence of malicious or defective robots. Resilient networks in the literature require high connectivity and large communication ranges, leading to high energy consumption in the communication network. This paper presents robot formations with guaranteed resiliency that use smaller communication ranges than previous results in the literature. The formations can be built on triangular and square lattices in the plane, and cubic lattices in the three-dimensional space. We support our theoretical framework with simulations.
2021-06-01
Zheng, Wenbo, Yan, Lan, Gou, Chao, Wang, Fei-Yue.  2020.  Webly Supervised Knowledge Embedding Model for Visual Reasoning. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :12442–12451.
Visual reasoning between a visual image and a natural language description is a long-standing challenge in computer vision. While recent approaches offer great promise through compositionality or relational computing, most of them are constrained by the challenge of training with datasets containing only a limited number of images with ground-truth texts. Besides, it is extremely time-consuming and difficult to build a larger dataset by annotating millions of images with text descriptions, which would very likely lead to a biased model. Inspired by the widespread success of webly supervised learning, we utilize readily available web images with their noisy annotations for learning a robust representation. Our key idea is to draw on web images and corresponding tags, along with fully annotated datasets, in learning with knowledge embedding. We present a two-stage approach for the task that can augment knowledge through an effective embedding model with weakly supervised web data. This approach learns not only knowledge-based embeddings derived from key-value memory networks to make joint and full use of textual and visual information but also exploits the knowledge to improve the performance with knowledge-based representation learning for applying other general reasoning tasks. Experimental results on two benchmarks show that the proposed approach significantly improves performance compared with the state-of-the-art methods and guarantees the robustness of our model against visual reasoning tasks and other reasoning tasks.
2021-05-25
Kore, Ashwini, Patil, Shailaja.  2020.  Robust Cross-Layer Security Framework For Internet of Things Enabled Wireless Sensor Networks. 2020 International Conference on Emerging Smart Computing and Informatics (ESCI). :142—147.

The significant development of the Internet of Things (IoT) paradigm for monitoring real-time applications using wireless communication technologies leads to various challenges. Secure data transmission and privacy are among the key challenges of IoT-enabled Wireless Sensor Network (WSN) communications. Due to the heterogeneity of attackers, such as the Man-in-the-Middle Attack (MIMA), present single-layered security solutions are not sufficient. In this paper, a robust cross-layer trust computation algorithm for MIMA attacker detection is proposed for IoT-enabled WSNs, called the IoT-enabled Cross-Layer Man-in-the-Middle Attack Detection System (IC-MADS). In IC-MADS, a robust clustering method is first proposed to form clusters and select cluster heads (CHs). After clustering, for every sensor node, its trust value is computed using parameters from three layers, namely the MAC, physical, and network layers, to protect network communications in the presence of security threats. The simulation results show that IC-MADS achieves better protection against MIMA attacks with minimal overhead and energy consumption.
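The cross-layer fusion step can be sketched as a weighted combination of per-layer behavioural scores with a detection threshold. The weights, score semantics, and threshold below are illustrative assumptions, not values from the paper:

```python
def cross_layer_trust(mac_score, phy_score, net_score,
                      weights=(0.3, 0.3, 0.4), threshold=0.5):
    """Fuse per-layer behavioural scores (each in [0, 1], higher = more
    trustworthy) into one trust value, and flag the node as a suspected
    man-in-the-middle when the fused trust falls below the threshold."""
    trust = (weights[0] * mac_score +
             weights[1] * phy_score +
             weights[2] * net_score)
    return trust, trust < threshold
```

A node behaving normally at all three layers keeps a high trust value; a node whose MAC, physical, and network indicators all degrade is flagged.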

2021-05-20
Chibaya, Colin, Jowa, Viola Jubile, Rupere, Taurayi.  2020.  A HES for Low Speed Processors. 2020 2nd International Multidisciplinary Information Technology and Engineering Conference (IMITEC). :1—6.
Adoption of e-commerce in third-world countries requires more secure computing facilities. Online data is vulnerable and susceptible to active attacks. Hundreds of security mechanisms and services have been proposed to curb this challenge. However, the available security mechanisms that are sufficiently strong are too heavy for the machines used. To secure online data where machines' processing power and memory are deficient, a Hybrid Encryption Standard (HES) is proposed. The HES is built on the Data Encryption Standard (DES) algorithm and its siblings. The component units of the DES are redesigned for reduced demands on processing power and memory. Precisely, white-box designs of IP tables, PC tables, expansion tables, rotation tables, S-boxes and P-boxes are proposed, all aimed at reducing processing time and memory demands. Evaluation of the performance of the HES algorithm against that of the traditional DES algorithm reveals that the HES outperforms the DES with regard to speed, memory demands, and general acceptance by novice practitioners in the cryptography field. In addition, reproducibility and flexibility are attractive features of the HES over the DES.
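The DES-style structure that HES inherits is a Feistel network, whose defining property is that decryption runs the same rounds with the subkeys reversed, whatever the round function is. The toy round function below stands in for DES's expansion/S-box/P-box chain and is purely illustrative, not the HES design:

```python
def feistel_round(left, right, subkey):
    # Toy F-function in place of DES's expansion, S-box and P-box tables.
    f = ((right * 31 + subkey) ^ (right >> 3)) & 0xFFFF
    return right, left ^ f

def feistel_encrypt(block, subkeys):
    """Encrypt a 32-bit block as two 16-bit halves with a DES-style
    Feistel structure; a final half-swap follows the last round, as in DES."""
    l, r = (block >> 16) & 0xFFFF, block & 0xFFFF
    for k in subkeys:
        l, r = feistel_round(l, r, k)
    return (r << 16) | l

def feistel_decrypt(block, subkeys):
    # Same rounds, reversed subkey schedule -- the Feistel symmetry.
    return feistel_encrypt(block, list(reversed(subkeys)))
```

Because each round only XORs the left half with a function of the right half, the structure inverts itself regardless of how lightweight the F-function is made, which is what lets redesigned tables reduce cost without breaking decryptability.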
2021-05-13
Bansal, Naman, Agarwal, Chirag, Nguyen, Anh.  2020.  SAM: The Sensitivity of Attribution Methods to Hyperparameters. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :11–21.
Attribution methods can provide powerful insights into the reasons for a classifier's decision. We argue that a key desideratum of an explanation method is robustness to input hyperparameters, which are often randomly set or empirically tuned. High sensitivity to arbitrary hyperparameter choices not only impedes reproducibility but also calls into question the correctness of an explanation and impairs the trust of end-users. In this paper, we provide a thorough empirical study of the sensitivity of existing attribution methods. We find an alarming trend: many methods are highly sensitive to changes in their common hyperparameters, e.g. even changing a random seed can yield a different explanation! Interestingly, such sensitivity is not reflected in the average explanation accuracy scores over the dataset as commonly reported in the literature. In addition, explanations generated for robust classifiers (i.e. those trained to be invariant to pixel-wise perturbations) are surprisingly more robust than those generated for regular classifiers.
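The seed sensitivity being measured can be reproduced in miniature with a SmoothGrad-style attribution: average finite-difference gradients over Gaussian perturbations of the input, then rerun with a different seed. The toy model and hyperparameters below are illustrative, not from the paper:

```python
import random

def smoothgrad(f, x, sigma=0.5, samples=25, seed=0, eps=1e-4):
    """SmoothGrad-style attribution via finite differences: average the
    gradient of f over Gaussian perturbations of x. A different seed draws
    different noise and so, for nonlinear f, a (slightly) different map."""
    rng = random.Random(seed)
    n = len(x)
    acc = [0.0] * n
    for _ in range(samples):
        z = [xi + rng.gauss(0, sigma) for xi in x]
        for i in range(n):
            zp = z[:]; zp[i] += eps   # central difference in coordinate i
            zm = z[:]; zm[i] -= eps
            acc[i] += (f(zp) - f(zm)) / (2 * eps)
    return [a / samples for a in acc]
```

For f(v) = v0^2 + 3*v1, the attribution for the nonlinear coordinate shifts with the seed, while the linear coordinate's attribution stays at 3 regardless of the noise, mirroring the paper's point that seed choice alone can change an explanation.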
2021-05-05
Pawar, Shrikant, Stanam, Aditya.  2020.  Scalable, Reliable and Robust Data Mining Infrastructures. 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4). :123—125.

Data mining is used to analyze facts to discover formerly unknown patterns and to classify and group records. Several crucial scalable data mining platforms have been developed in recent years. RapidMiner is a popular open-source tool that can be used for advanced analytics; Weka and Orange are important machine learning tools for classifying patterns with clustering and regression techniques; and KNIME is often used for data preprocessing tasks such as extraction, transformation and loading. This article encapsulates the most important and robust platforms.

2021-05-03
Naik, Nikhil, Nuzzo, Pierluigi.  2020.  Robustness Contracts for Scalable Verification of Neural Network-Enabled Cyber-Physical Systems. 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE). :1–12.
The proliferation of artificial intelligence based systems in all walks of life raises concerns about their safety and robustness, especially for cyber-physical systems including multiple machine learning components. In this paper, we introduce robustness contracts as a framework for compositional specification and reasoning about the robustness of cyber-physical systems based on neural network (NN) components. Robustness contracts can encompass and generalize a variety of notions of robustness which were previously proposed in the literature. They can seamlessly apply to NN-based perception as well as deep reinforcement learning (RL)-enabled control applications. We present a sound and complete algorithm that can efficiently verify the satisfaction of a class of robustness contracts on NNs by leveraging notions from Lagrangian duality to identify system configurations that violate the contracts. We illustrate the effectiveness of our approach on the verification of NN-based perception systems and deep RL-based control systems.