Biblio

Found 152 results

Filters: Keyword is Robustness
2021-04-08
Nakamura, R., Kamiyama, N.  2020.  Analysis of Content Availability at Network Failure in Information-Centric Networking. 2020 16th International Conference on Network and Service Management (CNSM). :1–7.
In recent years, ICN (Information-Centric Networking) has been in the spotlight as a networking paradigm that focuses on the data being transmitted and received rather than on the hosts that transmit and receive it. Communication networks such as ICNs are generally required to be robust against network failures caused by attacks and disasters. One metric for the robustness of conventional host-centric networks, e.g., TCP/IP networks, is the reachability between nodes after network failures, whereas the key metric for the robustness of ICNs is content availability. In this paper, we focus on an arbitrary ICN topology and derive the content availability for a given probability of node removal. In particular, we analytically obtain the average content availability over an entire network for two cases: where only a single path from a node to a repository (i.e., a contents server storing contents) is available, and where multiple paths to the repository are available. Furthermore, through several numerical evaluations, we investigate the effect of the network topology as well as the pattern and scale of network failures on content availability in ICN. Our findings include that, regardless of the failure pattern, content availability is significantly improved by caching contents at routers and using multiple paths, and that content availability degrades more under cluster-based node removal than under random node removal.
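As a rough illustration of the availability metric the abstract describes, the following Monte Carlo sketch estimates content availability under random node removal on a synthetic topology (the paper derives this analytically; the Erdős–Rényi graph, repository placement, and trial count here are assumptions for illustration only):

```python
# Hypothetical Monte Carlo estimate of ICN content availability: the
# fraction of surviving nodes that can still reach at least one repository
# after each node is removed independently with probability p_remove.
import random
import networkx as nx

def content_availability(G, repositories, p_remove, trials=200):
    total = reachable = 0
    for _ in range(trials):
        survivors = [v for v in G if random.random() >= p_remove]
        H = G.subgraph(survivors)
        alive_repos = [r for r in repositories if r in H]
        for v in H:
            if v in repositories:
                continue
            total += 1
            if any(nx.has_path(H, v, r) for r in alive_repos):
                reachable += 1
    return reachable / total if total else 0.0

G = nx.erdos_renyi_graph(100, 0.05, seed=1)
print(content_availability(G, repositories={0, 1}, p_remove=0.2))
```

Multipath availability corresponds to the `any(...)` disjunction over surviving repositories; restricting each node to a single fixed path would only lower the estimate.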
2021-09-21
Brezinski, Kenneth, Ferens, Ken.  2020.  Complexity-Based Convolutional Neural Network for Malware Classification. 2020 International Conference on Computational Science and Computational Intelligence (CSCI). :1–9.
Malware classification remains at the forefront of ongoing research as the prevalence of metamorphic malware introduces new challenges to anti-virus vendors and firms alike. One approach to malware classification is static analysis, a form of analysis that does not require malware to be executed before classification is performed. For this reason, a lightweight classifier based on the features of a malware binary is preferred, with relatively low computational overhead. In this work, a modified convolutional neural network (CNN) architecture was deployed that integrated a complexity-based evaluation based on box-counting. This was implemented by setting up max-pooling layers in parallel and then extracting the fractal dimension using a polyscalar relationship between the resolution of the measurement scale and the number of elements of a malware image covered by the measurement under consideration. To test the robustness and efficacy of our approach, we trained and tested on over 9,300 malware binaries from 25 unique malware families. This work was compared to other award-winning image recognition models, and results showed categorical accuracy in excess of 96.54%.
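To make the box-counting step concrete, here is a minimal sketch of estimating the fractal dimension of a binarized malware image; the log-log fit of box counts against box size is a standard reading of the "polyscalar relationship" and stands in for the paper's implementation (an assumption, not taken from it):

```python
# Box-counting fractal dimension of a binary image: count occupied boxes at
# several scales and fit the slope of log N(s) versus log(1/s).
import numpy as np

def box_count(img, size):
    h, w = img.shape
    return sum(img[i:i+size, j:j+size].any()
               for i in range(0, h, size) for j in range(0, w, size))

def fractal_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = [box_count(img, s) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

img = np.random.rand(256, 256) > 0.7   # stand-in for a binarized malware image
print(fractal_dimension(img))
```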
2021-03-22
Li, Y., Zhou, W., Wang, H.  2020.  F-DPC: Fuzzy Neighborhood-Based Density Peak Algorithm. IEEE Access. 8:165963–165972.
Clustering is a data mining technique that divides a data set into different classes or clusters according to a specific criterion, making the similarity of data objects within the same cluster as large as possible. Clustering by fast search and find of density peaks (DPC) is a novel density-based clustering algorithm. It is simple, requires only a few parameters to achieve a good clustering effect, and needs no iterative solution; it is also extensible and can detect clusters of arbitrary shape. However, the DPC algorithm still has some defects: it employs crisp neighborhood relations to calculate local density, so it cannot infer the degree of neighborhood membership of points from their distances, and it cannot accurately cluster data with multiple density peaks. To address these shortcomings, a fuzzy neighborhood density peak clustering algorithm (F-DPC) is proposed, in which a novel local density is defined through the fuzzy neighborhood relationship. Fuzzy set theory makes the fuzzy neighborhood function of local density more sensitive, so that clustering of data sets with various shapes and densities is more robust. Experiments show that the algorithm has high accuracy and robustness.
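A small sketch of the core contrast between DPC's crisp local density and a fuzzy-neighborhood variant; the Gaussian membership function below is an assumption standing in for the paper's fuzzy neighborhood function:

```python
# Crisp density counts neighbors inside a hard cutoff dc; fuzzy density
# weights every point by a graded membership that decays with distance.
import numpy as np

def crisp_density(X, dc):
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    return (D < dc).sum(axis=1) - 1          # exclude the point itself

def fuzzy_density(X, dc):
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    mu = np.exp(-(D / dc) ** 2)              # fuzzy neighborhood membership
    return mu.sum(axis=1) - 1.0              # remove the self-contribution

X = np.random.rand(200, 2)
print(crisp_density(X, 0.1)[:5], fuzzy_density(X, 0.1)[:5])
```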
2021-03-29
Li, J., Wang, X., Liu, S.  2020.  Hash Retrieval Method for Recaptured Images Based on Convolutional Neural Network. 2020 2nd World Symposium on Artificial Intelligence (WSAI). :79–83.
For outdoor advertising market research, ad images are recaptured and uploaded every day for statistical analysis. However, the quality of the recaptured advertising images is often affected by conditions such as angle, distance, and lighting during the shooting process, which consequently reduces either the speed or the accuracy of the retrieval algorithm. In this paper, we propose a hash retrieval method based on convolutional neural networks for recaptured images. The basic idea is to add a hash layer to the convolutional neural network and then extract the binary hash code output by the hash layer to perform image retrieval in low-dimensional Hamming space. Experimental results show that the retrieval performance is improved compared with currently common hash retrieval methods.
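A hedged sketch of the hash-layer idea: a small backbone feeds a fully connected hash layer, its thresholded activations give binary codes, and retrieval ranks database images by Hamming distance (layer sizes and code length are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class HashNet(nn.Module):
    def __init__(self, bits=48):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
        self.hash_layer = nn.Linear(16 * 8 * 8, bits)

    def forward(self, x):
        h = torch.tanh(self.hash_layer(self.features(x).flatten(1)))
        return (h > 0).float()                     # binary code in {0,1}^bits

def hamming_rank(query, database):
    # fewest differing bits first: retrieval in low-dimensional Hamming space
    return torch.argsort((query != database).sum(dim=1))

net = HashNet().eval()
with torch.no_grad():
    codes = net(torch.randn(10, 3, 64, 64))        # stand-in image batch
    print(hamming_rank(codes[0], codes))
```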
2021-01-25
Chen, J., Lin, X., Shi, Z., Liu, Y.  2020.  Link Prediction Adversarial Attack Via Iterative Gradient Attack. IEEE Transactions on Computational Social Systems. 7:1081–1094.
Deep neural networks are increasingly applied to graph-related tasks such as node classification and link prediction. However, the vulnerability of deep models can be revealed using carefully crafted adversarial examples generated by various adversarial attack methods. To explore this security problem, we define the link prediction adversarial attack problem and put forward a novel iterative gradient attack (IGA) strategy that uses the gradient information in a trained graph autoencoder (GAE) model. Not surprisingly, GAE can be fooled by an adversarial graph with only a few links perturbed on the clean one. Comprehensive experiments on different real-world graphs indicate that most deep models, and even state-of-the-art link prediction algorithms, cannot escape adversarial attacks such as IGA. The attack can also benefit us as an efficient privacy-protection tool against link prediction of unknown violations. Conversely, the adversarial attack serves as a robust evaluation of the defensibility of current link prediction algorithms.
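The iterative flavor of IGA can be sketched on a toy differentiable link scorer: at each step, take the gradient of the target link's predicted score with respect to the adjacency matrix and flip the single most influential edge. The one-layer encoder below is a stand-in for the trained GAE, and all sizes are illustrative assumptions:

```python
import torch

torch.manual_seed(0)
n, d = 8, 4
A = (torch.rand(n, n) < 0.3).float().triu(1); A = A + A.T
X, W = torch.eye(n), torch.randn(n, d)
u, v = 0, 5                                   # target link to suppress

for step in range(3):
    A_var = A.clone().requires_grad_(True)
    Z = torch.tanh(A_var @ X @ W)             # toy one-layer encoder
    score = torch.sigmoid(Z[u] @ Z[v])        # predicted link probability
    score.backward()
    g = A_var.grad
    # removing an existing edge with positive gradient, or adding a missing
    # edge with negative gradient, both push the target score down
    gain = torch.where(A > 0, g, -g)
    gain.fill_diagonal_(-1e9)                 # never flip self-loops
    i, j = divmod(int(gain.argmax()), n)
    A[i, j] = A[j, i] = 1.0 - A[i, j]
    print(f"step {step}: flipped ({i},{j}), score was {score.item():.3f}")
```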
2021-04-27
Marchisio, A., Nanfa, G., Khalid, F., Hanif, M. A., Martina, M., Shafique, M.  2020.  Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities of SNNs and DNNs w.r.t. adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., one requiring no knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for a given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN having the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. adversarial examples. Our work opens new avenues of research into the robustness of SNNs, considering their similarities to the human brain's functionality.
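A generic greedy black-box loop of the kind the abstract alludes to can be sketched as follows; the placeholder `query_model` stands in for the SNN under attack (hypothetical, since only output probabilities are assumed observable):

```python
import numpy as np

def query_model(x):                       # placeholder black-box classifier
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2**32))
    return rng.dirichlet(np.ones(10))     # fake but input-dependent outputs

def greedy_attack(x, true_class, eps=0.05, budget=50):
    adv, base = x.copy(), query_model(x)[true_class]
    for _ in range(budget):
        i, j = np.random.randint(0, x.shape[0], 2)
        cand = adv.copy()
        cand[i, j] = np.clip(cand[i, j] + np.random.choice([-eps, eps]), 0, 1)
        p = query_model(cand)[true_class]
        if p < base:                      # greedily keep helpful perturbations
            adv, base = cand, p
    return adv

adv = greedy_attack(np.random.rand(28, 28), true_class=3)
```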
2021-02-01
Wickramasinghe, C. S., Marino, D. L., Grandio, J., Manic, M.  2020.  Trustworthy AI Development Guidelines for Human System Interaction. 2020 13th International Conference on Human System Interaction (HSI). :130–136.
Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the deficiency of transparency of existing AI systems. As a solution, the “Trustworthy AI” research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists where the focus is on building human trust to improve the interactions between humans and AI systems. In this paper, we provide a concise survey of concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and enhancing the interactions between AI systems and humans that happen during the AI system life cycle.
2021-06-28
Sarabia-Lopez, Jaime, Nuñez-Ramirez, Diana, Mata-Mendoza, David, Fragoso-Navarro, Eduardo, Cedillo-Hernandez, Manuel, Nakano-Miyatake, Mariko.  2020.  Visible-Imperceptible Image Watermarking based on Reversible Data Hiding with Contrast Enhancement. 2020 International Conference on Mechatronics, Electronics and Automotive Engineering (ICMEAE). :29–34.
The use and production of multimedia data such as digital images have increased due to their wide use in smart devices and open networks. Although this has some advantages, it has generated several issues related to the infringement of intellectual property. Digital image watermarking is a promising solution to these issues. Considering the need to develop mechanisms that improve information security and protect the intellectual property of digital images, in this paper we propose a novel visible-imperceptible watermarking scheme based on reversible data hiding with contrast enhancement. A watermark logo is embedded imperceptibly in the spatial domain of the original image, so that the logo is revealed by applying reversible data hiding, which increases the contrast of the watermarked image while concealing a large number of data bits; these bits can later be extracted and the watermarked image restored to its original condition using the reversible functionality. Experimental results show the effectiveness of the proposed algorithm. A performance comparison with the current state of the art is provided.
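The reversible mechanism underneath such schemes can be illustrated with classic histogram shifting (a simplification of the contrast-enhancement variant the paper builds on; the bin choices and the no-saturated-pixels assumption are for illustration):

```python
# Embed bits into the histogram peak bin; shifting bins above the peak frees
# a neighboring bin, and undoing the shift restores the image exactly.
import numpy as np

def embed(img, bits):
    peak = int(np.bincount(img.ravel(), minlength=256).argmax())
    out = img.astype(np.int16)
    out[out > peak] += 1                  # free the bin peak+1 (needs no 255s)
    flat = out.ravel()
    for k, b in zip(np.flatnonzero(img.ravel() == peak), bits):
        flat[k] += b                      # peak encodes 0, peak+1 encodes 1
    return flat.reshape(img.shape).astype(np.uint8), peak

def extract(marked, peak, n_bits):
    flat = marked.astype(np.int16).ravel()
    bits = [int(p == peak + 1) for p in flat if p in (peak, peak + 1)][:n_bits]
    flat[flat > peak] -= 1                # undo the shift: exact restoration
    return bits, flat.reshape(marked.shape).astype(np.uint8)

rng = np.random.default_rng(0)
img = rng.integers(0, 250, (64, 64), dtype=np.uint8)
marked, peak = embed(img, [1, 0, 1, 1])
bits, restored = extract(marked, peak, 4)
assert bits == [1, 0, 1, 1] and np.array_equal(restored, img)
```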
2021-09-16
Guo, Minghao, Yang, Yuzhe, Xu, Rui, Liu, Ziwei, Lin, Dahua.  2020.  When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :628–637.
Recent advances in adversarial attacks uncover the intrinsic vulnerability of modern deep neural networks. Since then, extensive efforts have been devoted to enhancing the robustness of deep networks via specialized learning algorithms and loss functions. In this work, we take an architectural perspective and investigate the patterns of network architectures that are resilient to adversarial attacks. To obtain the large number of networks needed for this study, we adopt one-shot neural architecture search, training a large network once and then fine-tuning the sub-networks sampled from it. The sampled architectures, together with the accuracies they achieve, provide a rich basis for our study. Our ''robust architecture Odyssey'' reveals several valuable observations: 1) densely connected patterns result in improved robustness; 2) under a computational budget, adding convolution operations to direct connection edges is effective; 3) the flow of solution procedure (FSP) matrix is a good indicator of network robustness. Based on these observations, we discover a family of robust architectures (RobNets). On various datasets, including CIFAR, SVHN, Tiny-ImageNet, and ImageNet, RobNets exhibit superior robustness performance to other widely used architectures. Notably, RobNets substantially improve the robust accuracy (about 5% absolute gain) under both white-box and black-box attacks, even with fewer parameters. Code is available at https://github.com/gmh14/RobNets.
2021-01-20
Zarazaga, P. P., Bäckström, T., Sigg, S.  2020.  Acoustic Fingerprints for Access Management in Ad-Hoc Sensor Networks. IEEE Access. 8:166083–166094.

Voice user interfaces can offer intuitive interaction with our devices, but usability and audio quality could be further improved if multiple devices could collaborate to provide a distributed voice user interface. To ensure that users' voices are not shared with unauthorized devices, it is however necessary to design an access management system that adapts to the users' needs. Prior work has demonstrated that a combination of audio fingerprinting and fuzzy cryptography yields a robust pairing of devices without sharing the information that they record. However, the robustness of these systems is partially based on the extensive duration of the recordings required to obtain the fingerprint. This paper analyzes methods for the robust generation of acoustic fingerprints in short periods of time, enabling responsive pairing of devices according to changes in the acoustic scenery; the methods can be integrated into other typical speech processing tools.
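For intuition, a Haitsma–Kalker-style binary fingerprint can be computed from a short recording as the sign of band-energy differences across time and frequency; this illustrates the kind of fingerprint whose required duration the paper shortens, not the authors' exact method:

```python
import numpy as np

def fingerprint(signal, frame=2048, hop=512, bands=17):
    frames = np.lib.stride_tricks.sliding_window_view(signal, frame)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1)) ** 2
    edges = np.linspace(0, spec.shape[1] - 1, bands + 1).astype(int)
    E = np.stack([spec[:, a:b].sum(axis=1)
                  for a, b in zip(edges[:-1], edges[1:])], axis=1)
    d = np.diff(E, axis=1)                # energy differences across bands
    return (np.diff(d, axis=0) > 0).astype(np.uint8)   # sign bits over time

bits = fingerprint(np.random.randn(16000))   # one second of stand-in "audio"
print(bits.shape)
```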

2021-03-01
Kuppa, A., Le-Khac, N.-A.  2020.  Black Box Attacks on Explainable Artificial Intelligence (XAI) Methods in Cyber Security. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for the successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand for transparency and explainability from stakeholders is increasing. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods that help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods, covering various security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness, and confidence security properties of gradient-based XAI methods. We validate the proposed system on three security-relevant data sets and models, and demonstrate that the method achieves the attacker's goal of misleading both the classifier and the explanation report, or only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
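The consistency property being attacked can be probed with a simple random-search sketch: look for small perturbations that leave the prediction unchanged but move a gradient-based saliency map. The model, perturbation scale, and attribution rule below are placeholder assumptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))

def saliency(x):
    x = x.clone().requires_grad_(True)
    model(x).max().backward()
    return (x.grad * x).detach()          # gradient-x-input attribution

x = torch.randn(20)
base, worst = saliency(x), 0.0
for _ in range(200):                      # black-box style random probing
    delta = 0.05 * torch.randn(20)
    if model(x).argmax() == model(x + delta).argmax():
        drift = 1 - torch.cosine_similarity(base, saliency(x + delta), dim=0)
        worst = max(worst, drift.item())
print(f"max explanation drift with unchanged prediction: {worst:.3f}")
```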

2021-02-22
Song, Z., Kar, P.  2020.  Name-Signature Lookup System: A Security Enhancement to Named Data Networking. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1444–1448.
Named Data Networking (NDN) is a content-centric networking architecture in which the publisher of a packet signs and encapsulates the data packet with a name-content-signature encryption so that its authenticity and integrity can be verified. This scheme inherently solves many of the security issues of IP networking. NDN also supports mobility, since it hides the point-to-point connection details. However, an extreme attack can take place when an NDN consumer newly connects to a network: a man-in-the-middle (MITM) malicious node can block the consumer and keep intercepting the interest packets sent out so as to fake the corresponding data packets signed with its own private key. Without knowledge of and trust in the network, the NDN consumer can by no means perceive the attack and is thus exposed to severe security and privacy hazards. In this paper, the Name-Signature Lookup System (NSLS) and the corresponding Name-Signature Lookup Protocol (NSLP) are introduced to verify packets against their registered genuine publisher even in an untrusted network, with the help of keys embedded inside the Network Interface Controller (NIC), by which attacks like MITM are eliminated. A theoretical analysis comparing NSLS with existing security models is provided. The digest algorithm SHA-256 and the signature algorithm RSA are used in the NSLP model without specific preference.
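The name-content-signature binding that NSLP relies on can be sketched with the stated primitives (SHA-256 digests and RSA signatures); the payload layout is a hypothetical illustration, not the protocol's wire format:

```python
# Sign a name||content payload with RSA over SHA-256 and verify it, as a
# stand-in for NDN-style packet signing (requires the `cryptography` package).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
payload = b"/ndn/example/video/seg0" + b"\x00" + b"data bytes"
sig = key.sign(payload, padding.PKCS1v15(), hashes.SHA256())
key.public_key().verify(sig, payload, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")               # verify() raises on any tampering
```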
2021-07-27
Shabbir, Mudassir, Li, Jiani, Abbas, Waseem, Koutsoukos, Xenofon.  2020.  Resilient Vector Consensus in Multi-Agent Networks Using Centerpoints. 2020 American Control Conference (ACC). :4387–4392.
In this paper, we study the resilient vector consensus problem in multi-agent networks and improve the resilience guarantees of existing algorithms. In resilient vector consensus, agents update their states, which are vectors in ℝ^d, by locally interacting with other agents, some of which might be adversarial. The main objective is to ensure that normal (non-adversarial) agents converge to a common state that lies in the convex hull of their initial states. Current resilient vector consensus algorithms, such as approximate distributed robust convergence (ADRC), are based on the idea that, to update its state in each time step, every normal node needs to compute a point that lies in the convex hull of its normal neighbors' states. To compute such a point, the idea of Tverberg partition is typically used, which is computationally hard, and approximation algorithms for Tverberg partition negatively impact the resilience guarantees of the consensus algorithm. To deal with this issue, we propose to use the idea of the centerpoint, an extension of the median to higher dimensions, instead of the Tverberg partition. We show that the resilience of such algorithms to adversarial nodes is improved by using the notion of centerpoint. Furthermore, using the centerpoint provides a better characterization of the necessary and sufficient conditions guaranteeing resilient vector consensus. We analyze these conditions in two, three, and higher dimensions separately. We also numerically evaluate the performance of our approach.
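The centerpoint idea can be sketched in 2D with a sampled estimate of Tukey depth; this illustrates a resilient update step, not the paper's algorithm, and the direction-sampling approximation is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def tukey_depth(c, pts, n_dirs=500):
    dirs = rng.standard_normal((n_dirs, 2))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return ((pts - c) @ dirs.T >= 0).sum(axis=0).min()   # worst halfplane

def approx_centerpoint(pts):
    depths = [tukey_depth(c, pts) for c in pts]   # search among the points
    return pts[int(np.argmax(depths))]

states = rng.random((12, 2))                      # neighbors' state vectors
cp = approx_centerpoint(states)                   # in 2D, depth >= n/3 exists
new_state = states[0] + 0.5 * (cp - states[0])    # move toward the centerpoint
```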
2021-05-25
Kore, Ashwini, Patil, Shailaja.  2020.  Robust Cross-Layer Security Framework For Internet of Things Enabled Wireless Sensor Networks. 2020 International Conference on Emerging Smart Computing and Informatics (ESCI). :142–147.

The significant development of the Internet of Things (IoT) paradigm for monitoring real-time applications using wireless communication technologies leads to various challenges. Secure data transmission and privacy are among the key challenges of IoT-enabled Wireless Sensor Network (WSN) communications. Due to the heterogeneity of attacks such as the Man-in-the-Middle Attack (MIMA), present single-layered security solutions are not sufficient. In this paper, a robust cross-layer trust computation algorithm for MIMA attacker detection is proposed for IoT-enabled WSNs, called the IoT-enabled Cross-Layer Man-in-the-Middle Attack Detection System (IC-MADS). In IC-MADS, a robust clustering method is first proposed to form clusters and select cluster heads (CHs). After clustering, the trust value of every sensor node is computed using parameters of three layers, namely the MAC, physical, and network layers, to protect network communications in the presence of security threats. The simulation results prove that IC-MADS achieves better protection against MIMA attacks with minimum overhead and energy consumption.
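As a toy illustration of cross-layer trust computation in the spirit of IC-MADS, per-layer observations can be normalized and combined into a single trust value; the weights and threshold below are assumptions, not values from the paper:

```python
def trust_score(phy, mac, net, w=(0.3, 0.3, 0.4), threshold=0.5):
    """phy/mac/net in [0,1]: e.g. signal consistency, frame delivery ratio,
    and forwarding ratio observed for a neighboring node."""
    score = w[0] * phy + w[1] * mac + w[2] * net
    return score, score < threshold       # (trust value, suspected MIMA?)

print(trust_score(phy=0.9, mac=0.85, net=0.2))   # poor forwarder gets flagged
```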

2021-03-01
Said, S., Bouloiz, H., Gallab, M.  2020.  Identification and Assessment of Risks Affecting Sociotechnical Systems Resilience. 2020 IEEE 6th International Conference on Optimization and Applications (ICOA). :1–10.
Resilience is regarded nowadays as the ideal solution that sociotechnical systems can envisage for coping with potential threats and crises. That said, gaining and maintaining this ability is not always easy, given the multitude of risks driving adverse and challenging events. This paper proposes a method dedicated to the assessment of risks directly affecting resilience. The work is conducted within the framework of risk assessment and resilience engineering approaches. A 5×5 matrix, dedicated to the identification and assessment of risk factors that threaten the system's resilience, has been elaborated. This matrix consists of two axes, namely the impact on resilience metrics, and the availability and effectiveness of resilience planning. Checklists serving to collect information about these two attributes are established, and a case study is undertaken. In this paper, a new method for identifying and assessing risk factors directly menacing the resilience of a given system is presented. The analysis of these risks must be given priority to make the system more resilient to shocks.
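The 5×5 matrix can be pictured as a simple lookup combining the two axes; the level labels and scoring rule below are illustrative assumptions, not the paper's calibrated matrix:

```python
LEVELS = ["very low", "low", "moderate", "high", "very high"]

def risk_level(impact, planning):          # both scored 1..5
    score = impact * (6 - planning)        # strong planning attenuates risk
    return LEVELS[min(score // 5, 4)]

print(risk_level(impact=4, planning=2))    # -> "high"
```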
2021-06-02
Guerrero-Bonilla, Luis, Saldaña, David, Kumar, Vijay.  2020.  Dense r-robust formations on lattices. 2020 IEEE International Conference on Robotics and Automation (ICRA). :6633–6639.
Robot networks are susceptible to failure in the presence of malicious or defective robots. Resilient networks in the literature require high connectivity and large communication ranges, leading to high energy consumption in the communication network. This paper presents robot formations with guaranteed resiliency that use smaller communication ranges than previous results in the literature. The formations can be built on triangular and square lattices in the plane and on cubic lattices in three-dimensional space. We support our theoretical framework with simulations.
2021-05-03
Naik, Nikhil, Nuzzo, Pierluigi.  2020.  Robustness Contracts for Scalable Verification of Neural Network-Enabled Cyber-Physical Systems. 2020 18th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE). :1–12.
The proliferation of artificial intelligence based systems in all walks of life raises concerns about their safety and robustness, especially for cyber-physical systems including multiple machine learning components. In this paper, we introduce robustness contracts as a framework for compositional specification and reasoning about the robustness of cyber-physical systems based on neural network (NN) components. Robustness contracts can encompass and generalize a variety of notions of robustness which were previously proposed in the literature. They can seamlessly apply to NN-based perception as well as deep reinforcement learning (RL)-enabled control applications. We present a sound and complete algorithm that can efficiently verify the satisfaction of a class of robustness contracts on NNs by leveraging notions from Lagrangian duality to identify system configurations that violate the contracts. We illustrate the effectiveness of our approach on the verification of NN-based perception systems and deep RL-based control systems.
2021-08-02
Peng, Ye, Fu, Guobin, Luo, Yingguang, Yu, Qi, Li, Bin, Hu, Jia.  2020.  A Two-Layer Moving Target Defense for Image Classification in Adversarial Environment. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :410–414.
Deep learning plays an increasingly important role in various fields due to its superior performance, and it also achieves advanced recognition performance in the field of image classification. However, the vulnerability of deep learning in adversarial environments cannot be ignored, and the prediction of a model is likely to be affected by small perturbations added to samples by an adversary. In this paper, we propose a two-layer dynamic defense method based on a pool of defensive techniques and a pool of retrained branch models. First, we randomly select defense methods from the defense pool to preprocess the input; the perturbation ability of adversarial samples preprocessed by different defense methods changes, which produces different classification results. In addition, we conduct adversarial training based on the original model and dynamically generate multiple branch models, whose classification results for the same adversarial sample are inconsistent. We can detect adversarial samples by using the inconsistencies in the output results of the two layers. The experimental results show that the two-layer dynamic defense method we designed achieves a good defense effect.
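A compressed sketch of the two layers: a randomly drawn input transform, then an agreement check across retrained branch models, with disagreement flagging a likely adversarial sample (the pools and stub models are placeholders):

```python
import random
import numpy as np

defense_pool = [
    lambda x: np.clip(x + np.random.normal(0, 0.01, x.shape), 0, 1),  # noise
    lambda x: np.round(x * 16) / 16,                                  # quantize
]

def predict_with_mtd(x, branch_models):
    x = random.choice(defense_pool)(x)     # layer 1: random preprocessing
    votes = [m(x) for m in branch_models]  # layer 2: branch model pool
    is_adversarial = len(set(votes)) > 1   # inconsistency across branches
    return max(set(votes), key=votes.count), is_adversarial

branches = [lambda x, b=b: int(x.mean() * 10 + b) % 10 for b in range(3)]  # stubs
print(predict_with_mtd(np.random.rand(28, 28), branches))
```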
2021-03-09
Injadat, M., Moubayed, A., Shami, A.  2020.  Detecting Botnet Attacks in IoT Environments: An Optimized Machine Learning Approach. 2020 32nd International Conference on Microelectronics (ICM). :1–4.

The increased reliance on the Internet and the corresponding surge in connectivity demand has led to significant growth in Internet-of-Things (IoT) devices. The continued deployment of IoT devices has in turn led to an increase in network attacks due to the larger number of potential attack surfaces, as illustrated by recent reports that IoT malware attacks increased by 215.7%, from 10.3 million in 2017 to 32.7 million in 2018. This illustrates the increased vulnerability and susceptibility of IoT devices and networks. Therefore, there is a need for effective and efficient attack detection and mitigation techniques in such environments. Machine learning (ML) has emerged as one potential solution due to the abundance of data generated and available for IoT devices and networks, and hence has significant potential to be adopted for intrusion detection in IoT environments. To that end, this paper proposes an optimized ML-based framework consisting of a combination of the Bayesian Optimization Gaussian Process (BO-GP) algorithm and a decision tree (DT) classification model to detect attacks on IoT devices in an effective and efficient manner. The performance of the proposed framework is evaluated using the Bot-IoT-2018 dataset. Experimental results show that the proposed optimized framework has high detection accuracy, precision, recall, and F-score, highlighting its effectiveness and robustness for the detection of botnet attacks in IoT environments.
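A hedged sketch of the BO-GP-plus-decision-tree pipeline using scikit-optimize, on synthetic stand-in data (the paper evaluates on Bot-IoT-2018, and the search ranges here are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from skopt import BayesSearchCV            # pip install scikit-optimize

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
search = BayesSearchCV(                    # GP-based Bayesian optimization
    DecisionTreeClassifier(random_state=0),
    {"max_depth": (2, 30), "min_samples_split": (2, 50)},  # integer ranges
    n_iter=25, cv=3, random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```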

2021-02-23
Wöhnert, S.-J., Wöhnert, K. H., Almamedov, E., Skwarek, V.  2020.  Trusted Video Streams in Camera Sensor Networks. 2020 IEEE 18th International Conference on Embedded and Ubiquitous Computing (EUC). :17–24.

Proof of integrity for video data produced by surveillance cameras requires active forensic methods such as signatures; otherwise authenticity and integrity can be compromised and the data becomes unusable, e.g., as legal evidence. But a simple file or stream signature loses its validity when the stream is cut into parts or when data and signature are separated. Using the principles of security in distributed systems similar to those of blockchain and distributed ledger technologies (BC/DLT), a chain built from the hash values of a video's frames and distributed among a camera sensor network is presented. The backbone of this Framechain within the camera sensor network is a camera identity concept that ensures accountability, integrity, and authenticity according to the extended CIA triad security concept. Modularity through secure sequences, autarky in proof, and robustness against natural modulation of the data are the key parameters of this new approach. It allows the standalone data, and even parts of it, to be used as hard evidence.
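The chaining itself is simple to sketch: each frame's hash is bound to the previous link, so cutting the stream or separating data from signatures breaks verification (the field layout and camera-identity handling are illustrative assumptions):

```python
import hashlib

def build_chain(frames, camera_id=b"cam-01"):
    chain, prev = [], hashlib.sha256(camera_id).digest()
    for frame in frames:
        prev = hashlib.sha256(prev + frame).digest()   # link binds history
        chain.append(prev)
    return chain

def verify(frames, chain, camera_id=b"cam-01"):
    return chain == build_chain(frames, camera_id)

frames = [bytes([i]) * 100 for i in range(5)]          # stand-in frame payloads
chain = build_chain(frames)
assert verify(frames, chain) and not verify(frames[1:], chain[1:])
```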

2020-12-28
Raju, R. S., Lipasti, M.  2020.  BlurNet: Defense by Filtering the Feature Maps. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :38–46.

Recently, the field of adversarial machine learning has been garnering attention by showing that state-of-the-art deep neural networks are vulnerable to adversarial examples, stemming from small perturbations being added to the input image. Adversarial examples are generated by a malicious adversary by obtaining access to the model parameters, such as gradient information, to alter the input, or by attacking a substitute model and transferring those malicious examples over to attack the victim model. Specifically, one of these attack algorithms, Robust Physical Perturbations (RP2), generates adversarial images of stop signs with black and white stickers to achieve high targeted misclassification rates against standard-architecture traffic sign classifiers. In this paper, we propose BlurNet, a defense against the RP2 attack. First, we motivate the defense with a frequency analysis of the first-layer feature maps of the network on the LISA dataset, which shows that high-frequency noise is introduced into the input image by the RP2 algorithm. To remove the high-frequency noise, we introduce a depthwise convolution layer of standard blur kernels after the first layer. We perform a black-box transfer attack to show that low-pass filtering the feature maps is more beneficial than filtering the input. We then present various regularization schemes to incorporate this low-pass filtering behavior into the training regime of the network and perform white-box attacks. We conclude with an adaptive attack evaluation to show that the success rate of the attack drops from 90% to 20% with total variation regularization, one of the proposed defenses.
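The defense layer itself is compact: a fixed depthwise convolution with a standard blur kernel applied to the first-layer feature maps. The 3×3 binomial kernel below is one plausible choice among the standard blur kernels the paper evaluates:

```python
import torch
import torch.nn as nn

def depthwise_blur(channels):
    k = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
    conv = nn.Conv2d(channels, channels, 3, padding=1,
                     groups=channels, bias=False)   # depthwise: one filter per map
    conv.weight.data.copy_(k.expand(channels, 1, 3, 3))
    conv.weight.requires_grad_(False)               # fixed, non-trainable
    return conv

feature_maps = torch.randn(1, 64, 32, 32)   # stand-in first-layer activations
print(depthwise_blur(64)(feature_maps).shape)
```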

2021-06-02
Sun, Weiqi, Li, Yuanlong, Shi, Liangren.  2020.  The Performance Evaluation and Resilience Analysis of Supply Chain Based on Logistics Network. 2020 39th Chinese Control Conference (CCC). :5772–5777.
With the development of globalization, more and more enterprises are involved in supply chain networks with increasingly complex structures. In this paper, enterprises and relations in the logistics network are abstracted as nodes and edges of a complex network. A graph model for a supply chain network in a specified industry is constructed, and the Neo4j graph database is employed to store the graph data. This paper uses the theoretical research tools of complex networks to model and analyze the supply chain, and designs a supply chain network evaluation system that includes static and dynamic measurement indexes according to the statistical characteristics of complex networks. Both the static and dynamic resilience characteristics of the constructed supply chain network are evaluated from the perspective of complex networks, and numerical simulations are conducted for validation. This research has practical and theoretical significance for enterprises making strategies to improve the anti-risk capability of supply chain networks based on logistics network information.
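One simple resilience index of the kind such evaluations use can be sketched by removing high-degree enterprise nodes and tracking the surviving share of the largest connected component (the topology and index are illustrative assumptions, not the paper's evaluation system):

```python
import networkx as nx

G = nx.barabasi_albert_graph(200, 2, seed=1)   # stand-in supply chain network

def giant_share(H, total=200):
    return max(len(c) for c in nx.connected_components(H)) / total

for k in (0, 5, 20):
    hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:k]
    H = G.copy()
    H.remove_nodes_from(n for n, _ in hubs)
    print(f"after removing top-{k} hubs: {giant_share(H):.2f}")
```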
2021-05-05
Pawar, Shrikant, Stanam, Aditya.  2020.  Scalable, Reliable and Robust Data Mining Infrastructures. 2020 Fourth World Conference on Smart Trends in Systems, Security and Sustainability (WorldS4). :123–125.

Data mining is used to analyze facts to discover previously unknown patterns and to classify and group records. Several crucial scalable data mining platforms have been developed in recent years. RapidMiner is a famous open-source software package that can be used for advanced analytics; Weka and Orange are important machine learning tools for classifying patterns with clustering and regression techniques; while KNIME is often used for data preprocessing such as information extraction, transformation, and loading. This article encapsulates the most important and robust platforms.

2021-06-24
Javaheripi, Mojan, Chen, Huili, Koushanfar, Farinaz.  2020.  Unified Architectural Support for Secure and Robust Deep Learning. 2020 57th ACM/IEEE Design Automation Conference (DAC). :1–6.
Recent advances in Deep Learning (DL) have enabled a paradigm shift to include machine intelligence in a wide range of autonomous tasks. As a result, a largely unexplored surface has opened up for attacks jeopardizing the integrity of DL models and hindering the success of autonomous systems. To enable ubiquitous deployment of DL approaches across various intelligent applications, we propose to develop architectural support for hardware implementation of secure and robust DL. Towards this goal, we leverage hardware/software co-design to develop a DL execution engine that supports algorithms specifically designed to defend against various attacks. The proposed framework is enhanced with two real-time defense mechanisms, securing both DL training and execution stages. In particular, we enable model-level Trojan detection to mitigate backdoor attacks and malicious behaviors induced on the DL model during training. We further realize real-time adversarial attack detection to avert malicious behavior during execution. The proposed execution engine is equipped with hardware-level IP protection and usage control mechanism to attest the legitimacy of the DL model mapped to the device. Our design is modular and can be tuned to task-specific demands, e.g., power, throughput, and memory bandwidth, by means of a customized hardware compiler. We further provide an accompanying API to reduce the nonrecurring engineering cost and ensure automated adaptation to various domains and applications.
2021-03-04
Carlini, N., Farid, H.  2020.  Evading Deepfake-Image Detectors with White- and Black-Box Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2804–2813.

It is now possible to synthesize highly realistic images of people who do not exist. Such content has, for example, been implicated in the creation of fraudulent social media profiles responsible for disinformation campaigns. Significant efforts are, therefore, being deployed to detect synthetically generated content. One popular forensic approach trains a neural network to distinguish real from synthetic content. We show that such forensic classifiers are vulnerable to a range of attacks that reduce the classifier to near-0% accuracy. We develop five attack case studies on a state-of-the-art classifier that achieves an area under the ROC curve (AUC) of 0.95 on almost all existing image generators, when only trained on one generator. With full access to the classifier, we can flip the lowest bit of each pixel in an image to reduce the classifier's AUC to 0.0005; perturb 1% of the image area to reduce the classifier's AUC to 0.08; or add a single noise pattern in the synthesizer's latent space to reduce the classifier's AUC to 0.17. We also develop a black-box attack that, with no access to the target classifier, reduces the AUC to 0.22. These attacks reveal significant vulnerabilities of certain image-forensic classifiers.
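The first white-box result is easy to visualize as pure image manipulation: flipping the least significant bit of every pixel changes each value by exactly one gray level, an imperceptible perturbation that nonetheless collapsed the classifier's AUC (the adversarial choice of flip direction is omitted in this sketch):

```python
import numpy as np

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
attacked = img ^ 1                         # flip the least significant bit
assert int(np.abs(attacked.astype(int) - img.astype(int)).max()) == 1
```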