Biblio

Filters: Keyword is DNN
2021-06-01
Xu, Lei, Gao, Zhimin, Fan, Xinxin, Chen, Lin, Kim, Hanyee, Suh, Taeweon, Shi, Weidong.  2020.  Blockchain Based End-to-End Tracking System for Distributed IoT Intelligence Application Security Enhancement. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1028–1035.
IoT devices provide a rich data source that was not available in the past and that is valuable for a wide range of intelligence applications, especially data-hungry deep neural network (DNN) applications. In turn, an established DNN model provides useful analysis results that can improve the operation of IoT systems. Progress in distributed/federated DNN training further unleashes the potential of integrating IoT and intelligence applications. When a large number of IoT devices are deployed in different physical locations, distributed training allows training modules to be deployed to multiple edge data centers close to the IoT devices, reducing latency and the movement of large amounts of data. In practice, these IoT devices and edge data centers are usually owned and managed by different parties who do not fully trust each other or who have conflicting interests. It is hard to coordinate such parties to provide end-to-end integrity protection for DNN construction and application using classical security enhancement tools. For example, one party may share an incomplete data set with others, or contribute a modified sub-DNN model to manipulate the aggregated model and affect the decision-making process. To mitigate this risk, we propose a novel blockchain-based end-to-end integrity protection scheme for DNN applications integrated with an IoT system in the edge computing environment. The protection system leverages a set of cryptographic primitives to build a blockchain adapted for edge computing that scales to a large number of IoT devices. The customized blockchain is integrated with a distributed/federated DNN to offer integrity and authenticity protection services.
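As a rough illustration of the end-to-end tracking idea described in this abstract (not the authors' scheme), the sketch below hash-chains digests of sub-model updates so that any later tampering is detectable; all class and function names are assumptions.

```python
# Minimal sketch of the general idea (not the authors' scheme): a hash-chained
# ledger that records digests of sub-model updates so tampering is detectable.
import hashlib
import json
import time


def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


class Ledger:
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "payload": "genesis",
                       "hash": sha256(b"genesis")}]

    def submit_update(self, party_id: str, model_bytes: bytes) -> dict:
        """Record a digest of a party's sub-model update on the chain."""
        prev = self.chain[-1]
        payload = {"party": party_id, "model_digest": sha256(model_bytes),
                   "ts": time.time()}
        block = {"index": prev["index"] + 1, "prev": prev["hash"],
                 "payload": payload}
        block["hash"] = sha256(json.dumps(block, sort_keys=True).encode())
        self.chain.append(block)
        return block

    def verify(self) -> bool:
        """Check that no recorded update has been altered after the fact."""
        for prev, cur in zip(self.chain, self.chain[1:]):
            body = {k: cur[k] for k in ("index", "prev", "payload")}
            if cur["prev"] != prev["hash"] or cur["hash"] != sha256(
                    json.dumps(body, sort_keys=True).encode()):
                return False
        return True


ledger = Ledger()
ledger.submit_update("edge-dc-1", b"weights-of-sub-model-1")
ledger.submit_update("edge-dc-2", b"weights-of-sub-model-2")
print(ledger.verify())  # True; altering any recorded entry makes this False
```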
2021-05-13
Venceslai, Valerio, Marchisio, Alberto, Alouani, Ihsen, Martina, Maurizio, Shafique, Muhammad.  2020.  NeuroAttack: Undermining Spiking Neural Networks Security through Externally Triggered Bit-Flips. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.

Due to their proven efficiency, machine-learning systems are deployed in a wide range of complex real-life problems. More specifically, Spiking Neural Networks (SNNs) have emerged as a promising solution to the accuracy, resource-utilization, and energy-efficiency challenges in machine-learning systems. While these systems are going mainstream, they have inherent security and reliability issues. In this paper, we propose NeuroAttack, a cross-layer attack that threatens the integrity of SNNs by exploiting low-level reliability issues through a high-level attack. In particular, we trigger a stealthy fault-injection-based hardware backdoor through carefully crafted adversarial input noise. Our results on Deep Neural Networks (DNNs) and SNNs show a serious integrity threat to state-of-the-art machine-learning techniques.
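To make the threat model concrete, the following minimal sketch shows how flipping a single bit of an 8-bit quantized weight can change its value drastically; it illustrates the fault-injection surface only and does not reproduce NeuroAttack's trigger mechanism.

```python
# Illustrative only: the effect of a single targeted bit-flip on a quantized weight.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=16, dtype=np.int8)  # quantized 8-bit weights


def flip_bit(weights: np.ndarray, index: int, bit: int) -> np.ndarray:
    """Return a copy of the weights with one bit of one weight flipped,
    mimicking the effect of a targeted hardware fault."""
    raw = weights.copy().view(np.uint8)     # reinterpret the same bytes as unsigned
    raw[index] ^= np.uint8(1 << bit)        # flip the chosen bit
    return raw.view(np.int8)                # view the modified bytes as signed again


faulty = flip_bit(weights, index=0, bit=7)  # flip the most significant bit of weight 0
print(int(weights[0]), "->", int(faulty[0]))
# The value jumps by 128, which can be enough to flip a classification when
# combined with a carefully crafted input perturbation.
```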

2021-04-27
Marchisio, A., Nanfa, G., Khalid, F., Hanif, M. A., Martina, M., Shafique, M..  2020.  Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Spiking Neural Networks (SNNs) claim to present many advantages in terms of biological plausibility and energy efficiency compared to standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities of SNNs and DNNs with respect to adversarial noise. We then propose a novel black-box attack methodology, i.e., one that requires no knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for a given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN with the same number of layers and neurons (to obtain a fair comparison), in order to study the efficiency of our methodology and to understand the differences between SNNs and DNNs with respect to adversarial examples. Our work opens new avenues of research into the robustness of SNNs, considering their similarity to the human brain's functionality.
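A query-only greedy perturbation search in the spirit of the described black-box setting might look like the sketch below; `predict_proba` stands for any black-box model interface and is an assumption, and the heuristic is not the authors' exact algorithm.

```python
# Generic query-only greedy adversarial search (illustrative sketch).
import numpy as np


def greedy_attack(image, true_label, predict_proba, step=0.05, budget=200):
    """Greedily perturb single pixels that most decrease confidence in the
    true class, stopping as soon as the predicted label changes."""
    adv = image.astype(np.float32).copy()
    for _ in range(budget):
        probs = predict_proba(adv)
        if int(np.argmax(probs)) != true_label:
            return adv                      # misclassification achieved
        base = probs[true_label]
        best_gain, best_idx, best_sign = 0.0, None, 0.0
        # Probe a random subset of pixels in both directions (query access only).
        for idx in np.random.choice(adv.size, size=min(32, adv.size), replace=False):
            for sign in (-1.0, 1.0):
                trial = adv.copy()
                trial.flat[idx] = np.clip(trial.flat[idx] + sign * step, 0.0, 1.0)
                gain = base - predict_proba(trial)[true_label]
                if gain > best_gain:
                    best_gain, best_idx, best_sign = gain, idx, sign
        if best_idx is None:
            break                           # no single-pixel probe helped
        adv.flat[best_idx] = np.clip(adv.flat[best_idx] + best_sign * step, 0.0, 1.0)
    return adv
```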
2021-03-01
Sarathy, N., Alsawwaf, M., Chaczko, Z..  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advances in technology have made it easy to train models, or use already created ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and diagnosing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors. The last stage involves the use of weighted means to measure similarity. This research addresses the problem using an eXplainable Artificial Intelligence (XAI) approach. Findings from this research, based on a small dataset, conclude that the approach offers encouraging results. Further investigation could have a significant impact on how face profiles are identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate, and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
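The final stages described in the abstract (geometric ratios assembled into feature vectors, then a weighted mean for similarity) could be sketched as follows; landmark names, ratios, and weights are illustrative assumptions rather than the paper's values.

```python
# Illustrative sketch: ratio features from profile landmarks + weighted-mean similarity.
import numpy as np


def ratio_features(points: dict) -> np.ndarray:
    """Compute a few pairwise-distance ratios from 2D profile landmarks."""
    d = lambda a, b: np.linalg.norm(np.asarray(points[a]) - np.asarray(points[b]))
    return np.array([
        d("nose_tip", "chin") / d("forehead", "chin"),
        d("nose_tip", "ear") / d("forehead", "chin"),
        d("forehead", "nose_tip") / d("nose_tip", "chin"),
    ])


def weighted_similarity(f1: np.ndarray, f2: np.ndarray,
                        weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted mean of per-ratio similarities in [0, 1]."""
    per_ratio = 1.0 - np.abs(f1 - f2) / np.maximum(np.abs(f1), np.abs(f2))
    return float(np.average(per_ratio, weights=weights))


probe = ratio_features({"nose_tip": (120, 200), "chin": (100, 300),
                        "forehead": (95, 90), "ear": (40, 180)})
gallery = ratio_features({"nose_tip": (122, 198), "chin": (101, 302),
                          "forehead": (96, 91), "ear": (42, 179)})
print(round(weighted_similarity(probe, gallery), 3))  # close to 1.0 for a match
```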
2020-11-02
Huang, S., Chen, Q., Chen, Z., Chen, L., Liu, J., Yang, S..  2019.  A Test Cases Generation Technique Based on an Adversarial Samples Generation Algorithm for Image Classification Deep Neural Networks. 2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :520–521.

Being widely applied in various fields, deep learning (DL) is becoming a key driving force in industry. Although it has achieved great success in artificial intelligence tasks, like traditional software it has defects: once it fails, unpredictable accidents and losses can be caused. In this paper, we propose a test case generation technique based on an adversarial sample generation algorithm for image classification deep neural networks (DNNs), which can generate a large number of good test cases for the testing of DNNs, especially when test cases are insufficient. We briefly introduce our method and implement the framework. We conduct experiments on some classic DNN models and datasets. We further evaluate the test set by using a coverage metric based on the states of the DNN.
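A generic, assumption-laden sketch of generating extra test cases from seed images by searching for small perturbations that change the predicted label is shown below; it does not reproduce the paper's specific adversarial-sample algorithm or its state-based coverage metric.

```python
# Generic sketch: derive new DNN test cases from seeds via label-changing perturbations.
import numpy as np


def generate_test_cases(seeds, labels, predict, eps=0.05, tries=50, rng=None):
    """Return perturbed inputs that the model labels differently from the seed
    label; such inputs are useful test cases for probing DNN behaviour."""
    rng = rng or np.random.default_rng(0)
    cases = []
    for x, y in zip(seeds, labels):
        for _ in range(tries):
            noise = rng.uniform(-eps, eps, size=x.shape).astype(np.float32)
            candidate = np.clip(x + noise, 0.0, 1.0)
            predicted = int(predict(candidate))
            if predicted != int(y):
                cases.append((candidate, int(y), predicted))
                break
    return cases
```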

2020-08-28
Gopinath, Divya, Pasareanu, Corina S., Wang, Kaiyuan, Zhang, Mengshi, Khurshid, Sarfraz.  2019.  Symbolic Execution for Attribution and Attack Synthesis in Neural Networks. 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). :282–283.

This paper introduces DeepCheck, a new approach for validating Deep Neural Networks (DNNs) based on core ideas from program analysis, specifically from symbolic execution. DeepCheck implements techniques for lightweight symbolic analysis of DNNs and applies them in the context of image classification to address two challenging problems: 1) identification of important pixels (for attribution and adversarial generation); and 2) creation of adversarial attacks. Experimental results using the MNIST dataset show that DeepCheck's lightweight symbolic analysis provides a valuable tool for DNN validation.
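One rough way to picture lightweight attribution for a ReLU network, loosely in the spirit of this line of work but not DeepCheck's actual technique, is that a concrete input fixes an activation pattern under which the network is locally affine, so per-pixel importance can be read from the effective linear coefficients; the tiny random network below is purely illustrative.

```python
# Illustrative sketch: attribution via the locally affine form of a ReLU network.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 64)), rng.normal(size=16)   # hidden layer
W2, b2 = rng.normal(size=(10, 16)), rng.normal(size=10)   # output layer

x = rng.uniform(0, 1, size=64)                  # concrete input (e.g. an 8x8 image)
pre = W1 @ x + b1
mask = (pre > 0).astype(float)                  # fixed ReLU activation pattern

# With the pattern fixed, logits = W2 @ diag(mask) @ W1 @ x + const,
# so the effective linear map on the pixels is:
W_eff = W2 @ (W1 * mask[:, None])
label = int(np.argmax(W2 @ (mask * pre) + b2))
importance = np.abs(W_eff[label])               # per-pixel attribution scores
print(np.argsort(importance)[-5:])              # indices of the most important pixels
```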

2020-08-03
Kobayashi, Hiroyuki.  2019.  CEPHEID: the infrastructure-less indoor localization using lighting fixtures' acoustic frequency fingerprints. IECON 2019 - 45th Annual Conference of the IEEE Industrial Electronics Society. 1:6842–6847.
This paper deals with a new indoor localization scheme called “CEPHEID” that uses ceiling lighting fixtures. It is based on the fact that each lighting fixture has its own characteristic flickering pattern. The author proposes a technique to identify individual lights using simple instruments and a DNN classifier. Thanks to its modest hardware requirements, CEPHEID can be implemented with a few simple discrete electronic components and an ordinary smartphone. A prototype “CEPHEID dongle” is also introduced in this paper. Finally, the validity of the author's method is examined through indoor positioning experiments.
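The fingerprinting idea can be pictured with the sketch below, in which each fixture's flicker has a slightly different dominant frequency recoverable by an FFT from a light-sensor trace; the sampling rate, signal model, and frequencies are assumptions, and the paper's DNN classifier and dongle hardware are not modelled.

```python
# Illustrative sketch: recover a fixture's flicker frequency from a sensor trace.
import numpy as np

fs = 8000.0                                      # assumed sensor sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)                  # one second of samples


def flicker_trace(f_hz, noise=0.05, rng=np.random.default_rng(2)):
    """Simulated brightness signal of one fixture with its own flicker frequency."""
    return np.sin(2 * np.pi * f_hz * t) + noise * rng.normal(size=t.size)


def dominant_frequency(signal):
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin


for fixture_f in (97.0, 100.0, 121.0):           # made-up per-fixture fingerprints
    print(round(dominant_frequency(flicker_trace(fixture_f)), 1))
```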
2020-07-13
Andrew, J., Karthikeyan, J., Jebastin, Jeffy.  2019.  Privacy Preserving Big Data Publication On Cloud Using Mondrian Anonymization Techniques and Deep Neural Networks. 2019 5th International Conference on Advanced Computing Communication Systems (ICACCS). :722–727.

In recent trends, privacy preservation is the predominant concern in big data analytics and cloud computing. Every organization collects personal data from users, actively or passively. Publishing this data for research and other analytics without removing Personally Identifiable Information (PII) will lead to privacy breaches. Existing anonymization techniques fail to maintain the balance between data privacy and data utility. In order to provide a trade-off between user privacy and data utility, a Mondrian-based k-anonymity approach is proposed. To protect the privacy of high-dimensional data, a Deep Neural Network (DNN) based framework is proposed. The experimental results show that the proposed approach mitigates information loss in the data without compromising privacy.
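A minimal sketch of Mondrian-style k-anonymity (greedy median partitioning of quasi-identifiers followed by generalization to ranges) is given below; the columns and data are made up, and the paper's DNN component for high-dimensional data is not shown.

```python
# Illustrative sketch of Mondrian-style greedy partitioning for k-anonymity.
import numpy as np


def mondrian(records: np.ndarray, k: int):
    """Recursively split records on the widest quasi-identifier at its median,
    keeping every resulting partition at least k records large."""
    spans = records.max(axis=0) - records.min(axis=0)
    for col in np.argsort(spans)[::-1]:               # try the widest attribute first
        median = np.median(records[:, col])
        left = records[records[:, col] <= median]
        right = records[records[:, col] > median]
        if len(left) >= k and len(right) >= k:
            return mondrian(left, k) + mondrian(right, k)
    return [records]                                   # cannot split further


def generalize(partition: np.ndarray):
    """Replace exact values with the partition's per-attribute ranges."""
    return tuple((int(lo), int(hi)) for lo, hi in
                 zip(partition.min(axis=0), partition.max(axis=0)))


data = np.array([[25, 47000], [27, 52000], [41, 61000], [43, 58000],
                 [39, 90000], [52, 83000]])            # [age, income] quasi-identifiers
for part in mondrian(data, k=2):
    print(generalize(part), f"({len(part)} records)")
```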

2020-05-08
Vigneswaran, Rahul K., Vinayakumar, R., Soman, K.P., Poornachandran, Prabaharan.  2018.  Evaluating Shallow and Deep Neural Networks for Network Intrusion Detection Systems in Cyber Security. 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT). :1–6.
An intrusion detection system (IDS) has become an essential layer in modern ICT systems due to the growing need for cyber safety in the day-to-day world. Because of factors such as the uncertainty in identifying attack types and the increasing complexity of advanced cyber attacks, IDS calls for the integration of Deep Neural Networks (DNNs). In this paper, DNNs are used to predict attacks on a Network Intrusion Detection System (N-IDS). A DNN with a learning rate of 0.1 is trained for 1000 epochs, and the KDDCup-`99' dataset is used for training and benchmarking the network. For comparison, the same dataset is used to train several other classical machine learning algorithms and DNNs with 1 to 5 layers. The results show that a 3-layer DNN outperforms all the other classical machine learning algorithms.
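A sketch roughly matching the quoted hyperparameters (learning rate 0.1, up to 1000 epochs, 1 to 5 hidden layers), assuming TensorFlow/Keras, an encoded 41-feature KDDCup-`99' input, and a binary attack/normal label; dataset preprocessing is omitted and the layer width is an assumption.

```python
# Illustrative IDS classifier sketch (not the paper's exact architecture).
import tensorflow as tf


def build_ids_dnn(n_features=41, hidden_layers=3, units=64):
    """Build a simple fully connected attack/normal classifier."""
    model = tf.keras.Sequential([tf.keras.Input(shape=(n_features,))])
    for _ in range(hidden_layers):
        model.add(tf.keras.layers.Dense(units, activation="relu"))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))  # attack vs. normal
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model


model = build_ids_dnn(hidden_layers=3)
model.summary()
# With preprocessed KDDCup-`99' features and binary labels (not shown here):
# model.fit(x_train, y_train, epochs=1000, batch_size=1024,
#           validation_data=(x_val, y_val))
```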
2019-12-16
Karve, Shreya, Nagmal, Arati, Papalkar, Sahil, Deshpande, S. A..  2018.  Context Sensitive Conversational Agent Using DNN. 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA). :475–478.
We investigate a method of building a closed-domain intelligent conversational agent using deep neural networks. A conversational agent is a dialog system intended to converse with a human with a coherent structure. Our conversational agent uses a retrieval-based model that identifies the intent of the input user query and maps it to a knowledge base to return appropriate results. Human conversations are based on context, but existing conversational agents are context-insensitive. To overcome this limitation, our system uses a simple stack-based context identification and storage system. The conversational agent generates responses according to the current context of the conversation, allowing more human-like conversations.
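An illustrative sketch of a stack-based context store for a retrieval-style agent, in the spirit described by the abstract; intent detection is mocked with keyword matching rather than the paper's DNN, and the knowledge base entries are invented.

```python
# Illustrative sketch: stack-based context for a retrieval-style conversational agent.
class ContextStack:
    def __init__(self):
        self._stack = []

    def push(self, intent: str, slots: dict):
        self._stack.append({"intent": intent, "slots": slots})

    def current(self):
        return self._stack[-1] if self._stack else None

    def pop(self):
        return self._stack.pop() if self._stack else None


KNOWLEDGE_BASE = {
    ("course_info", "deep learning"): "The deep learning course runs in the fall term.",
    ("course_fees", "deep learning"): "The fee for that course is listed on the portal.",
}


def respond(query: str, context: ContextStack) -> str:
    intent = "course_fees" if "fee" in query else "course_info"   # mock intent model
    topic = "deep learning" if "deep learning" in query else None
    if topic is None and context.current():                       # resolve from context
        topic = context.current()["slots"].get("topic")
    context.push(intent, {"topic": topic})
    return KNOWLEDGE_BASE.get((intent, topic), "Sorry, I do not know that.")


ctx = ContextStack()
print(respond("Tell me about the deep learning course", ctx))
print(respond("What is the fee?", ctx))   # topic resolved from the context stack
```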