Biblio

Found 260 results

Filters: Keyword is Trusted Computing
2021-07-07
Wang, Guodong, Tian, Dongbo, Gu, Fengqiang, Li, Jia, Lu, Yang.  2020.  Design of Terminal Security Access Scheme based on Trusted Computing in Ubiquitous Electric Internet of Things. 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference (ITAIC). 9:188–192.
In the Ubiquitous Electric Internet of Things (UEIoT), terminals lack effective monitoring and isolation mechanisms and are therefore easy for attackers to access and compromise, which makes terminal security protection particularly important. This paper proposes a dual-system design for terminal active immunity based on trusted computing. In this scheme, each UEIoT terminal node consists of two parts: a computing component and a trusted protection component. The two components are logically independent of each other, forming an active-immunity dual-system structure that provides both computing and protection functions. A Trusted Network Connection extends the terminal's trusted state to the network, providing a solution for secure terminal access in the UEIoT.
2021-06-01
Gu, Yanyang, Zhang, Ping, Chen, Zhifeng, Cao, Fei.  2020.  UEFI Trusted Computing Vulnerability Analysis Based on State Transition Graph. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :1043–1052.
In the face of increasingly serious firmware attacks, analyzing the security of UEFI vulnerabilities is of great significance. This paper first introduces the trusted authentication mechanisms commonly used by UEFI. Then, targeting weaknesses in the UEFI trust-verification process during the startup phase, it constructs an analysis model of UEFI trust-verification startup vulnerabilities by combining a state transition graph, the PageRank algorithm, and Bayesian network theory, and validates the analysis on an example. Verification and analysis of the resulting data reveal the most vulnerable attack paths and the key vulnerable nodes. Finally, security enhancement measures for UEFI are proposed based on the analysis results.
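As a rough illustration of one ingredient of such a model (not taken from the paper), the sketch below ranks the states of a hypothetical UEFI boot state-transition graph with PageRank to flag candidate key nodes; the Bayesian-network stage and the real UEFI transition data are omitted, and all node names are assumptions.

```python
import networkx as nx

# Directed state-transition graph of a trust-verification boot flow (hypothetical).
G = nx.DiGraph()
G.add_edges_from([
    ("SEC", "PEI"), ("PEI", "DXE"), ("DXE", "BDS"), ("BDS", "OS_loader"),
    ("PEI", "recovery"), ("recovery", "DXE"),        # alternate recovery path
    ("DXE", "option_rom"), ("option_rom", "BDS"),    # third-party code path
])

# PageRank highlights states that many transition paths funnel through; in the
# paper these scores feed a Bayesian network, which is not reproduced here.
scores = nx.pagerank(G, alpha=0.85)
for state, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{state:12s} {score:.3f}")
```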
2021-03-29
Luecking, M., Fries, C., Lamberti, R., Stork, W..  2020.  Decentralized Identity and Trust Management Framework for Internet of Things. 2020 IEEE International Conference on Blockchain and Cryptocurrency (ICBC). :1—9.

Today, Internet of Things (IoT) devices mostly operate in enclosed, proprietary environments. To unfold the full potential of IoT applications, a unifying and permissionless environment is crucial, in which all IoT devices, even those unknown to each other, can trade services and assets across various domains. Realizing such applications requires uniquely resolvable identities. However, quantifiable trust in identities and their authentication is not trivially provided in such an environment due to the absence of a trusted authority. This research presents a new identity and trust framework for IoT devices based on Distributed Ledger Technology (DLT). IoT devices assign identities to themselves, which are managed publicly and in a decentralized manner on the DLT network as Self-Sovereign Identities (SSI). In addition to the Identity Management System (IdMS), the framework provides a Web of Trust (WoT) approach to enable automatic trust rating of arbitrary identities. The framework uses the IOTA Tangle to access and store data, achieving high scalability and low computational overhead. To demonstrate the feasibility of our framework, we provide a proof-of-concept implementation and evaluate it against the set objectives for real-world applicability as well as its vulnerability to common threats in IdMSs and WoTs.

Liu, W., Niu, H., Luo, W., Deng, W., Wu, H., Dai, S., Qiao, Z., Feng, W..  2020.  Research on Technology of Embedded System Security Protection Component. 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications (AEECA). :21—27.

With its rapid development, the Internet of Things (IoT) has been widely deployed. As many embedded devices are connected to the network and store massive amounts of security-sensitive data, embedded devices in the IoT have become a target for attackers. Trusted computing is a key technology for guaranteeing the security and trustworthiness of a device's execution environment. This paper focuses on the security problems of IoT devices and proposes a security architecture for IoT devices based on trusted computing technology. It implements a security management system for IoT devices that can perform integrity measurement, real-time monitoring and security management for embedded applications, providing a safe and reliable execution environment and whitelist-based security protection for IoT devices. The paper also designs and implements an embedded security protection system based on trusted computing technology, containing a measurement and control component in the kernel and a remote graphical management interface for administrators. The kernel layer enforces the integrity measurement and control of the embedded applications on the device. The graphical management interface communicates with the remote embedded device over TCP/IP and provides a feature-rich, user-friendly interface, implementing functions such as knowledge-base scanning, whitelist management, log management, security policy management, and cryptographic algorithm performance testing.
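For readers unfamiliar with whitelist-based integrity measurement, a minimal user-space sketch is given below; the paper's component runs in the kernel and is not reproduced, and the path and digest shown are hypothetical.

```python
import hashlib

WHITELIST = {
    # application path -> expected SHA-256 digest (hypothetical values)
    "/opt/app/sensor_daemon": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def measure(path: str) -> str:
    """Compute the SHA-256 digest of a file (the integrity measurement)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_allowed(path: str) -> bool:
    """Allow execution only if the measured digest matches the whitelist entry."""
    expected = WHITELIST.get(path)
    return expected is not None and measure(path) == expected
```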

Zimmo, S., Refaey, A., Shami, A..  2020.  Trusted Boot for Embedded Systems Using Hypothesis Testing Benchmark. 2020 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE). :1—2.

Security has become a crucial consideration and is one of the most important design goals for an embedded system. This paper examines boot sequences, and more specifically trusted boot, which relies on a chain of trust. After defining these terms, the paper examines the limitations of existing safe boot and proposes a trusted-boot method based on a hypothesis-testing benchmark, together with the cost of performing this method.
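As background, a generic chain-of-trust check (not the paper's hypothesis-testing method) can be sketched as follows; stage names, images, and digests are purely illustrative.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# (stage name, stage image, reference digest of the *next* stage); all illustrative.
boot_chain = [
    ("bootloader", b"bootloader image", sha256(b"kernel image")),
    ("kernel",     b"kernel image",     sha256(b"rootfs image")),
    ("rootfs",     b"rootfs image",     None),   # last link in the chain
]

def trusted_boot(chain) -> bool:
    """Each stage measures the next one and halts the boot on a mismatch."""
    for (name, _, next_ref), nxt in zip(chain, chain[1:]):
        if sha256(nxt[1]) != next_ref:
            print(f"{name}: measurement of {nxt[0]} does not match, halting boot")
            return False
        print(f"{name}: verified {nxt[0]}")
    return True

trusted_boot(boot_chain)
```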

2021-03-04
Riya, S. S., Lalu, V..  2020.  Stable cryptographic key generation using SRAM based Physical Unclonable Function. 2020 International Conference on Smart Electronics and Communication (ICOSEC). :653—657.
Physical unclonable functions (PUFs) are widely used as a hardware root of trust to secure IoT devices, data and services. A PUF exploits inherent randomness introduced during manufacturing to give each device a unique digital fingerprint. Static Random-Access Memory (SRAM) based PUFs are a mature technology for authentication: the cells of an SRAM power on to a random, unrepeatable pattern of 0s and 1s that is unique to the device, so it can be called an SRAM fingerprint and used as a PUF. However, the power-on values may be skewed toward one value. If almost all cells power on to the same value, either zeros or ones will dominate the cryptographic key sequence; since the key is generated from the power-on values at randomly chosen SRAM cell addresses (a subset of all cells), the same biased sequence then occurs most of the time. To avoid this, the SRAM should produce an equal number of zeros and ones at power-on. The SRAM PUF is implemented in the Cadence Virtuoso tool; an equal number of zeros and ones at power-on is obtained by varying the physical dimensions, and stability is increased through body biasing.
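A simplified software illustration of the key-balance issue described above (the paper's actual work is circuit-level, in Cadence Virtuoso) might look like this, with the power-up pattern simulated rather than read from hardware.

```python
import random

random.seed(1)
SRAM_CELLS = 4096
powerup = [random.randint(0, 1) for _ in range(SRAM_CELLS)]   # simulated power-on values

def derive_key(powerup_bits, addresses):
    """Key bits = power-up values at the (secret) chosen cell addresses."""
    return [powerup_bits[a] for a in addresses]

addresses = random.sample(range(SRAM_CELLS), 128)             # random cell subset
key = derive_key(powerup, addresses)

# A heavily biased pattern (mostly 0s or mostly 1s) yields a low-entropy key;
# the paper counters this with transistor sizing and body biasing.
ones = sum(key)
bias = abs(ones / len(key) - 0.5)
print(f"Hamming weight {ones}/128, bias from ideal 50/50: {bias:.2%}")
```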
Wang, L..  2020.  Trusted Connect Technology of Bioinformatics Authentication Cloud Platform Based on Point Set Topology Transformation Theory. 2020 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS). :151—154.
Bioinformatic features are collected using pattern recognition technology, and digital coding and format conversion of the feature data are realized using the theory of topological group transformations. Authentication and signatures based on zero-knowledge proof technology can serve as trusted credentials for the cloud platform and cannot be forged, thus realizing trusted and secure access.
Ghaffaripour, S., Miri, A..  2020.  A Decentralized, Privacy-preserving and Crowdsourcing-based Approach to Medical Research. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4510—4515.
Access to data at large scales expedites the progress of research in medical fields. Nevertheless, access to patients' data faces significant challenges on regulatory, organizational and technical levels. In light of this, we present a novel approach based on the crowdsourcing paradigm to solve this data scarcity problem. Utilizing the infrastructure that blockchain provides, our decentralized platform enables researchers to solicit contributions to their well-defined research study from a large crowd of volunteers. Furthermore, to overcome the challenges of breach of privacy and mutual trust, we employed the cryptographic primitive of zero-knowledge Succinct Non-interactive ARguments of Knowledge (zk-SNARKs). This not only allows participants to make contributions without exposing their privacy-sensitive health data, but also provides a means for a distributed network of users to verify the validity of the contributions in an efficient manner. Finally, since the crowdsourcing platform would be rendered ineffective without an incentive mechanism in place, we incorporated smart contracts to ensure a fair reciprocal exchange of data for reward between patients and researchers.
Mehraj, S., Banday, M. T..  2020.  Establishing a Zero Trust Strategy in Cloud Computing Environment. 2020 International Conference on Computer Communication and Informatics (ICCCI). :1—6.
The increased use of cloud services and their various security and privacy challenges, such as identity theft, data breaches, data integrity and data confidentiality, have made trust management, one of the most multifaceted aspects of cloud computing, inevitable. The growing reputation of cloud computing technology makes it immensely important to be acquainted with the meaning of trust in the cloud, as well as to identify how the customer and the cloud service providers establish that trust. Traditional trust management mechanisms represent a static trust relationship that falls short of the dynamic requirements of cloud services. In this paper, a conceptual zero trust strategy for the cloud environment is proposed. The model offers a conceptual typology of perceptions and philosophies for establishing trust in cloud services. Further, the importance of trust establishment and the challenges of trust in cloud computing are also explored and discussed.
2021-02-23
Liu, W., Park, E. K., Krieger, U., Zhu, S. S..  2020.  Smart e-Health Security and Safety Monitoring with Machine Learning Services. 2020 29th International Conference on Computer Communications and Networks (ICCCN). :1—6.

This research provides security and safety extensions to a blockchain-based solution whose target is e-health. The Advanced Blockchain platform is extended with intelligent monitoring for security and machine learning for detecting patient treatment medication safety issues. Because stringent HIPAA, HITECH, EU-GDPR and other regional regulations dictate security, safety and privacy requirements, e-Health blockchains have to cover mandatory disclosure of violations or enforcement of policies during transaction flows involving healthcare. Our service solution further provides the benefits of resolving abnormal flows in a medical treatment process, providing accountability of the service providers, enabling a trusted health-information environment for institutions to handle medication safely, giving patients a better safety guarantee, and enabling the authorities to supervise the security and safety of e-Health blockchains. The capabilities can be generalized to support a uniform smart solution across industries in a variety of blockchain applications.

2021-02-01
Calhoun, C. S., Reinhart, J., Alarcon, G. A., Capiola, A..  2020.  Establishing Trust in Binary Analysis in Software Development and Applications. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
The current exploratory study examined software programmer trust in binary analysis techniques used to evaluate and understand binary code components. Experienced software developers participated in knowledge elicitations to identify factors affecting trust in tools and methods used for understanding binary code behavior and minimizing potential security vulnerabilities. Developer perceptions of trust in those tools to assess implementation risk in binary components were captured across a variety of application contexts. The software developers reported that source security and vulnerability reports provided the best insight into and awareness of potential issues or shortcomings in binary code. Further, applications where the potential impact on systems and data loss is high require relying on more than one type of analysis to ensure the binary component is sound. The findings suggest binary analysis is viable for identifying issues and potential vulnerabilities as part of a comprehensive solution for understanding binary code behavior and security vulnerabilities, but relying solely on binary analysis tools and binary release metadata appears insufficient to ensure a secure solution.
Ng, M., Coopamootoo, K. P. L., Toreini, E., Aitken, M., Elliot, K., Moorsel, A. van.  2020.  Simulating the Effects of Social Presence on Trust, Privacy Concerns & Usage Intentions in Automated Bots for Finance. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :190–199.
FinBots are chatbots built on automated decision technology, aimed at facilitating accessible banking and supporting customers in making financial decisions. Chatbots are increasing in prevalence, sometimes even equipped to mimic human social rules, expectations and norms, decreasing the necessity for human-to-human interaction. As banks and financial advisory platforms move towards creating bots that enhance the current state of consumer trust and adoption rates, we investigated the effects of chatbot vignettes with and without socio-emotional features on the intention to use the chatbot for financial support purposes. We conducted a between-subjects online experiment with N = 410 participants. Participants in the control group were provided with a vignette describing a secure and reliable chatbot called XRO23, whereas participants in the experimental group were presented with a vignette describing a secure and reliable chatbot that is more human-like and named Emma. We found that the Emma vignette neither increased participants' trust levels nor lowered their privacy concerns, even though it increased the perception of social presence. However, we found that the intention to use the presented chatbot for financial support was positively influenced by perceived humanness and trust in the bot. Participants were also more willing to share financially sensitive information such as account number, sort code and payment information with XRO23 compared to Emma, revealing a preference for a technical and mechanical FinBot in information sharing. Overall, this research contributes to our understanding of the intention to use chatbots with different features as financial technology, in particular that socio-emotional support may not be favoured when designed independently of financial function.
Papadopoulos, A. V., Esterle, L..  2020.  Situational Trust in Self-aware Collaborating Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :91–94.
Trust among humans affects the way we interact with each other. In autonomous systems, this trust is often predefined and hard-coded before the systems are deployed. However, when systems encounter unfolding situations, requiring them to interact with others, a notion of trust will be inevitable. In this paper, we discuss trust as a fundamental measure to enable an autonomous system to decide whether or not to interact with another system, whether biological or artificial. These decisions become increasingly important when continuously integrating with others during runtime.
Wickramasinghe, C. S., Marino, D. L., Grandio, J., Manic, M..  2020.  Trustworthy AI Development Guidelines for Human System Interaction. 2020 13th International Conference on Human System Interaction (HSI). :130–136.
Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the deficiency of transparency of existing AI systems. As a solution, the "Trustworthy AI" research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists where the focus is on building human trust to improve the interactions between human and AI systems. In this paper, we provide a concise survey of concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and enhancing the interactions between AI systems and humans that happen during the AI system life cycle.
2021-01-25
Mao, J., Li, X., Lin, Q., Guan, Z..  2020.  Deeply understanding graph-based Sybil detection techniques via empirical analysis on graph processing. China Communications. 17:82–96.
Sybil attacks are one of the most prominent security problems of trust mechanisms in a distributed network with a large number of highly dynamic and heterogeneous devices, and they pose a serious threat to edge-computing-based distributed systems. Graph-based Sybil detection approaches extract social structures from target distributed systems, refine the graph via preprocessing methods, and capture Sybil nodes based on specific properties of the refined graph structure. Graph preprocessing is a critical component in such Sybil detection methods, and intuitively, the processing methods will affect detection performance. Thoroughly understanding this dependency on the graph-processing methods is very important for developing and deploying Sybil detection approaches. In this paper, we design experiments and conduct a systematic analysis of graph-based Sybil detection with respect to different graph preprocessing methods in selected network environments. The experimental results disclose the sensitivity of the accuracy and robustness of Sybil detection methods to different graph transformations.
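To make the kind of sensitivity experiment concrete, the hedged sketch below runs a generic trust-propagation Sybil scorer (personalized PageRank seeded at assumed-honest nodes, not any detector from the paper) before and after one illustrative preprocessing step; the graph, seeds, and pruning rule are placeholders.

```python
import networkx as nx

G = nx.karate_club_graph()                      # stand-in social graph
honest_seeds = {0, 33}                          # nodes assumed honest
personalization = {n: (1.0 if n in honest_seeds else 0.0) for n in G}

def sybil_scores(graph):
    """Personalized PageRank seeded at honest nodes: a low score marks a Sybil suspect."""
    pers = {n: personalization.get(n, 0.0) for n in graph}
    return nx.pagerank(graph, alpha=0.85, personalization=pers)

raw = sybil_scores(G)

# One illustrative preprocessing step: drop degree-1 nodes before detection.
pruned = G.subgraph([n for n in G if G.degree(n) > 1]).copy()
refined = sybil_scores(pruned)

# Comparing the lowest-trust nodes before and after preprocessing shows the
# sensitivity to graph transformations that the paper studies systematically.
print("suspects (raw):    ", sorted(raw, key=raw.get)[:5])
print("suspects (refined):", sorted(refined, key=refined.get)[:5])
```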
2020-12-21
Enkhtaivan, B., Inoue, A..  2020.  Mediating Data Trustworthiness by Using Trusted Hardware between IoT Devices and Blockchain. 2020 IEEE International Conference on Smart Internet of Things (SmartIoT). :314–318.
In recent years, with the progress of data analysis methods utilizing artificial intelligence (AI) technology, concepts of smart cities collecting data from IoT devices and creating values by analyzing it have been proposed. However, making sure that the data is not tampered with is of the utmost importance. One way to do this is to utilize blockchain technology to record and trace the history of the data. Park and Kim proposed ensuring the trustworthiness of the data by utilizing an IoT device with a trusted execution environment (TEE). Also, Guan et al. proposed authenticating an IoT device and mediating data using a TEE. For the authentication, they use the physically unclonable function of the IoT device. Usually, IoT devices suffer from the lack of resources necessary for creating transactions for the blockchain ledger. In this paper, we present a secure protocol in which a TEE acts as a proxy to the IoT devices and creates the necessary transactions for the blockchain. We use an authenticated encryption method on the data transmission between the IoT device and TEE to authenticate the device and ensure the integrity and confidentiality of the data generated by the IoT devices.
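A minimal sketch of the authenticated-encryption step between device and TEE proxy, assuming AES-GCM with a pre-shared key (the paper's PUF-based device authentication and the blockchain transaction format are not reproduced):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # shared device/TEE key (assumed pre-provisioned)

# IoT device side: encrypt a sensor reading, binding it to a device identifier.
nonce = os.urandom(12)
reading = b'{"temp": 21.4, "ts": 1699999999}'
ciphertext = AESGCM(key).encrypt(nonce, reading, b"device-42")

# TEE proxy side: decryption succeeds only if the data is authentic and intact;
# the proxy would then package the verified payload into a blockchain transaction.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, b"device-42")
assert plaintext == reading
```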
Liu, Q., Wu, W., Liu, Q., Huangy, Q..  2020.  T2DNS: A Third-Party DNS Service with Privacy Preservation and Trustworthiness. 2020 29th International Conference on Computer Communications and Networks (ICCCN). :1–11.
We design a third-party DNS service named T2DNS. T2DNS serves client DNS queries with the following features: protecting clients from channel and server attackers, providing trustworthiness proof to clients, being compatible with the existing Internet infrastructure, and introducing bounded overhead. T2DNS's privacy preservation is achieved by a hybrid protocol of encryption and obfuscation, and its service proxy is implemented on Intel SGX. We overcome the challenges of scaling the initialization process, bounding the obfuscation overhead, and tuning practical system parameters. We prototype T2DNS, and experiment results show that T2DNS is fully functional, has acceptable overhead in comparison with other solutions, and is scalable to the number of clients.
Figueiredo, N. M., Rodríguez, M. C..  2020.  Trustworthiness in Sensor Networks: A Reputation-Based Method for Weather Stations. 2020 International Conference on Omni-layer Intelligent Systems (COINS). :1–6.
Trustworthiness is a soft-security feature that evaluates the correct behavior of nodes in a network. More specifically, it tries to answer the following question: how much should we trust a certain node? To determine the trustworthiness of a node, our approach focuses on two reputation indicators: self-data trust, which evaluates the data generated by the node itself in light of its historical data; and peer-data trust, which uses the data of the nearest nodes. In this paper, we show how these two indicators can be calculated using the Gaussian overlap and the Pearson correlation. The paper includes a validation of our trustworthiness approach using real data from unofficial and official weather stations in Portugal. This is a representative scenario of the current situation in many other areas, where different entities provide different kinds of data using autonomous sensors continuously over networks.
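A hedged sketch of how the two indicators might be computed for one-dimensional readings is shown below; the Gaussian overlap is approximated with the Bhattacharyya coefficient and all data values are illustrative, so the paper's exact formulas may differ.

```python
import numpy as np

def gaussian_overlap(mu1, s1, mu2, s2):
    """Bhattacharyya coefficient of two 1-D Gaussians (1.0 = identical)."""
    return np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * \
           np.exp(-((mu1 - mu2) ** 2) / (4 * (s1**2 + s2**2)))

history = np.array([14.1, 14.8, 15.2, 14.9, 15.0, 14.6])   # node's own past readings
current = np.array([15.1, 15.3, 14.7, 15.0, 14.8, 15.2])   # node's latest readings
peer    = np.array([15.0, 15.4, 14.6, 14.9, 14.9, 15.3])   # nearest station's readings

self_trust = gaussian_overlap(history.mean(), history.std(),
                              current.mean(), current.std())
peer_trust = np.corrcoef(current, peer)[0, 1]               # Pearson correlation

print(f"self-data trust ~ {self_trust:.2f}, peer-data trust ~ {peer_trust:.2f}")
```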
Neises, J., Moldovan, G., Walloschke, T., Popovici, B..  2020.  Trustworthiness in Supply Chains: A Modular Extensible Approach Applied to Industrial IoT. 2020 Global Internet of Things Summit (GIoTS). :1–6.
Typical transactions in cross-company Industry 4.0 supply chains require a dynamically evaluable form of trustworthiness. Specific requirements on the parties involved, down to the machine level, for automatically verifiable operations shall therefore facilitate realizing the economic advantages of future flexible process chains in production. The core of the paper is a modular and extensible model for the assessment of trustworthiness in industrial IoT based on the Industrial Internet Security Framework of the Industrial Internet Consortium, which, among other things, defines the five trustworthiness key characteristics of NIST. This is the starting point for a flexible model, which contains features as discussed in ISO/IEC JTC 1/AG 7 N51 or trustworthiness profiles as used in regulatory requirements. Specific minimum and maximum requirement parameters define the range of trustworthy operation. An automated calculation of trustworthiness in a dynamic environment based on an initial trust metric is presented. The evaluation can be device-based, connection-based, behaviour-based and context-based, and can thus become part of measurable, trustworthy, monitorable Industry 4.0 scenarios. Finally, the dynamic evaluation of automatable trust models of industrial components is illustrated based on the Multi-Vendor-Industry scenario of the Horizon 2020 project SecureIoT (grant agreement number 779899).
Jithish, J., Sankaran, S., Achuthan, K..  2020.  Towards Ensuring Trustworthiness in Cyber-Physical Systems: A Game-Theoretic Approach. 2020 International Conference on COMmunication Systems NETworkS (COMSNETS). :626–629.

The emergence of Cyber-Physical Systems (CPSs) is a potential paradigm shift for the usage of Information and Communication Technologies (ICT). From predominantly a facilitator of information and communication services, the role of ICT in the present age has expanded to the management of objects and resources in the physical world. Thus, it is imperative to devise mechanisms to ensure the trustworthiness of data to secure vulnerable devices against security threats. This work presents an analytical framework based on non-cooperative game theory to evaluate the trustworthiness of individual sensor nodes that constitute the CPS. The proposed game-theoretic model captures the factors impacting the trustworthiness of CPS sensor nodes. Further, the model is used to estimate the Nash equilibrium solution of the game, to derive a trust threshold criterion. The trust threshold represents the minimum trust score required to be maintained by individual sensor nodes during CPS operation. Sensor nodes with trust scores below the threshold are potentially malicious and may be removed or isolated to ensure the secure operation of CPS.

Cheng, Z., Chow, M.-Y..  2020.  An Augmented Bayesian Reputation Metric for Trustworthiness Evaluation in Consensus-based Distributed Microgrid Energy Management Systems with Energy Storage. 2020 2nd IEEE International Conference on Industrial Electronics for Sustainable Energy Systems (IESES). 1:215–220.
Consensus-based distributed microgrid energy management systems are among the most widely used distributed control strategies in the microgrid area. To improve their cybersecurity, the system needs to evaluate the trustworthiness of the participating agents in addition to conventional cryptography efforts. This paper proposes a novel augmented reputation metric to evaluate the agents' trustworthiness in a distributed fashion. The proposed metric adopts a novel augmentation method to substantially improve trust evaluation and attack detection performance under three typical difficult-to-detect attack patterns. The proposed metric is implemented and validated on a real-time hardware-in-the-loop (HIL) microgrid testbed.
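The paper's augmentation method is not reproduced here, but a plain Beta reputation score of the kind being augmented can be sketched as follows; the observation stream and threshold are illustrative assumptions.

```python
def beta_reputation(consistent: int, inconsistent: int) -> float:
    """Expected probability of good behaviour under a Beta posterior (uniform prior)."""
    alpha = consistent + 1
    beta = inconsistent + 1
    return alpha / (alpha + beta)

# An agent behaves for 40 consensus rounds, then injects 10 inconsistent updates.
observations = [1] * 40 + [0] * 10
good = sum(observations)
score = beta_reputation(good, len(observations) - good)
print(f"reputation = {score:.2f}")

# Simple policy: agents below a trust threshold are excluded from consensus.
TRUST_THRESHOLD = 0.85   # illustrative value
print("exclude from consensus" if score < TRUST_THRESHOLD else "keep in consensus")
```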
Huang, H., Zhou, S., Lin, J., Zhang, K., Guo, S..  2020.  Bridge the Trustworthiness Gap amongst Multiple Domains: A Practical Blockchain-based Approach. ICC 2020 - 2020 IEEE International Conference on Communications (ICC). :1–6.
In isolated network domains, global trustworthiness (e.g., a consistent network view) is critical to multiple-domain business partners who aim to carry out trusted cooperation based on their individual isolated network views. However, achieving such global trustworthiness across distributed network domains is a challenge: when multiple-domain partners are required to exchange their local domain views with each other, it is difficult to ensure the data trustworthiness among them. In addition, the isolated domain view held by each partner is prone to being destroyed by malicious falsification attacks. To this end, we propose a blockchain-based approach that can ensure trustworthiness among multiple-party domains. In this paper, we mainly present the design and implementation of the proposed trustworthiness-protection system. A cloud-based prototype and a local testbed are developed based on Ethereum. Finally, experimental results demonstrate the effectiveness of the proposed prototype and testbed.
2020-12-07
Silva, J. L. da, Assis, M. M., Braga, A., Moraes, R..  2019.  Deploying Privacy as a Service within a Cloud-Based Framework. 2019 9th Latin-American Symposium on Dependable Computing (LADC). :1–4.
Continuous monitoring and risk assessment of privacy violations on cloud systems are needed by anyone whose business needs are subject to privacy regulations. Compliance with such regulations in dynamic systems demands appropriate techniques, tools and instruments. As-a-Service concepts can be a good option to support this task. Previous work presented PRIVAaaS, a software toolkit that allows controlling and reducing data leakages, thus preserving privacy, by providing anonymization capabilities to query-based systems. This short paper discusses the implementation details and deployment environment of an evolution of PRIVAaaS as a MAPE-K control loop within the ATMOSPHERE Platform. ATMOSPHERE is both a framework and a platform enabling the implementation of trustworthy cloud services. By enabling PRIVAaaS within ATMOSPHERE, privacy becomes one of several trustworthiness properties continuously monitored and assessed by the platform with a software-based feedback control loop known as MAPE-K.
Allig, C., Leinmüller, T., Mittal, P., Wanielik, G..  2019.  Trustworthiness Estimation of Entities within Collective Perception. 2019 IEEE Vehicular Networking Conference (VNC). :1–8.
The idea behind collective perception is to improve vehicles' awareness about their surroundings. Every vehicle shares information describing its perceived environment by means of V2X communication. Similar to other information shared using V2X communication, collective perception information is potentially safety relevant, which means there is a need to assess the reliability and quality of received information before further processing. Transmitted information may have been forged by attackers or contain inconsistencies e.g. caused by malfunctions. This paper introduces a novel approach for estimating a belief that a pair of entities, e.g. two remote vehicles or the host vehicle and a remote vehicle, within a Vehicular ad hoc Network (VANET) are both trustworthy. The method updates the belief based on the consistency of the data that both entities provide. The evaluation shows that the proposed method is able to identify forged information.
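A hedged sketch of the general idea, one Bayesian belief update per consistency check on the data a pair of entities provides, is shown below; the likelihood values are assumptions and this is not the paper's estimator.

```python
# Likelihoods are assumed for illustration; the paper derives its own update rule.
P_CONSISTENT_IF_TRUSTED = 0.95   # P(consistent data | both entities trustworthy)
P_CONSISTENT_IF_NOT     = 0.40   # P(consistent data | at least one is not)

def update_belief(belief: float, consistent: bool) -> float:
    """One Bayesian update of P(both trustworthy) from a consistency check."""
    if consistent:
        num = P_CONSISTENT_IF_TRUSTED * belief
        den = num + P_CONSISTENT_IF_NOT * (1 - belief)
    else:
        num = (1 - P_CONSISTENT_IF_TRUSTED) * belief
        den = num + (1 - P_CONSISTENT_IF_NOT) * (1 - belief)
    return num / den

belief = 0.5   # uninformed prior for the entity pair
for consistent in [True, True, True, False, True, True]:
    belief = update_belief(belief, consistent)
    print(f"consistent={str(consistent):5}  belief={belief:.2f}")
```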
Yang, Z..  2019.  Fidelity: Towards Measuring the Trustworthiness of Neural Network Classification. 2019 IEEE Conference on Dependable and Secure Computing (DSC). :1–8.
With the increasing performance of neural networks on many security-critical tasks, the security concerns of machine learning have become increasingly prominent. Recent studies have shown that neural networks are vulnerable to adversarial examples: carefully crafted inputs with negligible perturbations on legitimate samples could mislead a neural network to produce adversary-selected outputs while humans can still correctly classify them. Therefore, we need an additional measurement on the trustworthiness of the results of a machine learning model, especially in adversarial settings. In this paper, we analyse the root cause of adversarial examples, and propose a new property, namely fidelity, of machine learning models to describe the gap between what a model learns and the ground truth learned by humans. One of its benefits is detecting adversarial attacks. We formally define fidelity, and propose a novel approach to quantify it. We evaluate the quantification of fidelity in adversarial settings on two neural networks. The study shows that involving the fidelity enables a neural network system to detect adversarial examples with true positive rate 97.7%, and false positive rate 1.67% on a studied neural network.