
Found 292 results

Filters: Keyword is Computational modeling
Kovach, Nicholas S., Lamont, Gary B..  2019.  Trust and Deception in Hypergame Theory. 2019 IEEE National Aerospace and Electronics Conference (NAECON). :262–268.
Hypergame theory has been used to model advantages in decision making. This research provides a formal representation of deception to further extend the hypergame model. In order to extend the model, we propose a hypergame-theoretic framework based on temporal logic to model decision making under the potential for trust and deception. Using the temporal hypergame model, the concept of trust is defined within the constraints of the model. With a formal definition of trust in hypergame theory, the concepts of distrust, mistrust, misperception, and deception are then constructed. These formal definitions are then applied to an Attacker-Defender hypergame to show how deception within the game can be formally modeled, and the resulting model is presented. This demonstrates how hypergame theory can be used to model trust, mistrust, misperception, and deception with a formal model.
Razaque, Abdul, Almiani, Muder, khan, Meer Jaro, Magableh, Basel, Al-Dmour, Ayman, Al-Rahayfeh, Amer.  2019.  Fuzzy-GRA Trust Model for Cloud Risk Management. 2019 Sixth International Conference on Software Defined Systems (SDS). :179–185.
Cloud computing is not adequately secured by the traditional trust methods currently in use, such as the global trust model and the local trust model, which are prone to security vulnerabilities. This paper introduces a trust model based on fuzzy mathematics and gray relational theory. Fuzzy mathematics and gray relational analysis (Fuzzy-GRA) aims to improve the poor dynamic adaptability of cloud computing. A Fuzzy-GRA platform is used to test and validate the behavior of the model. Furthermore, our proposed model is compared to other known models. Based on the experimental results, we show that our model has the edge over the existing models.
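The gray relational analysis at the core of a Fuzzy-GRA trust score can be illustrated with a short sketch; the trust sequences, the distinguishing coefficient, and the provider names below are illustrative, not taken from the paper:

```python
# Hedged sketch of grey relational analysis (GRA) scoring trust evidence
# against an ideal reference sequence. All values are illustrative.

def grey_relational_grade(reference, candidate, rho=0.5):
    """Grey relational grade of `candidate` w.r.t. `reference`.

    rho is the distinguishing coefficient (conventionally 0.5).
    """
    deltas = [abs(r - c) for r, c in zip(reference, candidate)]
    d_min, d_max = min(deltas), max(deltas)
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

# Ideal trust behaviour vs. two observed cloud providers.
ideal = [1.0, 1.0, 1.0, 1.0]
provider_a = [0.9, 0.95, 0.85, 0.9]   # consistently close to ideal
provider_b = [0.2, 0.9, 0.1, 0.8]     # erratic

grade_a = grey_relational_grade(ideal, provider_a)
grade_b = grey_relational_grade(ideal, provider_b)
```

A higher grade indicates behaviour closer to the ideal reference, so the erratic provider scores lower.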
Du, Jia, Wang, Zhe, Yang, Junqiang, Song, Xiaofeng.  2019.  Research on Cognitive Linkage of Network Security Equipment. 2019 International Conference on Robots Intelligent System (ICRIS). :296–298.
To solve the problems of weak linkage ability and the low intelligence of strategy allocation in existing network security devices, a new method of cognitive linkage of network security equipment is proposed by learning from the human brain. Firstly, the basic connotation and cognitive cycle of cognitive linkage are expounded. Secondly, the main functions of cognitive linkage are clarified. Finally, the cognitive linkage system model is constructed, and the information processing flow of cognitive linkage is described. Cognitive linkage of network security equipment provides a new way to effectively enhance the overall protection capability of network security equipment.
HANJRI, Adnane EL, HAYAR, Aawatif, Haqiq, Abdelkrim.  2019.  Combined Compressive Sampling Techniques and Features Detection using Kullback Leibler Distance to Manage Handovers. 2019 IEEE International Smart Cities Conference (ISC2). :504–507.
In this paper, we present a new handover technique which combines a distribution-analysis detector with compressive sampling. The proposed approach analyses the received signal's probability density function instead of demodulating and analysing the received signal itself, as in classical handover. The method exploits mathematical tools such as the Kullback-Leibler distance, the Akaike Information Criterion (AIC), and Akaike weights in order to blindly decide the best handover and the best base station (BS) for each user. The compressive sampling algorithm is designed to take advantage of the primary signals' sparsity and to preserve the linearity and properties of the original signal, so that the distribution-analysis detector can be applied to the compressed measurements.
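The decision step the abstract describes — comparing received-signal pdfs with the Kullback-Leibler distance and ranking candidates with Akaike weights — might be sketched as follows; the histogram bins, base-station pdfs, and the AIC proxy are invented for illustration:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler distance D(p || q) over histogram bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def akaike_weights(aic_values):
    """Akaike weights: relative likelihood of each candidate model."""
    a_min = min(aic_values)
    rel = [math.exp(-0.5 * (a - a_min)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical bin probabilities: the user's received-signal pdf and
# reference pdfs for two candidate base stations.
user_pdf = [0.1, 0.4, 0.4, 0.1]
bs_pdfs = {"BS1": [0.1, 0.45, 0.35, 0.1], "BS2": [0.4, 0.1, 0.1, 0.4]}

kl = {bs: kl_divergence(user_pdf, pdf) for bs, pdf in bs_pdfs.items()}
best_bs = min(kl, key=kl.get)                    # smallest distance wins

weights = akaike_weights([2 * k for k in kl.values()])  # toy AIC proxy
```

The base station whose reference distribution is closest to the observed one is selected blindly, without demodulating the signal.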
Sani, Abubakar Sadiq, Yuan, Dong, Bao, Wei, Dong, Zhao Yang, Vucetic, Branka, Bertino, Elisa.  2019.  Universally Composable Key Bootstrapping and Secure Communication Protocols for the Energy Internet. IEEE Transactions on Information Forensics and Security. 14:2113–2127.
The Energy Internet is an advanced smart grid solution to increase energy efficiency by jointly operating multiple energy resources via the Internet. However, such an increasing integration of energy resources requires secure and efficient communication in the Energy Internet. To address such a requirement, we propose a new secure key bootstrapping protocol to support the integration and operation of energy resources. By using a universal composability model that provides a strong security notion for designing and analyzing cryptographic protocols, we define an ideal functionality that supports several cryptographic primitives used in this paper. Furthermore, we provide an ideal functionality for key bootstrapping and secure communication, which allows exchanged session keys to be used for secure communication in an ideal manner. We propose the first secure key bootstrapping protocol that enables a user to verify the identities of other users before key bootstrapping. We also present a secure communication protocol for unicast and multicast communications. The ideal functionalities help in the design and analysis of the proposed protocols. We perform some experiments to validate the performance of our protocols, and the results show that our protocols are superior to the existing related protocols and are suitable for the Energy Internet. As a proof of concept, we apply our functionalities to a practical key bootstrapping protocol, namely generic bootstrapping architecture.
Kansuwan, Thivanon, Chomsiri, Thawatchai.  2019.  Authentication Model using the Bundled CAPTCHA OTP Instead of Traditional Password. 2019 Joint International Conference on Digital Arts, Media and Technology with ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT-NCON). :5—8.
In this research, we present identity verification using the “Bundled CAPTCHA OTP” instead of the traditional password. The approach combines a CAPTCHA and a One-Time Password (OTP) to reduce processing steps; moreover, the user does not have to remember any password. The Bundled CAPTCHA OTP, a unique random parameter for each login, is used instead of a traditional password. We use e-mail as the channel for delivering the Bundled CAPTCHA OTP to the client side because it is easier to apply than mobile phones: phones may crash, be lost, or be changed frequently, and are more vulnerable to unauthorized access than e-mail. In this paper, we present a processing model of the proposed system and discuss the advantages and disadvantages of the model.
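A minimal sketch of issuing and checking such a bundled one-time code is given below; the alphabet, code length, and TTL are assumptions, and the CAPTCHA rendering and e-mail delivery steps are out of scope:

```python
import secrets
import string

# Illustrative generation of a "Bundled CAPTCHA OTP": a one-time random
# code that would be rendered as a CAPTCHA image and e-mailed to the user.
ALPHABET = string.ascii_uppercase + string.digits

def generate_bundled_captcha_otp(length=6, ttl_seconds=300):
    code = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return {"code": code, "ttl": ttl_seconds}    # ttl: assumed validity window

def verify(stored, submitted):
    # constant-time comparison; case-insensitive for user convenience
    return secrets.compare_digest(stored["code"], submitted.strip().upper())

otp = generate_bundled_captcha_otp()
```

Because each login draws a fresh random code, nothing reusable is stored on the user's side, which is the property that replaces the memorized password.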
Wu, Yi, Liu, Jian, Chen, Yingying, Cheng, Jerry.  2019.  Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples. 2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN). :1—5.
As automatic speech recognition (ASR) systems have been integrated into a diverse set of devices around us in recent years, their security vulnerabilities have become an increasing concern for the public. Existing studies have demonstrated that deep neural networks (DNNs), acting as the computation core of ASR systems, are vulnerable to deliberately designed adversarial attacks. Based on the gradient descent algorithm, existing studies have successfully generated adversarial samples that can disturb ASR systems and produce the transcript texts designed by adversaries. Most of this research simulated white-box attacks, which require knowledge of all the components in the targeted ASR system. In this work, we propose the first semi-black-box attack against the ASR system Kaldi. Requiring only partial information from Kaldi and none from the DNN, we can embed malicious commands into a single audio clip using a gradient-independent genetic algorithm. The crafted audio clip is recognized as the embedded malicious commands by Kaldi while remaining unnoticeable to humans. Experiments show that our attack can achieve a high attack success rate with unnoticeable perturbations to three types of audio clips (pop music, pure music, and human command) without needing the underlying DNN model's parameters and architecture.
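A gradient-independent genetic search of the kind the abstract mentions can be sketched as follows; the fitness function is a toy stand-in for a black-box score, not Kaldi's actual pipeline, and all dimensions and parameters are invented:

```python
import random

random.seed(0)

# Toy stand-in for a black-box objective: fitness is higher when the
# perturbed "audio" vector matches a target command template.
TARGET = [0.2, -0.4, 0.7, 0.1]

def fitness(perturbation, audio):
    candidate = [a + p for a, p in zip(audio, perturbation)]
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(p, scale=0.1):
    return [x + random.uniform(-scale, scale) for x in p]

def crossover(p1, p2):
    return [random.choice(pair) for pair in zip(p1, p2)]

def genetic_attack(audio, pop_size=30, generations=200):
    """Evolve a perturbation without any gradient information."""
    pop = [[random.uniform(-0.5, 0.5) for _ in audio] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, audio), reverse=True)
        elite = pop[: pop_size // 2]          # keep the fittest half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, audio))

audio = [0.0, 0.0, 0.0, 0.0]
best = genetic_attack(audio)
```

Only black-box fitness evaluations drive the search, which is what makes the attack feasible when the DNN's internals are inaccessible.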
Jing, Huiyun, Meng, Chengrui, He, Xin, Wei, Wei.  2019.  Black Box Explanation Guided Decision-Based Adversarial Attacks. 2019 IEEE 5th International Conference on Computer and Communications (ICCC). :1592—1596.
Adversarial attacks are a hot research field in artificial-intelligence security. Decision-based black-box adversarial attacks are much more appropriate in real-world scenarios, where only the final decisions of the targeted deep neural networks are accessible. However, since there is no available guidance for searching for the imperceptible adversarial perturbation, boundary attack, one of the best-performing decision-based black-box attacks, carries out a computationally expensive search. To improve attack efficiency, we propose a novel black-box-explanation-guided decision-based black-box adversarial attack. Firstly, the problem of decision-based adversarial attacks is modeled as a derivative-free, constrained optimization problem. To solve this optimization problem, a black-box-explanation-guided constrained random search method is proposed to find the imperceptible adversarial example more quickly. The insights into the targeted deep neural networks explored by the black-box explanation are fully used to accelerate the computationally expensive random search. Experimental results demonstrate that our proposed attack improves attack efficiency by 64% compared with boundary attack.
Parafita, Álvaro, Vitrià, Jordi.  2019.  Explaining Visual Models by Causal Attribution. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :4167—4175.

Model explanations based on pure observational data cannot compute the effects of features reliably, due to their inability to estimate how each factor alteration could affect the rest. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, that represent the data distribution subject to interventions. With these models, we can compute counterfactuals, new samples that will inform us how the model reacts to feature changes on our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.

BOUGHACI, Dalila, BENMESBAH, Mounir, ZEBIRI, Aniss.  2019.  An improved N-grams based Model for Authorship Attribution. 2019 International Conference on Computer and Information Sciences (ICCIS). :1—6.

Authorship attribution is the problem of studying an anonymous text and finding the corresponding author in a set of candidate authors. In this paper, we propose a method based on N-grams model for the problem of authorship attribution. Several measures are used to assign an anonymous text to an author. The different variants of the proposed method are implemented and validated on PAN benchmarks. The numerical results are encouraging and demonstrate the benefit of the proposed idea.
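Character n-gram profiles with a simple profile-distance measure are one common instantiation of such a model; the sketch below is illustrative and is not the paper's exact method or measures:

```python
from collections import Counter

def ngram_profile(text, n=3):
    """Relative frequencies of character n-grams."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def dissimilarity(p, q):
    """Symmetric squared-difference distance between two profiles."""
    keys = set(p) | set(q)
    return sum((p.get(k, 0) - q.get(k, 0)) ** 2 for k in keys)

def attribute(anonymous, candidates, n=3):
    """Assign the anonymous text to the closest candidate profile."""
    anon = ngram_profile(anonymous, n)
    return min(candidates,
               key=lambda a: dissimilarity(anon, ngram_profile(candidates[a], n)))

candidates = {
    "author_a": "the ship sailed the silver sea under the silent stars",
    "author_b": "revenue grew and margins improved across the quarter",
}
unknown = "the sea was silent and the stars were silver"
best = attribute(unknown, candidates)
```

In practice each candidate profile would be built from a large corpus of that author's texts rather than a single sentence.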

Jafariakinabad, Fereshteh, Hua, Kien A..  2019.  Style-Aware Neural Model with Application in Authorship Attribution. 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA). :325—328.

Writing style is a combination of consistent decisions associated with a specific author at different levels of language production, including lexical, syntactic, and structural. In this paper, we introduce a style-aware neural model to encode document information from three stylistic levels and evaluate it in the domain of authorship attribution. First, we propose a simple way to jointly encode syntactic and lexical representations of sentences. Subsequently, we employ an attention-based hierarchical neural network to encode the syntactic and semantic structure of sentences in documents while rewarding the sentences which contribute more to capturing the writing style. Our experimental results, based on four benchmark datasets, reveal the benefits of encoding document information from all three stylistic levels when compared to the baseline methods in the literature.

Yee, George O.M..  2019.  Modeling and Reducing the Attack Surface in Software Systems. 2019 IEEE/ACM 11th International Workshop on Modelling in Software Engineering (MiSE). :55—62.

In today's world, software is ubiquitous and relied upon to perform many important and critical functions. Unfortunately, software is riddled with security vulnerabilities that invite exploitation. Attackers are particularly attracted to software systems that hold sensitive data with the goal of compromising the data. For such systems, this paper proposes a modeling method applied at design time to identify and reduce the attack surface, which arises due to the locations containing sensitive data within the software system and the accessibility of those locations to attackers. The method reduces the attack surface by changing the design so that the number of such locations is reduced. The method performs these changes on a graphical model of the software system. The changes are then considered for application to the design of the actual system to improve its security.

Jeon, Joohyung, Kim, Junhui, Kim, Joongheon, Kim, Kwangsoo, Mohaisen, Aziz, Kim, Jong-Kook.  2019.  Privacy-Preserving Deep Learning Computation for Geo-Distributed Medical Big-Data Platforms. 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks – Supplemental Volume (DSN-S). :3–4.
This paper proposes a distributed deep learning framework for privacy-preserving medical data training. In order to avoid leakage of patients' data from medical platforms, the hidden layers in the deep learning framework are separated: the first layer is kept in the platform and the other layers are kept in a centralized server. Whereas keeping the original patients' data in local platforms maintains their privacy, utilizing the server for the subsequent layers improves learning performance by using the data from every platform during training.
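The layer split can be illustrated with a minimal forward pass in which only first-layer activations, never raw records, leave the platform; the tiny network, its dimensions, and the class names are invented for illustration:

```python
import random

random.seed(1)

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, x) for x in v]

class Client:
    """The medical platform: holds the raw data and only the first layer."""
    def __init__(self, in_dim, hid_dim):
        self.W1 = [[random.gauss(0, 0.5) for _ in range(in_dim)]
                   for _ in range(hid_dim)]
    def forward(self, x):
        return relu(matvec(self.W1, x))   # only these activations are sent

class Server:
    """The centralized server: holds all subsequent layers."""
    def __init__(self, hid_dim, out_dim):
        self.W2 = [[random.gauss(0, 0.5) for _ in range(hid_dim)]
                   for _ in range(out_dim)]
    def forward(self, h):
        return matvec(self.W2, h)

patient_record = [0.3, 1.2, -0.7]         # never leaves the platform
client, server = Client(3, 4), Server(4, 2)
activations = client.forward(patient_record)
prediction = server.forward(activations)
```

Training would additionally pass gradients back across the same cut; the sketch shows only the inference-time data flow.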
Huang, Kaiqing.  2019.  Multi-Authority Attribute-Based Encryption for Resource-Constrained Users in Edge Computing. 2019 International Conference on Information Technology and Computer Application (ITCA). :323–326.
Multi-authority attribute-based encryption (MA-ABE) is a promising technique to protect data privacy and achieve fine-grained access control in edge computing for the Internet of Things (IoT). However, most existing MA-ABE schemes suffer from expensive computational cost in the encryption and decryption phases, which is not practical for resource-constrained users in IoT. We propose a large-universe MA-CP-ABE scheme with online/offline encryption and outsourced decryption. In our scheme, most of the expensive encryption operations are executed in the user's initialization phase by adding a reusable ciphertext pool, in addition to splitting the encryption algorithm into online and offline phases. Moreover, most decryption operations are outsourced to a nearby edge server, reducing the computation overhead of decryption. The proposed scheme is proven statically secure under the q-DPBDHE2 assumption. The performance analysis results indicate that the proposed scheme is efficient and suitable for resource-constrained users in edge computing for IoT.
Regol, Florence, Pal, Soumyasundar, Coates, Mark.  2019.  Node Copying for Protection Against Graph Neural Network Topology Attacks. 2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP). :709–713.
Adversarial attacks can affect the performance of existing deep learning models. With the increased interest in graph-based machine learning techniques, there have been investigations which suggest that these models are also vulnerable to attacks. In particular, corruptions of the graph topology can severely degrade the performance of graph-based learning algorithms. This is due to the fact that the prediction capability of these algorithms relies mostly on the similarity structure imposed by the graph connectivity. Therefore, detecting the location of the corruption and correcting the induced errors becomes crucial. There has been some recent work which tackles the detection problem; however, these methods do not address the effect of the attack on the downstream learning task. In this work, we propose an algorithm that uses node copying to mitigate the degradation in classification that is caused by adversarial attacks. The proposed methodology is applied only after the model for the downstream task is trained, and the added computation cost scales well for large graphs. Experimental results show the effectiveness of our approach on several real-world datasets.
Zhou, Kexin, Wang, Jian.  2019.  Trajectory Protection Scheme Based on Fog Computing and K-anonymity in IoT. 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS). :1—6.
With the development of cloud computing technology in the Internet of Things (IoT), the trajectory privacy in location-based services (LBSs) has attracted much attention. Most of the existing work adopts point-to-point and centralized models, which will bring a heavy burden to the user and cause performance bottlenecks. Moreover, previous schemes did not consider both online and offline trajectory protection and ignored some hidden background information. Therefore, in this paper, we design a trajectory protection scheme based on fog computing and k-anonymity for real-time trajectory privacy protection in continuous queries and offline trajectory data protection in trajectory publication. Fog computing provides the user with local storage and mobility to ensure physical control, and k-anonymity constructs the cloaking region for each snapshot in terms of time-dependent query probability and transition probability. In this way, two k-anonymity-based dummy generation algorithms are proposed, which achieve the maximum entropy of online and offline trajectory protection. Security analysis and simulation results indicate that our scheme can realize trajectory protection effectively and efficiently.
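The entropy-maximizing dummy selection can be sketched as follows; the cell identifiers and query probabilities are illustrative, and the exhaustive search over combinations stands in for the paper's dummy-generation algorithms:

```python
import math
from itertools import combinations

def entropy(probs):
    """Shannon entropy (bits) of the normalized probability list."""
    total = sum(probs)
    return -sum((p / total) * math.log2(p / total) for p in probs if p > 0)

def build_cloaking_region(real_cell, candidates, k):
    """Pick k-1 dummy cells whose query probabilities, together with the
    real cell's, maximize entropy, i.e. an observer's uncertainty about
    which cell is the real one."""
    best = max(combinations(candidates, k - 1),
               key=lambda ds: entropy([real_cell[1]] + [p for _, p in ds]))
    return [real_cell[0]] + [cell for cell, _ in best]

# (cell_id, time-dependent query probability) pairs; values are illustrative.
real = ("c7", 0.20)
candidates = [("c1", 0.05), ("c2", 0.22), ("c3", 0.19),
              ("c4", 0.60), ("c5", 0.21)]
region = build_cloaking_region(real, candidates, k=4)
```

Maximum entropy is reached when all k cells are queried with near-equal probability, so the selection favors dummies whose probabilities match the real cell's.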
Berady, Aimad, Viet Triem Tong, Valerie, Guette, Gilles, Bidan, Christophe, Carat, Guillaume.  2019.  Modeling the Operational Phases of APT Campaigns. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :96—101.
In the context of Advanced Persistent Threat (APT) attacks, this paper introduces a model, called Nuke, which aims to provide a more operational reading of the attackers' lifecycle in a compromised network. It captures the notions of regression and of the repeated achievement of final objectives. By confronting this model with examples of recent attacks (the Equifax data breach and the TV5Monde sabotage), we emphasize the importance of attack chronology in Cyber Threat Intelligence (CTI) reports, as well as the Tactics, Techniques and Procedures (TTPs) used by attackers during their progression.
Safar, Jamie L., Tummala, Murali, McEachen, John C., Bollmann, Chad.  2019.  Modeling Worm Propagation and Insider Threat in Air-Gapped Network using Modified SEIQV Model. 2019 13th International Conference on Signal Processing and Communication Systems (ICSPCS). :1—6.
Computer worms pose a major threat to computer and communication networks due to the rapid speed at which they propagate. Biologically based epidemic models have been widely used to analyze the propagation of worms in computer networks. For an air-gapped network with an insider threat, we propose a modified Susceptible-Exposed-Infected-Quarantined-Vaccinated (SEIQV) model called the Susceptible-Exposed-Infected-Quarantined-Patched (SEIQP) model. We describe the assumptions that apply to this model, define a set of differential equations that characterize the system dynamics, and solve for the basic reproduction number. We then simulate and analyze the parameters controlled by the insider threat to determine where resources should be allocated to attain different objectives and results.
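A SEIQP-style compartment model can be integrated numerically as below; the rate terms and every parameter value are assumptions for illustration, not the differential equations or numbers from the paper:

```python
# Illustrative Euler integration of SEIQP-style worm dynamics in an
# air-gapped network. All rates and the initial state are invented.
def step(state, beta=0.8, alpha=0.2, delta=0.3, gamma=0.1, mu=0.15, dt=0.1):
    S, E, I, Q, P = state
    dS = -beta * S * I                    # susceptible hosts become exposed
    dE = beta * S * I - alpha * E         # exposed hosts become infectious
    dI = alpha * E - (gamma + delta) * I  # infectious: patched or quarantined
    dQ = delta * I - mu * Q               # quarantined hosts get patched
    dP = gamma * I + mu * Q               # patched hosts stay patched
    return [S + dS * dt, E + dE * dt, I + dI * dt, Q + dQ * dt, P + dP * dt]

state = [0.99, 0.0, 0.01, 0.0, 0.0]       # fractions of hosts in each class
for _ in range(2000):                     # integrate to t = 200
    state = step(state)
final = state
```

Because every outflow of one compartment is an inflow of another, the population fraction is conserved exactly, which is a useful sanity check on the implementation.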
Juuti, Mika, Szyller, Sebastian, Marchal, Samuel, Asokan, N..  2019.  PRADA: Protecting Against DNN Model Stealing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :512–527.
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
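The core idea of PRADA — watching the distribution of distances between successive queries — can be sketched as below; note that the variance test here is a crude stand-in for PRADA's actual normality test, and the query data are synthetic:

```python
import math
import random

random.seed(2)

def min_distance(query, history):
    return min(math.dist(query, q) for q in history)

def flag_client(queries, spread_threshold=0.2):
    """Crude stand-in for PRADA's distribution test: benign queries keep a
    wide spread of minimum inter-query distances, while synthetic
    extraction queries collapse it. (PRADA itself fits the distances to a
    normal distribution and raises an alarm on deviation.)"""
    dists = [min_distance(q, queries[:i]) for i, q in enumerate(queries) if i]
    mean = sum(dists) / len(dists)
    var = sum((d - mean) ** 2 for d in dists) / len(dists)
    return var < spread_threshold ** 2

# Benign client: queries spread over the input space.
benign = [[random.uniform(0, 10) for _ in range(3)] for _ in range(50)]
# Extraction attack: tiny perturbations around a single seed sample.
seed_query = [5.0, 5.0, 5.0]
attack = [[x + random.uniform(-0.05, 0.05) for x in seed_query]
          for _ in range(50)]
```

The detector needs no knowledge of the protected model itself; it only observes the query stream arriving at the prediction API.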
Qin, Xinghong, Li, Bin, Huang, Jiwu.  2019.  A New Spatial Steganographic Scheme by Modeling Image Residuals with Multivariate Gaussian Model. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2617–2621.
Embedding costs used in content-adaptive image steganographic schemes can be defined in a heuristic way or with a statistical model. Inspired by previous steganographic methods, i.e., MG (multivariate Gaussian model) and MiPOD (minimizing the power of optimal detector), we propose a model-driven scheme in this paper. Firstly, we model image residuals obtained by high-pass filtering with a quantized multivariate Gaussian distribution. Then, we derive the approximated Fisher Information (FI). We show that FI is related to both the Gaussian variance and the filter coefficients. Lastly, by selecting the maximum FI value derived with various filters as the final FI, we obtain the embedding costs. Experimental results show that the proposed scheme is comparable to existing steganographic methods in resisting steganalysis equipped with rich models and selection-channel-aware rich models. It is also computationally efficient compared to MiPOD, which is the state-of-the-art model-driven method.
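The variance-to-cost step can be sketched in one dimension as below; the filter, window size, variance floor, and the cost form I ≈ 2/σ⁴ (the form used in MiPOD) are simplifications of the paper's 2-D construction:

```python
# Hedged 1-D sketch: model residuals as zero-mean Gaussians, estimate a
# per-pixel variance in a local window, and use Fisher information as the
# embedding cost. All constants are illustrative.
def high_pass(row):
    # simple residual: pixel minus the mean of its two neighbours
    return [row[i] - (row[i - 1] + row[i + 1]) / 2
            for i in range(1, len(row) - 1)]

def local_variance(residuals, radius=2, floor=1e-2):
    out = []
    for i in range(len(residuals)):
        window = residuals[max(0, i - radius): i + radius + 1]
        mean = sum(window) / len(window)
        var = sum((r - mean) ** 2 for r in window) / len(window)
        out.append(max(var, floor))           # variance floor for stability
    return out

def embedding_costs(row):
    res = high_pass(row)
    return [2.0 / v ** 2 for v in local_variance(res)]   # I ~ 2 / sigma^4

smooth = [100, 100, 101, 100, 100, 101, 100, 100]
textured = [100, 140, 90, 160, 80, 150, 95, 145]
cost_smooth = embedding_costs(smooth)
cost_textured = embedding_costs(textured)
```

Low-variance (smooth) regions yield high Fisher information and thus high costs, steering embedding changes into textured regions where they are statistically harder to detect.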
Reddy, Vijender Busi, Negi, Atul, Venkataraman, S, Venkataraman, V Raghu.  2019.  A Similarity based Trust Model to Mitigate Badmouthing Attacks in Internet of Things (IoT). 2019 IEEE 5th World Forum on Internet of Things (WF-IoT). :278—282.

In the Internet of Things (IoT), each object is addressable, trackable, and accessible on the Internet. To be useful, objects in IoT cooperate and exchange information. IoT networks are open, anonymous, and dynamic in nature, so a malicious object may enter the network and disrupt it. Trust models have been proposed to identify malicious objects and to improve the reliability of the network. Recommendations are the basis of trust computation in these models, which makes them vulnerable to badmouthing and collusion attacks. In this paper, we propose a similarity model to mitigate badmouthing and collusion attacks, and show that the proposed method efficiently removes the impact of malicious recommendations in trust computation.
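Deviation-based filtering of recommendations is one simple way to realize such a similarity model; the threshold, weights, and peer values below are illustrative, not the paper's:

```python
def filter_recommendations(own_experience, recommendations, tau=0.3):
    """Keep only recommendations similar to the evaluator's own experience;
    large deviations are treated as badmouthing or collusion. tau is an
    illustrative similarity threshold."""
    return {peer: r for peer, r in recommendations.items()
            if abs(r - own_experience) <= tau}

def trust(own_experience, recommendations, w_direct=0.6):
    """Combine direct experience with the filtered indirect evidence."""
    honest = filter_recommendations(own_experience, recommendations)
    if not honest:
        return own_experience
    indirect = sum(honest.values()) / len(honest)
    return w_direct * own_experience + (1 - w_direct) * indirect

own = 0.8                                   # direct observations of the object
recs = {"p1": 0.75, "p2": 0.85, "p3": 0.05, "p4": 0.10}  # p3, p4 badmouth
t_filtered = trust(own, recs)
t_naive = 0.6 * own + 0.4 * (sum(recs.values()) / len(recs))
```

With the two badmouthing peers filtered out, the trust value stays close to the direct observation instead of being dragged down.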

Wang, Tianhao, Kerschbaum, Florian.  2019.  Attacks on Digital Watermarks for Deep Neural Networks. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2622—2626.
Training deep neural networks is a computationally expensive task. Furthermore, models are often derived from proprietary datasets that have been carefully prepared and labelled. Hence, creators of deep learning models want to protect their models against intellectual property theft. However, this is not always possible, since the model may, e.g., be embedded in a mobile app for fast response times. As a countermeasure, watermarks for deep neural networks have been developed that embed secret information into the model. This information can later be retrieved by the creator to prove ownership. Uchida et al. proposed the first such watermarking method. The advantage of their scheme is that it does not compromise the accuracy of the model prediction. However, in this paper we show that their technique modifies the statistical distribution of the model. Using this modification we can not only detect the presence of a watermark, but even derive its embedding length and use this information to remove the watermark by overwriting it. We show analytically that our detection algorithm follows directly from their embedding algorithm, and we propose a possible countermeasure. Our findings should help to refine the definition of undetectability of watermarks for deep neural networks.
Komargodski, Ilan, Naor, Moni, Yogev, Eylon.  2017.  White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing. 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS). :622–632.
Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given by a black box, where to figure out whether a certain edge exists the box must be queried, then a large number of queries must be issued. But what if one is given a program or circuit for computing the existence of an edge? This problem was raised by Buss, and by Goldberg and Papadimitriou, in the context of TFNP, search problems with a guaranteed solution. We examine the relationship between black-box complexity and white-box complexity for search problems with a guaranteed solution, such as the above Ramsey problem. We show that under the assumption that collision-resistant hash functions exist (which follows from the hardness of problems such as factoring, discrete log, and learning with errors) the white-box Ramsey problem is hard, and this is true even if one is looking for a much smaller clique or independent set than the theorem guarantees. In general, one cannot hope to translate all black-box hardness for TFNP into white-box hardness: we show this by adapting results concerning the random oracle methodology and the impossibility of instantiating it. Another model we consider is the succinct black box, where there is a known upper bound on the size of the black box (but no limit on the computation time). In this case we show that for all TFNP problems there is an upper bound on the number of queries proportional to the description size of the box times the solution size. On the other hand, for promise problems this is not the case. Finally, we consider the complexity of graph property testing in the white-box model. We show a property which is hard to test even when one is given the program for computing the graph: whether the graph is a two-source extractor.
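The explicit-graph case — finding a clique or independent set of size about (1/2)·log2(n) while examining few edges — follows the classical greedy argument, sketched here with an illustrative edge oracle:

```python
# Greedy sketch of the classical Ramsey argument: repeatedly pick a vertex
# and keep whichever of its neighbours / non-neighbours is larger; the
# longer of the two resulting sets is a clique or an independent set of
# size at least about (1/2) * log2(n).
def ramsey_greedy(n, has_edge):
    alive = list(range(n))
    clique, indep = [], []
    while alive:
        v = alive.pop(0)
        neighbours = [u for u in alive if has_edge(v, u)]
        non_neighbours = [u for u in alive if not has_edge(v, u)]
        if len(neighbours) >= len(non_neighbours):
            clique.append(v)       # future clique members adjacent to v
            alive = neighbours
        else:
            indep.append(v)        # future indep members non-adjacent to v
            alive = non_neighbours
    return clique, indep

# Example black-box edge oracle: u and v are adjacent iff they share parity.
has_edge = lambda u, v: (u % 2) == (v % 2)
clique, indep = ramsey_greedy(16, has_edge)
```

Each vertex contributes queries only to the still-alive set, so the walk inspects far fewer edges than the whole graph; the hardness results in the paper concern the white-box setting, where even this cheap oracle access is replaced by a program.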
Biancardi, Beatrice, Wang, Chen, Mancini, Maurizio, Cafaro, Angelo, Chanel, Guillaume, Pelachaud, Catherine.  2019.  A Computational Model for Managing Impressions of an Embodied Conversational Agent in Real-Time. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :1—7.

This paper presents a computational model for managing an Embodied Conversational Agent's first impressions of warmth and competence towards the user. These impressions are important to manage because they can impact the user's perception of the agent and their willingness to continue the interaction with the agent. The model aims at detecting the user's impression of the agent and producing appropriate verbal and nonverbal agent behaviours in order to maintain a positive impression of warmth and competence. Users' impressions are recognized using a machine learning approach with facial expressions (action units), which are important indicators of users' affective states and intentions. The agent adapts its verbal and nonverbal behaviour in real time with a reinforcement learning algorithm that takes the user's impressions as the reward to select the most appropriate combination of verbal and nonverbal behaviour to perform. A user study to test the model in a contextualized interaction with users is also presented. Our hypotheses are that users' ratings differ when the agent adapts its behaviour according to our reinforcement learning algorithm, compared to when it does not adapt its behaviour to users' reactions (i.e., when it randomly selects its behaviours). The study shows a general tendency for the agent to perform better when using our model than in the random condition. Significant results show that users' ratings of the agent's warmth are influenced by their a priori beliefs about virtual characters, and that users judged the agent as more competent when it adapted its behaviour compared to the random condition.
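The behaviour-selection loop can be illustrated with a simple epsilon-greedy bandit; the behaviour set, the simulated impression detector, and all values below are invented, and the paper's actual reinforcement-learning setup is richer:

```python
import random

random.seed(3)

# Arms are verbal/nonverbal behaviour combinations; the reward is the
# impression detected from the user's facial expressions. Illustrative only.
BEHAVIOURS = ["warm_verbal+smile", "competent_verbal+nod", "neutral"]
TRUE_IMPRESSION = {"warm_verbal+smile": 0.8,
                   "competent_verbal+nod": 0.6,
                   "neutral": 0.2}

def detected_impression(behaviour):
    # noisy stand-in for the action-unit-based impression detector
    return TRUE_IMPRESSION[behaviour] + random.uniform(-0.1, 0.1)

counts = {b: 0 for b in BEHAVIOURS}
values = {b: 0.0 for b in BEHAVIOURS}

def select(eps=0.1):
    if random.random() < eps:
        return random.choice(BEHAVIOURS)   # explore
    return max(values, key=values.get)     # exploit current best estimate

for _ in range(500):
    b = select()
    r = detected_impression(b)
    counts[b] += 1
    values[b] += (r - values[b]) / counts[b]   # incremental mean update

best = max(values, key=values.get)
```

Over repeated interactions the agent concentrates on the behaviour combination that maintains the best detected impression, which is the adaptive condition the study compares against random selection.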

Lingasubramanian, Karthikeyan, Kumar, Ranveer, Gunti, Nagendra Babu, Morris, Thomas.  2018.  Study of hardware trojans based security vulnerabilities in cyber physical systems. 2018 IEEE International Conference on Consumer Electronics (ICCE). :1—6.

The dependability of Cyber Physical Systems (CPS) lies solely in the secure and reliable functionality of their backbone, the computing platform. The security of this platform is threatened not only by vulnerabilities in the software peripherals, but also by vulnerabilities in the hardware internals. Such threats can arise from malicious modifications to the integrated circuit (IC) based computing hardware, which can disable the system, leak information, or produce malfunctions. Such modifications to computing hardware are made possible by the globalization of the IC industry, where a computing chip can be manufactured anywhere in the world. In the complex computing environment of CPS, such modifications can be stealthy and hard to detect. Under such circumstances, the design of these malicious modifications, and eventually their detection, will be tied to the functionality and operation of the CPS. It is therefore imperative to address such threats by incorporating security awareness into computing-hardware design in a comprehensive manner that takes the entire system into consideration. In this paper, we present a study of the influence of hardware Trojans on closed-loop systems, which form the basis of CPS, and establish threat models. Using these models, we perform a case study on a critical CPS application: a gas-pipeline-based SCADA system. Through this process, we establish a completely virtual simulation platform along with a hardware-in-the-loop simulation platform for implementation and testing.