Biblio

Filters: Keyword is psychology
Plager, Trenton, Zhu, Ying, Blackmon, Douglas A..  2020.  Creating a VR Experience of Solitary Confinement. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). :692—693.
The goal of this project is to create a realistic VR experience of solitary confinement and study its impact on users. Although there have been active debates and studies on this subject, very few people have personal experience of solitary confinement. Our first aim is to create such an experience in VR to raise awareness of solitary confinement. We also want to conduct user studies to compare the VR solitary confinement experience with other types of media experiences, such as films or personal narrations. Finally, we want to study people's sense of time in such a VR environment.
Schiliro, F., Moustafa, N., Beheshti, A..  2020.  Cognitive Privacy: AI-enabled Privacy using EEG Signals in the Internet of Things. 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (DependSys). :73—79.

With the advent of Industry 4.0, the Internet of Things (IoT) and Artificial Intelligence (AI), smart entities are now able to read the minds of users by extracting cognitive patterns from electroencephalogram (EEG) signals. Such brain data may include users' experiences, emotions, motivations, and other previously private mental and psychological processes. Accordingly, users' cognitive privacy may be violated, and the right to cognitive privacy should protect individuals against unconsented intrusion by third parties into their brain data as well as against the unauthorized collection of those data. This has caused a growing concern among users and industry experts that laws are needed to protect the rights to cognitive liberty, mental privacy, mental integrity, and psychological continuity. In this paper, we propose an AI-enabled EEG model, namely Cognitive Privacy, that aims to protect data and to classify users and their tasks from EEG data. We present a model that protects data from disclosure using normalized correlation analysis and classifies subjects (i.e., a multi-classification problem) and their tasks (i.e., eye open and eye close as a binary classification problem) using a long short-term memory (LSTM) deep learning approach. The model has been evaluated using the EEG data set of PhysioNet BCI, and the results have revealed its high performance in classifying users and their tasks while achieving high data privacy.
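
To make the classification stage concrete, the following is a minimal sketch of an LSTM classifier over EEG windows, assuming the PhysioNet recordings have already been segmented into fixed-length, normalized windows; the window shape, layer sizes, and subject count below are illustrative assumptions, not the authors' configuration.

```python
# Minimal LSTM classification sketch (not the authors' code).
# Assumption: EEG is preprocessed into (n_timesteps x n_channels) windows.
import numpy as np
from tensorflow.keras import layers, models

n_timesteps, n_channels = 160, 64   # hypothetical: 1 s at 160 Hz, 64 electrodes
n_subjects = 10                     # hypothetical subject count for identification

def build_lstm_classifier(n_classes):
    """Stacked LSTM over raw EEG windows with a softmax head."""
    model = models.Sequential([
        layers.Input(shape=(n_timesteps, n_channels)),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(32),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

subject_model = build_lstm_classifier(n_subjects)  # multi-class: which subject
task_model = build_lstm_classifier(2)              # binary: eyes open vs. closed

# Smoke test with random arrays standing in for normalized EEG windows.
X = np.random.randn(32, n_timesteps, n_channels).astype("float32")
y = np.random.randint(0, n_subjects, size=32)
subject_model.fit(X, y, epochs=1, verbose=0)
```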

Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58—63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact on the interaction of humans with computers. With a growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets, and we also try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect the seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness, and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset) as the main dataset for our study, which is relatively new, interesting, and very challenging.
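
For orientation, a compact CNN of the kind the paper experiments with might look like the sketch below, assuming 48x48 grayscale face crops; the architecture is illustrative and does not reproduce the authors' configurations.

```python
# Illustrative seven-emotion CNN (assumed 48x48 grayscale inputs).
from tensorflow.keras import layers, models

EMOTIONS = ["anger", "fear", "disgust", "contempt",
            "happiness", "sadness", "surprise"]

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.4),
    layers.Dense(len(EMOTIONS), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```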

Jia, C., Li, C. L., Ying, Z..  2020.  Facial expression recognition based on the ensemble learning of CNNs. 2020 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). :1—5.

As a part of body language, facial expression is a psychological state that reflects the current emotional state of a person. Recognition of facial expressions can help us understand others and enhance communication with them. In this paper, we propose a facial expression recognition method based on ensemble learning of convolutional neural networks. Our model is composed of three sub-networks and uses an SVM classifier to integrate the outputs of the three networks into the final result. The model's expression recognition accuracy on the FER2013 dataset reached 71.27%. The results show that the method has high test accuracy and short prediction time, and can realize real-time, high-performance facial expression recognition.
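
The ensemble step can be pictured as stacking: the class-probability vectors of the three sub-networks are concatenated and handed to an SVM that makes the final call. The sketch below uses random vectors as stand-ins for the sub-network outputs; it illustrates the wiring, not the paper's trained networks.

```python
# Stacking sketch: three softmax outputs -> concatenated features -> SVM.
import numpy as np
from sklearn.svm import SVC

n_samples, n_classes = 200, 7
rng = np.random.default_rng(0)

# Stand-ins for the softmax outputs of three trained sub-networks.
probs_a = rng.dirichlet(np.ones(n_classes), n_samples)
probs_b = rng.dirichlet(np.ones(n_classes), n_samples)
probs_c = rng.dirichlet(np.ones(n_classes), n_samples)
labels = rng.integers(0, n_classes, n_samples)

stack_features = np.hstack([probs_a, probs_b, probs_c])  # shape (200, 21)
svm = SVC(kernel="rbf").fit(stack_features, labels)
print(svm.predict(stack_features[:1]))
```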

Eftimie, S., Moinescu, R., Rǎcuciu, C..  2020.  Insider Threat Detection Using Natural Language Processing and Personality Profiles. 2020 13th International Conference on Communications (COMM). :325–330.
This work represents an interdisciplinary effort to proactively identify insider threats using natural language processing and personality profiles. Profiles were developed for the relevant insider-threat types using the five-factor model of personality and were used in a proof-of-concept detection system. The system employs a third-party cloud service that uses natural language processing to analyze personality profiles based on personal content. Finally, an assessment was made of the feasibility of the system using a public dataset.
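
One plausible way to operationalize the profile matching is to score a user's inferred Big Five vector against reference insider-threat profiles, e.g. by cosine similarity. Everything below (the profile vectors, threshold, and threat-type names) is invented for illustration; the paper relies on a third-party NLP personality service rather than hand-coded vectors.

```python
# Hypothetical profile-matching sketch; vectors and threshold are invented.
import numpy as np

# Order: openness, conscientiousness, extraversion, agreeableness, neuroticism
THREAT_PROFILES = {
    "saboteur":   np.array([0.5, 0.2, 0.4, 0.2, 0.8]),  # hypothetical values
    "data_thief": np.array([0.7, 0.3, 0.6, 0.3, 0.6]),  # hypothetical values
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def flag_user(big_five, threshold=0.95):
    """Return threat profiles whose similarity to the user exceeds the threshold."""
    return [name for name, profile in THREAT_PROFILES.items()
            if cosine(big_five, profile) >= threshold]

print(flag_user(np.array([0.55, 0.25, 0.45, 0.2, 0.75])))
```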
Mou, W., Ruocco, M., Zanatto, D., Cangelosi, A..  2020.  When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :956—962.

Trust is a critical issue in human-robot interaction (HRI), as it is at the core of human willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand the beliefs and intentions of others that may differ from one's own. Evidence in psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of this entity's actions, beliefs, and intentions. However, very few works take the ToM of the robot into consideration while studying trust in HRI. In this paper, we investigated whether exposure to the ToM abilities of a robot could affect humans' trust towards the robot. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level ToM or high-level ToM. Specifically, the participants were asked to accept the price evaluations of common objects presented by the robot. The willingness of the participants to change their own price judgement of the objects (i.e., accept the price the robot suggested) was used as the main measurement of trust towards the robot. Our experimental results showed that robots possessing high-level ToM abilities were trusted more than robots presented with low-level ToM skills.

Aliman, N.-M., Kester, L..  2020.  Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses. 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :130—137.

Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. Thereby, the focus is often set on ethical and safe design forestalling unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice, including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is prudent to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. For a simplified illustration, this paper analyzes the conceivable use case of generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.

Ajenaghughrure, I. B., Sousa, S. C. da Costa, Lamas, D..  2020.  Risk and Trust in artificial intelligence technologies: A case study of Autonomous Vehicles. 2020 13th International Conference on Human System Interaction (HSI). :118–123.
This study investigates how risk influences users' trust before and after interactions with technologies such as autonomous vehicles (AVs), and examines the psychophysiological correlates of users' trust through users' electrodermal activity responses. Eighteen (18) carefully selected participants embarked on a hypothetical trip playing an autonomous-vehicle driving game. In order to stay safe throughout the drive experience under four risk conditions (very high risk, high risk, low risk, and no risk), based on automotive safety and integrity levels (ASIL D, C, B, A), participants exhibited either high or low trust by evaluating the AV to be highly or less trustworthy and consequently relying on the artificial intelligence or the joystick to control the vehicle. The results of the experiment show a significant increase in users' trust and users' delegation of control to the AV as risk decreases, and vice versa. In addition, there was a significant difference between users' initial trust before and after interacting with AVs under varying risk conditions. Finally, there was a significant correlation in users' psychophysiological responses (electrodermal activity) when exhibiting higher and lower trust levels towards AVs. The implications of these results and future research opportunities are discussed.
Gupta, K., Hajika, R., Pai, Y. S., Duenser, A., Lochner, M., Billinghurst, M..  2020.  Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). :756–765.
With the advancement of artificial intelligence technology in smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. Through our research, we report on a novel methodology to investigate users' trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data such as electroencephalography (EEG), galvanic skin response (GSR), and heart-rate variability (HRV); subjective data through questionnaires such as the System Trust Scale (STS), Subjective Mental Effort Questionnaire (SMEQ), and NASA-TLX; and a behavioral measure of trust (congruency of users' head motion in response to valid/invalid verbal advice from the agent). Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these metrics, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.
Xia, H., Xiao, F., Zhang, S., Hu, C., Cheng, X..  2019.  Trustworthiness Inference Framework in the Social Internet of Things: A Context-Aware Approach. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications. :838–846.
The concept of social networking is integrated into the Internet of Things (IoT) to socialize smart objects by mimicking human behaviors, leading to a new paradigm called the Social Internet of Things (SIoT). A crucial problem that needs to be solved is how to establish reliable relationships autonomously among objects, i.e., how to build trust. This paper focuses on exploring an efficient context-aware trustworthiness inference framework to address this issue. Based on the sociological and psychological principles of trust generation between human beings, the proposed framework divides trust into two types: familiarity trust and similarity trust. The familiarity trust can be calculated from direct trust and recommendation trust, while the similarity trust can be calculated based on external similarity trust and internal similarity trust. We subsequently present concrete methods for the calculation of the different trust elements. In particular, we design a kernel-based nonlinear multivariate grey prediction model to predict the direct trust of a specific object, which acts as the core module of the entire framework. Besides, considering the fuzziness and uncertainty in the concept of trust, we introduce a fuzzy logic method to synthesize these trust elements. The experimental results verify the validity of the core module and the framework's resistance to attacks.
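
As a rough sketch of the framework's composition step, the snippet below combines the four trust elements with fixed weights; a simple weighted mean stands in for the paper's fuzzy-logic synthesis, and the grey-prediction module for direct trust is out of scope here. All weights are invented.

```python
# Weighted-mean stand-in for the fuzzy synthesis of trust elements.
def familiarity_trust(direct, recommendation, w_direct=0.7):
    return w_direct * direct + (1 - w_direct) * recommendation

def similarity_trust(external, internal, w_external=0.5):
    return w_external * external + (1 - w_external) * internal

def overall_trust(fam, sim, w_fam=0.6):
    return w_fam * fam + (1 - w_fam) * sim

fam = familiarity_trust(direct=0.8, recommendation=0.6)
sim = similarity_trust(external=0.5, internal=0.7)
print(round(overall_trust(fam, sim), 3))  # scalar trust score in [0, 1]
```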
Geiskkovitch, D. Y., Thiessen, R., Young, J. E., Glenwright, M. R..  2019.  What? That's Not a Chair!: How Robot Informational Errors Affect Children's Trust Towards Robots. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :48—56.

Robots that interact with children are becoming more common in places such as child care and hospital environments. While such robots may mistakenly provide nonsensical information, or have mechanical malfunctions, we know little of how these robot errors are perceived by children, and how they impact trust. This is particularly important when robots provide children with information or instructions, such as in education or health care. Drawing inspiration from established psychology literature investigating how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment to examine how robot errors affect how young children (3-5 years old) trust robots. Our results suggest that children utilize their understanding of people to develop their perceptions of robots, and use this to determine how to interact with robots. Specifically, we found that children developed their trust model of a robot based on the robot's previous errors, similar to how they would for a person. We however failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.

Ogawa, R., Park, S., Umemuro, H..  2019.  How Humans Develop Trust in Communication Robots: A Phased Model Based on Interpersonal Trust. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :606—607.

The purpose of this study was to propose a model of the development of trust in social robots. Insights into interpersonal trust were adopted from social psychology and a novel model was proposed. In addition, this study aimed to investigate the relationship between trust development and self-esteem. To validate the proposed model, an experiment using the communication robot NAO was conducted, and changes in categories of trust as well as self-esteem were measured. Results showed that general and category trust developed in the early phase, and that self-esteem increased over the course of the interactions with the robot.

Hu, Y., Sanjab, A., Saad, W..  2019.  Dynamic Psychological Game Theory for Secure Internet of Battlefield Things (IoBT) Systems. IEEE Internet of Things Journal. 6:3712—3726.

In this paper, a novel anti-jamming mechanism is proposed to analyze and enhance the security of adversarial Internet of Battlefield Things (IoBT) systems. In particular, the problem is formulated as a dynamic psychological game between a soldier and an attacker. In this game, the soldier seeks to accomplish a time-critical mission by traversing a battlefield within a certain amount of time, while maintaining connectivity with an IoBT network. The attacker, on the other hand, seeks to find the optimal opportunity to compromise the IoBT network and maximize the delay of the soldier's IoBT transmission link. The soldier's and the attacker's psychological behavior are captured using tools from psychological game theory, with which the soldier's and attacker's intentions to harm one another are incorporated in their utilities. To solve this game, a novel learning algorithm based on Bayesian updating is proposed to find an ε-like psychological self-confirming equilibrium of the game.
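
The Bayesian updating at the heart of the learning algorithm can be pictured as the soldier maintaining a belief over attacker types and renormalizing it after every observation. The type space, prior, and likelihoods below are invented for illustration and are much simpler than the paper's psychological game.

```python
# Toy Bayesian belief update over attacker types (all numbers invented).
import numpy as np

attacker_types = ["passive", "aggressive"]   # hypothetical type space
belief = np.array([0.5, 0.5])                # uniform prior

# P(jamming observed | attacker type), assumed for illustration.
likelihood_jam = np.array([0.1, 0.7])

def update(belief, jammed):
    like = likelihood_jam if jammed else 1.0 - likelihood_jam
    posterior = belief * like
    return posterior / posterior.sum()

for jammed in [True, True, False]:           # a short observation history
    belief = update(belief, jammed)
print(dict(zip(attacker_types, belief.round(3))))  # belief shifts to "aggressive"
```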

Ferguson-Walter, Kimberly, Major, Maxine, Van Bruggen, Dirk, Fugate, Sunny, Gutzwiller, Robert.  2019.  The World (of CTF) is Not Enough Data: Lessons Learned from a Cyber Deception Experiment. 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC). :346–353.
The human side of cyber is fundamentally important to understanding and improving cyber operations. With the exception of Capture the Flag (CTF) exercises, cyber testing and experimentation tends to ignore the human attacker. While traditional CTF events include a deeply rooted human component, they rarely aim to measure human performance, cognition, or psychology. We argue that CTF is not sufficient for measuring these aspects of the human; instead, we examine the value of performing red-team behavioral and cognitive testing in a large-scale, controlled human-subject experiment. In this paper we describe the pros and cons of performing this type of experimentation and provide a detailed exposition of the data collection and experimental controls used during a recent cyber deception experiment, the Tularosa Study. Finally, we discuss lessons learned and how our experiences can inform best practices in future cyber operations studies of human behavior and cognition.
Jeong, Jongkil, Mihelcic, Joanne, Oliver, Gillian, Rudolph, Carsten.  2019.  Towards an Improved Understanding of Human Factors in Cybersecurity. 2019 IEEE 5th International Conference on Collaboration and Internet Computing (CIC). :338–345.
Cybersecurity cannot be addressed by technology alone; the most intractable aspects are in fact sociotechnical. As a result, the 'human factor' has been recognised as the weakest and most obscure link in creating safe and secure digital environments. This study examines the subjective and often complex nature of human factors in the cybersecurity context through a systematic literature review of 27 articles spanning technical, behavioral, and social science perspectives. Results from our study suggest that there is still a predominantly technical focus, which excludes consideration of human factors in cybersecurity. Our literature review suggests that this is due to a lack of consolidation of the attributes pertaining to human factors, limited application of theoretical frameworks, and a lack of in-depth qualitative studies. To ensure that these gaps are addressed, we propose that future studies (a) consolidate the human factors; (b) examine cybersecurity from an interdisciplinary approach; and (c) conduct additional qualitative research while investigating human factors in cybersecurity.
Granatyr, Jones, Gomes, Heitor Murilo, Dias, João Miguel, Paiva, Ana Maria, Nunes, Maria Augusta Silveira Netto, Scalabrin, Edson Emílio, Spak, Fábio.  2019.  Inferring Trust Using Personality Aspects Extracted from Texts. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :3840–3846.
Trust mechanisms are considered the logical protection of software systems, preventing malicious people from taking advantage of or cheating others. Although these concepts are widely used, most applications in this field do not consider affective aspects to aid in trust computation. Researchers in psychology, neurology, anthropology, and computer science argue that affective aspects are essential to humans' decision-making processes. So far, there is a lack of understanding about how these aspects impact users' trust, particularly when they are inserted in an evaluation system. In this paper, we propose a trust model that accounts for personality using three personality models: Big Five, Needs, and Values. We tested our approach by extracting personality aspects from texts provided by two online human-fed evaluation systems and correlating them to reputation values. The empirical experiments show statistically significantly better results in comparison to non-personality-wise approaches.
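
The evaluation step, correlating extracted personality aspects with reputation values, reduces to a standard correlation test; the sketch below uses synthetic data in place of the systems' real trait and reputation scores.

```python
# Correlating a trait score with reputation on synthetic data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
agreeableness = rng.uniform(0, 1, 50)                       # per-user trait score
reputation = 0.6 * agreeableness + rng.normal(0, 0.1, 50)   # synthetic reputation

r, p_value = pearsonr(agreeableness, reputation)
print(f"r={r:.2f}, p={p_value:.3g}")
```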
Ong, Desmond, Soh, Harold, Zaki, Jamil, Goodman, Noah.  2019.  Applying Probabilistic Programming to Affective Computing. IEEE Transactions on Affective Computing. :1—1.

Affective Computing is a rapidly growing field spurred by advancements in artificial intelligence, but it is often held back by the inability to translate psychological theories of emotion into tractable computational models. To address this, we propose a probabilistic programming approach to affective computing, which models psychologically grounded theories as generative models of emotion and implements them as stochastic, executable computer programs. We first review probabilistic approaches that integrate reasoning about emotions with reasoning about other latent mental states (e.g., beliefs, desires) in context. Recently developed probabilistic programming languages offer several key desiderata over previous approaches, such as: (i) flexibility in representing emotions and emotional processes; (ii) modularity and compositionality; (iii) integration with deep learning libraries that facilitate efficient inference and learning from large, naturalistic data; and (iv) ease of adoption. Furthermore, using a probabilistic programming framework allows a standardized platform for theory-building and experimentation: competing theories (e.g., of appraisal or other emotional processes) can be easily compared via modular substitution of code followed by model comparison. To jumpstart adoption, we illustrate our points with executable code that researchers can easily modify for their own models. We end with a discussion of applications and future directions of the probabilistic programming approach.
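
In the probabilistic-programming spirit the abstract describes, a generative model of emotion can be written as an executable program and inverted by inference. The toy model below (appraisal -> emotion -> observed expression, with invented probabilities) uses plain Python and importance sampling so it runs without any PPL dependency; it is a sketch of the idea, not the authors' models.

```python
# Toy generative emotion model with importance-sampling inference.
import random

def appraisal_prior():
    return random.choice(["goal_achieved", "goal_blocked"])

def emotion_given(appraisal):
    table = {"goal_achieved": [("joy", 0.8), ("surprise", 0.2)],
             "goal_blocked":  [("anger", 0.6), ("sadness", 0.4)]}
    outcomes, weights = zip(*table[appraisal])
    return random.choices(outcomes, weights)[0]

def p_smile_given(emotion):
    return {"joy": 0.9, "surprise": 0.5, "anger": 0.05, "sadness": 0.1}[emotion]

def infer_emotion(observed_smile=True, n=20000):
    """Weight sampled emotions by the likelihood of the observed expression."""
    weights = {}
    for _ in range(n):
        e = emotion_given(appraisal_prior())
        w = p_smile_given(e) if observed_smile else 1 - p_smile_given(e)
        weights[e] = weights.get(e, 0.0) + w
    total = sum(weights.values())
    return {e: round(w / total, 3) for e, w in weights.items()}

print(infer_emotion(observed_smile=True))  # posterior over emotions given a smile
```

Swapping in a different appraisal table or likelihood function is the kind of "modular substitution of code" the authors describe for comparing competing theories.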

Razin, Yosef, Feigh, Karen.  2019.  Toward Interactional Trust for Humans and Automation: Extending Interdependence. 2019 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1348–1355.
Trust in human-automation interaction is increasingly imperative as AI and robots become ubiquitous at home, school, and work. Interdependence theory allows for the identification of one-on-one interactions that require trust by analyzing the structure of the potential outcomes. This paper synthesizes multiple, formerly disparate research approaches by extending interdependence theory to create a unified framework for outcome-based trust in human-automation interaction. This framework quantitatively contextualizes validated empirical results from social psychology on relationship formation, stability, and betrayal. It also contributes insights into trust-related concepts, such as power and commitment, which help further our understanding of trustworthy system design. This new integrated interactional approach reveals how trust and trustworthiness can transform machines from merely reliable tools into trusted teammates working hand-in-actuator toward an automated future.
Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184—1189.

Automatic emotion recognition using computer vision is significant for many real-world applications like photojournalism, virtual reality, sign language recognition, and Human-Robot Interaction (HRI). Psychological research findings advocate that humans depend on the collective visual conduits of face and body to comprehend human emotional behaviour. A plethora of studies has been done to analyse human emotions using facial expressions, EEG signals, speech, etc., but most of that work was based on a single modality. Our objective is to efficiently integrate emotions recognized from facial expressions and the upper-body pose of humans using images. Our work on bimodal emotion recognition provides the benefits of the accuracy of both modalities.
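
Feature-level fusion here means concatenating the face and body feature vectors before a single classifier, rather than fusing two separate decisions. The sketch below stubs both feature extractors with random vectors; the dimensions and classifier are illustrative assumptions.

```python
# Feature-level fusion sketch with stubbed feature extractors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, d_face, d_body = 300, 128, 64

face_feats = rng.normal(size=(n, d_face))   # stand-in for facial-expression features
body_feats = rng.normal(size=(n, d_body))   # stand-in for upper-body pose features
labels = rng.integers(0, 7, n)              # seven emotion classes

fused = np.concatenate([face_feats, body_feats], axis=1)  # the fusion step
clf = LogisticRegression(max_iter=500).fit(fused, labels)
```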

Yang, Chen, Liu, Tingting, Zuo, Lulu, Hao, Zhiyong.  2019.  An Empirical Study on the Data Security and Privacy Awareness to Use Health Care Wearable Devices. 2019 16th International Conference on Service Systems and Service Management (ICSSSM). :1–6.
Recently, several health care wearable devices that can intervene in health and collect personal health data have emerged in the medical market. Although health care wearable devices promote the integration of multi-layer medical resources and bring new kinds of health applications for users, some problems will inevitably arise. These mainly concern the safe protection of medical and health data and the protection of users' privacy. From the users' point of view, the irrational use of medical and health data may bring psychological and physical negative effects to users. From the government's perspective, such data may be sold by private businesses in the international arena and threaten national security. The most direct precaution against these problems is users' own initiative. For better understanding, a research model is designed around the following five aspects: Security knowledge (SK), Security attitude (SAT), Security practice (SP), Security awareness (SAW), and Security conduct (SC). To verify the model, structural equation analysis, an empirical approach, was applied to examine its validity, and the results showed that SK, SAT, SP, SAW, and SC are important factors affecting users' data security and privacy protection awareness.
Hoey, Jesse, Sheikhbahaee, Zahra, MacKinnon, Neil J..  2019.  Deliberative and Affective Reasoning: a Bayesian Dual-Process Model. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). :388–394.
The presence of artificial agents in human social networks is growing. From chatbots to robots, human experience in the developed world is moving towards a socio-technical system in which agents can be technological or biological, with increasingly blurred distinctions between the two. Given that emotion is a key element of human interaction, enabling artificial agents with the ability to reason about affect is a key stepping stone towards a future in which technological agents and humans can work together. This paper presents work on building intelligent computational agents that integrate both emotion and cognition. These agents are grounded in the well-established social-psychological Bayesian Affect Control Theory (BayesAct). The core idea of BayesAct is that humans are motivated in their social interactions by affective alignment: they strive for their social experiences to be coherent at a deep, emotional level with their sense of identity and general world views as constructed through culturally shared symbols. This affective alignment creates cohesive bonds between group members and is instrumental for collaborations to solidify as relational group commitments. BayesAct agents are motivated in their social interactions by a combination of affective alignment and decision-theoretic reasoning, trading the two off as a function of the uncertainty or unpredictability of the situation. This paper provides a high-level view of dual-process theories and advances BayesAct as a plausible, computationally tractable model based in social-psychological and sociological theory.
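
A small piece of the affective-alignment idea can be shown concretely. In Affect Control Theory, identities and behaviours carry Evaluation-Potency-Activity (EPA) sentiments, and deflection is the squared distance between fundamental and transient impressions; an aligned agent prefers the action with the lowest deflection. The EPA values below are invented for illustration, and real BayesAct additionally handles uncertainty and decision-theoretic reward.

```python
# Deflection-minimizing action choice over invented EPA sentiments.
import numpy as np

def deflection(fundamental, transient):
    """Sum of squared EPA differences (the quantity ACT agents minimize)."""
    return float(((fundamental - transient) ** 2).sum())

fundamental = np.array([1.5, 1.2, 0.8])      # e.g. a "helpful assistant" identity
candidate_transients = {
    "assist": np.array([1.4, 1.0, 0.9]),     # impression created by assisting
    "ignore": np.array([-0.8, 0.5, -0.2]),   # impression created by ignoring
}
best = min(candidate_transients,
           key=lambda a: deflection(fundamental, candidate_transients[a]))
print(best)  # the affectively aligned action
```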
Schneeberger, Tanja, Scholtes, Mirella, Hilpert, Bernhard, Langer, Markus, Gebhard, Patrick.  2019.  Can Social Agents elicit Shame as Humans do? 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :164–170.
This paper presents a study that examines whether social agents can elicit the social emotion of shame as humans do. For that, we use job interviews, which are highly evaluative situations per se. We vary the interview style (shame-eliciting vs. neutral) and the job interviewer (human vs. social agent). Our dependent variables include observational data regarding the social signals of shame and shame regulation, as well as self-assessment questionnaires regarding the felt uneasiness and discomfort in the situation. Our results indicate that social agents can elicit shame to the same extent as humans. This gives insights into the impact of social agents on users and the emotional connection between them.
Barnes, Chloe M., Ekárt, Anikó, Lewis, Peter R..  2019.  Social Action in Socially Situated Agents. 2019 IEEE 13th International Conference on Self-Adaptive and Self-Organizing Systems (SASO). :97–106.
Two systems pursuing their own goals in a shared world can interact in ways that are not so explicit, such that the presence of another system alone can interfere with how one is able to achieve its own goals. Drawing inspiration from human psychology and the theory of social action, we propose the notion of employing social action in socially situated agents as a means of alleviating interference in interacting systems. Here we demonstrate that these specific issues of behavioural and evolutionary instability caused by the unintended consequences of interactions can be addressed with agents capable of a fusion of goal-rationality and traditional action, resulting in a stable society capable of achieving goals during the course of evolution.
Pascucci, Antonio, Masucci, Vincenzo, Monti, Johanna.  2019.  Computational Stylometry and Machine Learning for Gender and Age Detection in Cyberbullying Texts. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). :1–6.

The aim of this paper is to show the importance of Computational Stylometry (CS) and Machine Learning (ML) in detecting an author's gender and age in cyberbullying texts. We developed a cyberbullying detection platform, and we show its performance in terms of Precision, Recall, and F-Measure for gender and age detection on the cyberbullying texts we collected.
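
A minimal stylometry-plus-ML profiling setup might look like the sketch below, with character n-grams standing in for the paper's stylometric features; the texts and labels are placeholders, not the authors' corpus.

```python
# Character n-gram stylometry pipeline with placeholder data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["u r so dumb lol", "nobody likes you", "great job on the test!"]
gender = ["m", "f", "f"]   # placeholder author labels

pipeline = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # stylometric cues
    LinearSVC(),
)
pipeline.fit(texts, gender)
print(pipeline.predict(["lol u r dumb"]))
```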

Tucker, Scot.  2018.  Engineering Trust: A Graph-Based Algorithm for Modeling, Validating, and Evaluating Trust. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :1–9.
Trust is an important topic in today's interconnected world. Breaches of trust in today's systems have had profound effects upon us all, and they are very difficult and costly to fix, especially when caused by flaws in the system's architecture. Trust modeling can expose these types of issues, but modeling trust in complex multi-tiered system architectures can be very difficult. Often, experts have differing views of trust and how it applies to systems within their domain. This work presents a graph-based modeling methodology that normalizes the application of trust across disparate system domains, allowing the modeling of complex intersystem trust relationships. An algorithm is proposed that applies graph theory to model, validate, and evaluate trust in system architectures. It also provides the means to apply metrics to compare and prioritize the effectiveness of trust management in system and component architectures. The results produced by the algorithm can be used in conjunction with systems engineering processes to ensure both trust and the efficient use of resources.
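
One common way to realize such a graph-based evaluation is to put components on nodes, put trust weights in [0, 1] on directed edges, and take the product of edge weights along a path, so trust attenuates with each hop. The sketch below shows that idea with networkx; the paper's algorithm and metrics are richer than this illustration.

```python
# Path-trust sketch: trust along a path is the product of edge weights.
import math
import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("client", "gateway", 0.9),
    ("gateway", "service", 0.8),
    ("client", "service", 0.5),   # direct but less-trusted link
])

def path_trust(graph, path):
    """Multiply edge trust along a path; more hops mean more attenuation."""
    return math.prod(graph[u][v]["weight"] for u, v in zip(path, path[1:]))

best = max(nx.all_simple_paths(g, "client", "service"),
           key=lambda p: path_trust(g, p))
print(best, round(path_trust(g, best), 2))  # ['client', 'gateway', 'service'] 0.72
```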