Biblio

Filters: Keyword is human trust
2021-02-01
Calhoun, C. S., Reinhart, J., Alarcon, G. A., Capiola, A..  2020.  Establishing Trust in Binary Analysis in Software Development and Applications. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
The current exploratory study examined software programmer trust in binary analysis techniques used to evaluate and understand binary code components. Experienced software developers participated in knowledge elicitations to identify factors affecting trust in tools and methods used for understanding binary code behavior and minimizing potential security vulnerabilities. Developer perceptions of trust in those tools to assess implementation risk in binary components were captured across a variety of application contexts. The software developers reported that source security and vulnerability reports provided the best insight into, and awareness of, potential issues or shortcomings in binary code. Further, applications where the potential impact on systems and the risk of data loss are high require relying on more than one type of analysis to ensure the binary component is sound. The findings suggest binary analysis is viable for identifying issues and potential vulnerabilities as part of a comprehensive solution for understanding binary code behavior and security vulnerabilities, but relying solely on binary analysis tools and binary release metadata appears insufficient to ensure a secure solution.
Ng, M., Coopamootoo, K. P. L., Toreini, E., Aitken, M., Elliot, K., Moorsel, A. van.  2020.  Simulating the Effects of Social Presence on Trust, Privacy Concerns & Usage Intentions in Automated Bots for Finance. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :190–199.
FinBots are chatbots built on automated decision technology, aimed to facilitate accessible banking and to support customers in making financial decisions. Chatbots are increasing in prevalence, sometimes even equipped to mimic human social rules, expectations and norms, decreasing the necessity for human-to-human interaction. As banks and financial advisory platforms move towards creating bots that enhance the current state of consumer trust and adoption rates, we investigated the effects of chatbot vignettes with and without socio-emotional features on intention to use the chatbot for financial support purposes. We conducted a between-subject online experiment with N = 410 participants. Participants in the control group were provided with a vignette describing a secure and reliable chatbot called XRO23, whereas participants in the experimental group were presented with a vignette describing a secure and reliable chatbot that is more human-like and named Emma. We found that Vignette Emma did not increase participants' trust levels nor lower their privacy concerns, even though it increased the perception of social presence. However, we found that intention to use the presented chatbot for financial support was positively influenced by perceived humanness and trust in the bot. Participants were also more willing to share financially sensitive information such as account number, sort code and payment information with XRO23 compared to Emma, revealing a preference for a technical and mechanical FinBot in information sharing. Overall, this research contributes to our understanding of the intention to use chatbots with different features as financial technology, in particular that socio-emotional support may not be favoured when designed independently of financial function.
Kfoury, E. F., Khoury, D., AlSabeh, A., Gomez, J., Crichigno, J., Bou-Harb, E..  2020.  A Blockchain-based Method for Decentralizing the ACME Protocol to Enhance Trust in PKI. 2020 43rd International Conference on Telecommunications and Signal Processing (TSP). :461–465.
Blockchain technology is the cornerstone of digital trust and systems' decentralization. The necessity of eliminating trust in computing systems has triggered researchers to investigate the applicability of Blockchain to decentralize conventional security models. Specifically, researchers continuously aim at minimizing trust in the well-known Public Key Infrastructure (PKI) model, which currently requires a trusted Certificate Authority (CA) to sign digital certificates. Recently, the Automated Certificate Management Environment (ACME) was standardized as a certificate issuance automation protocol. It minimizes human interaction by enabling certificates to be automatically requested, verified, and installed on servers. ACME only solved the automation issue, but the trust concerns remain, as a trusted CA is still required. In this paper, we propose decentralizing the ACME protocol by using Blockchain technology to address the trust issues of the existing PKI model and to eliminate the need for a trusted CA. The system was implemented and tested on the Ethereum Blockchain, and the results showed that the system is feasible in terms of cost, speed, and applicability on a wide range of devices, including Internet of Things (IoT) devices.
Han, W., Schulz, H.-J..  2020.  Beyond Trust Building — Calibrating Trust in Visual Analytics. 2020 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX). :9–15.
Trust is a fundamental factor in how users engage in interactions with Visual Analytics (VA) systems. While the importance of building trust to this end has been pointed out in research, the aspect that trust can also be misplaced is largely ignored in VA so far. This position paper addresses this aspect by putting trust calibration in focus – i.e., the process of aligning the user’s trust with the actual trustworthiness of the VA system. To this end, we present the trust continuum in the context of VA, dissect important trust issues in both VA systems and users, as well as discuss possible approaches that can build and calibrate trust.
Rutard, F., Sigaud, O., Chetouani, M..  2020.  TIRL: Enriching Actor-Critic RL with non-expert human teachers and a Trust Model. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :604–611.
Reinforcement learning (RL) algorithms have been demonstrated to be very attractive tools to train agents to achieve sequential tasks. However, these algorithms require too much training data to converge to be efficiently applied to physical robots. By using a human teacher, the learning process can be made faster and more robust, but the overall performance heavily depends on the quality and availability of teacher demonstrations or instructions. In particular, when these teaching signals are inadequate, the agent may fail to learn an optimal policy. In this paper, we introduce a trust-based interactive task learning approach. We propose an RL architecture able to learn both from environment rewards and from various sparse teaching signals provided by non-expert teachers, using an actor-critic agent, a human model and a trust model. We evaluate the performance of this architecture on four different setups using a maze environment with different simulated teachers and show the benefits of the trust model.
Ajenaghughrure, I. B., Sousa, S. C. da Costa, Lamas, D..  2020.  Risk and Trust in artificial intelligence technologies: A case study of Autonomous Vehicles. 2020 13th International Conference on Human System Interaction (HSI). :118–123.
This study investigates how risk influences users' trust before and after interactions with technologies such as autonomous vehicles (AVs), as well as the psychophysiological correlates of users' trust derived from users' electrodermal activity responses. Eighteen (18) carefully selected participants embarked on a hypothetical trip playing an autonomous vehicle driving game. In order to stay safe throughout the driving experience under four risk conditions (very high risk, high risk, low risk and no risk), based on automotive safety integrity levels (ASIL D, C, B, A), participants exhibited either high or low trust by evaluating the AV to be highly or less trustworthy and consequently relying on the artificial intelligence or the joystick to control the vehicle. The results of the experiment show that there is a significant increase in users' trust and in users' delegation of control to the AV as risk decreases, and vice versa. In addition, there was a significant difference between users' trust before and after interacting with the AV under varying risk conditions. Finally, there was a significant correlation in users' psychophysiological responses (electrodermal activity) when exhibiting higher and lower trust levels towards the AV. The implications of these results and future research opportunities are discussed.
Papadopoulos, A. V., Esterle, L..  2020.  Situational Trust in Self-aware Collaborating Systems. 2020 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C). :91–94.
Trust among humans affects the way we interact with each other. In autonomous systems, this trust is often predefined and hard-coded before the systems are deployed. However, when systems encounter unfolding situations, requiring them to interact with others, a notion of trust will be inevitable. In this paper, we discuss trust as a fundamental measure to enable an autonomous system to decide whether or not to interact with another system, whether biological or artificial. These decisions become increasingly important when continuously integrating with others during runtime.
Hou, M..  2020.  IMPACT: A Trust Model for Human-Agent Teaming. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
A trust model, IMPACT (Intention, Measurability, Predictability, Agility, Communication, and Transparency), has been conceptualized to build human trust in autonomous agents. These six critical characteristics must be exhibited by the agents in order to gain and maintain the trust of their human partners towards an effective and collaborative team in achieving common goals. The IMPACT model guided the design of an intelligent adaptive decision aid for dynamic target engagement processes in a military context. Positive feedback from subject matter experts who participated in a large-scale joint exercise controlling multiple unmanned vehicles indicated the effectiveness of the decision aid. It also demonstrated the utility of the IMPACT model as a set of design principles for building trusted human-agent teaming.
Wickramasinghe, C. S., Marino, D. L., Grandio, J., Manic, M..  2020.  Trustworthy AI Development Guidelines for Human System Interaction. 2020 13th International Conference on Human System Interaction (HSI). :130–136.
Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the lack of transparency of existing AI systems. As a solution, the "Trustworthy AI" research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists where the focus is on building human trust to improve the interactions between humans and AI systems. In this paper, we provide a concise survey of concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and enhancing the interactions between AI systems and humans that happen during the AI system life cycle.
Gupta, K., Hajika, R., Pai, Y. S., Duenser, A., Lochner, M., Billinghurst, M..  2020.  Measuring Human Trust in a Virtual Assistant using Physiological Sensing in Virtual Reality. 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). :756–765.
With the advancement of Artificial Intelligence technology in smart devices, understanding how humans develop trust in virtual agents is emerging as a critical research field. Through our research, we report on a novel methodology to investigate users' trust in auditory assistance in a Virtual Reality (VR) based search task, under both high and low cognitive load and under varying levels of agent accuracy. We collected physiological sensor data such as electroencephalography (EEG), galvanic skin response (GSR), and heart-rate variability (HRV), as well as subjective data through questionnaires such as the System Trust Scale (STS), the Subjective Mental Effort Questionnaire (SMEQ) and NASA-TLX. We also collected a behavioral measure of trust (congruency of users' head motion in response to valid/invalid verbal advice from the agent). Our results indicate that our custom VR environment enables researchers to measure and understand human trust in virtual agents using these measures, and that both cognitive load and agent accuracy play an important role in trust formation. We discuss the implications of the research and directions for future work.
Lee, J., Abe, G., Sato, K., Itoh, M..  2020.  Impacts of System Transparency and System Failure on Driver Trust During Partially Automated Driving. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–3.
The objective of this study is to explore how trust changes in situations where drivers need to intervene. Trust in automation is a key determinant of appropriate interaction between drivers and the system. System transparency and the type of system failure influence the shaping of trust in supervisory control. Subjective ratings of trust were collected to examine the impact of two factors, system transparency (Detailed vs. Less) and system failure (by Limits vs. Malfunction), in a driving simulator study in which drivers experienced a partially automated vehicle. We examined trust ratings at three points: before and after driver intervention in the automated vehicle, and after subsequent experience of flawless automated driving. We found that system transparency did not have a significant impact on trust change from before to after the intervention. System malfunction reduced trust compared to before the intervention, whilst system limits did not influence trust. The subsequent experience recovered the decreased trust; in addition, when the system limit occurred for drivers who had detailed information about the system, trust was promoted in spite of the intervention. The present findings have implications for automation design to achieve an appropriate level of trust.
2020-12-01
Harris, L., Grzes, M..  2019.  Comparing Explanations between Random Forests and Artificial Neural Networks. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). :2978–2985.
The decisions made by machines are increasingly comparable in predictive performance to those made by humans, but these decision-making processes are often concealed as black boxes. Additional techniques are required to extract understanding, and one such category is explanation methods. This research compares the explanations of two popular forms of artificial intelligence: neural networks and random forests. Researchers in either field often have divided opinions on transparency, and comparing explanations may discover similar ground truths between models. Similarity can help to encourage trust in predictive accuracy alongside transparent structure and unite the respective research fields. This research explores a variety of simulated and real-world datasets that ensure fair applicability to both learning algorithms. A new heuristic explanation method that extends an existing technique is introduced, and our results show that this is somewhat similar to the other methods examined whilst also offering an alternative perspective towards least-important features.
Wang, S., Mei, Y., Park, J., Zhang, M..  2019.  A Two-Stage Genetic Programming Hyper-Heuristic for Uncertain Capacitated Arc Routing Problem. 2019 IEEE Symposium Series on Computational Intelligence (SSCI). :1606–1613.
Genetic Programming Hyper-heuristic (GPHH) has been successfully applied to automatically evolve effective routing policies to solve the complex Uncertain Capacitated Arc Routing Problem (UCARP). However, GPHH typically ignores the interpretability of the evolved routing policies. As a result, GP-evolved routing policies are often very complex and hard for human users to understand and trust. In this paper, we aim to improve the interpretability of the GP-evolved routing policies. To this end, we propose a new Multi-Objective GP (MOGP) to optimise performance and size simultaneously. A major issue here is that size is much easier to optimise than performance, and the search tends to be biased towards small but poor routing policies. To address this issue, we propose a simple yet effective Two-Stage GPHH (TS-GPHH). In the first stage, only performance is optimised. Then, in the second stage, both objectives are considered (using our new MOGP). The experimental results showed that TS-GPHH could obtain much smaller and more interpretable routing policies than the state-of-the-art single-objective GPHH, without deteriorating performance. Compared with traditional MOGP, TS-GPHH can obtain a much better and more widespread Pareto front.
Nikander, P., Autiosalo, J., Paavolainen, S..  2019.  Interledger for the Industrial Internet of Things. 2019 IEEE 17th International Conference on Industrial Informatics (INDIN). 1:908–915.
The upsurge of the Industrial Internet of Things is forcing industrial information systems to enable less hierarchical information flow. The connections between humans, devices, and their digital twins are growing in number, creating a need for new kinds of security and trust solutions. To address these needs, industries are applying distributed ledger technologies, aka blockchains. A significant number of use cases have been studied in the sectors of logistics, energy markets, smart grid security, and food safety, with frequently reported benefits in transparency, reduced costs, and disintermediation. However, distributed ledger technologies have challenges with transaction throughput, latency, and resource requirements, which render the technology unusable in many cases, particularly with constrained Internet of Things devices. To overcome these challenges within the Industrial Internet of Things, we suggest a set of interledger approaches that enable trusted information exchange across different ledgers and constrained devices. With these approaches, the technically most suitable ledger technology can be selected for each use case while simultaneously enjoying the benefits of the most widespread ledger implementations. We present the state of the art of distributed ledger technologies to support the use of interledger approaches in industrial settings.
Tanana, D..  2019.  Decentralized Labor Record System Based on Wavelet Consensus Protocol. 2019 International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON). :0496–0499.
The labor market involves several untrusted actors with contradicting objectives. We propose a blockchain-based system for the labor market, which provides benefits to all participants in terms of confidence, transparency, trust and tracking. Our system would handle employment data through the new Wavelet blockchain platform. It would change the job market by enabling direct agreements between parties without other participants, and by providing new mechanisms for negotiating employment conditions. Furthermore, our system would reduce the need for the existing paper workflow as well as for major internet recruiting companies. The key differences of our work from other blockchain-based labor record systems are the use of the Wavelet blockchain platform, which features metastability, a directed acyclic graph system and a Turing-complete smart contract platform, and the introduction of human interaction inside the smart contract logic, instead of automatic execution of contracts. The results are promising though inconclusive, and we would further explore the potential of blockchain solutions for labor market problems.
Goel, A., Agarwal, A., Vatsa, M., Singh, R., Ratha, N..  2019.  DeepRing: Protecting Deep Neural Network With Blockchain. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2821–2828.
Several computer vision applications such as object detection and face recognition have started to completely rely on deep learning based architectures. These architectures, when paired with appropriate loss functions and optimizers, produce state-of-the-art results in a myriad of problems. On the other hand, with the advent of "blockchain", the cybersecurity industry has developed a new sense of trust which was earlier missing from both the technical and commercial perspectives. Employment of cryptographic hashes as well as symmetric/asymmetric encryption and decryption algorithms ensures security without any human intervention (i.e., a centralized authority). In this research, we present the synergy between the best of both these worlds. We first propose a model which uses the learned parameters of a typical deep neural network and is secured from external adversaries by cryptography and blockchain technology. As the second contribution of the proposed research, a new parameter tampering attack is proposed to properly justify the role of blockchain in machine learning.
Apau, M. N., Sedek, M., Ahmad, R..  2019.  A Theoretical Review: Risk Mitigation Through Trusted Human Framework for Insider Threats. 2019 International Conference on Cybersecurity (ICoCSec). :37–42.
This paper discusses possible efforts to mitigate insider threat risk and aims to inspire organizations to consider identifying insider threats as one of the risks in the company's enterprise risk management activities. The paper suggests the Trusted Human Framework (THF) as an ongoing and cyclic process to detect and deter potential employees who are bound to become fraudsters or perpetrators violating the access and trust given to them. The mitigation control statements were derived from the recommended practices in the “Common Sense Guide to Mitigating Insider Threats” produced by the Software Engineering Institute, Carnegie Mellon University (SEI-CMU). The statements were validated via a survey answered by fifty respondents who work in Malaysia.
Craggs, B., Rashid, A..  2019.  Trust Beyond Computation Alone: Human Aspects of Trust in Blockchain Technologies. 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Society (ICSE-SEIS). :21–30.
Blockchains – with their inherent properties of transaction transparency, distributed consensus, immutability and cryptographic verifiability – are increasingly seen as a means to underpin innovative products and services in a range of sectors from finance through to energy and healthcare. Discussions, too often, make assertions that the trustless nature of blockchain technologies enables and actively promotes their suitability – there being no need to trust third parties or centralised control. Yet humans need to be able to trust systems, and others with whom the system enables transactions. In this paper, we highlight that understanding this need for trust is critical for the development of blockchain-based systems. Through an online study with 125 users of the most well-known of blockchain-based systems – the cryptocurrency Bitcoin – we uncover that human and institutional aspects of trust are pervasive. Our analysis highlights that, when designing future blockchain-based technologies, we ought to consider not only computational trust but also the wider ecosystem, how trust plays a part in users engaging or disengaging with such ecosystems, and where design choices impact upon trust. From this, we distill a set of guidelines for software engineers developing blockchain-based systems for societal applications.
Gao, Y., Sibirtseva, E., Castellano, G., Kragic, D..  2019.  Fast Adaptation with Meta-Reinforcement Learning for Trust Modelling in Human-Robot Interaction. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :305–312.
In socially assistive robotics, an important research area is the development of adaptation techniques and their effect on human-robot interaction. We present a meta-learning based policy gradient method for addressing the problem of adaptation in human-robot interaction, and we also investigate its role as a mechanism for trust modelling. By building an escape-room scenario in mixed reality with a robot, we test our hypothesis that bi-directional trust can be influenced by different adaptation algorithms. We found that our proposed model increased the perceived trustworthiness of the robot and influenced the dynamics of gaining the human's trust. Additionally, participants evaluated that the robot perceived them as more trustworthy during the interactions with the meta-learning based adaptation compared to the previously studied statistical adaptation model.
Losey, D. P., Sadigh, D..  2019.  Robots that Take Advantage of Human Trust. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :7001–7008.
Humans often assume that robots are rational. We believe robots take optimal actions given their objective; hence, when we are uncertain about what the robot's objective is, we interpret the robot's actions as optimal with respect to our estimate of its objective. This approach makes sense when robots straightforwardly optimize their objective, and enables humans to learn what the robot is trying to achieve. However, our insight is that, when robots are aware that humans learn by trusting that the robot's actions are rational, intelligent robots do not act as the human expects; instead, they take advantage of the human's trust, and exploit this trust to more efficiently optimize their own objective. In this paper, we formally model instances of human-robot interaction (HRI) where the human does not know the robot's objective using a two-player game. We formulate different ways in which the robot can model the uncertain human, and compare solutions of this game when the robot has conservative, optimistic, rational, and trusting human models. In an offline linear-quadratic case study and a real-time user study, we show that trusting human models can naturally lead to communicative robot behavior, which influences end-users and increases their involvement.
2020-11-23
Li, W., Zhu, H., Zhou, X., Shimizu, S., Xin, M., Jin, Q..  2018.  A Novel Personalized Recommendation Algorithm Based on Trust Relevancy Degree. 2018 IEEE 16th Intl Conf on Dependable, Autonomic and Secure Computing, 16th Intl Conf on Pervasive Intelligence and Computing, 4th Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech). :418–422.
The rapid development of the Internet and e-commerce has brought a lot of convenience to people's lives. Personalized recommendation technology provides users with services that they may be interested in, according to information such as personal characteristics and historical behaviors. Research on personalized recommendation has been a hot topic in data mining and social networks. In this paper, we focus on resolving the problem of data sparsity based on users' rating data and social network information, introduce a set of new measures for social trust, and propose a novel personalized recommendation algorithm based on matrix factorization combined with trust relevancy. Our experiments were performed on the Dianping datasets. The results show that our algorithm outperforms traditional approaches in terms of accuracy and stability.
Gao, Y., Li, X., Li, J., Gao, Y., Guo, N..  2018.  Graph Mining-based Trust Evaluation Mechanism with Multidimensional Features for Large-scale Heterogeneous Threat Intelligence. 2018 IEEE International Conference on Big Data (Big Data). :1272–1277.
More and more organizations and individuals are starting to pay attention to real-time threat intelligence to protect themselves from complicated, organized, persistent and weaponized cyber attacks. However, most users worry about the trustworthiness of threat intelligence provided by TISPs (Threat Intelligence Sharing Platforms). Trust evaluation has thus become a hot topic in applications of TISPs, yet most current TISPs do not present any practical solution for trust evaluation of threat intelligence itself. In this paper, we propose a graph mining-based trust evaluation mechanism with multidimensional features for large-scale heterogeneous threat intelligence. This mechanism provides a feasible scheme and achieves the task of trust evaluation for TISPs, through the integration of a trust-aware intelligence architecture model, a graph mining-based intelligence feature extraction method, and an automatic and interpretable trust evaluation algorithm. We implement this trust evaluation mechanism in a practical TISP (called GTTI), and evaluate the performance of our system on a real-world dataset from three popular cyber threat intelligence sharing platforms. Experimental results show that our mechanism can achieve 92.83% precision and 93.84% recall in trust evaluation. To the best of our knowledge, this work is the first to evaluate the trust level of heterogeneous threat intelligence automatically from the perspective of graph mining with multidimensional features including source, content, time, and feedback. Our work is beneficial for providing assistance on intelligence quality for the decision-making of human analysts, building a trust-aware threat intelligence sharing platform, and enhancing the availability of heterogeneous threat intelligence to protect organizations against cyberspace attacks effectively.
Haddad, G. El, Aïmeur, E., Hage, H..  2018.  Understanding Trust, Privacy and Financial Fears in Online Payment. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :28–36.
In online payment, customers must transmit their personal and financial information through the website to conclude their purchase and pay for the selected services or items. They may face fears about online transactions raised by their perception of the risk of financial or privacy loss. They may have concerns over the payment decision, with possible negative behaviors such as shopping cart abandonment. Therefore, three major players need to be addressed in online payment: the online seller, the payment page, and the customer's own perception. However, few studies have explored these three players in an online purchasing environment. In this paper, we focus on customer concerns and examine the antecedents of trust and payment security perception, as well as their joint effect on two fundamentally important customer aspects: privacy concerns and financial fear perception. A total of 392 individuals participated in an online survey. The results highlight the importance of the seller website's components (such as ease of use, security signs, and quality information) and their impact on perceived payment security, as well as their impact on customers' trust and financial fear perception. The objective of our study is to design a research model that explains the factors contributing to an online payment decision.
Sutton, A., Samavi, R., Doyle, T. E., Koff, D..  2018.  Digitized Trust in Human-in-the-Loop Health Research. 2018 16th Annual Conference on Privacy, Security and Trust (PST). :1–10.
In this paper, we propose an architecture that utilizes blockchain technology for enabling verifiable trust in collaborative health research environments. The architecture supports the human-in-the-loop paradigm for health research by establishing trust between participants, including human researchers and AI systems, by making all data transformations transparent and verifiable by all participants. We define the trustworthiness of the system and provide an analysis of the architecture in terms of trust requirements. We then evaluate our architecture by analyzing its resiliency to common security threats and through an experimental realization.
Gwak, B., Cho, J., Lee, D., Son, H..  2018.  TARAS: Trust-Aware Role-Based Access Control System in Public Internet-of-Things. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :74–85.
Due to the proliferation of Internet-of-Things (IoT) environments, humans working with heterogeneous smart objects in public IoT environments are more common than ever before. This situation often requires establishing trust relationships between a user and a smart object for their secure interactions, even in the absence of prior interactions. In this work, we are interested in how a smart object can grant an access right to a human user in the absence of any prior knowledge, where some users may be malicious and aim to breach the security goals of the IoT system. To solve this problem, we propose a trust-aware, role-based access control system, namely TARAS, which provides adaptive authorization to users based on dynamic trust estimation. In TARAS, for initial trust establishment, we take a multidisciplinary approach by adopting the concept of I-sharing from psychology. I-sharing follows the rationale that people with similar roles and traits are more likely to respond in a similar way. This theory provides a powerful tool to quickly establish trust between a smart object and a new user with no prior interactions. In addition, TARAS can adaptively filter malicious users out by revoking their access rights based on adaptive, dynamic trust estimation. Our experimental results show that the proposed TARAS mechanism can maximize system integrity in terms of correctly detecting malicious or benign users while maximizing service availability to users, particularly when the system is fine-tuned with the identified optimal trust threshold.