Biblio

Filters: Keyword is robots
2021-08-02
Fernandez, J., Allen, B., Thulasiraman, P., Bingham, B..  2020.  Performance Study of the Robot Operating System 2 with QoS and Cyber Security Settings. 2020 IEEE International Systems Conference (SysCon). :1—6.
Throughout the Department of Defense, there are ongoing efforts to increase cybersecurity and improve data transfer in unmanned robotic systems (UxS). This paper explores the performance of the Robot Operating System (ROS) 2, which is built with the Data Distribution Service (DDS) standard as a middleware. Based on how quality of service (QoS) parameters are defined in the robotic middleware interface, it is possible to implement strict delivery requirements to different nodes on a dynamic nodal network with multiple unmanned systems connected. Through this research, different scenarios with varying QoS settings were implemented and compared to baseline values to help illustrate the impact of latency and throughput on data flow. DDS security settings were also enabled to help understand the cost of overhead and performance when secured data is compared to plaintext baseline values. Our experiments were performed using a basic ROS 2 network consisting of two nodes (one publisher and one subscriber). Our experiments showed a measurable latency and throughput change between different QoS profiles and security settings. We analyze the trends and tradeoffs associated with varying QoS and security settings. This paper provides performance data points that can be used to help future researchers and developers make informed choices when using ROS 2 for UxS.
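The reliability/latency tradeoff the study measures can be illustrated with a toy one-publisher/one-subscriber model. This is plain Python, not ROS 2 or DDS code; the drop rate and delay constants are invented for illustration only:

```python
import random

def simulate(n_msgs, drop_rate, reliable, link_delay_ms=5.0, retry_delay_ms=20.0, seed=0):
    """Toy lossy link: best-effort drops lost messages, while reliable
    delivery retransmits them at the cost of extra latency."""
    rng = random.Random(seed)
    delivered, total_latency = 0, 0.0
    for _ in range(n_msgs):
        latency = link_delay_ms
        lost = rng.random() < drop_rate
        if lost and not reliable:
            continue                      # best-effort: message is simply gone
        while lost:                       # reliable: retransmit until it arrives
            latency += retry_delay_ms
            lost = rng.random() < drop_rate
        delivered += 1
        total_latency += latency
    avg_latency = total_latency / delivered if delivered else float("inf")
    return delivered / n_msgs, avg_latency

best_effort = simulate(10000, drop_rate=0.1, reliable=False)
reliable = simulate(10000, drop_rate=0.1, reliable=True)
print("best-effort: throughput %.2f, avg latency %.1f ms" % best_effort)
print("reliable:    throughput %.2f, avg latency %.1f ms" % reliable)
```

The reliable profile delivers every message but pays for it in average latency, which is the qualitative tradeoff the paper quantifies for real ROS 2 QoS profiles.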
2021-07-27
Chaudhry, Y. S., Sharma, U., Rana, A..  2020.  Enhancing Security Measures of AI Applications. 2020 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO). :713—716.
Artificial Intelligence (AI), often conflated with machine learning, has been heralded as the future for more than a decade, and its scope for development remains vast as developers work on it constantly. AI is being built into existing systems as well as those yet to arrive, to make them more capable and reliable. As its name states, AI is intelligence: intelligence exhibited by machines that work much as humans do toward the goals they are given. AI can also provide defenses against present cyber threats, vehicle overrides, and similar attacks. However, AI is, in the end, still a body of code, and is therefore prone to corruption and misuse. To prevent misuse of the technology, it must be deployed alongside a sustainable defensive system. A default defense system will exist, but it can be compromised by attackers or by malfunction of the intelligence itself, with potentially disastrous results, especially in robotics. This paper proposes an approach called "Guard Masking" as an alternative for securing Artificial Intelligence.
2021-07-07
Antevski, Kiril, Groshev, Milan, Baldoni, Gabriele, Bernardos, Carlos J..  2020.  DLT federation for Edge robotics. 2020 IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN). :71–76.
The concept of federation in 5G and NFV networks aims to provide orchestration of services across multiple administrative domains. Edge robotics, as a field of robotics, implements robot control on the network edge by relying on low-latency, reliable access connectivity. In this paper, we propose a solution that enables an Edge robotics service to expand its service footprint or access coverage over multiple administrative domains. We propose the application of distributed ledger technologies (DLTs) to the federation procedures to enable private, secure, and trustworthy interactions between undisclosed administrative domains. The solution is applied to a real-case Edge robotics experimental scenario. The results show that it takes around 19 seconds to deploy and federate an Edge robotics service in an external/anonymous domain without any service downtime.
2021-06-28
Hannum, Corey, Li, Rui, Wang, Weitian.  2020.  Trust or Not?: A Computational Robot-Trusting-Human Model for Human-Robot Collaborative Tasks. 2020 IEEE International Conference on Big Data (Big Data). :5689–5691.
The trust of a robot in its human partner is a significant issue in human-robot interaction, which is seldom explored in the field of robotics. This study addresses a critical issue of robots' trust in humans during the human-robot collaboration process based on the data of human motions, past interactions of the human-robot pair, and the human's current performance in the co-carry task. The trust level is evaluated dynamically throughout the collaborative task that allows the trust level to change if the human performs false positive actions, which can help the robot avoid making unpredictable movements and causing injury to the human. Experimental results showed that the robot effectively assisted the human in collaborative tasks through the proposed computational trust model.
2021-06-01
Averta, Giuseppe, Hogan, Neville.  2020.  Enhancing Robot-Environment Physical Interaction via Optimal Impedance Profiles. 2020 8th IEEE RAS/EMBS International Conference for Biomedical Robotics and Biomechatronics (BioRob). :973–980.
Physical interaction of robots with their environment is a challenging problem because of the exchanged forces. Hybrid position/force control schemes often exhibit problems during the contact phase, whereas impedance control appears to be more simple and reliable, especially when impedance is shaped to be energetically passive. Even if recent technologies enable shaping the impedance of a robot, how best to plan impedance parameters for task execution remains an open question. In this paper we present an optimization-based approach to plan not only the robot motion but also its desired end-effector mechanical impedance. We show how our methodology is able to take into account the transition from free motion to a contact condition, typical of physical interaction tasks. Results are presented for planar and three-dimensional open-chain manipulator arms. The compositionality of mechanical impedance is exploited to deal with kinematic redundancy and multi-arm manipulation.
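The kind of impedance behavior the abstract discusses can be sketched in one dimension with the classic law F = K*(x_d - x) + B*(xd_d - xd), driven through a free-motion-to-contact transition. All gains, masses, and the wall stiffness below are invented for illustration; this is not the authors' optimization-based planner:

```python
def impedance_force(k, b, x_des, xd_des, x, xd):
    """Classic impedance control law: F = K(x_d - x) + B(x'_d - x')."""
    return k * (x_des - x) + b * (xd_des - xd)

def simulate_contact(k, b, m=1.0, dt=1e-3, steps=2000, wall=0.8, k_wall=500.0):
    """1-D point mass commanded toward x_d = 1.0; a stiff wall at x = 0.8
    pushes back once penetrated (free motion -> contact transition)."""
    x, xd = 0.0, 0.0
    for _ in range(steps):
        f = impedance_force(k, b, x_des=1.0, xd_des=0.0, x=x, xd=xd)
        if x > wall:                      # environment reaction force
            f -= k_wall * (x - wall)
        xd += (f / m) * dt                # semi-implicit Euler step
        x += xd * dt
    return x

x_soft = simulate_contact(k=20.0, b=10.0)    # compliant end-effector
x_stiff = simulate_contact(k=200.0, b=30.0)  # stiff end-effector
```

The stiff profile settles deeper into the wall (and thus exerts a larger contact force) than the compliant one, which is the basic tradeoff an optimal impedance profile must negotiate across the transition.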
2021-03-01
Nasir, J., Norman, U., Bruno, B., Dillenbourg, P..  2020.  When Positive Perception of the Robot Has No Effect on Learning. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :313–320.
Humanoid robots, with a focus on personalised social behaviours, are increasingly being deployed in educational settings to support learning. However, crafting pedagogical HRI designs and robot interventions that have a real, positive impact on participants' learning, as well as effectively measuring such impact, is still an open challenge. As a first effort in tackling the issue, in this paper we propose a novel robot-mediated, collaborative problem-solving activity for school children, called JUSThink, aiming at improving their computational thinking skills. JUSThink will serve as a baseline and reference for investigating how the robot's behaviour can influence the engagement of the children with the activity, as well as their collaboration and mutual understanding while working on it. To this end, this first iteration aims at investigating (i) participants' engagement with the activity (Intrinsic Motivation Inventory, IMI), their mutual understanding (IMI-like) and perception of the robot (Godspeed Questionnaire); (ii) participants' performance during the activity, using several performance and learning metrics. We carried out an extensive user study in two international schools in Switzerland, in which around 100 children participated in pairs in hour-long interactions with the activity. Surprisingly, we observe that while a team's performance significantly affects how team members evaluate their competence, mutual understanding, and task engagement, it does not affect their perception of the robot and its helpfulness, which highlights the need for baseline studies and multi-dimensional evaluation metrics when assessing the impact of robots in educational activities.
2021-02-10
Lei, L., Chen, M., He, C., Li, D..  2020.  XSS Detection Technology Based on LSTM-Attention. 2020 5th International Conference on Control, Robotics and Cybernetics (CRC). :175—180.
Cross-site scripting (XSS) is one of the main threats to Web applications and can cause great harm, so effectively detecting and defending against XSS attacks has become more and more important. Due to the malicious obfuscation of attack code and its gradually increasing volume, traditional XSS detection methods have defects such as poor recognition of malicious attack code, inadequate feature extraction, and low efficiency. Therefore, we present a novel approach to detect XSS attacks based on an attention mechanism over a Long Short-Term Memory (LSTM) recurrent neural network. First, the data are preprocessed: we use decoding techniques to restore the XSS code to its unencoded state to improve its readability, then use word2vec to extract XSS payload features and map them to feature vectors. We then improve the LSTM model by adding an attention mechanism, and the resulting LSTM-Attention detection model is trained and tested on the data. We use the LSTM model's ability to extract context-related features for deep learning; the added attention mechanism lets the model extract more effective features. Finally, a classifier classifies the abstract features. Experimental results show that the proposed LSTM-Attention XSS detection model achieves a precision rate of 99.3% and a recall rate of 98.2% on the collected dataset. Compared with traditional machine learning methods and other deep learning methods, this method identifies XSS attacks more effectively.
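The attention step can be sketched in a few lines of NumPy: each per-token hidden state receives a relevance score, the scores are softmax-normalized into attention weights, and the weighted sum becomes the feature vector passed to the classifier. Shapes, the scoring vector, and the data below are invented; the paper's actual model is an LSTM trained end to end:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool(H, w):
    """Additive-style attention pooling over a sequence of hidden states.

    H : (T, d) matrix of per-token states (e.g. LSTM outputs over an
        XSS payload); w : (d,) learned scoring vector.
    Returns the attention weights and the weighted summary vector."""
    scores = H @ w           # one relevance score per time step
    alpha = softmax(scores)  # attention weights, sum to 1
    context = alpha @ H      # (d,) weighted sum of hidden states
    return alpha, context

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))  # 6 time steps, 4-dim hidden states
w = rng.normal(size=4)
alpha, context = attention_pool(H, w)
```

In the trained model, tokens characteristic of XSS payloads (e.g. `<script>` fragments) would receive the largest weights, so the summary vector emphasizes the most attack-relevant features.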
2021-02-03
Rabby, M. K. Monir, Khan, M. Altaf, Karimoddini, A., Jiang, S. X..  2020.  Modeling of Trust Within a Human-Robot Collaboration Framework. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4267—4272.

In this paper, a time-driven performance-aware mathematical model for trust in the robot is proposed for a Human-Robot Collaboration (HRC) framework. The proposed trust model is based on both the human operator and the robot performances. The human operator’s performance is modeled based on both the physical and cognitive performances, while the robot performance is modeled over its unpredictable, predictable, dependable, and faithful operation regions. The model is validated via different simulation scenarios. The simulation results show that the trust in the robot in the HRC framework is governed by robot performance and human operator’s performance and can be improved by enhancing the robot performance.
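The paper's exact equations are not reproduced in the abstract, but a generic time-driven, performance-aware trust update of the kind described can be sketched as exponential smoothing toward a weighted blend of the robot and human performance signals. The weights, smoothing factor, and performance values below are illustrative only, not the authors' model:

```python
def update_trust(trust, robot_perf, human_perf, alpha=0.6, w_r=0.7, w_h=0.3):
    """Blend the previous trust level with a weighted combination of the
    current robot and human-operator performance, clamped to [0, 1]."""
    target = w_r * robot_perf + w_h * human_perf
    trust = (1 - alpha) * trust + alpha * target
    return min(1.0, max(0.0, trust))

trust = 0.5                  # neutral initial trust
history = []
# (robot_perf, human_perf) per time step; the dip models a robot fault
for robot_perf, human_perf in [(0.9, 0.8), (0.95, 0.8), (0.3, 0.8), (0.9, 0.9)]:
    trust = update_trust(trust, robot_perf, human_perf)
    history.append(round(trust, 3))
```

Trust rises while both partners perform well, drops when robot performance degrades, and recovers afterward, matching the qualitative behavior the simulations in the paper report.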

Xu, J., Howard, A..  2020.  How much do you Trust your Self-Driving Car? Exploring Human-Robot Trust in High-Risk Scenarios. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4273—4280.

Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance when discussing the issue of trust due to the high-risk nature of erroneous decisions being made. The present study aims to investigate decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed and presented. An online and an in-person experiment were conducted to examine the impacts of mistakes made by the self-driving AI agent on participants’ decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions in a time-limited situation. Behavior analysis indicated a similar relative trend in the decisions across these two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants’ trust. In addition, similar overall experience and feelings across the two experimental conditions were reported. The findings in this study add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research work in the field of human-robot trust.

Lyons, J. B., Nam, C. S., Jessup, S. A., Vo, T. Q., Wynne, K. T..  2020.  The Role of Individual Differences as Predictors of Trust in Autonomous Security Robots. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1—5.

This research used an Autonomous Security Robot (ASR) scenario to examine public reactions to a robot that possesses the authority and capability to inflict harm on a human. Individual differences in terms of personality and Perfect Automation Schema (PAS) were examined as predictors of trust in the ASR. Participants (N=316) from Amazon Mechanical Turk (MTurk) rated their trust of the ASR and desire to use ASRs in public and military contexts following a 2-minute video depicting the robot interacting with three research confederates. The video showed the robot using force against one of the three confederates with a non-lethal device. Results demonstrated that individual differences factors were related to trust and desired use of the ASR. Agreeableness and both facets of the PAS (high expectations and all-or-none beliefs) demonstrated unique associations with trust using multiple regression techniques. Agreeableness, intellect, and high expectations were uniquely related to desired use for both public and military domains. This study showed that individual differences influence trust and one's desired use of ASRs, demonstrating that societal reactions to ASRs may be subject to variation among individuals.

2020-12-17
Lee, J., Chen, H., Young, J., Kim, H..  2020.  RISC-V FPGA Platform Toward ROS-Based Robotics Application. 2020 30th International Conference on Field-Programmable Logic and Applications (FPL). :370—370.

RISC-V is a free and open standard instruction set architecture that follows the reduced-instruction-set-computer principle. Because of its openness and scalability, RISC-V has been adopted not only for embedded CPUs in the mobile and IoT markets, but also for heavy-workload CPUs in the data center and supercomputing fields. Robotics is another good application of RISC-V, because security and reliability have become crucial issues for robotic systems; these problems could be solved by enthusiastic open-source community members, as they have demonstrated with open-source operating systems. However, running RISC-V on a local FPGA has become harder than before, because the RISC-V Foundation is now focusing on cloud-based FPGA environments. We have found that recently released OSes and toolchains for RISC-V do not work well with previous CPU images for local FPGAs. In this paper, we design a local FPGA platform for a RISC-V processor and run a robotics application on the mainstream Robot Operating System on top of the RISC-V processor. This platform allows us to explore the architecture space of RISC-V CPUs for robotics applications, and to gain insight into RISC-V CPU architecture for optimal performance and a secure system.

Gao, X., Fu, X..  2020.  Miniature Water Surface Garbage Cleaning Robot. 2020 International Conference on Computer Engineering and Application (ICCEA). :806—810.

To address the problem of garbage cleaning in small water areas, an intelligent miniature water-surface garbage cleaning robot with unmanned operation and convenient control is designed. The design is based on an STC12C5A60S2 as the main controller, which coordinates the power module, transmission module, and cleaning module to collect and transport garbage; intelligent remote control of the miniature water-surface garbage cleaning robot is realized through a WiFi module. A prototype was then developed and tested to verify the soundness of the design. Compared with traditional manually piloted water-surface cleaning devices, the designed robot achieves unmanned, intelligent control, saving human resources and reducing labor intensity, and the system operates securely and stably, which gives it practical value.

charan, S. S., karuppaiah, D..  2020.  Operating System Process Using Message Passing Concept in Military. 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). :1—4.

In the Robot Operating System (ROS), inter-process communication is the mechanism, provided by the operating system, that enables processes to communicate with one another. The message-passing model allows multiple processes to read and write data to a message queue without being directly connected to each other, with messages passing between robots. ROS is designed as a loosely coupled system in which a process is called a node and each node is responsible for one task. In the military application considered here, robots act as soldiers to protect the country: using message passing, a robot soldier raises an alert, and the soldiers are warned and begin engaging the enemy.
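The loosely coupled message-passing idea can be sketched with a plain Python queue standing in for a ROS topic (the topic name and message are invented): the publisher and subscriber nodes share only the named queue, never a direct reference to each other.

```python
import queue
import threading

# A named queue plays the role of a ROS topic.
topics = {"/alerts": queue.Queue()}

def publish(topic, msg):
    """Publisher node: drop a message onto the topic and move on."""
    topics[topic].put(msg)

def subscriber(topic, out):
    """Subscriber node: block until a message arrives, then react."""
    msg = topics[topic].get()
    out.append(f"soldier alerted: {msg}")

received = []
t = threading.Thread(target=subscriber, args=("/alerts", received))
t.start()
publish("/alerts", "enemy sighted")  # the two nodes never reference each other
t.join()
```

Because each node only knows the topic name, either side can be replaced or multiplied without changing the other, which is the decoupling property the abstract attributes to ROS.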

Zong, Y., Guo, Y., Chen, X..  2019.  Policy-Based Access Control for Robotic Applications. 2019 IEEE International Conference on Service-Oriented System Engineering (SOSE). :368—3685.

With the wide application of modern robots, more concerns have been raised about the security and privacy of robotic systems and applications. Although the Robot Operating System (ROS) is commonly used on different robots, there has been little work considering the security aspects of ROS. As ROS does not employ even a basic permission control mechanism, applications can access any resource without limitation, which could result in equipment damage, harm to humans, and privacy leakage. In this paper we propose an access control mechanism for ROS based on an extended policy-based access control (PBAC) model. Specifically, we extend ROS with an additional node dedicated to access control, which provides user identity and permission management services. The proposed mechanism also allows the administrator to revoke a permission dynamically. We implemented the proposed method in ROS and demonstrated its applicability and performance through several case studies.
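A minimal sketch of the kind of policy decision point described, including dynamic revocation, might look like the following. The class, method names, and policy format are invented for illustration, not the authors' implementation:

```python
class PolicyNode:
    """Toy policy-based access control node: applications must pass a
    permission check before touching a resource, and the administrator
    can revoke a permission at any time."""

    def __init__(self):
        # policy: user -> set of (resource, action) permissions
        self.policy = {}

    def grant(self, user, resource, action):
        self.policy.setdefault(user, set()).add((resource, action))

    def revoke(self, user, resource, action):
        """Dynamic revocation: takes effect on the next check."""
        self.policy.get(user, set()).discard((resource, action))

    def check(self, user, resource, action):
        """Default-deny: anything not explicitly granted is refused."""
        return (resource, action) in self.policy.get(user, set())

pdp = PolicyNode()
pdp.grant("app1", "/camera", "subscribe")
allowed_before = pdp.check("app1", "/camera", "subscribe")
pdp.revoke("app1", "/camera", "subscribe")
allowed_after = pdp.check("app1", "/camera", "subscribe")
```

The default-deny check is the key contrast with stock ROS 1, where any node may access any topic or service without restriction.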

Rivera, S., Lagraa, S., State, R..  2019.  ROSploit: Cybersecurity Tool for ROS. 2019 Third IEEE International Conference on Robotic Computing (IRC). :415—416.

Robot Operating System (ROS) security research is currently in a preliminary state, with limited research on tools or models. Considering the trend toward digitization of robotic systems, this lack of foundational knowledge increases the potential threat posed by security vulnerabilities in ROS. In this article, we present ROSploit, a new tool to assist further security research in ROS. ROSploit is a modular, two-pronged offensive tool covering both reconnaissance and exploitation of ROS systems, designed to assist researchers in testing exploits for ROS.

Basan, E., Gritsynin, A., Avdeenko, T..  2019.  Framework for Analyzing the Security of Robot Control Systems. 2019 International Conference on Information Systems and Computer Science (INCISCOS). :354—360.

The purpose of this work is to analyze the security model of a robotized system, to analyze approaches to assessing the security of such a system, and to develop our own framework. The solution to this problem involves the use of the developed framework. The analysis is conducted on a multi-robot system. The proposed structure assumes that the robotic system is divided into levels, after which each level must be protected directly. Each level has its own characteristics and weaknesses that must be considered when developing a security system for a robotic system.

Sandoval, S., Thulasiraman, P..  2019.  Cyber Security Assessment of the Robot Operating System 2 for Aerial Networks. 2019 IEEE International Systems Conference (SysCon). :1—8.

The Robot Operating System (ROS) is a widely adopted standard robotic middleware. However, its preliminary design is devoid of any network security features. Military-grade unmanned systems must be guarded against network threats. ROS 2 is built upon the Data Distribution Service (DDS) standard and is designed to provide solutions to identified ROS 1 security vulnerabilities by incorporating authentication, encryption, and process profile features, which rely on public key infrastructure. The Department of Defense is looking to use ROS 2 for its military-centric robotics platform. This paper seeks to demonstrate that ROS 2 and its DDS security architecture can serve as a functional platform for use in military-grade unmanned systems, particularly in unmanned Naval aerial swarms. In this paper, we focus on the viability of ROS 2 to safeguard communications between swarms and a ground control station (GCS). We test ROS 2's ability to mitigate and withstand certain cyber threats, specifically that of rogue nodes injecting unauthorized data and accessing services that will disable parts of the UAV swarm. We use the Gazebo robotics simulator to target individual UAVs to ascertain the effectiveness of our attack vectors under specific conditions. We demonstrate the effectiveness of ROS 2 in mitigating the chosen attack vectors, but observe a measurable operational delay within our simulations.

2020-12-15
Xu, Z., Zhu, Q..  2018.  Cross-Layer Secure and Resilient Control of Delay-Sensitive Networked Robot Operating Systems. 2018 IEEE Conference on Control Technology and Applications (CCTA). :1712—1717.

A Robot Operating System (ROS) plays a significant role in organizing industrial robots for manufacturing. With an increasing number of robots, operators integrate a ROS with networked communication to share data. This cyber-physical nature exposes the ROS to cyber attacks. To this end, this paper proposes a cross-layer approach to achieve secure and resilient control of a ROS. In the physical layer, to account for the delay caused by the security mechanism, we design a time-delay controller for the ROS agent. In the cyber layer, we define cyber states and use a Markov Decision Process (MDP) to evaluate the tradeoffs between physical and security performance. Due to the uncertainty of the cyber state, we extend the MDP to a Partially Observable Markov Decision Process (POMDP). We propose a threshold solution based on our theoretical results. Finally, we present numerical examples to evaluate the performance of the secure and resilient mechanism.
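A threshold policy over a partially observed cyber state can be sketched as a Bayes belief update followed by a cutoff rule. All probabilities and the threshold below are illustrative, not the paper's parameters:

```python
def belief_update(b, obs, p_alert_given_attack=0.8, p_alert_given_safe=0.1):
    """Bayes update of the belief b = P(cyber state = compromised)
    given a binary alert observation (likelihoods are illustrative)."""
    if obs:   # alert observed
        num = p_alert_given_attack * b
        den = num + p_alert_given_safe * (1 - b)
    else:     # no alert
        num = (1 - p_alert_given_attack) * b
        den = num + (1 - p_alert_given_safe) * (1 - b)
    return num / den

def threshold_policy(b, threshold=0.5):
    """Engage the costly, delay-inducing security mechanism only when
    the belief of compromise crosses the threshold."""
    return "secure_mode" if b >= threshold else "normal_mode"

b = 0.05                      # prior belief of compromise
actions = []
for obs in [True, True, False, True]:   # stream of alert observations
    b = belief_update(b, obs)
    actions.append(threshold_policy(b))
```

The controller switches between modes as evidence accumulates and decays, which is the qualitative behavior of the threshold solution the paper derives for the POMDP.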

2020-12-01
Attia, M., Hossny, M., Nahavandi, S., Dalvand, M., Asadi, H..  2018.  Towards Trusted Autonomous Surgical Robots. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4083—4088.

Throughout the last few decades, a breakthrough has taken place in the field of autonomous robotics. Robots have been introduced to perform dangerous, dirty, difficult, and dull tasks in service of the community. They have also been used for health-care-related tasks, such as enhancing the surgical skills of surgeons and enabling surgeries in remote areas. This can help perform operations in remote areas efficiently and in a timely manner, with or without human intervention. One of the main advantages is that robots are not affected by human problems such as fatigue or momentary lapses of attention, so they can perform repeated and tedious operations. In this paper, we propose a framework for establishing trust in autonomous medical robots based on mutual understanding and transparency in decision making.

Nam, C., Li, H., Li, S., Lewis, M., Sycara, K..  2018.  Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :825—830.

In this paper, we study trust-related human factors in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We compare three LOAs: manual, mixed-initiative (MI), and fully autonomous LOA. In the manual LOA, the human operator chooses headings for a flocking swarm, issuing new headings as needed. In the fully autonomous LOA, the swarm is redirected automatically by changing headings using a search algorithm. In the mixed-initiative LOA, if performance declines, control is switched from human to swarm or swarm to human. The result of this work extends the current knowledge on human factors in swarm supervisory control. Specifically, the finding that the relationship between trust and performance improved for passively monitoring operators (i.e., improved situation awareness in higher LOAs) is particularly novel in its contradiction of earlier work. We also discover that operators switch the degree of autonomy when their trust in the swarm system is low. Last, our analysis shows that operator's preference for a lower LOA is confirmed for a new domain of swarm control.

Xu, J., Bryant, D. G., Howard, A..  2018.  Would You Trust a Robot Therapist? Validating the Equivalency of Trust in Human-Robot Healthcare Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :442—447.

With the recent advances in computing, artificial intelligence (AI) is quickly becoming a key component in the future of advanced applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor for interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a study related to healthcare in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. A total of twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate trust in a therapy intervention coupled with a robotic agent is comparable to trust in an intervention coupled with a human agent. Results also show a trend that the agent condition has a medium-sized effect on trust. In addition, we found that participants in the robot therapist condition are 3.5 times more likely to have trust involved in their decision than participants in the human therapist condition. These results indicate that the deployment of interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.

Xie, Y., Bodala, I. P., Ong, D. C., Hsu, D., Soh, H..  2019.  Robot Capability and Intention in Trust-Based Decisions Across Tasks. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :39—47.

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots - inferred capability and intention - and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.

Sebo, S. S., Krishnamurthi, P., Scassellati, B..  2019.  “I Don't Believe You”: Investigating the Effects of Robot Trust Violation and Repair. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :57—65.

When a robot breaks a person's trust by making a mistake or failing, continued interaction will depend heavily on how the robot repairs the trust that was broken. Prior work in psychology has demonstrated that both the trust violation framing and the trust repair strategy influence how effectively trust can be restored. We investigate trust repair between a human and a robot in the context of a competitive game, where a robot tries to restore a human's trust after a broken promise, using either a competence or integrity trust violation framing and either an apology or denial trust repair strategy. Results from a 2×2 between-subjects study ( n=82) show that participants interacting with a robot employing the integrity trust violation framing and the denial trust repair strategy are significantly more likely to exhibit behavioral retaliation toward the robot. In the Dyadic Trust Scale survey, an interaction between trust violation framing and trust repair strategy was observed. Our results demonstrate the importance of considering both trust violation framing and trust repair strategy choice when designing robots to repair trust. We also discuss the influence of human-to-robot promises and ethical considerations when framing and repairing trust between a human and robot.

Robinette, P., Novitzky, M., Fitzgerald, C., Benjamin, M. R., Schmidt, H..  2019.  Exploring Human-Robot Trust During Teaming in a Real-World Testbed. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :592—593.

Project Aquaticus is a human-robot teaming competition on the water involving autonomous surface vehicles and human operated motorized kayaks. Teams composed of both humans and robots share the same physical environment to play capture the flag. In this paper, we present results from seven competitions of our half-court (one participant versus one robot) game. We found that participants indicated more trust in more aggressive behaviors from robots.

Ullman, D., Malle, B. F..  2019.  Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :618—619.

Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional-Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.