Biblio

Filters: Author is Lamas, D.
Sousa, S., Dias, P., Lamas, D.  2014.  A model for Human-computer trust: A key contribution for leveraging trustful interactions. Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on. :1-6.

This article addresses trust in computer systems as a social phenomenon, one that depends on the type of relationship established through the computer or with other individuals. It begins by contextualizing trust theoretically and then situates it within the field of computer science. It then describes the proposed model, which builds on what one perceives to be trustworthy and is influenced by factors such as the history of participation and users' perceptions. It concludes by positioning the proposed model as a key contribution for leveraging trustful interactions and by proposing that it be used as a complement to address users' trust needs in Human-Computer Interaction and Computer-mediated Interactions.

Ajenaghughrure, I. B., Sousa, S. C. da Costa, Lamas, D.  2020.  Risk and Trust in artificial intelligence technologies: A case study of Autonomous Vehicles. 2020 13th International Conference on Human System Interaction (HSI). :118-123.
This study investigates how risk influences users' trust before and after interactions with technologies such as autonomous vehicles (AVs), and examines the psychophysiological correlates of users' trust through their electrodermal activity responses. Eighteen (18) carefully selected participants embarked on a hypothetical trip by playing an autonomous vehicle driving game. Throughout the driving experience under four risk conditions (very high risk, high risk, low risk, and no risk), based on automotive safety integrity levels (ASIL D, C, B, A), participants exhibited either high or low trust by evaluating the AV to be more or less trustworthy and consequently relying on either the artificial intelligence or the joystick to control the vehicle. The results show a significant increase in users' trust and in their delegation of control to the AV as risk decreased, and vice versa. In addition, there was a significant difference between users' trust before and after interacting with the AV under varying risk conditions. Finally, there was a significant correlation in users' psychophysiological responses (electrodermal activity) when exhibiting higher and lower levels of trust toward the AV. The implications of these results and future research opportunities are discussed.