
Found 172 results

Filters: Keyword is Trust
MacDermott, Áine, Carr, John, Shi, Qi, Baharon, Mohd Rizuan, Lee, Gyu Myoung.  2020.  Privacy Preserving Issues in the Dynamic Internet of Things (IoT). 2020 International Symposium on Networks, Computers and Communications (ISNCC). :1–6.
Convergence of critical infrastructure and data, including government and enterprise systems, into the dynamic Internet of Things (IoT) environment and future digital ecosystems presents significant challenges for privacy and identity in these interconnected domains. An increasing variety of devices and technologies is being introduced, rendering existing security tools inadequate for the dynamic scale and varying actors. The IoT is increasingly data driven, with user sovereignty essential, and involves actors in varying roles, including user/customer, device, manufacturer, and third-party processor. Flexible frameworks and diverse security requirements are therefore needed in such sensitive environments to secure identities, authenticate IoT devices and their data, and protect privacy and integrity. In this paper we review the principles, techniques and algorithms that can be adapted from other distributed computing paradigms. This review is applied to the development of a collaborative decision-making framework for heterogeneous entities in a distributed domain, while simultaneously highlighting privacy-preserving issues in the IoT. In addition, we present our trust-based privacy-preserving schema using the Dempster-Shafer theory of evidence. While still in its infancy, this approach could help maintain a level of privacy and non-repudiation in collaborative environments such as the IoT.
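The Dempster-Shafer combination at the core of such a schema can be sketched in a few lines. The mass assignments below are invented for illustration, not taken from the paper; this is a minimal sketch of Dempster's rule of combination:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dict mapping frozenset(hypotheses) -> mass (masses sum to 1).
    Returns the fused mass function; raises on total conflict."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    norm = 1.0 - conflict
    return {h: w / norm for h, w in fused.items()}

# Two hypothetical sensors report on whether a peer is Trusted (T) or Malicious (M).
T, M = frozenset("T"), frozenset("M")
both = T | M  # ignorance: mass assigned to the whole frame of discernment
m1 = {T: 0.6, M: 0.1, both: 0.3}
m2 = {T: 0.5, M: 0.2, both: 0.3}
fused = combine(m1, m2)
```

Fusing the two sources concentrates belief on the hypothesis they agree on (here, that the peer is trusted) while discounting the conflicting mass.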
Sharma, Prince, Shukla, Shailendra, Vasudeva, Amol.  2020.  Trust-based Incentive for Mobile Offloaders in Opportunistic Networks. 2020 International Conference on Smart Electronics and Communication (ICOSEC). :872—877.
Mobile data offloading over opportunistic networks has recently gained significance as a way to meet growing mobile data demand. Offloaders must be properly incentivized to encourage more users to act as helpers in such networks, so the design of an appropriate incentive mechanism matters greatly in these scenarios. The limitation of existing incentive mechanisms is that they are only partially implemented, and most derive incentives through third-party intervention. Moreover, none of these works considers trust as an essential factor in incentive distribution: the few that contribute a trust analysis limit it to the offloading decision, leaving the incentive independent of trust. We investigate whether trust can be tied to Nash-equilibrium-based incentive evaluation. Our analysis shows that trust-based incentive distribution encourages more than 50% of offloaders to act positively and contribute successfully to efficient mobile data offloading. We compare the performance of our algorithm with a salary-bonus scheme from the literature and obtain optimum incentives beyond a 20% dependence on the trust-based output.
Fatehi, Nina, Shahhoseini, HadiShahriar.  2020.  A Hybrid Algorithm for Evaluating Trust in Online Social Networks. 2020 10th International Conference on Computer and Knowledge Engineering (ICCKE). :158—162.
The growing popularity of Online Social Networks (OSNs), thanks to the variety of services they provide, is inevitable. This is why security, as a way to protect users' private data from abuse by unauthorized people, has a vital role to play in OSNs. Trust evaluation is a security approach that has been used since the advent of OSNs. Graph-based approaches are among the most popular methods for trust evaluation; however, graph-based models must impose limitations on the search for trusted paths, which reduces trust accuracy. In this investigation, a learning-based model is proposed that can find the reliable users of any target user without such limitations. Experimental results show a 12% improvement in trust accuracy compared to models based on the graph-based approach.
Hannum, Corey, Li, Rui, Wang, Weitian.  2020.  Trust or Not?: A Computational Robot-Trusting-Human Model for Human-Robot Collaborative Tasks. 2020 IEEE International Conference on Big Data (Big Data). :5689–5691.
A robot's trust in its human partner is a significant issue in human-robot interaction, yet it is seldom explored in robotics. This study addresses the critical issue of a robot's trust in its human partner during the human-robot collaboration process, based on data from human motions, past interactions of the human-robot pair, and the human's current performance in a co-carry task. The trust level is evaluated dynamically throughout the collaborative task, allowing it to change if the human performs false-positive actions, which can help the robot avoid making unpredictable movements and causing injury to the human. Experimental results showed that the robot effectively assisted the human in collaborative tasks through the proposed computational trust model.
Zhu, Luqi, Wang, Jin, Shi, Lianmin, Zhou, Jingya, Lu, Kejie, Wang, Jianping.  2020.  Secure Coded Matrix Multiplication Against Cooperative Attack in Edge Computing. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :547–556.
In recent years, the computation security of edge computing has been a major concern, since edge devices are distributed at the edge of the network, are less trustworthy than cloud servers, and have limited storage, computation, and communication resources. Recently, coded computing has been proposed to protect the confidentiality of computing data under an edge device's independent attack and to minimize the total cost (resource consumption) of the edge system. In this paper, for the cooperative attack, we design an efficient scheme that ensures information-theoretic security (ITS) of the user's data and further reduces the total cost of the edge system. Specifically, we take matrix multiplication as an example, an important module in many application operations. We theoretically analyze the necessary and sufficient conditions for a feasible scheme to exist, and prove the security and decodability of the proposed scheme. We also demonstrate its effectiveness through extensive simulation experiments: compared with existing schemes, the proposed scheme further reduces the total cost of the edge system. The experiments also show a trade-off between storage and communication.
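The paper's own construction is not reproduced in the abstract, but the general technique behind information-theoretically secure coded matrix multiplication (masking the input matrix with a random matrix polynomial, Shamir-style, so that the product is recovered by interpolation while any t colluding workers learn nothing) can be sketched as follows. The prime modulus, evaluation points, and matrices are illustrative assumptions, not the paper's parameters:

```python
import random

P = 2_147_483_647  # prime modulus; all arithmetic is over GF(P)

def rand_matrix(rows, cols):
    return [[random.randrange(P) for _ in range(cols)] for _ in range(rows)]

def mat_add(X, Y, scale=1):
    return [[(a + scale * b) % P for a, b in zip(rx, ry)]
            for rx, ry in zip(X, Y)]

def mat_mul(X, Y):
    cols = list(zip(*Y))
    return [[sum(a * b for a, b in zip(row, col)) % P for col in cols]
            for row in X]

def make_shares(A, xs, t):
    """Shamir-style masking: share i is f(x_i) for the matrix polynomial
    f(x) = A + R1*x + ... + Rt*x^t with uniformly random R_k, so any t
    colluding workers jointly learn nothing about A."""
    Rs = [rand_matrix(len(A), len(A[0])) for _ in range(t)]
    shares = []
    for x in xs:
        S = [row[:] for row in A]
        for k, R in enumerate(Rs, start=1):
            S = mat_add(S, R, scale=pow(x, k, P))
        shares.append(S)
    return shares

def interpolate_at_zero(xs, mats):
    """Lagrange-interpolate the matrix polynomial at x = 0 (mod P)."""
    rows, cols = len(mats[0]), len(mats[0][0])
    out = [[0] * cols for _ in range(rows)]
    for i, x_i in enumerate(xs):
        num = den = 1
        for j, x_j in enumerate(xs):
            if j != i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        coef = num * pow(den, P - 2, P) % P  # Fermat inverse of den
        for r in range(rows):
            for c in range(cols):
                out[r][c] = (out[r][c] + coef * mats[i][r][c]) % P
    return out

# Degree t = 2 masking tolerates any 2 colluding workers and needs
# t + 1 = 3 worker results to decode.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
xs = [1, 2, 3]
worker_inputs = make_shares(A, xs, t=2)
worker_outputs = [mat_mul(S, B) for S in worker_inputs]  # computed remotely
C = interpolate_at_zero(xs, worker_outputs)  # recovers A @ B (mod P)
```

Since f(x)B is a degree-t matrix polynomial whose constant term is AB, interpolating the worker results at x = 0 recovers the true product without any worker ever seeing A.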
Hashemi, Seyed Mahmood.  2020.  Intelligent Approaches for the Trust Assessment. 2020 International Conference on Computation, Automation and Knowledge Management (ICCAKM). :348–352.
Suitable approaches to trust assessment are needed to cover problems arising in everyday life. In information communication, trust assessment is related to the quality of service (QoS): the server sends data packets to the client(s) according to the trust assessment. The motivation of this paper is to design a proper approach to the trust assessment process. We propose two methods, one based on fuzzy systems and one on a genetic algorithm, and compare their results to guide the selection of a proper approach.
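As a hedged illustration of a fuzzy-system approach to trust assessment (not the paper's actual rule base; the QoS metrics, membership functions, and rule weights below are invented):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_trust(packet_delivery, latency_ms):
    """Toy Mamdani-style trust score in [0, 1] from two QoS metrics.
    Rule strengths weight the centers of three output sets."""
    delivery_good = tri(packet_delivery, 0.5, 1.0, 1.5)   # peaks at 100%
    delivery_poor = tri(packet_delivery, -0.5, 0.0, 0.7)
    latency_low = tri(latency_ms, -100, 0, 150)
    latency_high = tri(latency_ms, 80, 300, 600)

    # Rules: good delivery AND low latency -> high trust, and so on.
    high = min(delivery_good, latency_low)
    medium = max(min(delivery_good, latency_high),
                 min(delivery_poor, latency_low))
    low = min(delivery_poor, latency_high)

    # Weighted-average defuzzification with output centers 0.9 / 0.5 / 0.1
    num = 0.9 * high + 0.5 * medium + 0.1 * low
    den = high + medium + low
    return num / den if den else 0.5
```

A server could then rate-limit or prioritize clients by thresholding this score; the genetic-algorithm alternative in the paper would instead search for the membership-function parameters.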
Naderi, Pooria Taghizadeh, Taghiyareh, Fattaneh.  2020.  LookLike: Similarity-based Trust Prediction in Weighted Sign Networks. 2020 6th International Conference on Web Research (ICWR). :294–298.
Trust networks are widely considered one of the most important aspects of social networks, with many applications in recommender systems and opinion formation. Few researchers have addressed the problem of trust/distrust prediction, and it has not yet been established whether similarity measures can perform trust prediction. The present paper aims to validate that similar users have related trust relationships. To predict the trust relation between two users, the LookLike algorithm is introduced; its results are then used as new features for supervised classifiers that predict the trust/distrust label. We examined our claim with a list of similarity measures on four real-world trust network datasets. The results demonstrate a strong correlation between users' similarity and their opinions in trust networks. Given the tight relation between trust prediction and truth discovery, we believe our similarity-based algorithm could be a promising solution in both of these challenging domains.
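The abstract does not spell out the LookLike algorithm, but the underlying idea, that similar users hold related trust opinions, can be sketched as a similarity-weighted vote. The signed rating data and the choice of cosine similarity below are illustrative assumptions:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity of two sparse rating dicts (node -> rating)."""
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    du = sqrt(sum(x * x for x in u.values()))
    dv = sqrt(sum(x * x for x in v.values()))
    return num / (du * dv) if du and dv else 0.0

def predict_trust(ratings, source, target):
    """Predict the source->target trust sign as the similarity-weighted
    vote of users who already rated the target (excluding the source)."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == source or target not in r:
            continue
        s = cosine(ratings[source], r)
        if s > 0:  # only positively similar users get a vote
            num += s * r[target]
            den += s
    return num / den if den else 0.0

# Signed trust ratings: +1 trust, -1 distrust (toy weighted-sign network)
ratings = {
    "alice":   {"carol": 1, "dave": 1},
    "bob":     {"carol": 1, "dave": 1, "eve": -1},
    "mallory": {"carol": -1, "dave": -1, "eve": 1},
}
score = predict_trust(ratings, "alice", "eve")  # negative: predict distrust
```

In the supervised setting of the paper, scores like this would become input features for a trust/distrust classifier rather than being thresholded directly.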
Thakare, Vaishali Ravindra, Singh, K. John, Prabhu, C S R, Priya, M..  2020.  Trust Evaluation Model for Cloud Security Using Fuzzy Theory. 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). :1–4.
Cloud computing is a computing model that allows users to rent virtualized computing resources on a pay-as-you-go basis. It offers many advantages over traditional models in IT industries and in healthcare. However, a lack of trust between cloud service users (CSUs) and cloud service providers (CSPs) hinders the widespread adoption of cloud technologies across industries. Different models have been developed to overcome the uncertainty and complexity of judging the suitability of a CSP for a CSU. Several researchers have applied fuzzy logic to resource optimization, scheduling, and service dependability in cloud computing, but data storage and security have been neglected. In this paper, a fuzzy trust evaluation model (FTEM) is proposed for cloud computing security. The authors evaluate how fuzzy logic increases efficiency in trust evaluation and validate the effectiveness of the proposed FTEM with a case study of a healthcare organization.
Mohammed, Alshaimaa M., Omara, Fatma A..  2020.  A Framework for Trust Management in Cloud Computing Environment. 2020 International Conference on Innovative Trends in Communication and Computer Engineering (ITCE). :7–13.
Cloud computing is considered a business model for providing IT resources as services over the Internet on a pay-as-you-go basis. These resources are provided by Cloud Service Providers (CSPs) and requested by Cloud Service Consumers (CSCs), and selecting the proper CSP to deliver services is a critical and strategic process. In this paper, a framework for trust management in cloud computing is introduced. The proposed framework consists of five stages: Filtrating, Trusting, Similarity, Ranking and Monitoring. In the Filtrating stage, the existing CSPs in the system are filtered based on their parameters. The CSPs' trust values are calculated in the Trusting stage. Then, the similarity between the CSC's requirements and the CSPs' data is calculated in the Similarity stage, and the CSPs are ranked in the Ranking stage. In the Monitoring stage, after the service finishes, the CSC sends feedback about the CSP who delivered the service, which is used to monitor that CSP. To evaluate the performance of the proposed framework, a comparative study has been conducted for the Ranking and Monitoring stages using the Armor dataset. The comparative results show that the proposed framework increases the reliability and performance of the cloud environment.
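A minimal sketch of the Filtrating → Similarity → Ranking portion of such a pipeline, with invented CSP attributes and an invented similarity choice (cosine); the framework's actual parameters and its Trusting and Monitoring stages are not modeled here:

```python
from math import sqrt

def rank_providers(csps, minimums, ideal):
    """Stage 1 (filtrating): drop CSPs below any hard minimum.
    Stage 2 (similarity): cosine similarity to the consumer's ideal profile.
    Stage 3 (ranking): sort eligible CSPs, best match first."""
    def cosine(a, b):
        keys = ideal.keys()
        num = sum(a[k] * b[k] for k in keys)
        da = sqrt(sum(a[k] ** 2 for k in keys))
        db = sqrt(sum(b[k] ** 2 for k in keys))
        return num / (da * db) if da and db else 0.0

    eligible = {name: attrs for name, attrs in csps.items()
                if all(attrs[k] >= v for k, v in minimums.items())}
    return sorted(eligible, key=lambda n: cosine(eligible[n], ideal),
                  reverse=True)

# Hypothetical provider attributes, all scaled to [0, 1]
csps = {
    "cloudA": {"availability": 0.99, "performance": 0.70, "security": 0.90},
    "cloudB": {"availability": 0.95, "performance": 0.90, "security": 0.60},
    "cloudC": {"availability": 0.80, "performance": 0.99, "security": 0.99},
}
minimums = {"availability": 0.9}  # hard requirement filters out cloudC
ideal = {"availability": 1.0, "performance": 0.5, "security": 1.0}
ranking = rank_providers(csps, minimums, ideal)
```

Feedback from the Monitoring stage would then feed back into the trust values used before the next selection round.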
Yan, Qifei, Zhou, Yan, Zou, Li, Li, Yanling.  2020.  Evidence Fusion Method Based on Evidence Trust and Exponential Weighting. 2020 IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). 1:1851–1855.
To address the unreasonable fusion results for highly conflicting evidence and the ineffectiveness of coefficient weighting in classical evidence theory, an evidence fusion method based on evidence trust degree and exponential weighting is proposed. First, a fusion factor is constructed from the probability distribution function and the evidence trust degree; the fusion factor is then exponentially weighted by the evidence weight, and an evidence fusion rule based on this factor is constructed. The results show that the method effectively resolves both problems, indicating that the new fusion method is more reasonable and providing a new idea for addressing these issues in evidence theory.
Zheng, Yang, Chunlin, Yin, Zhengyun, Fang, Na, Zhao.  2020.  Trust Chain Model and Credibility Analysis in Software Systems. 2020 5th International Conference on Computer and Communication Systems (ICCCS). :153–156.
The credibility of a software system is an important indicator in measuring its performance, and effective credibility analysis is a contentious topic in trusted-software research. In this paper, the trusted boot and integrity metrics of a software system are analyzed. Different trust chain models, chain and star, are obtained by using different methods for credibility detection of functional modules during system operation. Finally, based on the system's operation, trust and failure relation graphs are established to analyze and measure the credibility of the system.
Hatti, Daneshwari I., Sutagundar, Ashok V..  2020.  Trust Induced Resource Provisioning (TIRP) Mechanism in IoT. 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP). :1–5.
The growing number of resource-limited devices in the Internet of Things (IoT) must serve time-sensitive applications, including health monitoring, emergency response, industrial applications, and smart cities. This raises the problem of provisioning the devices' limited computational resources to fulfill requirements with reduced latency. With the rapid increase in the number and heterogeneity of devices, resource provisioning becomes crucial and leads to conflicts in trusting device requests; trust is an essential component of communicating or sharing resources in any network. The proposed work combines trusting with deadline-based provisioning. Trust is quantified using game theory: the optimal strategy between provider and customer, provisioning resources to execute tasks within their deadlines, is decided by finding a Nash equilibrium (NE). The NE is estimated by constructing the payoff matrix over the two players' strategies, and in the proposed work it is obtained for the Trust-Respond (TR) strategy. As a latency-aware approach to avoid resource contention due to the limited resources of edge devices, fog computing leverages cloud services in a distributed way at the edge. Communication is established between edge devices, fog, and cloud, and resources are provisioned based on the scalar chain and Gang Plank theories of management to reduce latency and increase trust. The performance of the proposed work is evaluated in terms of latency and computation time.
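The Nash-equilibrium step can be illustrated with a pure-strategy equilibrium search over a 2x2 payoff matrix. The payoffs below are invented for illustration, chosen only so that Trust-Respond comes out as the equilibrium, as in the abstract:

```python
def pure_nash(payoff):
    """Find all pure-strategy Nash equilibria of a bimatrix game.
    payoff[(row, col)] = (provider_payoff, customer_payoff)."""
    rows = sorted({r for r, _ in payoff})
    cols = sorted({c for _, c in payoff})
    equilibria = []
    for r in rows:
        for c in cols:
            p, q = payoff[(r, c)]
            # (r, c) is an NE iff neither player gains by deviating alone
            best_row = all(payoff[(r2, c)][0] <= p for r2 in rows)
            best_col = all(payoff[(r, c2)][1] <= q for c2 in cols)
            if best_row and best_col:
                equilibria.append((r, c))
    return equilibria

# Hypothetical payoffs: the provider Trusts or Rejects a request; the
# customer Responds honestly or Defects. Numbers are illustrative only.
payoff = {
    ("Trust", "Respond"): (3, 3),
    ("Trust", "Defect"): (-1, 1),
    ("Reject", "Respond"): (1, 0),
    ("Reject", "Defect"): (0, -1),
}
equilibria = pure_nash(payoff)  # only (Trust, Respond) survives deviation
```

Under these payoffs, mutual Trust-Respond is the unique pure-strategy equilibrium, mirroring the TR outcome reported above.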
Wang, Qi, Zhao, Weiliang, Yang, Jian, Wu, Jia, Zhou, Chuan, Xing, Qianli.  2020.  AtNE-Trust: Attributed Trust Network Embedding for Trust Prediction in Online Social Networks. 2020 IEEE International Conference on Data Mining (ICDM). :601–610.
Trust relationship prediction among people provides valuable supports for decision making, information dissemination, and product promotion in online social networks. Network embedding has achieved promising performance for link prediction by learning node representations that encode intrinsic network structures. However, most of the existing network embedding solutions cannot effectively capture the properties of a trust network that has directed edges and nodes with in/out links. Furthermore, there usually exist rich user attributes in trust networks, such as ratings, reviews, and the rated/reviewed items, which may exert significant impacts on the formation of trust relationships. It is still lacking a network embedding-based method that can adequately integrate these properties for trust prediction. In this work, we develop an AtNE-Trust model to address these issues. We firstly capture user embedding from both the trust network structures and user attributes. Then we design a deep multi-view representation learning module to further mine and fuse the obtained user embedding. Finally, a trust evaluation module is developed to predict the trust relationships between users. Representation learning and trust evaluation are optimized together to capture high-quality user embedding and make accurate predictions simultaneously. A set of experiments against the real-world datasets demonstrates the effectiveness of the proposed approach.
Gu, Yanyang, Zhang, Ping, Chen, Zhifeng, Cao, Fei.  2020.  UEFI Trusted Computing Vulnerability Analysis Based on State Transition Graph. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :1043–1052.
In the face of increasingly serious firmware attacks, analyzing the security of UEFI vulnerabilities is of great significance. This paper first introduces the trusted authentication mechanisms commonly used by UEFI. Then, targeting the loopholes in UEFI trust verification during the startup phase, an analysis model of UEFI trust-verification startup vulnerabilities is constructed by combining the state transition diagram, the PageRank algorithm, and Bayesian network theory, and the analysis is verified on an example. Through verification and analysis of the resulting data, the vulnerable attack paths and key vulnerable nodes are found. Finally, according to the analysis results, security enhancement measures for UEFI are proposed.
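A minimal PageRank implementation of the kind used to rank influential nodes in such a state transition graph; the boot-flow graph below is a simplified, hypothetical example, not the paper's model:

```python
def pagerank(graph, damping=0.85, iters=100):
    """Iterative PageRank over a directed graph given as
    {node: [successor, ...]}; returns node -> score (scores sum to 1)."""
    nodes = set(graph) | {v for succs in graph.values() for v in succs}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            succs = graph.get(n, [])
            if succs:
                share = damping * rank[n] / len(succs)
                for v in succs:
                    nxt[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    nxt[v] += damping * rank[n] / len(nodes)
        rank = nxt
    return rank

# Hypothetical state-transition graph of a UEFI boot flow: edges point to
# the next state reached; states reachable from many paths score higher
# and would be flagged as key nodes for hardening.
boot = {
    "SEC": ["PEI"],
    "PEI": ["DXE"],
    "DXE": ["BDS", "SMM"],
    "SMM": ["DXE"],
    "BDS": ["OS_loader"],
}
scores = pagerank(boot)
```

High-scoring states (here, DXE) are the ones many transition paths funnel through, which is the intuition behind using PageRank to locate key vulnerable nodes.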
Mane, Y. D., Khot, U. P..  2020.  A Systematic Way to Implement Private Tor Network with Trusted Middle Node. 2020 International Conference for Emerging Technology (INCET). :1—6.

Initially, legitimate users performed all their activities over the internet through a normal web browser [1]. To obtain a more secure service and protection against bot activity, legitimate users switched from normal web browsers to low-latency anonymous communication such as the Tor browser. Traffic monitoring in the Tor network is difficult, as packets travel from source to destination in encrypted form and the Tor network hides its identity from the destination. Lately, however, even illegitimate users such as attackers and criminals have started operating through the Tor browser, and the secured Tor network makes botnet detection more difficult: existing botnet detection tools are inefficient against Tor-based bots because of the Tor browser's features. Because the Tor browser is highly secure, and for ethical reasons, practical experiments on the live network are not advisable, as they could affect its performance and functionality and endanger users in situations where a failure of Tor's anonymity has severe consequences. So, in the proposed research work, Private Tor Networks (PTN) have been created on physical or virtual machines with dedicated resources, along with a trusted middle node. The motivation behind the trusted middle node is to make the private Tor network more efficient and to increase its performance.

Le, T. V., Huan, T. T..  2020.  Computational Intelligence Towards Trusted Cloudlet Based Fog Computing. 2020 5th International Conference on Green Technology and Sustainable Development (GTSD). :141—147.

The current trend among IoT users is toward using services and data externally, since voluminous processing demands resourceful machines. Instead of relying on a cloud with poor connectivity or limited bandwidth, the IoT user prefers cloudlet-based fog computing. However, the choice of cloudlet depends solely on its trust and reliability. In practice, even when a cloudlet possesses a required trusted platform module (TPM), we argue that the presence of a TPM is not enough to make the cloudlet trustworthy, as the TPM supports only the primitive security of the bootstrap. Besides uncertainty in security, other uncertain network conditions (e.g., network bandwidth, latency, and the expected time to complete a service request for cloud-based services) may also prevail for cloudlets. Therefore, in order to evaluate the trust value of multiple cloudlets under uncertainty, this paper proposes an empirical process for the evaluation of trust, followed by a measure of trust-based reputation of cloudlets through computational intelligence techniques such as fuzzy logic and ant colony optimization (ACO). In the process, fuzzy-logic-based inference and membership evaluation of trust are presented, and ACO with its pheromone communication across different colonies is modeled over multiple cloudlets. Finally, a measure of affinity, or popular trust and reputation, of the cloudlets is proposed. The computationally intelligent approaches have been investigated in terms of performance in the context of an application running over multiple cloudlets. The contribution is thus directed towards building a trusted cloudlet-based fog platform.

Dimitrakos, T., Dilshener, T., Kravtsov, A., Marra, A. La, Martinelli, F., Rizos, A., Rosetti, A., Saracino, A..  2020.  Trust Aware Continuous Authorization for Zero Trust in Consumer Internet of Things. 2020 IEEE 19th International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom). :1801—1812.
This work describes the architecture and prototype implementation of a novel trust-aware continuous authorization technology that targets the consumer Internet of Things (IoT), e.g., the smart home. Our approach extends previous authorization models in three complementary ways: (1) by incorporating trust-level evaluation formulae as conditions inside authorization rules and policies, while supporting the evaluation of such policies through the fusion of an Attribute-Based Access Control (ABAC) authorization policy engine with a Trust-Level-Evaluation-Engine (TLEE); (2) by introducing contextualized, continuous monitoring and re-evaluation of policies throughout the authorization life-cycle: mutable attributes of subjects, resources, and environment, as well as trust levels, are continuously monitored while obtaining an authorization, throughout the duration of an existing authorization, and after revoking it, and whenever change is detected, the corresponding authorization rules, including both access control rules and trust level expressions, are re-evaluated; (3) by minimizing the computational and memory footprint and maximizing concurrency and modular evaluation to improve performance while preserving the continuity of monitoring. Finally, we introduce an application of this model in a Zero Trust Architecture (ZTA) for consumer IoT.
Rabby, M. K. Monir, Khan, M. Altaf, Karimoddini, A., Jiang, S. X..  2020.  Modeling of Trust Within a Human-Robot Collaboration Framework. 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :4267—4272.

In this paper, a time-driven, performance-aware mathematical model of trust in the robot is proposed for a Human-Robot Collaboration (HRC) framework. The proposed trust model is based on the performance of both the human operator and the robot. The human operator's performance is modeled from both physical and cognitive performance, while the robot's performance is modeled over its unpredictable, predictable, dependable, and faithful operation regions. The model is validated in different simulation scenarios, which show that trust in the robot in the HRC framework is governed by robot and human operator performance and can be improved by enhancing the robot's performance.
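The abstract does not give the model's equations; as a hedged sketch, a time-driven trust update that blends the previous trust level with the current robot and human performance readings might look like this (the weights and functional form are invented assumptions, not the paper's fitted model):

```python
def update_trust(trust, robot_perf, human_perf,
                 alpha=0.7, beta=0.2, gamma=0.1):
    """One step of a toy time-driven trust model: trust decays toward a
    blend of the current robot and human performance readings.
    All values lie in [0, 1]; alpha + beta + gamma = 1."""
    t = alpha * trust + beta * robot_perf + gamma * human_perf
    return min(1.0, max(0.0, t))

# Simulate sustained good performance from a neutral starting trust level
trust = 0.5
history = [trust]
for robot_perf, human_perf in [(0.9, 0.8)] * 10:
    trust = update_trust(trust, robot_perf, human_perf)
    history.append(trust)
```

With steady inputs this recursion converges geometrically toward the performance-weighted fixed point, matching the abstract's observation that trust rises when robot performance improves.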

Bellas, A., Perrin, S., Malone, B., Rogers, K., Lucas, G., Phillips, E., Tossell, C., Visser, E. d.  2020.  Rapport Building with Social Robots as a Method for Improving Mission Debriefing in Human-Robot Teams. 2020 Systems and Information Engineering Design Symposium (SIEDS). :160—163.

Conflicts may arise at any time during military debriefing meetings, especially in high-intensity deployed settings. When such conflicts arise, it takes time to get everyone back into a receptive state of mind so that they engage in reflective discussion rather than unproductive arguing. It has been proposed that social robots equipped with social abilities such as emotion regulation through rapport building may help to de-escalate these situations and facilitate critical operational decisions. However, in military settings, the AI agent used in the pre-brief of a mission may not be the same one used in the debrief. The purpose of this study was to determine whether a brief rapport-building session with a social robot could create a connection between a human and a robot agent, and whether consistency in the embodiment of the robot agent was necessary for maintaining this connection once formed. We report the results of a pilot study conducted at the United States Air Force Academy which simulated a military mission (i.e., Gravity and Strike). Participants' connection with the agent, sense of trust, and overall likeability revealed that early rapport building can be beneficial for military missions.

Lyons, J. B., Nam, C. S., Jessup, S. A., Vo, T. Q., Wynne, K. T..  2020.  The Role of Individual Differences as Predictors of Trust in Autonomous Security Robots. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1—5.

This research used an Autonomous Security Robot (ASR) scenario to examine public reactions to a robot that possesses the authority and capability to inflict harm on a human. Individual differences in terms of personality and Perfect Automation Schema (PAS) were examined as predictors of trust in the ASR. Participants (N=316) from Amazon Mechanical Turk (MTurk) rated their trust of the ASR and desire to use ASRs in public and military contexts following a 2-minute video depicting the robot interacting with three research confederates. The video showed the robot using force against one of the three confederates with a non-lethal device. Results demonstrated that individual differences factors were related to trust and desired use of the ASR. Agreeableness and both facets of the PAS (high expectations and all-or-none beliefs) demonstrated unique associations with trust using multiple regression techniques. Agreeableness, intellect, and high expectations were uniquely related to desired use for both public and military domains. This study showed that individual differences influence trust and one's desired use of ASRs, demonstrating that societal reactions to ASRs may be subject to variation among individuals.

Alarcon, G. M., Gibson, A. M., Jessup, S. A..  2020.  Trust Repair in Performance, Process, and Purpose Factors of Human-Robot Trust. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1—6.

The current study explored the influence of trust and distrust behaviors on performance, process, and purpose (trustworthiness) perceptions over time when participants were paired with a robot partner. We examined the changes in trustworthiness perceptions after trust violations and trust repair after those violations. Results indicated performance, process, and purpose perceptions were all affected by trust violations, but perceptions of process and purpose decreased more than performance following a distrust behavior. Similarly, trust repair was achieved in performance perceptions, but trust repair in perceived process and purpose was absent. When a trust violation occurred, process and purpose perceptions deteriorated and failed to recover from the violation. In addition, the trust violation resulted in untrustworthy perceptions of the robot. In contrast, trust violations decreased partner performance perceptions, and subsequent trust behaviors resulted in a trust repair. These findings suggest that people are more sensitive to distrust behaviors in their perceptions of process and purpose than they are in performance perceptions.

Rossi, A., Dautenhahn, K., Koay, K. Lee, Walters, M. L..  2020.  How Social Robots Influence People’s Trust in Critical Situations. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :1020—1025.

As the presence of autonomous robots in our everyday lives increases, people will have not only to accept robots as a fundamental part of their lives, but also to trust them to reliably and securely engage in collaborative tasks. Several studies have shown that people are more comfortable interacting with robots that respect social conventions. However, it is still not clear whether a robot that expresses social conventions will also gain more of people's trust. In this study, we aimed to assess whether the use of social behaviours and natural communications can affect humans' sense of trust and companionship towards robots. We conducted a between-subjects study in which participants' trust was tested in three scenarios of increasing trust criticality (low, medium, high), interacting with either a social or a non-social robot. Our findings showed that participants trusted the social and the non-social robot equally in the low- and medium-consequence scenarios. In contrast, participants' choice to trust the robot in the more sensitive task was affected by the robot expressing social cues, with a consequent decrease in their trust in the robot.

Calhoun, C. S., Reinhart, J., Alarcon, G. A., Capiola, A..  2020.  Establishing Trust in Binary Analysis in Software Development and Applications. 2020 IEEE International Conference on Human-Machine Systems (ICHMS). :1–4.
The current exploratory study examined software programmers' trust in the binary analysis techniques used to evaluate and understand binary code components. Experienced software developers participated in knowledge elicitations to identify factors affecting trust in the tools and methods used to understand binary code behavior and minimize potential security vulnerabilities. Developer perceptions of trust in those tools to assess implementation risk in binary components were captured across a variety of application contexts. The developers reported that source security and vulnerability reports provided the best insight into and awareness of potential issues or shortcomings in binary code. Further, applications where the potential impact on systems and data loss is high require relying on more than one type of analysis to ensure the binary component is sound. The findings suggest binary analysis is viable for identifying issues and potential vulnerabilities as part of a comprehensive solution for understanding binary code behavior and security vulnerabilities, but relying solely on binary analysis tools and binary release metadata appears insufficient to ensure a secure solution.
Ng, M., Coopamootoo, K. P. L., Toreini, E., Aitken, M., Elliot, K., Moorsel, A. van.  2020.  Simulating the Effects of Social Presence on Trust, Privacy Concerns and Usage Intentions in Automated Bots for Finance. 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :190–199.
FinBots are chatbots built on automated decision technology, aimed at facilitating accessible banking and supporting customers in making financial decisions. Chatbots are increasing in prevalence, sometimes even equipped to mimic human social rules, expectations and norms, decreasing the necessity for human-to-human interaction. As banks and financial advisory platforms move towards creating bots that enhance the current state of consumer trust and adoption rates, we investigated the effects of chatbot vignettes with and without socio-emotional features on intention to use the chatbot for financial support purposes. We conducted a between-subjects online experiment with N = 410 participants. Participants in the control group were provided with a vignette describing a secure and reliable chatbot called XRO23, whereas participants in the experimental group were presented with a vignette describing a secure and reliable chatbot that is more human-like, named Emma. We found that the Emma vignette did not increase participants' trust levels nor lower their privacy concerns, even though it increased perception of social presence. However, we found that intention to use the presented chatbot for financial support was positively influenced by perceived humanness and trust in the bot. Participants were also more willing to share financially sensitive information such as account number, sort code and payment information with XRO23 than with Emma, revealing a preference for a technical and mechanical FinBot in information sharing. Overall, this research contributes to our understanding of the intention to use chatbots with different features as financial technology, in particular that socio-emotional support may not be favoured when designed independently of financial function.
Kfoury, E. F., Khoury, D., AlSabeh, A., Gomez, J., Crichigno, J., Bou-Harb, E..  2020.  A Blockchain-based Method for Decentralizing the ACME Protocol to Enhance Trust in PKI. 2020 43rd International Conference on Telecommunications and Signal Processing (TSP). :461–465.
Blockchain technology is a cornerstone of digital trust and system decentralization. The necessity of eliminating trust in computing systems has led researchers to investigate the applicability of blockchain to decentralizing conventional security models. Specifically, researchers continually aim at minimizing trust in the well-known Public Key Infrastructure (PKI) model, which currently requires a trusted Certificate Authority (CA) to sign digital certificates. Recently, the Automated Certificate Management Environment (ACME) was standardized as a certificate issuance automation protocol. It minimizes human interaction by enabling certificates to be automatically requested, verified, and installed on servers. ACME solved only the automation issue; the trust concerns remain, as a trusted CA is still required. In this paper we propose decentralizing the ACME protocol using blockchain technology to address the trust issues of the existing PKI model and to eliminate the need for a trusted CA. The system was implemented and tested on the Ethereum blockchain, and the results showed that it is feasible in terms of cost, speed, and applicability on a wide range of devices, including Internet of Things (IoT) devices.