Biblio

Found 1525 results

Filters: Keyword is human factors
2020-12-01
Byrne, K., Marín, C..  2018.  Human Trust in Robots When Performing a Service. 2018 IEEE 27th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE). :9—14.

The presence of robots is becoming more apparent as technology progresses and the market focus transitions from smartphones to robotic personal assistants such as those provided by Amazon and Google. The integration of robots into our societies is an inevitable tendency in which robots in many forms and with many functionalities will provide services to humans. This calls for an understanding of how humans are affected by both the presence of and the reliance on robots to perform services for them. In this paper we explore the effects that robots have on humans when a service is performed on request. We expose three groups of human participants to three levels of service completion performed by robots. We record and analyse human perceptions such as propensity to trust, competency, responsiveness, sociability, and team-work ability. Our results demonstrate that humans tend to trust robots and are more willing to interact with them when they autonomously recover from failure by requesting help from other robots to fulfil their service. This supports the view that autonomy and team-working capabilities must be built into robots in an effort to strengthen trust in robots performing a service.

Poulsen, A., Burmeister, O. K., Tien, D..  2018.  Care Robot Transparency Isn't Enough for Trust. 2018 IEEE Region Ten Symposium (Tensymp). :293—297.

A recent study featuring a new kind of care robot indicated that participants expect a robot's ethical decision-making to be transparent in order to develop trust, even though the same type of 'inspection of thoughts' isn't expected of a human carer. At first glance, this might suggest that robot transparency mechanisms are required for users to develop trust in robot-made ethical decisions. But the participants were found to desire transparency only when they didn't know the specifics of a human-robot social interaction. Humans trust others without observing their thoughts, which implies other means of determining trustworthiness. The study reported here suggests that the method is social interaction and observation, signifying that trust is a social construct. Moreover, it suggests that these 'social determinants of trust' are the transparent elements. This socially determined behaviour draws on notions of virtue ethics. If a caregiver (nurse or robot) consistently provides good, ethical care, then patients can trust that caregiver to continue to do so. The same social determinants may apply to care robots, and thus it ought to be possible to trust them without the ability to see their thoughts. This study suggests why transparency mechanisms may not be effective in helping to develop trust in care robot ethical decision-making. It suggests that roboticists need to build sociable elements into care robots to help patients develop trust in the care robot's ethical decision-making.

Xu, J., Bryant, D. G., Howard, A..  2018.  Would You Trust a Robot Therapist? Validating the Equivalency of Trust in Human-Robot Healthcare Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :442—447.

With the recent advances in computing, artificial intelligence (AI) is quickly becoming a key component in the future of advanced applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor for interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a study related to healthcare in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. A total of twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate that trust in a therapy intervention coupled with a robotic agent is comparable to trust in an intervention coupled with a human agent. Results also show a trend that the agent condition has a medium-sized effect on trust. In addition, we found that participants in the robot therapist condition are 3.5 times more likely to have trust involved in their decision than participants in the human therapist condition. These results indicate that the deployment of interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.
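The "3.5 times more likely" figure reads like an odds ratio from a 2×2 contingency table of therapist condition by whether trust entered the decision. A minimal sketch of how such a figure can be computed is below; the counts are hypothetical and chosen only to reproduce a 3.5 ratio, not taken from the study.

```python
# Hedged sketch: how a "3.5 times more likely" figure could be derived as an
# odds ratio from a 2x2 contingency table. The counts below are hypothetical,
# not the study's data.
import math

# rows: therapist condition, columns: [trust involved, trust not involved]
robot_condition = [7, 3]
human_condition = [4, 6]

a, b = robot_condition
c, d = human_condition

odds_ratio = (a / b) / (c / d)

# 95% confidence interval on the log-odds scale (Woolf method)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"odds ratio = {odds_ratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```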

Herse, S., Vitale, J., Tonkin, M., Ebrahimian, D., Ojha, S., Johnston, B., Judge, W., Williams, M..  2018.  Do You Trust Me, Blindly? Factors Influencing Trust Towards a Robot Recommender System. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :7—14.

When robots and human users collaborate, trust is essential for user acceptance and engagement. In this paper, we investigated two factors thought to influence user trust towards a robot: preference elicitation (a combination of user involvement and explanation) and embodiment. We set our experiment in the application domain of a restaurant recommender system, assessing trust via user decision making and perceived source credibility. Previous research in this area uses simulated environments and recommender systems that present the user with the best choice from a pool of options. This experiment builds on past work in two ways: first, we strengthened the ecological validity of our experimental paradigm by incorporating perceived risk during decision making; and second, we used a system that recommends a nonoptimal choice to the user. While no effect of embodiment is found for trust, the inclusion of preference elicitation features significantly increases user trust towards the robot recommender system. These findings have implications for marketing and health promotion in relation to Human-Robot Interaction and call for further investigation into the development and maintenance of trust between robot and user.

Xu, J., Howard, A..  2018.  The Impact of First Impressions on Human-Robot Trust During Problem-Solving Scenarios. 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :435—441.

With recent advances in robotics, it is expected that robots will become increasingly common in human environments, such as homes and workplaces. Robots will assist and collaborate with humans on a variety of tasks. During these collaborations, it is inevitable that disagreements in decisions will occur between humans and robots. Among the factors that determine which decision a human should ultimately follow, their own or the robot's, trust is a critical one to consider. This study aims to investigate individuals' behaviors and aspects of trust in a problem-solving situation in which a decision must be made in a bounded amount of time. A between-subjects experiment was conducted with 100 participants. With the assistance of a humanoid robot, participants were requested to tackle a cognitive-based task within a given time frame. Each participant was randomly assigned to one of the following initial conditions: 1) a working robot, in which the robot provided a correct answer, or 2) a faulty robot, in which the robot provided an incorrect answer. The impact of the faulty robot behavior on participants' decisions to follow the robot's suggested answer was analyzed. Survey responses about trust were collected after interacting with the robot. Results indicated that the first impression has a significant impact on participants' willingness to trust a robot's advice during a disagreement. In addition, this study found evidence that individuals still have trust in a malfunctioning robot even after they have observed its faulty behavior.

Xie, Y., Bodala, I. P., Ong, D. C., Hsu, D., Soh, H..  2019.  Robot Capability and Intention in Trust-Based Decisions Across Tasks. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :39—47.

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots - inferred capability and intention - and their relationship to overall trust and eventual decisions. In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.
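The finding that capability, intention, and overall trust are integrated when deciding to delegate suggests a simple decision model. The sketch below shows one common way to express such integration, a logistic regression predicting delegation from the three ratings; the data are synthetic and the model choice is an assumption for illustration, not the authors' analysis.

```python
# Hedged sketch: modeling delegation decisions as a function of perceived
# capability, perceived intention, and overall trust via logistic regression.
# The data are synthetic; this is not the authors' analysis pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200

capability = rng.uniform(1, 7, n)   # 7-point scale ratings (hypothetical)
intention = rng.uniform(1, 7, n)
trust = 0.4 * capability + 0.4 * intention + rng.normal(0, 0.5, n)

# ground-truth rule for the synthetic data: delegation depends on all three
logit = 0.8 * capability + 0.7 * intention + 0.5 * trust - 8.0
delegate = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([capability, intention, trust])
model = LogisticRegression().fit(X, delegate)

print("coefficients (capability, intention, trust):", model.coef_[0])
print("P(delegate) for a capable, well-intentioned robot:",
      model.predict_proba([[6.0, 6.0, 5.5]])[0, 1])
```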

Nielsen, C., Mathiesen, M., Nielsen, J., Jensen, L. C..  2019.  Changes in Heart Rate and Feeling of Safety When Led by a Rehabilitation Robot. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :580—581.

Trust is an important topic in medical human-robot interaction, since patients may be more fragile than other groups of people. This paper investigates the issue of users' trust when interacting with a rehabilitation robot. In the study, we investigate participants' heart rate and perception of safety in a scenario in which their arm is led by the rehabilitation robot in two types of exercises at three different velocities. The participants' heart rates are measured during each exercise, and the participants are asked how safe they feel after each exercise. The results showed that velocity and type of exercise have no significant influence on the participants' heart rate, but they do have a significant influence on how safe they feel. We found that increasing velocity and longer exercises negatively influence participants' perception of safety.

Gao, Y., Sibirtseva, E., Castellano, G., Kragic, D..  2019.  Fast Adaptation with Meta-Reinforcement Learning for Trust Modelling in Human-Robot Interaction. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :305—312.

In socially assistive robotics, an important research area is the development of adaptation techniques and their effect on human-robot interaction. We present a meta-learning based policy gradient method for addressing the problem of adaptation in human-robot interaction and also investigate its role as a mechanism for trust modelling. By building an escape room scenario in mixed reality with a robot, we test our hypothesis that bi-directional trust can be influenced by different adaptation algorithms. We found that our proposed model increased the perceived trustworthiness of the robot and influenced the dynamics of gaining the human's trust. Additionally, participants felt that the robot perceived them as more trustworthy during the interactions with the meta-learning based adaptation compared to the previously studied statistical adaptation model.
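As a rough illustration of the fast-adaptation idea, the sketch below wraps a first-order (Reptile-style) meta-update around a REINFORCE policy gradient on a toy two-armed bandit, where each task stands in for a different user. The task setup, learning rates, and the choice of Reptile are assumptions made for illustration; the paper's method is a meta-learning based policy gradient tailored to human-robot interaction, not this toy.

```python
# Hedged sketch: first-order meta-learning (Reptile-style) wrapped around a
# REINFORCE policy gradient on a toy two-armed bandit. Each "task" stands in
# for a different user the robot must quickly adapt to. This illustrates the
# general fast-adaptation idea only; it is not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(1)

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def inner_adapt(theta, reward_probs, steps=20, lr=0.5):
    """A few REINFORCE updates on one task (one simulated user)."""
    theta = theta.copy()
    for _ in range(steps):
        probs = softmax(theta)
        a = rng.choice(2, p=probs)
        r = float(rng.uniform() < reward_probs[a])   # Bernoulli reward
        grad = -probs
        grad[a] += 1.0                               # gradient of log pi(a)
        theta += lr * r * grad
    return theta

theta_meta = np.zeros(2)   # meta-initialization of the policy logits
meta_lr = 0.1

for episode in range(200):
    # sample a task: which arm this "user" rewards more often
    reward_probs = [0.8, 0.2] if rng.uniform() < 0.5 else [0.2, 0.8]
    theta_task = inner_adapt(theta_meta, reward_probs)
    theta_meta += meta_lr * (theta_task - theta_meta)   # Reptile outer update

print("meta-initialization after training:", theta_meta)
```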

Losey, D. P., Sadigh, D..  2019.  Robots that Take Advantage of Human Trust. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :7001—7008.

Humans often assume that robots are rational. We believe robots take optimal actions given their objective; hence, when we are uncertain about what the robot's objective is, we interpret the robot's actions as optimal with respect to our estimate of its objective. This approach makes sense when robots straightforwardly optimize their objective, and it enables humans to learn what the robot is trying to achieve. However, our insight is that, when robots are aware that humans learn by trusting that the robot's actions are rational, intelligent robots do not act as the human expects; instead, they take advantage of the human's trust, and exploit this trust to more efficiently optimize their own objective. In this paper, we formally model instances of human-robot interaction (HRI) where the human does not know the robot's objective using a two-player game. We formulate different ways in which the robot can model the uncertain human, and compare solutions of this game when the robot has conservative, optimistic, rational, and trusting human models. In an offline linear-quadratic case study and a real-time user study, we show that trusting human models can naturally lead to communicative robot behavior, which influences end-users and increases their involvement.
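The premise that humans infer a robot's objective by assuming its actions are optimal can be illustrated with a scalar example. The sketch below assumes a quadratic robot cost J(u) = theta*u^2 + (x + u)^2, whose minimizer is u* = -x/(1 + theta), and shows how a trusting human recovers theta from an observed action, and how an exaggerated action shifts that estimate; the cost and numbers are illustrative assumptions, not the paper's game formulation.

```python
# Hedged sketch: a human inferring a robot's objective by assuming the robot's
# action is optimal for a quadratic cost J(u) = theta*u**2 + (x + u)**2,
# whose minimizer is u* = -x / (1 + theta). The cost and numbers are
# illustrative assumptions, not the paper's actual two-player game.
def optimal_action(x, theta):
    return -x / (1.0 + theta)

def infer_theta(x, u):
    """Human's inference: invert u = -x/(1+theta), assuming the robot was rational."""
    return -x / u - 1.0

x = 2.0            # current state
theta_true = 1.0   # robot's true effort penalty

# Case 1: the robot acts rationally, so the human recovers the true objective.
u_rational = optimal_action(x, theta_true)   # -1.0
print(infer_theta(x, u_rational))            # 1.0

# Case 2: the robot exaggerates its action; a trusting human now infers a
# smaller effort penalty than the robot really has, i.e. the robot has
# manipulated the human's model of its objective.
u_exaggerated = 1.5 * u_rational             # -1.5
print(infer_theta(x, u_exaggerated))         # ~0.33
```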

Haider, C., Chebotarev, Y., Tsiourti, C., Vincze, M..  2019.  Effects of Task-Dependent Robot Errors on Trust in Human-Robot Interaction: A Pilot Study. 2019 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computing, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :172—177.

The growing diffusion of robotics in our daily life demands a deeper understanding of the mechanisms of trust in human-robot interaction. The performance of a robot is one of the most important factors influencing the trust of a human user. However, it is still unclear whether the circumstances in which a robot fails affect the user's trust. We investigate how the perception of robot failures may influence the willingness of people to cooperate with the robot by following its instructions in a time-critical task. We conducted an experiment in which participants interacted with a robot that had previously failed in a related or an unrelated task. We hypothesized that users' observed and self-reported trust ratings would be higher in the condition where the robot had previously failed in an unrelated task. A proof-of-concept study with nine participants tentatively confirms our hypothesis. At the same time, our results reveal some flaws in the experimental design and encourage a future large-scale study.

Sebo, S. S., Krishnamurthi, P., Scassellati, B..  2019.  “I Don't Believe You”: Investigating the Effects of Robot Trust Violation and Repair. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :57—65.

When a robot breaks a person's trust by making a mistake or failing, continued interaction will depend heavily on how the robot repairs the trust that was broken. Prior work in psychology has demonstrated that both the trust violation framing and the trust repair strategy influence how effectively trust can be restored. We investigate trust repair between a human and a robot in the context of a competitive game, where a robot tries to restore a human's trust after a broken promise, using either a competence or integrity trust violation framing and either an apology or denial trust repair strategy. Results from a 2×2 between-subjects study (n=82) show that participants interacting with a robot employing the integrity trust violation framing and the denial trust repair strategy are significantly more likely to exhibit behavioral retaliation toward the robot. In the Dyadic Trust Scale survey, an interaction between trust violation framing and trust repair strategy was observed. Our results demonstrate the importance of considering both trust violation framing and trust repair strategy choice when designing robots to repair trust. We also discuss the influence of human-to-robot promises and ethical considerations when framing and repairing trust between a human and robot.

Geiskkovitch, D. Y., Thiessen, R., Young, J. E., Glenwright, M. R..  2019.  What? That's Not a Chair!: How Robot Informational Errors Affect Children's Trust Towards Robots. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :48—56.

Robots that interact with children are becoming more common in places such as child care and hospital environments. While such robots may mistakenly provide nonsensical information or have mechanical malfunctions, we know little about how these robot errors are perceived by children and how they impact trust. This is particularly important when robots provide children with information or instructions, such as in education or health care. Drawing inspiration from established psychology literature investigating how children trust entities who teach or provide them with information (informants), we designed and conducted an experiment to examine how robot errors affect how young children (3-5 years old) trust robots. Our results suggest that children utilize their understanding of people to develop their perceptions of robots, and use this to determine how to interact with robots. Specifically, we found that children developed their trust model of a robot based on the robot's previous errors, similar to how they would for a person. However, we failed to replicate other prior findings with robots. Our results provide insight into how children as young as 3 years old might perceive robot errors and develop trust.

Robinette, P., Novitzky, M., Fitzgerald, C., Benjamin, M. R., Schmidt, H..  2019.  Exploring Human-Robot Trust During Teaming in a Real-World Testbed. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :592—593.

Project Aquaticus is a human-robot teaming competition on the water involving autonomous surface vehicles and human operated motorized kayaks. Teams composed of both humans and robots share the same physical environment to play capture the flag. In this paper, we present results from seven competitions of our half-court (one participant versus one robot) game. We found that participants indicated more trust in more aggressive behaviors from robots.

Ullman, D., Malle, B. F..  2019.  Measuring Gains and Losses in Human-Robot Trust: Evidence for Differentiable Components of Trust. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :618—619.

Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional-Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.

Ogawa, R., Park, S., Umemuro, H..  2019.  How Humans Develop Trust in Communication Robots: A Phased Model Based on Interpersonal Trust. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :606—607.

The purpose of this study was to propose a model of the development of trust in social robots. Insights into interpersonal trust were adopted from social psychology and a novel model was proposed. In addition, this study aimed to investigate the relationship between trust development and self-esteem. To validate the proposed model, an experiment using the communication robot NAO was conducted, and changes in categories of trust as well as self-esteem were measured. Results showed that general and category trust developed in the early phase. Self-esteem also increased over the course of the interactions with the robot.

2020-11-23
Haddad, G. El, Aïmeur, E., Hage, H..  2018.  Understanding Trust, Privacy and Financial Fears in Online Payment. 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :28–36.
In online payment, customers must transmit their personal and financial information through the website to conclude their purchase and pay for the services or items selected. They may face fears about online transactions raised by their perception of the risk of financial or privacy loss. They may have concerns over the payment decision, leading to possible negative behaviors such as shopping cart abandonment. Therefore, there are three major players that need to be addressed in online payment: the online seller, the payment page, and the customer's own perception. However, few studies have explored these three players in an online purchasing environment. In this paper, we focus on customer concerns and examine the antecedents of trust and payment security perception, as well as their joint effect on two fundamentally important customer concerns: privacy concerns and financial fear perception. A total of 392 individuals participated in an online survey. The results highlight the importance of the seller website's components (such as ease of use, security signs, and quality of information) and their impact on perceived payment security, as well as on customers' trust and financial fear perception. The objective of our study is to design a research model that explains the factors contributing to an online payment decision.
Alruwaythi, M., Kambampaty, K., Nygard, K..  2018.  User Behavior Trust Modeling in Cloud Security. 2018 International Conference on Computational Science and Computational Intelligence (CSCI). :1336–1339.
Evaluating user behavior in cloud computing infrastructure is important for both Cloud Users and Cloud Service Providers. The service providers must ensure the safety of users who access the cloud. User behavior can be modeled and employed to help assess trust and play a role in ensuring the authenticity and safety of the user. In this paper, we propose a User Behavior Trust Model based on Fuzzy Logic (UBTMFL). In this model, we develop user history patterns and compare them with current user behavior. The outcome of the comparison is sent to a trust computation center to calculate a user trust value. This model considers three types of trust: direct, history, and comprehensive. Simulation results are included.
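A minimal sketch of the kind of computation such a model implies is given below: current behavior is compared with a stored history pattern, the deviation is mapped to a direct-trust grade through a simple membership function, and direct and history trust are blended into a comprehensive value. The feature names, membership breakpoints, and weights are assumptions made for illustration, not the UBTMFL model itself.

```python
# Hedged sketch of a user-behavior trust computation in the spirit of a
# fuzzy-logic model: compare current behavior with a stored history pattern,
# map the deviation to a direct-trust grade via a simple membership function,
# and blend it with history trust into a comprehensive value. All feature
# names, membership breakpoints, and weights are illustrative assumptions.
import numpy as np

def membership_low_deviation(dev):
    """Triangular membership: 1.0 for no deviation, 0.0 at deviation >= 1."""
    return float(np.clip(1.0 - dev, 0.0, 1.0))

def direct_trust(current, history_pattern):
    """Map the normalized deviation of current behavior from history to [0, 1]."""
    deviations = [abs(current[k] - history_pattern[k]) / max(history_pattern[k], 1e-9)
                  for k in history_pattern]
    return membership_low_deviation(float(np.mean(deviations)))

def comprehensive_trust(direct, history, w_direct=0.6, w_history=0.4):
    return w_direct * direct + w_history * history

# hypothetical behavioral features for one cloud user
history_pattern = {"logins_per_day": 4, "data_downloaded_gb": 1.2, "failed_logins": 0.2}
current_session = {"logins_per_day": 9, "data_downloaded_gb": 6.0, "failed_logins": 3.0}

d = direct_trust(current_session, history_pattern)
c = comprehensive_trust(d, history=0.8)
print(f"direct trust = {d:.2f}, comprehensive trust = {c:.2f}")
```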
2020-11-20
Mousavi, M. Z., Kumar, S..  2019.  Analysis of key Factors for Organization Information Security. 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon). :514—518.
Protecting sensitive information from illegal access and misuse is crucial to all organizations. Inappropriate Information Security (IS) policies and procedures not only create a suitable environment for outsider attacks but also a good chance for insider misuse. In this paper, we discuss the role of an organization in information security, how human behavior affects the Information Security System (ISS), and how an organization can create and instill an effective information security culture to improve its information safeguards. The findings in this review can be used in further research and will be useful for organizations seeking to improve their information security structure (ISC).
Alzahrani, A., Johnson, C., Altamimi, S..  2018.  Information security policy compliance: Investigating the role of intrinsic motivation towards policy compliance in the organization. 2018 4th International Conference on Information Management (ICIM). :125—132.
Recent behavioral research in information security has focused on increasing employees' motivation to enhance the security performance in an organization. This empirical study investigated employees' information security policy (ISP) compliance intentions using self-determination theory (SDT). Relevant hypotheses were developed to test the proposed research model. Data obtained via a survey (N=407) from a Fortune 600 organization in Saudi Arabia provides empirical support for the model. The results confirmed that autonomy, competence and the concept of relatedness all positively affect employees' intentions to comply. The variable 'perceived value congruence' had a negative effect on ISP compliance intentions, and the perceived legitimacy construct did not affect employees' intentions. In general, the findings of this study suggest that SDT has value in research into employees' ISP compliance intentions.
2020-11-17
Abdelzaher, T., Ayanian, N., Basar, T., Diggavi, S., Diesner, J., Ganesan, D., Govindan, R., Jha, S., Lepoint, T., Marlin, B. et al..  2018.  Toward an Internet of Battlefield Things: A Resilience Perspective. Computer. 51:24—36.

The Internet of Battlefield Things (IoBT) might be one of the most expensive cyber-physical systems of the next decade, yet much research remains to develop its fundamental enablers. A challenge that distinguishes the IoBT from its civilian counterparts is resilience to a much larger spectrum of threats.

Radha, P., Selvakumar, N., Sekar, J. Raja, Johnsonselva, J. V..  2018.  Enhancing Internet of Battle Things using Ultrasonic assisted Non-Destructive Testing (Technical solution). 2018 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). :1—4.

The subsystem of the IoMT (Internet of Military Things) called the IoBT (Internet of Battle Things) is a major resource of the military, in which the various stakeholders of the battlefield and different categories of equipment are tightly integrated through the internet. The proposed architecture mentioned in this paper will be helpful to design the IoBT effectively for warfare using technologies like information technology, embedded technology, and network technology. The role of machine intelligence is essential in the IoBT to create smart things and provide accurate solutions without human intervention. Non-Destructive Testing (NDT) is used in industry to examine and analyze the invisible defects of equipment. Generally, ultrasonic waves are used to examine and analyze the internal defects of materials. Hence the proposed architecture of the IoBT is enhanced by ultrasonic-based NDT to study the properties of the things of the battlefield without causing any damage.

Abuzainab, N., Saad, W..  2018.  Misinformation Control in the Internet of Battlefield Things: A Multiclass Mean-Field Game. 2018 IEEE Global Communications Conference (GLOBECOM). :1—7.

In this paper, the problem of misinformation propagation is studied for an Internet of Battlefield Things (IoBT) system in which an attacker seeks to inject false information in the IoBT nodes in order to compromise the IoBT operations. In the considered model, each IoBT node seeks to counter the misinformation attack by finding the optimal probability of accepting given information that minimizes its cost at each time instant. The cost is expressed in terms of the quality of information received as well as the infection cost. The problem is formulated as a mean-field game with multiclass agents, which is suitable to model a massive heterogeneous IoBT system. For this game, the mean-field equilibrium is characterized, and an algorithm based on the forward backward sweep method is proposed. Then, the finite IoBT case is considered, and the conditions of convergence of the equilibria in the finite case to the mean-field equilibrium are presented. Numerical results show that the proposed scheme can achieve a two-fold increase in the quality of information (QoI) compared to a baseline in which the nodes are always transmitting.

Kamhoua, C. A..  2018.  Game theoretic modeling of cyber deception in the Internet of Battlefield Things. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :862—862.

Internet of Battlefield Things (IoBT) devices such as actuators, sensors, wearable devices, robots, drones, and autonomous vehicles facilitate Intelligence, Surveillance, and Reconnaissance (ISR), Command and Control, and battlefield services. IoBT devices have the ability to collect operational field data, to compute on the data, and to upload their information to the network. Securing the IoBT presents additional challenges compared with traditional information technology (IT) systems. First, IoBT devices are mass produced rapidly to be low-cost commodity items without security protection in their original design. Second, IoBT devices are highly dynamic, mobile, and heterogeneous without common standards. Third, it is imperative to understand the natural world, the physical process(es) under IoBT control, and how these real-world processes can be compromised before recommending any relevant security countermeasure. Moreover, unprotected IoBT devices can be used as “stepping stones” by attackers to launch more sophisticated attacks such as advanced persistent threats (APTs). As a result of these challenges, IoBT systems are the frequent targets of sophisticated cyber attacks that aim to disrupt mission effectiveness.

Abuzainab, N., Saad, W..  2018.  A Multiclass Mean-Field Game for Thwarting Misinformation Spread in the Internet of Battlefield Things. IEEE Transactions on Communications. 66:6643—6658.

In this paper, the problem of misinformation propagation is studied for an Internet of Battlefield Things (IoBT) system, in which an attacker seeks to inject false information in the IoBT nodes in order to compromise the IoBT operations. In the considered model, each IoBT node seeks to counter the misinformation attack by finding the optimal probability of accepting given information that minimizes its cost at each time instant. The cost is expressed in terms of the quality of information received as well as the infection cost. The problem is formulated as a mean-field game with multiclass agents, which is suitable to model a massive heterogeneous IoBT system. For this game, the mean-field equilibrium is characterized, and an algorithm based on the forward backward sweep method is proposed to find the mean-field equilibrium. Then, the finite-IoBT case is considered, and the conditions of convergence of the equilibria in the finite case to the mean-field equilibrium are presented. Numerical results show that the proposed scheme can achieve a 1.2-fold increase in the quality of information compared with a baseline scheme, in which the IoBT nodes are always transmitting. The results also show that the proposed scheme can reduce the proportion of infected nodes by 99% compared with the baseline.
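A heavily simplified sketch of the structure these two abstracts describe is given below: each node picks an acceptance probability minimizing a cost that trades quality of information against infection risk, the population infection level is propagated forward under that policy, and the two steps are iterated to a fixed point. The cost function, dynamics, single node class, and myopic best response are all simplifying assumptions; the papers solve a full multiclass mean-field game with a forward backward sweep over the underlying equations.

```python
# Hedged sketch: a heavily simplified, single-class, myopic version of the
# mean-field structure described above. Each node picks an acceptance
# probability p_t minimizing an instantaneous cost trading quality of
# information against infection risk, the infection level m_t is propagated
# forward, and the two are iterated to a fixed point. The cost function,
# dynamics, and parameters are illustrative assumptions, not the papers'
# equations (which use a full forward-backward sweep for the mean-field game).
import numpy as np

T = 50                   # time horizon
c_q, c_i = 1.0, 4.0      # quality-loss and infection-cost weights
lam = 2.0                # regularization making the best response interior
beta, gamma = 0.6, 0.2   # misinformation spread / recovery rates
m0 = 0.3                 # initial fraction of misinformed nodes

m = np.full(T, m0)       # initial guess for the mean field (infection level)

for sweep in range(100):
    # best-response ("backward", here myopic) acceptance probability:
    # minimize c_q*(1-p) + c_i*m*p + (lam/2)*p**2  ->  p* = (c_q - c_i*m)/lam
    p = np.clip((c_q - c_i * m) / lam, 0.0, 1.0)

    # forward step: propagate the infection level under that policy
    m_new = np.empty(T)
    m_new[0] = m0
    for t in range(T - 1):
        spread = beta * p[t] * m_new[t] * (1.0 - m_new[t])
        m_new[t + 1] = np.clip(m_new[t] + spread - gamma * m_new[t], 0.0, 1.0)

    if np.max(np.abs(m_new - m)) < 1e-6:   # fixed point reached
        break
    m = 0.5 * m + 0.5 * m_new              # damped update for stability

print(f"acceptance probability: t=0 -> {p[0]:.3f}, t=T -> {p[-1]:.3f}")
print(f"long-run infection level: {m[-1]:.3f}")
```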

Tosh, D. K., Shetty, S., Foytik, P., Njilla, L., Kamhoua, C. A..  2018.  Blockchain-Empowered Secure Internet-of-Battlefield Things (IoBT) Architecture. MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM). :593—598.

Internet of Things (IoT) technology is emerging to advance the modern defense and warfare applications because the battlefield things, such as combat equipment, warfighters, and vehicles, can sense and disseminate information from the battlefield to enable real-time decision making on military operations and enhance autonomy in the battlefield. Since this Internet-of-Battlefield Things (IoBT) environment is highly heterogeneous in terms of devices, network standards, platforms, connectivity, and so on, it introduces trust, security, and privacy challenges when battlefield entities exchange information with each other. To address these issues, we propose a Blockchain-empowered auditable platform for IoBT and describe its architectural components, such as battlefield-sensing layer, network layer, and consensus and service layer, in depth. In addition to the proposed layered architecture, this paper also presents several open research challenges involved in each layer to realize the Blockchain-enabled IoBT platform.
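The auditability primitive underlying such a blockchain-empowered platform can be sketched very simply: each record commits to the hash of its predecessor, so tampering with an earlier battlefield sensing record becomes detectable. The block fields and chain structure below are illustrative assumptions, not the layered architecture proposed in the paper; in a real deployment the consensus layer, not a single writer, would decide which blocks extend the chain.

```python
# Hedged sketch of the auditability primitive behind a blockchain-based IoBT
# record: each block commits to the previous block's hash, so altering any
# earlier sensing record invalidates the chain. Field names and the structure
# are illustrative assumptions, not the architecture proposed in the paper.
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash, sensing_record):
    return {"prev_hash": prev_hash,
            "timestamp": time.time(),
            "record": sensing_record}

def verify_chain(chain):
    """Check that every block's prev_hash matches the hash of its predecessor."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# build a tiny chain of (hypothetical) battlefield sensing records
genesis = make_block(prev_hash="0" * 64, sensing_record={"node": "uav-1", "reading": 42})
b1 = make_block(block_hash(genesis), {"node": "sensor-7", "reading": 17})
b2 = make_block(block_hash(b1), {"node": "vehicle-3", "reading": 99})
chain = [genesis, b1, b2]

print(verify_chain(chain))        # True: records are consistent

b1["record"]["reading"] = 1000    # an attacker tampers with an earlier record
print(verify_chain(chain))        # False: the audit detects the modification
```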