Filters: Keyword is computer theory
2021-07-27
Sharma, Prince, Shukla, Shailendra, Vasudeva, Amol.  2020.  Trust-based Incentive for Mobile Offloaders in Opportunistic Networks. 2020 International Conference on Smart Electronics and Communication (ICOSEC). :872—877.
Mobile data offloading over opportunistic networks has recently gained significance as a way to meet growing mobile data demand. Offloaders need to be properly incentivized to encourage more users to act as helpers in such networks, so the design of appropriate incentive mechanisms largely determines how much help mobile data offloading alternatives can offer. Existing incentive mechanisms are limited in that they are only partially implemented and most rely on third-party intervention to derive the incentive. Moreover, none of the existing work considers trust as an essential factor in incentive distribution. The few works that do analyze trust use it only to decide whether to offload, leaving the incentive independent of trust. We investigate whether trust can be incorporated into Nash equilibrium based incentive evaluation. Our analysis shows that trust-based incentive distribution encourages more than 50% of offloaders to act positively and contribute successfully to efficient mobile data offloading. We compare the performance of our algorithm with a salary-bonus scheme from the literature and obtain the optimum incentive once the dependence on the trust-based output exceeds 20%.
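As a rough illustration of how an incentive budget can be split with a tunable dependence on trust, the Python sketch below distributes a fixed budget between a data-proportional "salary" and a trust-weighted "bonus"; the formula, parameter names and 20% default are illustrative assumptions, not the paper's algorithm.

```python
# Illustrative sketch (not the paper's mechanism): split a fixed budget among
# offloaders with a tunable dependence on trust. `alpha` plays the role of the
# "dependence over trust" knob mentioned in the abstract (e.g. 0.2 for 20%).

def trust_weighted_incentives(offloaders, budget, alpha=0.2):
    """offloaders: list of dicts with 'bytes_offloaded' and 'trust' in [0, 1]."""
    total_bytes = sum(o["bytes_offloaded"] for o in offloaders)
    weighted = [o["bytes_offloaded"] * o["trust"] for o in offloaders]
    total_weighted = sum(weighted) or 1.0

    payouts = []
    for o, w in zip(offloaders, weighted):
        salary = (1 - alpha) * budget * o["bytes_offloaded"] / total_bytes
        bonus = alpha * budget * w / total_weighted   # trusted helpers earn more
        payouts.append(salary + bonus)
    return payouts

helpers = [
    {"bytes_offloaded": 500, "trust": 0.9},
    {"bytes_offloaded": 500, "trust": 0.4},
]
print(trust_weighted_incentives(helpers, budget=100.0))
```
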
Fatehi, Nina, Shahhoseini, HadiShahriar.  2020.  A Hybrid Algorithm for Evaluating Trust in Online Social Networks. 2020 10th International Conference on Computer and Knowledge Engineering (ICCKE). :158—162.
The growing popularity of Online Social Networks (OSNs), driven by the variety of services they provide, is inevitable. Security therefore plays a vital role in OSNs as a way to protect users' private data from abuse by unauthorized parties. Trust evaluation is a security approach that has been used since the advent of OSNs. Graph-based approaches are among the most popular methods for trust evaluation; however, graph-based models must impose limitations on the search for trusted paths, which reduces trust accuracy. In this work, a learning-based model is proposed that can find the reliable users of any target user without such limitations. Experimental results show a 12% improvement in trust accuracy compared to models based on the graph-based approach.
2020-10-05
Rafati, Jacob, DeGuchy, Omar, Marcia, Roummel F..  2018.  Trust-Region Minimization Algorithm for Training Responses (TRMinATR): The Rise of Machine Learning Techniques. 2018 26th European Signal Processing Conference (EUSIPCO). :2015—2019.

Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in the deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information alone because of the computational complexity of inverting the Hessian matrix of second derivatives and the memory storage it requires in large-scale data problems. The reward for using second-derivative information is that the methods can achieve improved convergence on features typical of non-convex settings, such as saddle points and local minima. In this paper we introduce TRMinATR - an algorithm based on the limited-memory BFGS quasi-Newton method using a trust region - as an alternative to gradient descent methods. TRMinATR bridges the gap between first-order and second-order methods by continuing to use gradient information to calculate Hessian approximations. We provide empirical results on the MNIST classification task and show robust convergence with favorable generalization characteristics.
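The minimal NumPy sketch below illustrates the trust-region quasi-Newton idea behind this approach: a BFGS approximation of the Hessian is built from gradients only, and steps are restricted to a trust region whose radius adapts to how well the quadratic model predicts the true decrease. It uses a dense BFGS matrix and Cauchy-point steps on a small test function, so it is a simplified stand-in for, not a reimplementation of, TRMinATR's limited-memory method.

```python
# Sketch of a trust-region quasi-Newton loop: Cauchy-point steps on a quadratic
# model whose Hessian approximation B is updated by BFGS from gradients only.
import numpy as np

def f(x):       # Rosenbrock test function (a stand-in for a training loss)
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def cauchy_point(g, B, radius):
    """Minimizer of the quadratic model along -g, clipped to the trust region."""
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(1.0, np.linalg.norm(g) ** 3 / (radius * gBg))
    return -tau * radius / np.linalg.norm(g) * g

x = np.array([-1.2, 1.0])
B = np.eye(2)                                    # initial Hessian approximation
radius = 1.0
for _ in range(500):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:
        break
    p = cauchy_point(g, B, radius)
    predicted = -(g @ p + 0.5 * p @ B @ p)       # decrease promised by the model
    actual = f(x) - f(x + p)                     # decrease actually obtained
    rho = actual / predicted if predicted > 0 else 0.0
    if rho > 0.75 and np.isclose(np.linalg.norm(p), radius):
        radius = min(2.0 * radius, 10.0)         # model is good: grow the region
    elif rho < 0.25:
        radius *= 0.25                           # model is poor: shrink the region
    if rho > 1e-4:                               # accept the step
        s = p
        y = grad(x + p) - g
        if y @ s > 1e-10:                        # curvature condition for BFGS
            Bs = B @ s
            B = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
        x = x + p
print(x, f(x))
```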

Parvin, Hashem, Moradi, Parham, Esmaeili, Shahrokh, Jalili, Mahdi.  2018.  An Efficient Recommender System by Integrating Non-Negative Matrix Factorization With Trust and Distrust Relationships. 2018 IEEE Data Science Workshop (DSW). :135—139.

Matrix factorization (MF) has proven to be an effective approach for building a successful recommender system. However, most current MF-based recommenders cannot obtain high prediction accuracy due to the sparseness of the user-item matrix, and they suffer from scalability issues when applied to large-scale real-world tasks. To tackle these issues, this paper proposes a social regularization method called TrustRSNMF that incorporates users' social trust information into a non-negative matrix factorization framework. The proposed method integrates trust statements along with user-item ratings as an additional information source into the recommendation model to deal with data sparsity and cold-start issues. To evaluate the effectiveness of the proposed method, a number of experiments are performed on two real-world datasets. The obtained results demonstrate significant improvements of the proposed method over state-of-the-art recommendation methods.
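A hedged sketch of the core idea, non-negative matrix factorization with a social regularization term that pulls each user's latent vector toward the trust-weighted average of the users they trust, is given below; the objective, update rule and hyper-parameters are illustrative and may differ from TrustRSNMF's exact formulation.

```python
# Trust-regularized NMF sketch: projected gradient steps on observed ratings
# plus a social term tying each user's factors to their trusted neighbours'.
import numpy as np

def trust_nmf(R, T, k=5, lr=0.005, reg=0.02, social=0.1, epochs=300, seed=0):
    """R: user-item ratings (0 = unobserved). T: row-normalized trust matrix."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.random((n_users, k))
    V = rng.random((n_items, k))
    observed = R > 0
    for _ in range(epochs):
        E = observed * (R - U @ V.T)             # error on observed entries only
        social_target = T @ U                    # trust-weighted neighbour average
        U += lr * (E @ V - reg * U - social * (U - social_target))
        V += lr * (E.T @ U - reg * V)
        U, V = np.maximum(U, 0), np.maximum(V, 0)   # keep factors non-negative
    return U, V

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
T = np.array([[0, 1, 0, 0],                      # who trusts whom (row-normalized)
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
U, V = trust_nmf(R, T)
print(np.round(U @ V.T, 2))                      # predicted ratings
```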

Lowney, M. Phil, Liu, Hong, Chabot, Eugene.  2018.  Trust Management in Underwater Acoustic MANETs based on Cloud Theory using Multi-Parameter Metrics. 2018 International Carnahan Conference on Security Technology (ICCST). :1—5.

With wide applications such as surveillance and imaging, securing underwater acoustic Mobile Ad-hoc NETworks (MANETs) becomes a double-edged sword for oceanographic operations. Underwater acoustic MANETs inherit the vulnerabilities of 802.11-based MANETs, which render traditional cryptographic approaches defenseless. A Trust Management Framework (TMF), which maintains confidence among participating nodes using metrics built from their communication activities, promises secure, efficient and reliable access in terrestrial MANETs. A TMF cannot be applied directly to the underwater environment, however, because marine characteristics make it difficult to differentiate natural turbulence from intentional misbehavior. This work proposes a trust model that defends underwater acoustic MANETs against attacks using a machine learning method with carefully chosen communication metrics, and a cloud model to address the uncertainty of trust in harsh underwater environments. By integrating the communication trust framework with the cloud model to combat two kinds of uncertainty, fuzziness and randomness, trust management is greatly improved for underwater acoustic MANETs.
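For readers unfamiliar with the cloud model, the short sketch below shows the standard normal cloud construction, in which a trust concept is described by an expectation Ex, entropy En and hyper-entropy He and sampled as "cloud drops"; the parameter values are illustrative and the paper's communication metrics are not reproduced.

```python
# Normal cloud model sketch: trust carries both fuzziness (membership) and
# randomness (sampling), controlled by expectation Ex, entropy En, hyper-entropy He.
import math
import random

def cloud_drops(Ex, En, He, n=1000, seed=42):
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_prime = abs(rng.gauss(En, He))     # randomized entropy (hyper-entropy)
        x = rng.gauss(Ex, En_prime)           # one sampled trust observation
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2 + 1e-12))
        drops.append((x, mu))                 # (trust value, degree of certainty)
    return drops

# A node whose trust concept is "around 0.8, with moderate uncertainty":
drops = cloud_drops(Ex=0.8, En=0.05, He=0.01)
avg_trust = sum(x for x, _ in drops) / len(drops)
print(round(avg_trust, 3))
```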

Ahmed, Abdelmuttlib Ibrahim Abdalla, Khan, Suleman, Gani, Abdullah, Hamid, Siti Hafizah Ab, Guizani, Mohsen.  2018.  Entropy-based Fuzzy AHP Model for Trustworthy Service Provider Selection in Internet of Things. 2018 IEEE 43rd Conference on Local Computer Networks (LCN). :606—613.

Nowadays, trust and reputation models are used to build a wide range of trust-based security mechanisms and trust-based service management applications in the Internet of Things (IoT). Treating trust as a single unit can result in missing important and significant factors. We split trust into its building blocks, then sort and assign weights to these building blocks (trust metrics) on the basis of their priorities for the transaction context of a particular goal. To perform these processes, we treat trust as a multi-criteria decision-making problem in which a set of trustworthiness metrics represents the decision criteria. We introduce an entropy-based fuzzy analytic hierarchy process (EFAHP) as a trust model for selecting a trustworthy service provider, since decision making over multi-metric trust is structural in nature. EFAHP provides 1) fuzziness, which fits the vagueness, uncertainty, and subjectivity of trust attributes; 2) AHP, which is a systematic way of making decisions in complex multi-criteria settings; and 3) the entropy concept, which is used to calculate the aggregate weights for each service provider. We present a numerical illustration in trust-based Service Oriented Architecture in the IoT (SOA-IoT) to demonstrate service provider selection using the EFAHP model to assess and aggregate the trust scores.
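The entropy-weighting step of such a model can be illustrated with the small sketch below; the decision matrix, metric names and values are made-up examples, and the fuzzy-AHP pairwise-comparison stage is not shown.

```python
# Entropy weighting sketch: metrics whose values vary more across candidate
# service providers receive larger weights in the aggregated trust score.
import numpy as np

def entropy_weights(decision_matrix):
    X = np.asarray(decision_matrix, dtype=float)
    P = X / X.sum(axis=0)                        # normalize each criterion column
    k = 1.0 / np.log(len(X))                     # len(X) = number of alternatives
    entropy = -k * np.sum(P * np.log(P + 1e-12), axis=0)
    diversity = 1.0 - entropy                    # more diverse column -> more weight
    return diversity / diversity.sum()

# Rows: candidate service providers; columns: trust metrics (e.g. availability,
# response-time score, reputation), all larger-is-better in this toy example.
M = [[0.9, 0.7, 0.8],
     [0.6, 0.9, 0.7],
     [0.8, 0.8, 0.9]]
w = entropy_weights(M)
scores = np.asarray(M) @ w                       # aggregate weighted trust score
print(w.round(3), scores.round(3))
```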

Mitra, Aritra, Abbas, Waseem, Sundaram, Shreyas.  2018.  On the Impact of Trusted Nodes in Resilient Distributed State Estimation of LTI Systems. 2018 IEEE Conference on Decision and Control (CDC). :4547—4552.

We address the problem of distributed state estimation of a linear dynamical process in an attack-prone environment. A network of sensors, some of which can be compromised by adversaries, aim to estimate the state of the process. In this context, we investigate the impact of making a small subset of the nodes immune to attacks, or “trusted”. Given a set of trusted nodes, we identify separate necessary and sufficient conditions for resilient distributed state estimation. We use such conditions to illustrate how even a small trusted set can achieve a desired degree of robustness (where the robustness metric is specific to the problem under consideration) that could otherwise only be achieved via additional measurement and communication-link augmentation. We then establish that, unfortunately, the problem of selecting trusted nodes is NP-hard. Finally, we develop an attack-resilient, provably-correct distributed state estimation algorithm that appropriately leverages the presence of the trusted nodes.

Kang, Anqi.  2018.  Collaborative Filtering Algorithm Based on Trust and Information Entropy. 2018 International Conference on Intelligent Informatics and Biomedical Sciences (ICIIBMS). 3:262—266.

To improve the accuracy of similarity computation, this paper proposes an improved collaborative filtering algorithm based on trust and information entropy. First, the direct trust between users is determined from their ratings to explore potential trust relationships, and a time decay function is introduced to capture how a user's interests decay over time. Second, direct trust and indirect trust are combined into an overall trust value, which is weighted with the Pearson similarity to obtain a trust similarity. The information entropy theory is then introduced to calculate a similarity based on weighted information entropy. Finally, the trust similarity and the entropy-based similarity are weighted together into a combined similarity, which is used to predict the target user's ratings and generate recommendations. Simulations show that the improved algorithm has higher recommendation accuracy and can provide a more accurate and reliable recommendation service.
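A compact sketch of the central step, blending a precomputed trust score with the Pearson similarity and using the blend to predict a rating as a weighted deviation from the user's mean, is given below; the paper's trust propagation, time-decay and weighted-entropy terms are not reproduced.

```python
# Trust-blended collaborative filtering sketch: similarity is a convex
# combination of a given trust score and the Pearson correlation on co-rated items.
import numpy as np

def pearson(u, v, R):
    co = (R[u] > 0) & (R[v] > 0)                 # items rated by both users
    if co.sum() < 2:
        return 0.0
    a, b = R[u, co] - R[u, co].mean(), R[v, co] - R[v, co].mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(u, item, R, trust, beta=0.5):
    """Weighted deviation-from-mean prediction with trust-blended similarity."""
    mean_u = R[u][R[u] > 0].mean()
    num = den = 0.0
    for v in range(R.shape[0]):
        if v == u or R[v, item] == 0:
            continue
        sim = beta * trust[u, v] + (1 - beta) * pearson(u, v, R)
        num += sim * (R[v, item] - R[v][R[v] > 0].mean())
        den += abs(sim)
    return mean_u if den == 0 else mean_u + num / den

R = np.array([[5, 3, 0, 1],
              [4, 0, 4, 1],
              [1, 1, 0, 5],
              [2, 0, 4, 4]], dtype=float)
trust = np.array([[0, .9, .1, .2],               # assumed precomputed trust in [0, 1]
                  [.9, 0, .2, .3],
                  [.1, .2, 0, .8],
                  [.2, .3, .8, 0]])
print(round(predict(0, 2, R, trust), 2))         # predict user 0's rating of item 2
```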

Abusitta, Adel, Bellaiche, Martine, Dagenais, Michel.  2018.  A trust-based game theoretical model for cooperative intrusion detection in multi-cloud environments. 2018 21st Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN). :1—8.

Cloud systems are becoming more complex and vulnerable to attacks, and cyber attacks are also becoming more sophisticated and harder to detect. It is therefore increasingly difficult for a single cloud-based intrusion detection system (IDS) to detect all attacks, because of its limited and incomplete knowledge about attacks. Recent research in cyber-security has shown that cooperation among IDSs can bring higher detection accuracy in such complex computer systems. Through collaboration, a cloud-based IDS can consult other IDSs about suspicious intrusions and increase decision accuracy. The problem with existing cooperative IDS approaches is that they overlook untrusted (malicious or not) IDSs that may negatively affect decisions about suspicious intrusions in the cloud. Moreover, they rely on a centralized architecture in which a central agent regulates the cooperation, which contradicts the distributed nature of the cloud. In this paper, we propose a framework that enables IDSs to form trustworthy IDS communities in a distributed manner. We devise a novel decentralized algorithm, based on coalitional game theory, that allows a set of cloud-based IDSs to cooperatively set up coalitions in such a way that their individual detection accuracy increases, even in the presence of untrusted IDSs.

Yu, Zihuan.  2018.  Research on Cloud Computing Security Evaluation Model Based on Trust Management. 2018 IEEE 4th International Conference on Computer and Communications (ICCC). :1934—1937.

Cloud computing technology has made outstanding contributions to the Internet in data unification and sharing applications, but the problem of information security in the cloud computing environment must be addressed with effective measures. To manage data security under cloud services, the Dempster-Shafer (DS) evidence theory method is introduced. A trust management mechanism is established starting from the data source, and a cloud computing security assessment model is constructed to make cloud computing security assessment quantifiable. Simulations show that quantifying the confidence criterion through big data trust management and DS evidence theory not only regulates the mechanism for quantifying data credibility under cloud computing, but also improves the effectiveness of cloud computing security assessment, providing a friendly service support platform for subsequent cloud computing services.
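The Dempster-Shafer combination step mentioned here can be illustrated with a short sketch of Dempster's rule; the frame of discernment and the mass values are illustrative assumptions rather than the paper's concrete metrics.

```python
# Dempster's rule sketch: fuse two assessors' belief masses over the frame
# {secure, insecure}, discounting the conflicting mass.

def dempster_combine(m1, m2):
    """m1, m2: dicts mapping frozenset hypotheses to mass, each summing to 1."""
    conflict = 0.0
    combined = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

S, I = frozenset({"secure"}), frozenset({"insecure"})
theta = S | I                                     # "don't know"
assessor1 = {S: 0.6, I: 0.1, theta: 0.3}
assessor2 = {S: 0.7, I: 0.2, theta: 0.1}
fused = dempster_combine(assessor1, assessor2)
print({tuple(sorted(h)): round(m, 3) for h, m in fused.items()})
```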

2018-08-23
Nizamkari, N. S..  2017.  A graph-based trust-enhanced recommender system for service selection in IOT. 2017 International Conference on Inventive Systems and Control (ICISC). :1–5.

In an Internet of Things (IoT) network, each node (device) both provides and requires services, and with the growth of the IoT the number of nodes providing the same service has also increased, creating the problem of selecting one reliable service from among many providers. In this paper, we propose a scalable graph-based collaborative filtering recommendation algorithm, improved using trust, to solve the service selection problem; unlike a centralized recommender, which fails to keep up, it can scale to match the growth of the IoT. Using this recommender, a node can predict its ratings for the nodes providing the required service and then select the best-rated service provider.

Felmlee, D., Lupu, E., McMillan, C., Karafili, E., Bertino, E..  2017.  Decision-making in policy governed human-autonomous systems teams. 2017 IEEE SmartWorld, Ubiquitous Intelligence Computing, Advanced Trusted Computed, Scalable Computing Communications, Cloud Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI). :1–6.

Policies govern choices in the behavior of systems. They are applied to human behavior as well as to the behavior of autonomous systems but are defined differently in each case. Generally humans have the ability to interpret the intent behind the policies, to bring about their desired effects, even occasionally violating them when the need arises. In contrast, policies for automated systems fully define the prescribed behavior without ambiguity, conflicts or omissions. The increasing use of AI techniques and machine learning in autonomous systems such as drones promises to blur these boundaries and allows us to conceive in a similar way more flexible policies for the spectrum of human-autonomous systems collaborations. In coalition environments this spectrum extends across the boundaries of authority in pursuit of a common coalition goal and covers collaborations between human and autonomous systems alike. In social sciences, social exchange theory has been applied successfully to explain human behavior in a variety of contexts. It provides a framework linking the expected rewards, costs, satisfaction and commitment to explain and anticipate the choices that individuals make when confronted with various options. We discuss here how it can be used within coalition environments to explain joint decision making and to help formulate policies re-framing the concepts where appropriate. Social exchange theory is particularly attractive within this context as it provides a theory with “measurable” components that can be readily integrated in machine reasoning processes.

Xia, D., Zhang, Y..  2017.  The fuzzy control of trust establishment. 2017 4th International Conference on Systems and Informatics (ICSAI). :655–659.

In an open network environment, unfamiliar entities can establish mutual trust through Automated Trust Negotiation (ATN), which is based on exchanging digital credentials. In traditional ATN, an attribute certificate is either satisfied or not, and every certificate in a strategy carries the same importance, which can cause unnecessary negotiation failures. In practice, attribute satisfaction is not simply 0 or 1 but may lie between 0 and 1, so satisfaction degrees differ and the negotiation strategy needs to be quantified. This paper analyzes the fuzzy negotiation process in order to further improve the efficiency and accuracy of trust establishment.

Xi, X., Zhang, F., Lian, Z..  2017.  Implicit Trust Relation Extraction Based on Hellinger Distance. 2017 13th International Conference on Semantics, Knowledge and Grids (SKG). :223–227.

Recent studies have shown that adding explicit social trust information to social recommendation significantly improves rating prediction accuracy, but clear trust data among users is difficult to obtain in real life. Scholars have therefore studied trust measures to calculate and predict interaction and trust between users. In this article, a method for extracting social trust relationships based on the Hellinger distance is proposed, and user similarity is calculated from the f-divergence between nodes on one side of the user-item bipartite network. A new matrix factorization model based on implicit social relationships is then proposed by adding the extracted implicit social relations into an improved matrix factorization. The experimental results show that recommending with implicit social trust is almost as effective as using actual explicit user trust ratings, and when explicit trust data cannot be extracted, our method performs better than traditional algorithms.
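A short sketch of the Hellinger distance (an f-divergence) between two users' normalized rating distributions, from which an implicit trust link could be derived by thresholding, is shown below; the threshold and the mapping onto the bipartite network are simplified assumptions.

```python
# Hellinger-distance sketch: users with similar rating distributions over items
# are linked by an implicit trust edge.
import numpy as np

def hellinger(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()               # turn rating vectors into distributions
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def implicit_trust(R, threshold=0.35):
    """0/1 implicit-trust matrix: link users whose distributions are close."""
    n = R.shape[0]
    T = np.zeros((n, n))
    for u in range(n):
        for v in range(n):
            if u != v and hellinger(R[u] + 1e-9, R[v] + 1e-9) < threshold:
                T[u, v] = 1.0
    return T

R = np.array([[5, 3, 0, 1],
              [4, 3, 0, 1],
              [1, 0, 4, 5],
              [0, 1, 5, 4]], dtype=float)
print(implicit_trust(R))
```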

Chaturvedi, P., Daniel, A. K..  2017.  Trust aware node scheduling protocol for target coverage using rough set theory. 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT). :511–514.

Wireless sensor networks have attracted substantial research interest because of their unique features such as fault tolerance and autonomous operation. Maximizing coverage while accounting for resource scarcity is a crucial problem in wireless sensor networks, and approaches that address it while maximizing network lifetime are considered prominent; node scheduling is one such mechanism. A scheduling strategy that addresses the target coverage problem based on coverage probability and trust values was proposed in the Energy Efficient Coverage Protocol (EECP). In this paper, optimized decision rules for determining the number of active nodes are obtained using rough set theory. The results show that the proposed extension yields fewer decision rules to consider when determining node states in the network, improving network efficiency by reducing the number of packets transmitted and the associated overhead.

Salah, H., Eltoweissy, M..  2017.  Towards Collaborative Trust Management. 2017 IEEE 3rd International Conference on Collaboration and Internet Computing (CIC). :198–208.

Current technologies, including cloud computing, social networking, mobile applications and crowd and synthetic intelligence, coupled with the explosion in storage and processing power, are evolving massive-scale marketplaces for a wide variety of resources and services. They are also enabling unprecedented forms and levels of collaboration among human and machine entities. In this new era, trust remains the keystone of success in any relationship between two or more parties. A primary challenge is to establish and manage trust in environments where massive numbers of consumers, providers and brokers are largely autonomous with vastly diverse requirements, capabilities, and trust profiles. Most contemporary trust management solutions are oblivious to diversity in trustors' requirements and contexts, use direct or indirect experience as the only form of trust computation, employ hardcoded trust computations and marginally consider collaboration in trust management. We surmise the need for a reference architecture for trust management to guide the development of a wide spectrum of trust management systems. In our previous work, we presented a preliminary reference architecture for trust management which provides customizable and reconfigurable trust management operations to accommodate varying levels of diversity and trust personalization. In this paper, we present a comprehensive taxonomy for trust management and extend our reference architecture to feature collaboration as a first-class object. Our goal is to promote the development of new collaborative trust management systems in which various trust management operations involve collaborating entities. Using the proposed architecture, we implemented a collaborative personalized trust management system. Simulation results demonstrate the effectiveness and efficiency of our system.

Rahman, Fatin Hamadah, Au, Thien Wan, Newaz, S. H. Shah, Suhaili, Wida Susanty.  2017.  Trustworthiness in Fog: A Fuzzy Approach. Proceedings of the 2017 VI International Conference on Network, Communication and Computing. :207–211.

The trust management issue in the cloud domain has been a persistent research topic among scholars, and a similar issue is bound to occur in the emerging fog domain. Although fog and cloud are relatively similar, evaluating trust in the fog domain is more challenging than in the cloud. Fog's high mobility support, distributed nature, and closer distance to the end user mean that fog nodes are likely to operate in vulnerable environments. Unlike the cloud, fog has little to no human intervention and lacks redundancy, so it can experience downtime at any given time; this unpredictable status makes fogs harder to trust. These distinguishing factors, combined with the factors already used for trust evaluation in the cloud, can serve as metrics for evaluating trust in fog. This paper discusses a use case of a campus scenario with several fog servers and the metrics used in evaluating their trustworthiness. While a fuzzy logic method is used to evaluate trust, the contribution of this study is the identification of fuzzy logic configurations that can alter the trust value of a fog.
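A hedged sketch of a Mamdani-style fuzzy evaluation of a fog server's trustworthiness is given below; the metrics (uptime ratio, response latency), membership functions and rules are illustrative assumptions, not the configurations identified in the paper.

```python
# Fuzzy trust sketch: two crisp metrics are fuzzified with triangular membership
# functions, four rules fire with min-AND strength, and the result is defuzzified
# as a weighted centroid of the rule consequents.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fog_trust(uptime, latency_ms):
    high_uptime = tri(uptime, 0.7, 1.0, 1.3)          # "uptime is high"
    low_uptime = tri(uptime, -0.3, 0.0, 0.8)          # "uptime is low"
    low_latency = tri(latency_ms, -50, 0, 150)        # "latency is low"
    high_latency = tri(latency_ms, 100, 250, 400)     # "latency is high"

    # (rule strength, trust level the rule points to)
    rules = [
        (min(high_uptime, low_latency), 0.9),          # trustworthy
        (min(high_uptime, high_latency), 0.5),         # moderately trustworthy
        (min(low_uptime, low_latency), 0.4),
        (min(low_uptime, high_latency), 0.1),          # untrustworthy
    ]
    num = sum(strength * level for strength, level in rules)
    den = sum(strength for strength, _ in rules)
    return num / den if den else 0.0                   # weighted-centroid defuzzification

print(round(fog_trust(uptime=0.95, latency_ms=60), 2))
print(round(fog_trust(uptime=0.60, latency_ms=300), 2))
```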

Dong, Changyu, Wang, Yilei, Aldweesh, Amjad, McCorry, Patrick, van Moorsel, Aad.  2017.  Betrayal, Distrust, and Rationality: Smart Counter-Collusion Contracts for Verifiable Cloud Computing. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :211–227.
Cloud computing has become an irreversible trend. Together comes the pressing need for verifiability, to assure the client the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all have a high overhead, thus if being deployed in the clouds, would render cloud computing more expensive than the on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart contract based solution. In a nutshell, a client lets two clouds compute the same task, and uses smart contracts to stimulate tension, betrayal and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by crosschecking the results from the two clouds. We provide a formal analysis of the games induced by the contracts, and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, and a small transaction fee to use the smart contracts. We also conducted a feasibility study that involves implementing the contracts in Solidity and running them on the official Ethereum network.
2018-02-14
Wang, Frank, Joung, Yuna, Mickens, James.  2017.  Cobweb: Practical Remote Attestation Using Contextual Graphs. Proceedings of the 2nd Workshop on System Software for Trusted Execution. :3:1–3:7.

In theory, remote attestation is a powerful primitive for building distributed systems atop untrusting peers. Unfortunately, the canonical attestation framework defined by the Trusted Computing Group is insufficient to express rich contextual relationships between client-side software components. Thus, attestors and verifiers must rely on ad-hoc mechanisms to handle real-world attestation challenges like attestors that load executables in nondeterministic orders, or verifiers that require attestors to track dynamic information flows between attestor-side components. In this paper, we survey these practical attestation challenges. We then describe a new attestation framework, named Cobweb, which handles these challenges. The key insight is that real-world attestation is a graph problem. An attestation message is a graph in which each vertex is a software component, and has one or more labels, e.g., the hash value of the component, or the raw file data, or a signature over that data. Each edge in an attestation graph is a contextual relationship, like the passage of time, or a parent/child fork() relationship, or a sender/receiver IPC relationship. Cobweb's verifier-side policies are graph predicates which analyze contextual relationships. Experiments with real, complex software stacks demonstrate that Cobweb's abstractions are generic and can support a variety of real-world policies.
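A toy sketch of the attestation-as-a-graph idea, with labeled vertices, contextual edges and verifier policies expressed as graph predicates, might look like the following; the component names, labels and example policies are assumptions, not Cobweb's actual interfaces.

```python
# Attestation-graph sketch: vertices are software components with labels (here
# just a hash), edges carry contextual relationships, and policies are predicates.

attestation_graph = {
    "vertices": {
        "bootloader": {"hash": "aa11"},
        "kernel":     {"hash": "bb22"},
        "app":        {"hash": "cc33"},
        "plugin":     {"hash": "dd44"},
    },
    "edges": [
        ("bootloader", "kernel", "loaded-before"),
        ("kernel", "app", "parent-child-fork"),
        ("app", "plugin", "ipc-sender-receiver"),
    ],
}

KNOWN_GOOD = {"aa11", "bb22", "cc33", "dd44"}

def policy_all_hashes_known(graph):
    """Every component in the attestation must have a known-good hash."""
    return all(v["hash"] in KNOWN_GOOD for v in graph["vertices"].values())

def policy_no_untrusted_ipc_into(graph, target):
    """No component may talk to `target` over IPC unless its hash is known-good."""
    for src, dst, rel in graph["edges"]:
        if dst == target and rel == "ipc-sender-receiver":
            if graph["vertices"][src]["hash"] not in KNOWN_GOOD:
                return False
    return True

print(policy_all_hashes_known(attestation_graph))
print(policy_no_untrusted_ipc_into(attestation_graph, "plugin"))
```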

2018-01-23
Yasin, Muhammad, Sengupta, Abhrajit, Nabeel, Mohammed Thari, Ashraf, Mohammed, Rajendran, Jeyavijayan(JV), Sinanoglu, Ozgur.  2017.  Provably-Secure Logic Locking: From Theory To Practice. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1601–1618.

Logic locking has been conceived as a promising proactive defense strategy against intellectual property (IP) piracy, counterfeiting, hardware Trojans, reverse engineering, and overbuilding attacks. Yet, various attacks that use a working chip as an oracle have been launched on logic locking to successfully retrieve its secret key, undermining the defense of all existing locking techniques. In this paper, we propose stripped-functionality logic locking (SFLL), which strips some of the functionality of the design and hides it in the form of a secret key(s), thereby rendering the on-chip implementation functionally different from the original one. When loaded onto an on-chip memory, the secret keys restore the original functionality of the design. Through security-aware synthesis that creates a controllable mismatch between the reverse-engineered netlist and original design, SFLL provides a quantifiable and provable resilience trade-off between all known and anticipated attacks. We demonstrate the application of SFLL to large designs (>100K gates) using a computer-aided design (CAD) framework that ensures attaining the desired security level at minimal implementation cost, 8%, 5%, and 0.5% for area, power, and delay, respectively. In addition to theoretical proofs and simulation confirmation of SFLL's security, we also report results from the silicon implementation of SFLL on an ARM Cortex-M0 microprocessor in 65nm technology.
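As a purely behavioral analogy of the stripped-functionality idea (a software sketch under assumed values, not the paper's gate-level scheme), the snippet below inverts the output of a small Boolean function for one protected input pattern and restores it only when the correct key is supplied.

```python
# SFLL-style behavioral sketch: the stripped circuit is wrong on one protected
# input cube; a key-controlled restore unit corrects it only for the right key.

PROTECTED_PATTERN = 0b1011      # input pattern whose behavior is stripped (assumed)
SECRET_KEY = 0b1011             # the key that restores the original behavior

def original_circuit(x):
    return bin(x).count("1") % 2            # example function: parity of 4-bit input

def stripped_circuit(x):
    out = original_circuit(x)
    return out ^ 1 if x == PROTECTED_PATTERN else out   # functionality stripped

def locked_chip(x, key):
    out = stripped_circuit(x)
    return out ^ 1 if x == key else out                  # restore unit

# Correct key: the chip matches the original design on every input.
# Wrong key: it is wrong on the protected pattern (and on the wrongly "restored" one).
for key in (SECRET_KEY, 0b0000):
    bad = [f"{x:04b}" for x in range(16) if locked_chip(x, key) != original_circuit(x)]
    print(f"key={key:04b} mismatching inputs={bad}")
```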