Trustworthy Systems, Part 1

SoS Newsletter- Advanced Book Block




Establishing trust to assure the identity of external parties is one of the core problems in information security. The growth of large-scale distributed systems and of outsourcing to the cloud increases both the need for trustworthy systems and the challenge of building them. The works cited here are from 2014 conferences.


Waguespack, L.J.; Yates, D.J.; Schiano, W.T., "Towards a Design Theory for Trustworthy Information Systems," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 3707-3716, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.461 Abstract: The lack of a competent design theory to shape information system security policy and implementation has exacerbated an already troubling lack of security. Information systems remain insecure and therefore untrustworthy even after more than half a century of technological evolution. The issues grow ever more severe as the volume of data grows exponentially and the cloud emerges as a preferred repository. We aspire to advance security design by expanding the mindsets of stakeholder and designer to include a more complete portfolio of factors. The goal of security design is to craft choices that resonate with stakeholders' sense of a trustworthy system. To engender trust, security must be intrinsic to any definition of IS design quality. Thriving Systems Theory (TST) is an information systems design theory focused on reconciling and harmonizing stakeholder intentions. Formulating security design through TST is a starting point for a quality-based security design theory for trustworthy information systems.
Keywords: information systems; investment; security of data; IS design quality; TST; advance security design; design theory; information system security policy; portfolio; quality-based security design theory; technological evolution; thriving systems theory; trustworthy information systems; Communities; Context; Encapsulation; Information systems; Internet; Security; Shape; (ID#: 15-4734)


Haiying Shen; Guoxin Liu; Gemmill, J.; Ward, L., "A P2P-Based Infrastructure for Adaptive Trustworthy and Efficient Communication in Wide-Area Distributed Systems," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 9, pp. 2222-2233, Sept. 2014. doi: 10.1109/TPDS.2013.159 Abstract: Tremendous advances in pervasive networking have enabled wide-area distributed systems to connect distributed resources or users such as corporate data centers and high-performance computing centers. These distributed pervasive systems take advantage of resources and enhance collaborations worldwide. However, due to lack of central management, they are severely threatened by a variety of malicious users in today's Internet. Current reputation- and anonymity-based technologies for node communication enhance system trustworthiness. However, most of these technologies gain trustworthiness at the cost of efficiency degradation. This paper presents a P2P-based infrastructure for trustworthy and efficient node communication in wide-area distributed systems. It jointly addresses trustworthiness and efficiency in its operation in order to meet the high-performance requirements of a diversified wealth of distributed pervasive applications. The infrastructure includes two policies: trust/efficiency-oriented request routing and trust-based adaptive anonymous response forwarding. This infrastructure not only offers a trustworthy environment with anonymous communication but also enhances overall system efficiency through harmonious trustworthiness and efficiency trade-offs. Experimental results from simulations and the real-world PlanetLab testbed show the superior performance of the P2P-based infrastructure in achieving both high trustworthiness and high efficiency in comparison to other related approaches.
Keywords: peer-to-peer computing; trusted computing; Internet; P2P-based infrastructure; PlanetLab; adaptive trustworthy; anonymity-based technologies; corporate data centers; distributed pervasive systems; efficiency-oriented request routing; high-performance computing centers; node communication; peer-to-peer infrastructure; pervasive networking; reputation-based technologies; system trustworthiness; trust-based adaptive anonymous response forwarding; wide-area distributed systems; Algorithm design and analysis; Overlay networks; Peer-to-peer computing; Radiation detectors; Routing; Servers; Tunneling; Wide-area distributed systems; anonymity; efficiency; peer to peer networks; reputation systems (ID#: 15-4735)


Banga, G.; Crosby, S.; Pratt, I., "Trustworthy Computing for the Cloud-Mobile Era: A Leap Forward in Systems Architecture," Consumer Electronics Magazine, IEEE, vol. 3, no. 4, pp. 31-39, Oct. 2014. doi: 10.1109/MCE.2014.2338591 Abstract: The past decade has transformed computing in astounding ways: Who could have predicted, back in 2004, that cloud computing was about to change so profoundly to democratize in many respects the availability of computing, storage, and networking? Who could have imagined the transformation of client computing that resulted from the combination of pay-as-you-go cloud infrastructure for application developers and affordable, powerful, touch-enabled mobile devices?
Keywords: cloud computing; software architecture; trusted computing; cloud infrastructure; cloud-mobile era; systems architecture; trustworthy computing; Cloud computing; Computer architecture; Computer security; Mobile communication; Systems analysis and design; Virtual machine monitors (ID#: 15-4736)


Choochotkaew, S.; Piromsopa, K., "Development of a Trustworthy Authentication System in Mobile Ad-Hoc Networks for Disaster Area," Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2014 11th International Conference on, pp. 1-6, 14-17 May 2014. doi: 10.1109/ECTICon.2014.6839739 Abstract: In this paper, we propose a MANET authentication model for communication between victims in disaster areas. Our model is as secure as the Self-Generated-Certificate Public Key without pairing scheme [1], but does not require a direct connection to a centralized CA. We achieve this by combining two adjusted protocols into two independent authentication modes: main mode and emergency mode. In our scenario, a disaster area is partitioned into two adjacent zones: a damage zone (most infrastructures inside are damaged by a severe disaster), and an infrastructure zone. This partition is based on our observation from many real life disaster situations. A node, called a carrier (rescue node), moves between the two zones in order to relay between them. Our proposed hybrid approach has higher availability and more efficiency than the traditional approaches. In our system, an encrypted message can be used to verify both senders and receivers as well as to preserve confidentiality and integrity of data. The key to the success of our model is the mobility of the rescue nodes. Our model is validated using the NS-3 simulator. We present security and efficiency analysis by comparing to the traditional approaches.
Keywords: cryptographic protocols; data integrity; emergency management; mobile ad hoc networks; mobility management (mobile radio);public key cryptography; radio receivers; radio transmitters; telecommunication security; MANET authentication model;NS-3 simulator; adjusted protocols; centralized CA; damage zone; disaster area; disaster situations; efficiency analysis; emergency mode; encrypted message; hybrid approach; independent authentication modes; infrastructure zone; main mode; mobile ad hoc networks; rescue node; security analysis; self-generated-certificate public key; trustworthy authentication system; Authentication; Availability; Encryption; Mobile ad hoc networks; Protocols; Public key; Authentication model; Communication in disaster area; MANET; self-generated-certificate public key; self-organized public key (ID#: 15-4737)


Kroll, J.A.; Stewart, G.; Appel, A.W., "Portable Software Fault Isolation," Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, pp. 18-32, 19-22 July 2014. doi: 10.1109/CSF.2014.10 Abstract: We present a new technique for architecture portable software fault isolation (SFI), together with a prototype implementation in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the CompCert C compiler. But like traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of CompCert and leveraging CompCert's formally proved preservation of the behavior of safe programs, we can obtain binary modules that satisfy the SFI memory safety policy for any of CompCert's supported architectures (currently: PowerPC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems.
Keywords: program compilers; software fault tolerance; theorem proving; trusted computing; Cminor language; CompCert C compiler; Coq proof assistant; SFI memory safety policy; architecture portable software fault isolation; assembly-level program analysis; compiler intermediate language; trusted computing base; trustworthy SFI systems; Abstracts; Assembly; Computer architecture; Program processors; Safety; Security; Semantics; Architecture Portability; Memory Safety; Software Fault Isolation; Verified Compilers (ID#: 15-4738)
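A core mechanism of traditional SFI (not the paper's Cminor-level transformation, which is verified in Coq) is address masking: every sandboxed memory access is rewritten so its effective address is forced into the sandbox region. A minimal sketch of that masking idea, with made-up region constants:

```python
# Software fault isolation by address masking (illustrative sketch).
# Every sandboxed store is rewritten so the effective address lands
# inside the sandbox, regardless of what the untrusted code computed.

SANDBOX_BASE = 0x2000_0000      # assumed region start, power-of-two aligned
SANDBOX_SIZE = 0x0010_0000      # 1 MiB region, a power of two
OFFSET_MASK = SANDBOX_SIZE - 1  # keeps only the in-region offset bits

def confine(addr: int) -> int:
    """Force an arbitrary address into [SANDBOX_BASE, SANDBOX_BASE + SANDBOX_SIZE)."""
    return SANDBOX_BASE | (addr & OFFSET_MASK)

# An out-of-bounds address computed by untrusted code stays in bounds:
wild = 0xDEAD_BEEF
safe = confine(wild)
assert SANDBOX_BASE <= safe < SANDBOX_BASE + SANDBOX_SIZE
```

Because the mask is applied unconditionally, no run-time bounds check or branch is needed; this is what makes the policy cheap to enforce and easy to verify after compilation.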


Aktas, Erdem; Afram, Furat; Ghose, Kanad, "Continuous, Low Overhead, Run-Time Validation of Program Executions," Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pp. 229-241, 13-17 Dec. 2014. doi: 10.1109/MICRO.2014.18 Abstract: The construction of trustworthy systems demands that the execution of every piece of code is validated as genuine, that is, the executed codes do exactly what they are supposed to do. Pre-execution validations of code integrity fail to detect run-time compromises like code injection, return- and jump-oriented programming, and illegal dynamic linking of program modules. We propose and evaluate a generalized mechanism called REV (for Run-time Execution Validator) that can be easily integrated into a contemporary out-of-order processor to validate, as the program executes, the control flow path and instructions executed along the control flow path. To prevent memory from being tainted by compromised code, REV also prevents updates to the memory from a basic block until its execution has been authenticated. Although control flow signature based authentication of an execution has been suggested before for software testing and for restricted cases of embedded systems, their extensions to out-of-order cores are a non-incremental effort from a microarchitectural standpoint. Unlike REV, the existing solutions do not scale with binary sizes, require binaries to be altered or require new ISA support and also fail to contain errors and, in general, impose a heavy performance penalty. We show, using a detailed cycle-accurate microarchitectural simulator for an out-of-order pipeline implementing the x86 ISA, that the performance overhead of REV is limited to 1.87% on the average across the SPEC 2006 benchmarks.
Keywords: Authentication; Cryptography; Hardware; Kernel; Out of order; Pipelines; Computer Security; Control-Flow Integrity; Control-Flow Validation; Hardware Security; Secure Execution; Trusted Computing (ID#: 15-4739)


Naeen, H.M.; Jalali, M.; Naeen, A.M., "A Trust-Aware Collaborative Filtering System Based on Weighted Items for Social Tagging Systems," Intelligent Systems (ICIS), 2014 Iranian Conference on, pp. 1-5, 4-6 Feb. 2014. doi: 10.1109/IranianCIS.2014.6802565 Abstract: Collaborative Filtering systems consider users' social environment to predict what each user may like to visit in a social network, i.e., they collect and analyze a large amount of information on users' behavior, activities or preferences and then predict or make suggestions to users. These systems use the ranks or tags each user assigns to different resources to make predictions. Lately, social tagging systems, in which users can insert new contents, tag, organize, share and search for contents, are becoming more popular. These social tagging systems hold a lot of valuable information, but the data expansion in them is very fast, and this has led to the need for recommender systems that will predict what each user may like or need and make these suggestions to them. One of the problems in these systems is: "how much can we rely on the similar users, are they trustworthy?" In this article we use a trust metric, derived from users' tagging behavior, alongside similarities to give suggestions. Results show that considering trust in a collaborative system can lead to better performance in generating suggestions.
Keywords: behavioural sciences; collaborative filtering; recommender systems; social networking (online);trusted computing; data expansion; recommender systems; social network; social tagging systems; trust metric; trust-aware collaborative filtering system; user activities; user behavior; user preferences; user social environment; user tagging behavior; weighted items; Collaboration; Measurement; Motion pictures; Recommender systems; Social network services; Tagging; Collaborative filtering systems; Recommender systems; Tag; Trust (ID#: 15-4740)
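The blend of similarity and trust that the abstract describes can be sketched as a weighted rating prediction; this is our illustration of the general technique, not the authors' formulation, and the blend weight alpha and the example values are assumptions:

```python
# Trust-aware collaborative filtering (illustrative sketch): weight each
# neighbour's rating by a blend of rating similarity and a trust metric.

def predict(neighbour_ratings, similarity, trust, alpha=0.5):
    """Weighted average of neighbours' ratings,
    with weight = alpha * similarity + (1 - alpha) * trust."""
    num = den = 0.0
    for user, rating in neighbour_ratings.items():
        w = alpha * similarity[user] + (1 - alpha) * trust[user]
        num += w * rating
        den += w
    return num / den if den else 0.0

ratings = {"u1": 4.0, "u2": 2.0}
sim = {"u1": 0.9, "u2": 0.1}       # rating-profile similarity
trust = {"u1": 0.8, "u2": 0.2}     # e.g. derived from tagging behaviour
print(round(predict(ratings, sim, trust), 3))   # -> 3.7
```

Setting alpha to 1.0 recovers plain similarity-based collaborative filtering, which makes the effect of the trust term easy to isolate in experiments.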


Lopez, J.; Xiaoping Che; Maag, S.; Morales, G., "A Distributed Monitoring Approach for Trust Assessment Based on Formal Testing," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 702-707, 13-16 May 2014. doi: 10.1109/WAINA.2014.114 Abstract: Communications systems are growing in use and in popularity. As their interactions become more numerous, trusting those interactions becomes a priority. In this paper, we focus on trust management systems based on observations of trustee behaviors. Based on a formal testing methodology, we propose a formal distributed network monitoring approach to analyze the packets exchanged between the trustor, trustee and other points of observation in order to prove the trustee is acting in a trustworthy manner. Based on formal "trust properties", the monitored systems' behaviors provide a verdict of trust by analyzing and testing those properties. Finally, our methodology is applied to a real industrial DNS use case scenario.
Keywords: formal specification; program testing; trusted computing; communications systems; formal distributed network monitoring approach; formal testing methodology; formal trust properties; industrial DNS use case scenario; trust assessment; trust management systems; trustee behaviors; Monitoring; Protocols; Prototypes; Security; Servers; Syntactics; Testing; Communication systems; Formal method; Monitoring; Trust (ID#: 15-4741)


Divya, S.V.; Shaji, R.S., "Security in Data Forwarding Through Elliptic Curve Cryptography in Cloud," Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on, pp. 1083-1088, 10-11 July 2014. doi: 10.1109/ICCICCT.2014.6993122 Abstract: Cloud is an emerging trend in IT which moves data, along with its computing, away from handy systems into large remote data centers where the management of those resources is not trustworthy. Because of its attractive features, it is gaining popularity among IT practitioners and researchers. However, building a secure storage system with some functionality is still a challenging task in cloud. Existing methods suffer from inefficiency and delay, because data cannot be forwarded to a user without first being retrieved, and offline verification causes delay. This paper focuses on designing a secure cloud storage system that supports a data forwarding function using elliptic curve cryptography. The proposed work also concentrates on an Online Alert methodology, which alerts the data owner when any attacker tries to modify the data or any malpractice happens during data forwarding. Moreover, our method ensures multi-level security when compared to existing systems.
Keywords: cloud computing; public key cryptography; storage management; cloud computing; cloud storage system; data centers; data forwarding function; data forwarding security; data owner; elliptic curve cryptography; multilevel security; online alert methodology; resource management; Cloud computing; Elliptic curve cryptography; Encryption; Protocols; Servers; Data forwarding; Elliptic Curve Cryptography; Multi-level security.rel; Online Alert Methodology (ID#: 15-4742)
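For readers unfamiliar with the primitive, a toy sketch of elliptic-curve key agreement follows. The curve, prime, generator, and private keys below are made-up toy parameters chosen for readability and are NOT secure; real systems use standardized curves such as NIST P-256, and this sketch is not the paper's forwarding protocol:

```python
# Toy elliptic-curve Diffie-Hellman over a tiny prime field (illustration only).
P, A, B = 17, 0, 7          # curve y^2 = x^3 + 7 over GF(17)
G = (1, 5)                  # a point on the curve: 5^2 = 25 = 8 = 1 + 7 (mod 17)

def add(p1, p2):
    """Elliptic-curve point addition; None is the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                  # P + (-P) = infinity
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def mul(k, p):
    """Double-and-add scalar multiplication k * p."""
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

a_priv, b_priv = 3, 5                       # toy private keys
A_pub, B_pub = mul(a_priv, G), mul(b_priv, G)
assert mul(a_priv, B_pub) == mul(b_priv, A_pub)   # both sides derive the same secret
```

The shared point can seed a symmetric key for encrypting forwarded data; security in practice rests on the hardness of the elliptic-curve discrete logarithm problem at cryptographic field sizes.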


Anisetti, Marco; Ardagna, Claudio A.; Damiani, Ernesto, "A Certification-Based Trust Model for Autonomic Cloud Computing Systems," Cloud and Autonomic Computing (ICCAC), 2014 International Conference on, pp. 212-219, 8-12 Sept. 2014. doi: 10.1109/ICCAC.2014.8 Abstract: Autonomic cloud computing systems react to events and context changes, preserving a stable quality of service for their tenants. Existing assurance techniques supporting trust relations between parties need to be adapted to scenarios where the assumption of responsibility on trust assertions and related information (e.g., in SLAs and certificates) cannot be done at a single point in time and by a single trusted third party. In this paper, we tackle this problem by proposing a new trust model grounded on a security certification scheme for the cloud. Our model is based on a multiple signatures process including dynamic delegation mechanisms. Our approach supports autonomic cloud computing systems in the management of dynamic content in security certificates, establishing a trustworthy cloud environment.
Keywords: Cloud computing; Clouds; Computational modeling; Context; Mechanical factors; Monitoring; Security; Autonomic Cloud Computing; Certification; Trust Model (ID#: 15-4743)


Zongwei Zhou; Miao Yu; Gligor, V.D., "Dancing with Giants: Wimpy Kernels for On-Demand Isolated I/O," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 308-323, 18-21 May 2014. doi: 10.1109/SP.2014.27 Abstract: To be trustworthy, security-sensitive applications must be formally verified and hence small and simple, i.e., wimpy. Thus, they cannot include a variety of basic services available only in large and untrustworthy commodity systems, i.e., in giants. Hence, wimps must securely compose with giants to survive on commodity systems, i.e., rely on giants' services but only after efficiently verifying their results. This paper presents a security architecture based on a wimpy kernel that provides on-demand isolated I/O channels for wimp applications, without bloating the underlying trusted computing base. The size and complexity of the wimpy kernel are minimized by safely outsourcing I/O subsystem functions to an untrusted commodity operating system and exporting driver and I/O subsystem code to wimp applications. Using the USB subsystem as a case study, this paper illustrates the dramatic reduction of wimpy-kernel size and complexity, e.g., over 99% of the USB code base is removed. Performance measurements indicate that the wimpy-kernel architecture exhibits the desired execution efficiency.
Keywords: formal verification; operating systems (computers); peripheral interfaces; software architecture; trusted computing; I/O subsystem functions; USB code base; USB subsystem; commodity systems; formal verification; giants services; on-demand isolated I/O channels; security architecture; security-sensitive applications; trusted computing; trustworthy; untrusted commodity operating system; wimp applications; wimpy kernel complexity; wimpy kernel size; wimpy-kernel architecture; Complexity theory; Hardware; Kernel; Linux; Security; Universal Serial Bus (ID#: 15-4744)


Hussain, S.; Gustavsson, R.; Saleem, A.; Nordstrom, L., "SLA Conceptual Framework for Coordinating and Monitoring Information Flow in Smart Grid," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, pp. 1-5, 19-22 Feb. 2014. doi: 10.1109/ISGT.2014.6816470 Abstract: The EU challenges for the future energy systems will change the scene of the energy systems in Europe. A transition from a centralized controlled power network to a customer-oriented Smart Grid operating in a distributed and deregulated energy market poses several regulatory, organizational and technical challenges. In such market scenarios, multiple stakeholders provide services to produce and deliver energy. Due to the inclusion of new stakeholders at multiple levels, there is a lack of purposeful monitoring based on pre-negotiated SLAs (Service Level Agreements). Hence, there exists a gap in actively monitoring KPI values among all negotiated SLAs. This paper addresses SLA-based active monitoring of information flow. The proposed SLA framework provides monitoring based on negotiated KPIs in an automated and trustworthy way to coordinate information flow. In the end, a use case is presented to validate the SLA framework.
Keywords: contracts; power markets; smart power grids; SLA conceptual framework; deregulated energy market; distributed energy market; information flow coordination; information flow monitoring; service level agreement; smart grid; Availability; Business; Interoperability; Monitoring; Quality of service; Smart grids; Coordination; Monitoring; SCADA; SLA; Service Level Agreements; Smart Grid; Stakeholders (ID#: 15-4745)


Leke, C.; Twala, B.; Marwala, T., "Modeling of Missing Data Prediction: Computational Intelligence and Optimization Algorithms," Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on, pp. 1400-1404, 5-8 Oct. 2014. doi: 10.1109/SMC.2014.6974111 Abstract: Four optimization algorithms (genetic algorithm, simulated annealing, particle swarm optimization and random forest) were applied with an MLP-based auto-associative neural network on two classification datasets and one prediction dataset. This work was undertaken to investigate the effectiveness of using auto-associative neural networks and optimization algorithms in missing data prediction and classification tasks. If performed appropriately, computational intelligence and optimization algorithm systems could lead to consistent, accurate and trustworthy predictions and classifications resulting in more adequate decisions. The results reveal GA, SA and PSO to be more efficient when compared to RF in terms of predicting the forest area to be affected by fire. GA, SA, and PSO had the same accuracy of 93.3%, while RF showed 92.99% accuracy. For the classification problems, RF showed 93.66% and 92.11% accuracy on the German credit and Heart disease datasets respectively, outperforming GA, SA and PSO.
Keywords: data mining; genetic algorithms; linear programming; neural nets; particle swarm optimisation; pattern classification; simulated annealing; tree searching; German credit datasets; Heart disease datasets; MLP based auto associative neural network; computational intelligence; genetic algorithm; missing data prediction modeling; optimization algorithms; particle swarm optimization; random forest; simulated annealing; trustworthy predictions; Accuracy; Classification algorithms; Genetic algorithms; Neural networks; Optimization; Prediction algorithms; Radio frequency; auto-associative neural networks; classification; missing data; optimization algorithms; prediction (ID#: 15-4746)
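The auto-associative approach can be sketched as follows: treat the missing entry as a free variable and pick the candidate that minimizes the model's reconstruction error. This is our toy illustration of the idea only; the "model" below is a stand-in for a trained MLP, and a simple grid search stands in for the paper's GA/SA/PSO optimizers:

```python
# Missing-data imputation by minimizing reconstruction error (sketch).

def model(x):
    # Stand-in for a trained auto-associative network: it reconstructs each
    # feature as the record's mean, so internally consistent records
    # reconstruct with low error.
    avg = sum(x) / len(x)
    return [avg] * len(x)

def reconstruction_error(x):
    return sum((xi - ri) ** 2 for xi, ri in zip(x, model(x)))

def impute(record, missing_idx, candidates):
    """Choose the candidate value that minimizes reconstruction error."""
    return min(candidates,
               key=lambda v: reconstruction_error(
                   record[:missing_idx] + [v] + record[missing_idx + 1:]))

row = [2.0, None, 2.0]                          # feature 1 is missing
filled = impute(row, 1, [v / 10 for v in range(0, 51)])
print(filled)                                   # -> 2.0
```

With a genuinely trained network and a global optimizer in place of the grid search, the same objective yields imputations consistent with the learned joint distribution of the features.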


Ferdowsi, H.; Jagannathan, S.; Zawodniok, M., "An Online Outlier Identification and Removal Scheme for Improving Fault Detection Performance," Neural Networks and Learning Systems, IEEE Transactions on, vol. 25, no. 5, pp. 908-919, May 2014. doi: 10.1109/TNNLS.2013.2283456 Abstract: Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers will make the data (or system states) more trustworthy and reliable since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary needing a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, the outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in OIR is designed such that the detected outliers will have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between the faulty or healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both OIR and fault diagnosis schemes are introduced.
Finally, a three-tank benchmarking system and a simple linear system are used to verify the proposed scheme in simulations, and then the scheme is applied on an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subjected to change due to both unknown faults and operating conditions.
Keywords: fault diagnosis; fault tolerant control; neurocontrollers; nonlinear dynamical systems; observers; statistical analysis; FD framework; NN weight update law; OIR scheme; analytical model-based fault detection; axial piston pump; fault detection performance; median; model-based fault diagnosis; model-based observer; moving time window; neural network; nonlinear dynamic system; observer-based techniques; online outlier identification-and-removal scheme; simple linear system; standard deviation; three-tank benchmarking system; Fault diagnosis; Noise; Noise measurement; Observers; Pollution measurement; Vectors; Data analysis; fault diagnosis; neural networks; nonlinear systems. (ID#: 15-4747)
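The median-versus-standard-deviation test the abstract mentions can be sketched over a moving window of residuals (measured minus estimated state). The window size, the threshold k, and the choice to exclude the newest sample from the window statistics are our assumptions for illustration, not the paper's exact design:

```python
# Online outlier test over a moving window of residuals (sketch).
from statistics import median, pstdev

def is_outlier(window, k=3.0):
    """window: recent residuals, newest last. The newest sample is flagged
    when it deviates from the median of the earlier samples by more than
    k standard deviations of those samples."""
    *past, newest = window
    if len(past) < 3:
        return False                       # not enough history yet
    m, s = median(past), pstdev(past)
    return s > 0 and abs(newest - m) > k * s

print(is_outlier([0.1, -0.2, 0.0, 0.1, -0.1, 9.0]))   # spike  -> True
print(is_outlier([0.1, -0.2, 0.0, 0.1, -0.1]))        # normal -> False
```

Using the median rather than the mean keeps the reference level robust when earlier outliers have slipped into the window, which matters for an online scheme that cannot revisit past decisions.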


Detken, K.-O.; Genzel, C.-H.; Rudolph, C.; Jahnke, M., "Integrity Protection in a Smart Grid Environment for Wireless Access of Smart Meters," Wireless Systems within the Conferences on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS-SWS), 2014 2nd International Symposium on, pp. 79-86, 11-12 Sept. 2014. doi: 10.1109/IDAACS-SWS.2014.6954628 Abstract: To meet future challenges of energy grids, secure communication between involved control systems is necessary. Therefore, the German Federal Office for Information Security (BSI) has published security standards concerning a central communication unit for energy grids called the Smart Meter Gateway (SMGW). The present security concept of the SPIDER project takes these standards into consideration but extends their level of information security by integrating elements from the Trusted Computing approach. Additionally, a tamper-resistant grid is integrated with chosen hardware modules and a trustworthy boot process is applied. To continually measure the SMGW and smart meter (SM) integrity, the Trusted Network Connect (TNC) approach from the Trusted Computing Group (TCG) is used. Hereby a Trusted Core Network (TCN) can be established to protect the smart grid components against IT-based attacks. That is necessary especially given the use of wireless connections between the SMGW and smart meter components.
Keywords: data protection; power engineering computing; security of data; smart meters; smart power grids; BSI; German Federal Office for Information Security; SMGW; TCG; TCN; TNC; central communication unit; energy grids; integrity protection; security standards; smart grid environment; smart meter gateway; smart meters; tamper resistant grid; trusted computing group; trusted core network; trusted network connect; trustworthy boot process; wireless access; Hardware; Monitoring; Security; Smart meters; Software; Standards; Wide area networks; Integrity; Smart Meter Gateway; Smart Meters; Trusted Computing; Trusted Core Network; Trusted Network Connect (ID#: 15-4748)


Jain, R.; Prabhakar, S., "Guaranteed Authenticity and Integrity of Data from Untrusted Servers," Data Engineering (ICDE), 2014 IEEE 30th International Conference on, pp. 1282-1285, March 31 2014-April 4 2014. doi: 10.1109/ICDE.2014.6816761 Abstract: Data are often stored at untrusted database servers. The lack of trust arises naturally when the database server is owned by a third party, as in the case of cloud computing. It also arises if the server may have been compromised, or there is a malicious insider. Ensuring the trustworthiness of data retrieved from such an untrusted database is of utmost importance. Trustworthiness of data is defined by faithful execution of valid and authorized transactions on the initial data. Earlier work on this problem is limited to cases where data are either not updated, or data are updated by a single trustworthy entity. However, for a truly dynamic database, multiple clients should be allowed to update data without having to route the updates through a central server. In this demonstration, we present a system to establish authenticity and integrity of data in a dynamic database where the clients can run transactions directly on the database server. Our system provides provable authenticity and integrity of data with absolutely no requirement for the server to be trustworthy. Our system also provides assured provenance of data. This demonstration is built using the solutions proposed in our previous work [5]. Our system is built on top of Oracle with no modifications to the database internals. We show that the system can be easily adopted in existing databases without any internal changes to the database. We also demonstrate how our system can provide authentic provenance.
Keywords: data integrity; database management systems; trusted computing; Oracle; cloud computing; data authenticity; data integrity; data provenance; data transactions; data trustworthiness; database internals; database servers; dynamic database; malicious insider; trustworthy entity; Cloud computing; Hardware; Indexes; Protocols; Servers (ID#: 15-4749)
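A standard building block for such integrity guarantees (our illustration of the general idea, not the authors' specific protocol) is a Merkle hash tree: the client keeps only a constant-size root hash, and any tampering with the server-held records changes the root:

```python
# Merkle root over a list of records (illustrative sketch).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash each record, then repeatedly hash adjacent pairs up to a root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"row1", b"row2", b"row3", b"row4"]
root = merkle_root(records)
assert merkle_root(records) == root                               # deterministic
assert merkle_root([b"row1", b"rowX", b"row3", b"row4"]) != root  # tamper detected
```

In a full scheme the server also returns a logarithmic-size path of sibling hashes with each query result, so the client can verify individual records against the root without holding the data.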


Sousa, S.; Dias, P.; Lamas, D., "A Model for Human-computer Trust: A Key Contribution for Leveraging Trustful Interactions," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp. 1-6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876935 Abstract: This article addresses trust in computer systems as a social phenomenon, which depends on the type of relationship that is established through the computer or with other individuals. It starts by theoretically contextualizing trust, and then situates trust in the field of computer science. It then describes the proposed model, which builds on what one perceives to be trustworthy and is influenced by a number of factors such as the history of participation and users' perceptions. It concludes by situating the proposed model as a key contribution for leveraging trustful interactions and by proposing its use as a complement to foster users' trust needs in human-computer interaction and computer-mediated interactions.
Keywords: computer mediated communication; human computer interaction; computer science; computer systems; computer-mediated interactions; human-computer iteration; human-computer trust model; participation history; social phenomenon; trustful interaction leveraging; user perceptions; user trust needs; Collaboration; Computational modeling; Computers; Context; Correlation; Educational institutions; Psychology; Collaboration; Engagement; Human-computer trust; Interaction design; Participation (ID#: 15-4750)


Noorian, Z.; Mohkami, M.; Yuan Liu; Hui Fang; Vassileva, J.; Jie Zhang, "SocialTrust: Adaptive Trust Oriented Incentive Mechanism for Social Commerce," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on, vol.2, pp.250,257, 11-14 Aug. 2014. doi:10.1109/WI-IAT.2014.105 Abstract: In the absence of legal authorities and enforcement mechanisms in open e-marketplaces, it is extremely challenging for a user to validate the quality of opinions (i.e., ratings and reviews) of products or services provided by other users (referred to as advisers). Rationally, advisers tend to be reluctant to share their truthful experience with others. In this paper, we propose an adaptive incentive mechanism, where advisers are motivated to share their actual experiences with their trustworthy peers (friends/neighbors in the social network) in e-marketplaces (social commerce context), and malicious users will eventually be evacuated from the systems. Experimental results demonstrate the effectiveness of our mechanism in promoting the honesty of users in sharing their past experiences.
Keywords: electronic commerce; incentive schemes; social networking (online); trusted computing; SocialTrust mechanism; adaptive trust oriented incentive mechanism; e-marketplaces; social commerce; Business; Context; Measurement; Monitoring; Quality of service; Servers; Social network services; Trust; electronic commerce; incentive mechanism; reputation systems (ID#: 15-4751)


Hua Chai; Wenbing Zhao, "Towards Trustworthy Complex Event Processing," Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on, pp.758,761, 27-29 June 2014. doi:10.1109/ICSESS.2014.6933677 Abstract: Complex event processing has become an important technology for big data and intelligent computing because it facilitates the creation of actionable, situational knowledge from a potentially large number of events in soft realtime. Complex event processing can be instrumental for many mission-critical applications, such as business intelligence, algorithmic stock trading, and intrusion detection. Hence, the servers that carry out complex event processing must be made trustworthy. In this paper, we present a threat analysis on complex event processing systems and describe a set of mechanisms that can be used to control various threats. By exploiting the application semantics for typical event processing operations, we are able to design lightweight mechanisms that incur minimum runtime overhead appropriate for soft realtime computing.
Keywords: Big Data; trusted computing; Big Data; actionable situational knowledge; algorithmic stock trading; application semantics; business intelligence; complex event processing; event processing operations; intelligent computing; intrusion detection; minimum runtime overhead; mission-critical applications; servers; soft realtime computing; threat analysis; trustworthy; Business; Context; Fault tolerance; Fault tolerant systems; Runtime; Servers; Synchronization; Big Data; Business Intelligence; Byzantine Fault Tolerance; Complex Event Processing; Dependable Computing; Trust (ID#: 15-4752)


Addo, I.D.; Ji-Jiang Yang; Ahamed, S.I., "SPTP: A Trust Management Protocol for Online and Ubiquitous Systems," Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual, pp.590,595, 21-25 July 2014. doi:10.1109/COMPSAC.2014.82 Abstract: With the recent proliferation of ubiquitous, mobile and cloud-based systems, security, privacy and trust concerns surrounding the use of emerging technologies in the ensuing wake of the Internet of Things (IoT) continue to mount. In most instances, trust and privacy concerns continuously surface as a key deterrent to the adoption of these emergent technologies. This paper presents a Secure, Private and Trustworthy protocol (named SPTP) that was prototyped for addressing critical security, privacy and trust concerns surrounding mobile, pervasive and cloud services in Collective Intelligence (CI) scenarios. The efficacy of the protocol and its associated characteristics are evaluated in CI-related scenarios including multimodal monitoring of elderly people in smart home environments, online advertisement targeting in computational advertising settings, and affective state monitoring through game play as an intervention for autism among children. We present our evaluation criteria for the proposed protocol, our initial results and future work.
Keywords: Internet of Things; cloud computing; data privacy; mobile computing; security of data; trusted computing; CI scenarios; Internet of Things; IoT; SPTP; cloud-based systems; collective intelligence; computational advertising; elderly people; mobile systems; online advertisement; online systems; privacy; security; smart home environments; trust management protocol; ubiquitous systems; Cloud computing; Data privacy; Monitoring; Privacy; Protocols; Security; Senior citizens; Cloud; Collective Intelligence; Mobile Computing; Online Advertising Privacy; Privacy Framework; SPT; Security; Trust Management; Trust and Privacy Protocol; Ubiquitous Computing; mHealth (ID#: 15-4753)


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of the links or modifications to specific citations via email. Please include the ID# of the specific citation in your correspondence.