Biblio
User engagement is recognized as an important component of the user experience, but relatively little is known about the effect of engagement on the learning outcomes of such interactions. This experimental user study examines the relationship between user engagement (UE) and comprehension in varied academic reading environments. Forty-one university students interacted with one of two sets of texts presented in four conditions in the context of preparing for a class assignment. Employing the User Engagement Scale (UES), we found evidence of a relationship between students' comprehension of the texts and their degree of engagement with them. However, this association was confined to one of the UES subscales and was not consistent across levels of engagement. An examination of additional variables found little evidence that system and content characteristics influenced engagement; however, we noted that all students reported increased knowledge, whereas topical interest declined for non-engaged students. The results contribute to the existing literature by adding further evidence that the relationship between engagement and comprehension is complex and mediated.
Funded under the European Union's Horizon 2020 research and innovation programme, SAFEcrypto will provide a new generation of practical, robust and physically secure post-quantum cryptographic solutions that ensure long-term security for future ICT systems, services and applications. The project will focus on the remarkably versatile field of lattice-based cryptography as the source of computational hardness, and will deliver optimised public-key security primitives for digital signatures and authentication, as well as identity-based encryption (IBE) and attribute-based encryption (ABE). This will involve algorithmic and design optimisations, and implementations of lattice-based cryptographic schemes addressing cost, energy consumption, performance and physical robustness. As the National Institute of Standards and Technology (NIST) prepares for the transition to a post-quantum cryptographic Suite B and urges organisations that build systems and infrastructures requiring long-term security to consider this transition in their architectural designs, the SAFEcrypto project will provide proof-of-concept demonstrators of schemes for three practical real-world case studies with long-term security requirements, in the application areas of satellite communications, network security and cloud computing. The goal is to affirm lattice-based cryptography as an effective replacement for traditional number-theoretic public-key cryptography, by demonstrating that it can address the needs of resource-constrained embedded applications, such as mobile and battery-operated devices, and of real-time high-performance applications for cloud and network management infrastructures.
SSL certificates are a core component of the public key infrastructure that underpins encrypted communication in the Internet. In this paper, we report the results of a longitudinal study of the characteristics of SSL certificate chains presented to clients during secure web (HTTPS) connection setup. Our data set consists of 23B SSL certificate chains collected from a global panel consisting of over 2M residential client machines over a period of 6 months. The data informing our analyses provide perspective on the entire chain of trust, including root certificates, across a wide distribution of client machines. We identify over 35M unique certificate chains with diverse relationships at all levels of the PKI hierarchy. We report on the characteristics of valid certificates, which make up 99.7% of the total corpus. We also examine invalid certificate chains, finding that 93% of them contain an untrusted root certificate and that they have a shorter average chain length than their valid counterparts. Finally, we examine two unintended but prevalent behaviors in our data: the deprecation of root certificates and secure traffic interception. Our results support aspects of prior, scan-based studies on certificate characteristics but contradict other findings, highlighting the importance of the residential client-side perspective.
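The chain-level attributes discussed above (chain length, validity window, issuer/subject linkage) can be inspected directly from a PEM bundle. The following is a minimal sketch, assuming the third-party Python `cryptography` package and a hypothetical input file; it is not the study's measurement pipeline and performs only a structural linkage check, not full signature or trust validation.

```python
# Illustrative sketch (not the paper's measurement pipeline): inspect a PEM
# certificate chain file and report the kinds of attributes the study analyzes,
# e.g. chain length, validity window, and issuer/subject linkage.
# Assumes the third-party `cryptography` package; the file path is hypothetical.
from cryptography import x509

def load_chain(pem_path):
    """Parse every certificate in a PEM bundle, leaf first."""
    with open(pem_path, "rb") as f:
        data = f.read()
    blocks = data.split(b"-----BEGIN CERTIFICATE-----")[1:]
    return [
        x509.load_pem_x509_certificate(b"-----BEGIN CERTIFICATE-----" + block)
        for block in blocks
    ]

def describe_chain(chain):
    print(f"chain length: {len(chain)}")
    for i, cert in enumerate(chain):
        print(f"  [{i}] subject={cert.subject.rfc4514_string()}")
        print(f"      issuer ={cert.issuer.rfc4514_string()}")
        print(f"      valid  ={cert.not_valid_before} .. {cert.not_valid_after}")
    # Basic linkage check: each certificate should be issued by the next one up.
    for child, parent in zip(chain, chain[1:]):
        if child.issuer != parent.subject:
            print("  warning: issuer/subject mismatch (broken chain linkage)")

if __name__ == "__main__":
    describe_chain(load_chain("example_chain.pem"))  # hypothetical input file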
As the Internet becomes an important part of the infrastructure our society depends on, it is crucial to construct networks that are able to work even when part of the network is compromised. This paper presents the first practical intrusion-tolerant network service, targeting high-value applications such as monitoring and control of global clouds and management of critical infrastructure for the power grid. We use an overlay approach to leverage the existing IP infrastructure while providing the required resiliency and timeliness. Our solution overcomes malicious attacks and compromises in both the underlying network infrastructure and in the overlay itself. We deploy and evaluate the intrusion-tolerant overlay implementation on a global cloud spanning East Asia, North America, and Europe, and make it publicly available.
Establishing trust relationships between network participants by having them prove their operating system's integrity via a Trusted Platform Module (TPM) provides interesting approaches for securing local networks at a higher level. In the approach introduced here, which operates on OSI layer 2, attacks carried out by already authenticated and participating nodes (insider threats) can be detected and prevented. Forbidden activities and manipulations in hardware and software, such as executing unknown binaries, loading additional kernel modules or even inserting unauthorized USB devices, are detected and result in an autonomous reaction by each network participant. The provided trust-establishment and authentication protocol operates independently of the upper protocol layers and is optimized for resource-constrained machines. Well-known concepts from backbone architectures maintain the chain of trust across different kinds of networks. Each endpoint, forwarding and processing unit monitors the internal network independently and reports misbehaviour autonomously to a central instance inside or outside of the trusted network.
In the area of the Internet of Things, cloud-based camera surveillance systems are ubiquitously available for industrial and private environments. However, the sensitive nature of the surveillance use case imposes high requirements on privacy/confidentiality, authenticity, and availability of such systems. In this work, we investigate how currently available mass-market camera systems comply with these requirements. Considering two attacker models, we test the cameras for weaknesses and analyze their implications. We reverse-engineered the security implementation and discovered several vulnerabilities in every tested system. These weaknesses impair the users' privacy and, as a consequence, may also damage the camera system manufacturer's reputation. We demonstrate how an attacker can exploit these vulnerabilities to blackmail users and companies through denial-of-service attacks, by injecting forged video streams, and by eavesdropping on private video data, even without physical access to the device. Our analysis shows that current systems lack the necessary care in practice when implementing security for IoT devices.
To ensure reliable and predictable service in the electrical grid, it is important to gauge the level of trust present within critical components and substations. Although trust throughout a smart grid is temporal and varies dynamically according to measured states, it is possible to accurately formulate communications and service-level strategies based on such trust measurements. Utilizing an effective set of machine learning and statistical methods, we show that trust levels between substations can be established through behavioral pattern analysis, and that establishing such trust can facilitate simple, secure communications routing between substations.
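As a rough illustration of the general idea only (not the specific models used in this work), the sketch below derives a per-substation trust score from how far current behavioral features deviate from a learned baseline and gates a routing decision on it; the feature values and threshold are invented for illustration.

```python
# Hedged sketch of behavioral trust scoring between substations (not the
# paper's exact models): compare the latest observation of behavioral features
# (e.g. message rate, command mix) against a learned baseline and map the
# deviation to a trust score used for a routing decision.
import numpy as np

def fit_baseline(history):
    """history: (n_samples, n_features) of past normal behavior for one substation."""
    return history.mean(axis=0), history.std(axis=0) + 1e-9

def trust_score(observation, baseline):
    """Map the mean absolute z-score of the latest observation to (0, 1]."""
    mean, std = baseline
    z = np.abs((observation - mean) / std)
    return 1.0 / (1.0 + z.mean())

# Route traffic only through substations whose trust exceeds a threshold.
history = np.random.default_rng(0).normal(100.0, 5.0, size=(500, 3))
baseline = fit_baseline(history)
current = np.array([102.0, 97.0, 250.0])   # third feature is anomalous
score = trust_score(current, baseline)
print(f"trust={score:.2f}", "-> reroute" if score < 0.5 else "-> keep link")
```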
Static code analysis is a convenient technique to support the development of software. Without any prior test setup, information about later runtime behavior can be inferred and errors in the code can be found before using a regular compiler. Solutions for applying static code analysis to PLC software following IEC 61131-3 already exist, but using these separate tools usually creates a gap in the development process. In this paper we introduce an architecture that applies static analysis directly in the development environment and gives instant feedback to developers while they are still editing the PLC software.
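To illustrate the kind of instant feedback such an architecture can provide, the sketch below (not the paper's actual rule set or tooling) lints a fragment of IEC 61131-3 Structured Text for declared-but-unused variables and literal division by zero, reporting line numbers that an editor could underline while the developer types.

```python
# Minimal sketch of an instant-feedback check for Structured Text (illustrative
# rules only): flag declared-but-unused variables and literal division by zero.
import re

def lint_structured_text(source):
    diagnostics = []
    lines = source.splitlines()

    # Collect variable names declared inside VAR ... END_VAR blocks.
    declared, in_var = {}, False
    for no, line in enumerate(lines, start=1):
        stripped = line.strip().upper()
        if stripped.startswith("VAR"):
            in_var = True
        elif stripped.startswith("END_VAR"):
            in_var = False
        elif in_var:
            m = re.match(r"\s*(\w+)\s*:", line)
            if m:
                declared[m.group(1)] = no

    # Usage check: ignore declaration lines (":" without ":=").
    body = "\n".join(l for l in lines if ":" not in l or ":=" in l)
    for name, no in declared.items():
        if not re.search(rf"\b{name}\b", body):
            diagnostics.append((no, f"variable '{name}' is never used"))

    for no, line in enumerate(lines, start=1):
        if re.search(r"/\s*0(\.0)?\b", line):
            diagnostics.append((no, "literal division by zero"))
    return diagnostics

example = """PROGRAM Demo
VAR
  speed : INT;
  unused : INT;
END_VAR
speed := speed / 0;
END_PROGRAM"""
for line_no, message in lint_structured_text(example):
    print(f"line {line_no}: {message}")
```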
Science gateways open up the possibility of reproducible science, as they integrate reusable techniques, data and workflow management systems, security mechanisms, and high-performance computing (HPC). We introduce BioinfoPortal, a science gateway that integrates a suite of different bioinformatics applications using HPC and data management resources provided by the Brazilian National HPC System (SINAPAD). BioinfoPortal follows the Software as a Service (SaaS) model, and the web server is freely available for academic use. The goal of this paper is to describe the science gateway and its usage, addressing the challenges of designing a multiuser computational platform for parallel/distributed execution of large-scale bioinformatics applications using the Brazilian HPC resources. We also present a study of the performance and scalability of several bioinformatics applications executed in the HPC environments, and apply machine learning analyses to predict HPC allocation/usage features under which the bioinformatics applications perform better via BioinfoPortal.
Using heterogeneous clouds has been considered as a way to improve the performance of big-data analytics for healthcare platforms. However, the delay incurred when transferring big data over the network needs to be addressed. The purpose of this paper is to analyze and compare existing cloud computing environments (PaaS, IaaS) in order to implement middleware services. Understanding the differences and similarities between cloud technologies will help in the interconnection of healthcare platforms. The paper provides a general overview of the techniques and interfaces for cloud computing middleware services, and proposes a cloud architecture for healthcare. Cloud middleware enables heterogeneous devices to act as data sources and to integrate data from other healthcare platforms, but specific APIs need to be developed. Furthermore, security and management problems need to be addressed, given the heterogeneous nature of the communication and computing environment. The present paper fills a gap in the electronic healthcare register literature by providing an overview of cloud computing middleware services and standardized interfaces for integration with medical devices.
The growing volume of data and its increasing complexity require ever more efficient and faster information retrieval techniques. Approximate nearest neighbor search algorithms based on hashing have been proposed for querying high-dimensional datasets because of their high retrieval speed and low storage cost. Recent studies promote the use of Convolutional Neural Network (CNN) features with hashing techniques to improve search accuracy. However, several challenges must be solved to obtain a practical and efficient solution for indexing CNN features, such as the heavy training process needed to achieve accurate query results and the critical dependency on data-parameters. In this work we carry out exhaustive experiments comparing recent methods that produce a better representation of the data space at a lower computational cost and with better accuracy, by computing the best data-parameter values for optimal sub-space projection and by exploring the correlations among CNN feature attributes using fractal theory. We give an overview of these different techniques and present our comparative experiments for data representation and retrieval performance.
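As a point of reference for hashing-based indexing of CNN features, the sketch below shows plain random-projection (LSH-style) binary codes queried by Hamming distance; it illustrates only the baseline idea, not any of the methods compared in this work, and the feature data are synthetic.

```python
# Baseline sketch of hashing-based ANN search over CNN-like features:
# random-projection hashing to short binary codes, queried by Hamming distance,
# trading exactness for retrieval speed and storage cost.
import numpy as np

rng = np.random.default_rng(42)

def fit_hash(dim, n_bits):
    """Random hyperplanes; one sign bit per plane."""
    return rng.normal(size=(dim, n_bits))

def encode(features, planes):
    return (features @ planes > 0).astype(np.uint8)    # (n, n_bits) in {0, 1}

def search(query_code, db_codes, k=5):
    hamming = (db_codes != query_code).sum(axis=1)
    return np.argsort(hamming)[:k], np.sort(hamming)[:k]

# 10k mock "CNN descriptors" of dimension 512, compressed to 64-bit codes.
db = rng.normal(size=(10_000, 512))
planes = fit_hash(512, 64)
codes = encode(db, planes)
query = db[123] + rng.normal(scale=0.05, size=512)      # a perturbed database item
ids, dists = search(encode(query[None, :], planes)[0], codes)
print(ids, dists)   # item 123 should rank near the top
```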
Lo et al. (2011) proposed an efficient key assignment scheme for access control in a large leaf class hierarchy where alterations in leaf classes are more frequent than in non-leaf classes. Their scheme is based on a public-key cryptosystem and a hash function, in which operations such as modular exponentiation are much more costly than symmetric-key encryptions and decryptions and hash computations. Their scheme performs better than previously proposed schemes. However, in this paper we show that Lo et al.'s scheme fails to preserve the forward security property: a security class can still derive the secret keys of its successor classes even after it has been deleted from the hierarchy. We propose a new key management scheme for dynamic access control in a large leaf class hierarchy, which makes use of a symmetric-key cryptosystem and a one-way hash function. We show that our scheme requires significantly less storage and computational overhead than Lo et al.'s scheme and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against all possible attacks, including attacks on forward security. In addition, our scheme handles dynamic access control problems more efficiently than Lo et al.'s scheme and other related schemes. Thus, higher security along with low storage and computational costs makes our scheme more suitable for practical applications than other schemes.
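A toy example makes the forward-security concern concrete. The sketch below is neither Lo et al.'s scheme nor the proposed one; it uses generic top-down hash-based key derivation to show that if a class is deleted without re-keying its subtree, the former holder of that class's key can still derive every successor key.

```python
# Toy illustration of the forward-security problem in top-down hash-based key
# derivation (not Lo et al.'s scheme nor the authors' replacement).
import hashlib

def derive(parent_key: bytes, child_id: str) -> bytes:
    """Child key = H(parent_key || child_id); parents can derive child keys."""
    return hashlib.sha256(parent_key + child_id.encode()).digest()

root_key = b"\x01" * 32
k_mid = derive(root_key, "C1")          # intermediate class C1
k_leaf = derive(k_mid, "C1.1")          # leaf class C1.1 under C1

# Suppose C1 is later removed from the hierarchy. Unless C1.1 (and all other
# successors) receive fresh keys, the former holder of k_mid can still compute:
recomputed = derive(k_mid, "C1.1")
assert recomputed == k_leaf             # forward security is violated
print("deleted class can still derive successor key:", recomputed == k_leaf)
```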
We propose an efficient and secure two-server password-only remote user authentication protocol for consumer electronic devices, such as smartphones and laptops. Our protocol works on top of any existing trust model, such as the Secure Sockets Layer (SSL) protocol. The proposed protocol is secure against dictionary and impersonation attacks.
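The storage-splitting intuition behind two-server designs can be sketched as follows (this is not the proposed protocol): the password verifier is XOR-split between the two servers so that neither server's database alone yields material for an offline dictionary attack; real protocols additionally avoid ever reconstructing the verifier in one place.

```python
# Toy sketch of the two-server storage idea (NOT the paper's protocol): the
# password verifier is XOR-split so that either server's database on its own
# is just random bytes.
import hashlib, hmac, os

def register(password: str):
    verifier = hashlib.sha256(password.encode()).digest()
    share1 = os.urandom(32)
    share2 = bytes(a ^ b for a, b in zip(verifier, share1))
    return share1, share2                      # stored on server 1 and server 2

def authenticate(password_attempt: str, share1: bytes, share2: bytes) -> bool:
    candidate = hashlib.sha256(password_attempt.encode()).digest()
    reconstructed = bytes(a ^ b for a, b in zip(share1, share2))
    return hmac.compare_digest(candidate, reconstructed)

s1, s2 = register("correct horse battery staple")
print(authenticate("correct horse battery staple", s1, s2))  # True
print(authenticate("guess123", s1, s2))                      # False
```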
Technological advances in wearable and implanted medical devices are enabling wireless body area networks to alter the current landscape of medical and healthcare applications. These systems have the potential to significantly improve real-time patient monitoring, provide accurate diagnosis and deliver faster treatment. In spite of their growth, securing the sensitive medical and patient data relayed in these networks to protect patients' privacy and safety remains an open challenge. The resource constraints of wireless medical sensors limit the adoption of traditional security measures in this domain. In this work, we propose a distributed mobile-agent-based intrusion detection system to secure these networks. Specifically, our autonomous mobile agents use machine learning algorithms to perform local and network-level anomaly detection to detect various security attacks targeted at healthcare systems. Simulation results show that our system performs efficiently, with high detection accuracy and low energy consumption.
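As an example of the kind of local anomaly detector such a mobile agent could run (a generic sketch, not the paper's system), the code below trains an Isolation Forest on windows of normal sensor-traffic features and flags deviating windows; feature names and values are invented for illustration.

```python
# Generic local anomaly detection sketch (not the paper's mobile-agent IDS):
# an Isolation Forest fit on normal traffic windows flags deviating records.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Features per window: [packet rate, mean payload size, radio duty cycle]
normal = rng.normal(loc=[20.0, 64.0, 0.10], scale=[2.0, 5.0, 0.02], size=(1000, 3))
attack = rng.normal(loc=[200.0, 16.0, 0.90], scale=[10.0, 2.0, 0.05], size=(10, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = detector.predict(np.vstack([normal[:5], attack]))  # +1 normal, -1 anomaly
print(labels)   # the last ten windows should be flagged as -1
```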
In this study, we use a multi-party recording as a template for building a parametric speech synthesiser that is able to express different levels of attentiveness in backchannel tokens. This allows us to investigate (i) whether it is possible to express the same perceived level of attentiveness in synthesised backchannels as in natural ones; and (ii) whether it is possible to increase and decrease the perceived level of attentiveness of backchannels beyond the range observed in the original corpus.
Phishing attacks have reached record volumes in recent years. Simultaneously, modern phishing websites are growing in sophistication, employing diverse cloaking techniques to avoid detection by security infrastructure. In this paper, we present PhishFarm: a scalable framework for methodically testing the resilience of anti-phishing entities and browser blacklists to attackers' evasion efforts. We use PhishFarm to deploy 2,380 live phishing sites (on new, unique, and previously unseen .com domains), each using one of six different HTTP request filters based on real phishing kits. We reported subsets of these sites to 10 distinct anti-phishing entities and measured both the occurrence and timeliness of native blacklisting in major web browsers to gauge the effectiveness of the protection ultimately extended to victim users and organizations. Our experiments revealed shortcomings in current infrastructure that allow some phishing sites to go unnoticed by the security community while remaining accessible to victims. We found that simple cloaking techniques representative of real-world attacks, including those based on geolocation, device type, or JavaScript, were effective in reducing the likelihood of blacklisting by over 55% on average. We also discovered that blacklisting did not function as intended in popular mobile browsers (Chrome, Safari, and Firefox), which left users of these browsers particularly vulnerable to phishing attacks. Following disclosure of our findings, anti-phishing entities are now better able to detect and mitigate several cloaking techniques (including those that target mobile users), and blacklisting has become more consistent between desktop and mobile platforms, but work remains to be done by anti-phishing entities to ensure users are adequately protected. Our PhishFarm framework is designed for continuous monitoring of the ecosystem and can be extended to test future state-of-the-art evasion techniques used by malicious websites.
This paper describes an approach to detecting malicious code introduced by insiders, which can compromise the data integrity of a program. The approach identifies security spots in a program, each of which is either malicious or benign code; malicious code is then detected by reviewing each security spot to determine whether it is malicious or benign. Integrity breach conditions (IBCs) for object-oriented programs are specified to identify security spots in the programs. The IBCs are specified in terms of coupling within an object or between objects. A prototype tool was developed to validate the approach with a case study.
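The flavor of such coupling-based detection can be sketched as follows (a loose Python analogue, not the paper's IBC formalism): walk a program's syntax tree and flag assignments that write to another object's attribute as candidate security spots for manual review.

```python
# Rough sketch of coupling-based "security spot" flagging (illustrative only):
# report assignments that write to another object's attribute.
import ast

SOURCE = """
class Account:
    def __init__(self):
        self.balance = 0

def sneaky_update(acct):
    acct.balance = acct.balance * 2   # writes another object's state
"""

class SecuritySpotFinder(ast.NodeVisitor):
    def __init__(self):
        self.spots = []

    def visit_Assign(self, node):
        for target in node.targets:
            if (isinstance(target, ast.Attribute)
                    and isinstance(target.value, ast.Name)
                    and target.value.id != "self"):
                self.spots.append((node.lineno, ast.unparse(target)))
        self.generic_visit(node)

finder = SecuritySpotFinder()
finder.visit(ast.parse(SOURCE))
for lineno, expr in finder.spots:
    print(f"line {lineno}: review write to '{expr}'")
```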
The purpose of this study was to propose a model of the development of trust in social robots. Insights into interpersonal trust were adopted from social psychology and a novel model was proposed. In addition, this study aimed to investigate the relationship between trust development and self-esteem. To validate the proposed model, an experiment using the communication robot NAO was conducted, and changes in the categories of trust as well as in self-esteem were measured. Results showed that general and category trust developed in the early phase of interaction. Self-esteem also increased over the course of the interactions with the robot.
This publication presents techniques related to insider threats and cryptographic protocols for secure processes dedicated to the information management of strategically split data. Strategic data splitting supports enterprise management processes as well as methods for securely storing and managing this type of data. Because strategic data are usually not sufficiently secured against unauthorized leakage, we propose a new protocol that protects data in different management structures. The presented data splitting techniques concern cryptographic information splitting algorithms, as well as data sharing algorithms that make use of cognitive data analysis techniques. The insider threat techniques concern data reconstruction methods and cognitive data analysis techniques. Systems for semantic analysis and secure information management are used to conceal strategic information about the condition of the enterprise. The new approach, based on cognitive systems, guarantees these security features and makes the management processes more efficient.
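As a generic illustration of information splitting only (plain XOR n-of-n sharing, not the cognitive/semantic splitting techniques described here), the sketch below splits a strategic record into shares that are individually meaningless and reconstructs it only from the complete set.

```python
# Generic XOR n-of-n information splitting (illustrative only): each share alone
# is uniformly random; only the full set of shares reconstructs the record.
import os
from functools import reduce

def split(secret: bytes, n_shares: int):
    shares = [os.urandom(len(secret)) for _ in range(n_shares - 1)]
    last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares, secret)
    return shares + [last]

def reconstruct(shares):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

record = b"Q3 acquisition target: <redacted>"
parts = split(record, 4)                        # distribute to 4 managers
assert reconstruct(parts) == record
assert reconstruct(parts[:3]) != record         # an incomplete subset reveals nothing useful
print(reconstruct(parts).decode())
```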