
SoS Newsletter - Advanced Book Block



Trustworthy Systems 2015


In information security, trust is established to assure the identity of external parties.  Trustworthy systems are a key element in the security, resiliency, and composability of cyber-physical systems.  The research cited here was presented in 2015.

H. Orojloo and M. A. Azgomi, "Evaluating the Complexity and Impacts of Attacks on Cyber-Physical Systems," Real-Time and Embedded Systems and Technologies (RTEST), 2015 CSI Symposium on, Tehran, 2015, pp. 1-8. doi: 10.1109/RTEST.2015.7369840

Abstract: In this paper, a new method for quantitative evaluation of the security of cyber-physical systems (CPSs) is proposed. The proposed method models the different classes of adversarial attacks against CPSs, including cross-domain attacks, i.e., cyber-to-cyber and cyber-to-physical attacks. It also takes the secondary consequences of attacks on CPSs into consideration. The intrusion process of attackers has been modeled using an attack graph, and the consequence estimation process of the attack has been investigated using a process model. The security attributes and the special parameters involved in the security analysis of CPSs have been identified and considered. The quantitative evaluation has been done using the probability of attacks, time-to-shutdown of the system and security risks. The validation phase of the proposed model is performed as a case study by applying it to a boiling water power plant and estimating the suitable security measures.

Keywords: cyber-physical systems; estimation theory; graph theory; probability; security of data; CPS; attack graph; attack probability; consequence estimation process; cross-domain attack; cyber-physical system security; cyber-to-cyber attack; cyber-to-physical attack; security attributes; security risks; time-to-shutdown; Actuators; Computer crime; Cyber-physical systems; Process control; Sensor phenomena and characterization; Cyber-physical systems; attack consequences; modeling; quantitative security evaluation (ID#: 16-9427)
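The paper's core idea, quantifying attack likelihood by propagating per-step success probabilities through an attack graph, can be sketched in a few lines. The graph, node names, and probabilities below are invented for illustration and are not taken from the paper:

```python
# Toy attack graph: nodes are attacker states, edges carry the probability
# that the corresponding exploit step succeeds. All numbers are invented.
graph = {
    "internet": [("hmi", 0.6), ("historian", 0.4)],   # cyber entry points
    "hmi": [("plc", 0.5)],                            # cyber-to-cyber step
    "historian": [("plc", 0.3)],
    "plc": [("actuator", 0.7)],                       # cyber-to-physical step
    "actuator": [],
}

def reach_probability(node, goal):
    """P(attacker reaches `goal` from `node`), assuming independent steps
    and taking the most likely path as a worst-case estimate."""
    if node == goal:
        return 1.0
    best = 0.0
    for nxt, p in graph.get(node, []):
        best = max(best, p * reach_probability(nxt, goal))
    return best

print(round(reach_probability("internet", "actuator"), 3))  # -> 0.21
```

Here the most likely path (internet, HMI, PLC, actuator) yields 0.6 x 0.5 x 0.7 = 0.21; the paper's full model additionally estimates time-to-shutdown and the secondary consequences of each attack.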



K. G. Lyn, L. W. Lerner, C. J. McCarty and C. D. Patterson, "The Trustworthy Autonomic Interface Guardian Architecture for Cyber-Physical Systems," Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 1803-1810. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.263

Abstract: The growing connectivity of cyber-physical systems (CPSes) has led to an increased concern over the ability of cyber-attacks to inflict physical damage. Current cyber-security measures focus on preventing attacks from penetrating control supervisory networks. These reactive techniques, however, are often plagued with vulnerabilities and zero-day exploits. Embedded processors in CPS field devices often possess little security of their own, and are easily exploited once the network is penetrated. We identify four possible outcomes of a cyber-attack on a CPS embedded processor. We then discuss five trust requirements that a device must satisfy to guarantee correct behavior through the device's lifecycle. Next, we examine the Trustworthy Autonomic Interface Guardian Architecture (TAIGA) which monitors communication between the embedded controller and physical process. This autonomic architecture provides the physical process with a last line of defense against cyber-attacks. TAIGA switches process control to a trusted backup controller if an attack causes a system specification violation. We conclude with experimental results of an implementation of TAIGA on a hazardous cargo-carrying robot.

Keywords: cyber-physical systems; trusted computing; CPS embedded processor; TAIGA; cyber-attacks; cyber-physical systems; cyber-security measures; embedded controller; physical process; reactive techniques; trusted backup controller; trustworthy autonomic interface guardian architecture; Control systems; Process control; Program processors; Sensors; Trojan horses; Cyber-physical systems; autonomic control; embedded device security; resilience; trust (ID#: 16-9428)



Xiaohong Chen, Fan Gu, Mingsong Chen, Dehui Du, Jing Liu and Haiying Sun, "Evaluating Energy Consumption for Cyber-Physical Energy System: An Environment Ontology-Based Approach," Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 5-14. doi: 10.1109/COMPSAC.2015.114

Abstract: Energy consumption evaluation is one of the most important steps in Cyber-Physical Energy System (CPES) development. However, due to the lack of accurate and effective modeling and evaluation approaches that consider the uncertainty of the environment, it is hard to conduct quantitative analysis of the energy consumption of CPESs. To address this issue, this paper proposes an environment-aware energy consumption evaluation framework based on Statistical Model Checking (SMC). In our framework, the environment uncertainty of CPESs is modeled using Stochastic Hybrid Automata (SHA). In order to describe various environment modeling patterns, we create a collection of parameterized SHA models and save them to a domain-specific environment ontology. Based on the domain environment ontology and user designs in the form of UML sequence diagrams and activity diagrams, our framework can automatically guide the construction of CPES models using networks of SHA and conduct the corresponding energy consumption evaluation. A case study based on an energy-aware building design demonstrates that our approach can not only support accurate environment modeling with various uncertain factors, but also be used to reason about the relations between the energy consumption and environment uncertainties of CPES designs.

Keywords: energy consumption; power system management; power system security; statistical analysis; stochastic automata; CPES development; SMC; UML sequence diagram; cyber physical energy system; energy aware building design; energy consumption evaluation; environment ontology based approach; environment uncertainty; environment-aware energy consumption; parameterized SHA models; statistical model checking; stochastic hybrid automata; Energy consumption; Monitoring; Ontologies; Synchronization; Temperature measurement; Uncertainty; Unified modeling language; Cyber-Physical Energy Systems; Environment Ontology; Statistical Model Checking; Stochastic Hybrid Automata (SHA); Uncertainty of Environment (ID#: 16-9429)
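Statistical model checking of the kind this framework relies on reduces to Monte Carlo estimation over stochastic simulation runs. The sketch below uses a toy single-equation thermal model with invented parameters; the actual framework simulates networks of stochastic hybrid automata built from the environment ontology:

```python
import random

def simulate_energy(steps=100, rng=None):
    """One stochastic run of a toy heated-room model: the heater switches
    on below a setpoint, and heat loss adds random disturbance.
    Returns total energy used (arbitrary units)."""
    rng = rng or random.Random()
    temp, energy = 18.0, 0.0
    for _ in range(steps):
        if temp < 20.0:                      # thermostat on
            temp += 0.5
            energy += 1.0
        temp -= 0.1 + 0.2 * rng.random()     # stochastic heat loss
    return energy

def estimate_violation_prob(threshold, runs=2000, seed=7):
    """SMC-style estimate of P(total energy > threshold) from `runs` samples."""
    rng = random.Random(seed)
    hits = sum(simulate_energy(rng=rng) > threshold for _ in range(runs))
    return hits / runs

p = estimate_violation_prob(threshold=40.0)
print(f"estimated P(energy > 40) = {p:.3f}")
```

Real SMC tools additionally attach statistical confidence bounds (e.g. via sequential hypothesis testing) to such estimates rather than reporting a bare frequency.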



Bei Cheng, Xiao Wang, Jufu Liu and Dehui Du, "Modana: An Integrated Framework for Modeling and Analysis of Energy-Aware CPSs," Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 127-136. doi: 10.1109/COMPSAC.2015.68

Abstract: Cyber-Physical Systems (CPSs), advanced embedded systems integrating computation with physical processes, are increasingly penetrating our lives. Modeling and analysis for such systems closely involved with us are actively researched. A current challenging problem is how to take advantage of existing technologies like SysML/MARTE, Modelica and Statistical Model Checking (SMC) through effective integration. Moreover, the lack of efficient methodologies or tools for modeling and analysis of CPSs makes the gap between design and analysis models hard to bridge. To solve these problems, we present a framework named Modana to achieve an integrated process from modeling with SysML/MARTE to analysis with SMC for CPSs in terms of Non-Functional Properties (NFPs) such as time, energy, etc. The Functional Mock-up Interface (FMI), as a connecting link between modeling and analysis, plays a major role in coordinating various tools for co-simulation to generate traces as the input of the statistical model checker. To demonstrate the capability of the Modana framework, we model energy-aware buildings as a case study, and discuss the analysis of energy consumption in different scenarios.

Keywords: embedded systems; formal verification; power aware computing; statistical analysis; FMI; Modana; Modelica; NFP; SMC; SysML/MARTE; cyber-physical systems; embedded systems; energy consumption; energy-aware CPS analysis; energy-aware CPS modeling; energy-aware buildings; functional mock-up interface; integrated framework; integrated process; nonfunctional properties; statistical model checking; system analysis; system modeling; Analytical models; Computational modeling; Libraries; Mathematical model; Object oriented modeling; Stochastic processes; Unified modeling language; SysML/MARTE; cyber-physical systems; energy-aware buildings; functional mock-up interface; statistical model checking (ID#: 16-9430)



Tai-Won Um, Gyu Myoung Lee and Jun Kyun Choi, "Strengthening Trust in the Future ICT Infrastructure," ITU Kaleidoscope: Trust in the Information Society (K-2015), Barcelona, 2015, pp. 1-8. doi: 10.1109/Kaleidoscope.2015.7383628

Abstract: Moving towards a hyperconnected society in the forthcoming "zettabyte" era requires a trusted ICT infrastructure for sharing information and creating knowledge. To advance the efforts to build converged ICT services and reliable information infrastructures, ITU-T has recently started a work item on future trusted ICT infrastructures. In this paper, we introduce the concept of a social-cyber-physical infrastructure from the social Internet of Things paradigm and present different meanings from various perspectives for a clear understanding of trust. Then, the paper identifies key challenges for a trustworthy ICT infrastructure. Finally, we propose a generic architectural framework for trust provisioning and present strategies to stimulate activities for future standardization on trust with related standardization bodies.

Keywords: standardisation; trusted computing; ITU-T; converged ICT services; information and communication technology; social Internet-of-Things paradigm; social-cyber-physical infrastructure; standardization; trust provisioning; trusted ICT infrastructure; zettabyte era; Cloud computing; Interconnected systems; Internet of things; Reliability; Security; Standardization; Telecommunications; ICT; Internet of Things; Trust; social-cyber-physical infrastructure (ID#: 16-9431)



Jin-Hee Han, Yongsung Jeon and Jeongnyeo Kim, "Security Considerations for Secure and Trustworthy Smart Home System in the IoT Environment," Information and Communication Technology Convergence (ICTC), 2015 International Conference on, Jeju, 2015, pp. 1116-1118. doi: 10.1109/ICTC.2015.7354752

Abstract: Recently, smart home appliances and wearable devices have been developed by many companies. Most devices interact with various sensors and have communication functions to connect to the Internet by themselves. These devices will provide a wide range of services to users through a mutual exchange of information. However, due to the nature of the IoT environment, appropriate security functions for secure and trustworthy smart home service should be applied extensively, because security threats will increase and their impact is likely to expand. Therefore, in this paper, we specifically describe the security requirements of the components that make up the smart home system.

Keywords: Internet; Internet of Things; computer network security; home automation; trusted computing; Internet; IoT environment; information mutual exchange; secure smart home system; security threat; smart home appliance; trustworthy smart home system; wearable device; Authentication; Data privacy; Internet of things; Logic gates; Servers; Smart homes; IoT; requirements; security; smart home system (ID#: 16-9432)



A. A. Gendreau, "Situation Awareness Measurement Enhanced for Efficient Monitoring in the Internet of Things," Region 10 Symposium (TENSYMP), 2015 IEEE, Ahmedabad, 2015, pp. 82-85. doi: 10.1109/TENSYMP.2015.13

Abstract: The Internet of Things (IoT) is a heterogeneous network of objects that communicate with each other and their owners over the Internet. In the future, the utilization of distributed technologies in combination with their object applications will result in an unprecedented level of knowledge and awareness, creating new business opportunities and expanding existing ones. However, in this paradigm where almost everything can be monitored and tracked, an awareness of the state of the monitoring systems' situation will be important. Given the anticipated scale of business opportunities resulting from new object monitoring and tracking capabilities, IoT adoption has not been as fast as expected. The reason for the slow growth of application objects is the immaturity of the standards, which can be partly attributed to their unique system requirements and characteristics. In particular, the IoT standards must exhibit efficient self-reliant management and monitoring capability, which in a hierarchical topology is the role of cluster heads. IoT standards must be robust, scalable, adaptable, reliable, and trustworthy. These criteria are predicated upon the limited lifetime, and the autonomous nature, of wireless personal area networks (WPANs), of which wireless sensor networks (WSNs) are a major technological solution and research area in the IoT. In this paper, the energy efficiency of a self-reliant management and monitoring WSN cluster head selection algorithm, previously used for situation awareness, was improved upon by sharing particular established application cluster heads. This enhancement saved energy and reporting time by reducing the path length to the monitoring node. Also, a proposal to enhance the risk assessment component of the model is made. We demonstrate through experiments that when benchmarked against both a power and randomized cluster head deployment, the proposed enhancement to the situation awareness metric used less power. 
Potentially, this approach can be used to design a more energy-efficient cluster-based management and monitoring algorithm for the advancement of security, e.g. intrusion detection systems (IDSs), and other standards in the IoT.

Keywords: Internet of Things; personal area networks; security of data; wireless sensor networks; Internet of Things; WPAN; WSN; distributed technologies; efficient self-reliant management and monitoring capability; heterogeneous network; object monitoring and tracking capabilities; situation awareness measurement; situation awareness metric; wireless personal area networks; wireless sensor networks; Energy efficiency; Internet of things; Monitoring; Security; Standards; Wireless sensor networks; Internet of Things; Intrusion detection system; Situational awareness; Wireless sensor networks (ID#: 16-9433)
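As a rough illustration of energy-aware cluster-head selection in a WSN, the sketch below picks, per cluster, the node with the most residual energy. This is a generic heuristic in the spirit of the paper's self-reliant monitoring, not its exact algorithm, and all node data is invented:

```python
# Toy WSN: each node is (id, cluster, residual_energy_in_joules).
nodes = [
    ("n1", "A", 0.82), ("n2", "A", 0.55), ("n3", "A", 0.91),
    ("n4", "B", 0.40), ("n5", "B", 0.67),
]

def elect_cluster_heads(nodes):
    """Return {cluster: node_id} choosing the highest-energy node per cluster,
    so cluster-head duty rotates toward nodes that can afford it."""
    heads = {}
    for nid, cluster, energy in nodes:
        if cluster not in heads or energy > heads[cluster][1]:
            heads[cluster] = (nid, energy)
    return {c: nid for c, (nid, _) in heads.items()}

print(elect_cluster_heads(nodes))   # -> {'A': 'n3', 'B': 'n5'}
```

The paper's enhancement goes further by sharing already-established application cluster heads between roles, shortening the reporting path to the monitoring node and saving energy.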



R. Gupta and R. Garg, "Mobile Applications Modelling and Security Handling in Cloud-Centric Internet of Things," Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 285-290. doi: 10.1109/ICACCE.2015.119

Abstract: Mobile Internet of Things (IoT) applications are already a part of the technical world. The integration of these applications with the Cloud can increase storage capacity and help users collect and process their personal data in an organized manner. A number of techniques are adopted for sensing, communicating and intelligently transmitting data from mobile devices to the Cloud in IoT applications; thus, security must be maintained during transmission. The paper outlines the need for Cloud-centric IoT applications using mobile phones as the medium for communication. An overview of different techniques for using mobile IoT applications with the Cloud is presented. Four main techniques, namely Mobile Sensor Data Processing Engine (MOSDEN), Mobile Fog, Embedded Integrated Systems (EIS) and Dynamic Configuration using Mobile Sensor Hub (MosHub), are discussed, and some of the similarities and comparisons between them are mentioned. Confidentiality and security of the data transmitted by these methodologies must be maintained; therefore, cryptographic mechanisms like Public Key Encryption (PKI) and digital certificates are used for data management, while TSCM allows trustworthy sensing of data for the public in IoT applications. We use the above technologies to implement an application called Smart Helmet, to bring better understanding of the concept of Cloud IoT and to support Assisted Living for the betterment of society. The application makes use of Nordic BLE board transmission and stores data in the Cloud to be used by a large number of people.

Keywords: Internet of Things; cloud computing; data acquisition; embedded systems; mobile computing; public key cryptography; trusted computing; EIS; MOSDEN; MosHub; Nordic BLE board transmission; PKI; Smart Helmet; TSCM; assisted living; cloud-centric Internet of Things; cloud-centric IoT applications; communication; cryptographic mechanisms; data confidentiality; data management; data mechanisms; data security; data transmission; digital certificates; dynamic configuration; embedded integrated systems; mobile Internet of Things; mobile IoT applications; mobile applications modelling; mobile devices; mobile fog; mobile phones; mobile sensor data processing engine; mobile sensor hub; personal data collection; personal data processing; public key encryption; security handling; sensing; storage capacity; trustworthy data; Bluetooth; Cloud computing; Mobile applications; Mobile communication; Mobile handsets; Security; Cloud IoT; Embedded Integrated Systems; Internet of Things; Mobile Applications; Mobile Sensor Data Processing Engine; Mobile Sensor Hub; Nordic BLE board; Public Key Encryption; Smart Helmet (ID#: 16-9434)



B. Mok et al., "Emergency, Automation Off: Unstructured Transition Timing for Distracted Drivers of Automated Vehicles," Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on, Las Palmas, 2015, pp. 2458-2464. doi: 10.1109/ITSC.2015.396

Abstract: In future automated driving systems, drivers will be free to perform other secondary tasks, not needing to stay vigilant in monitoring the car's activity. However, there will still be situations in which drivers are required to take over control of the vehicle, most likely from a highly distracted state. While highly automated vehicles would ideally accommodate structured takeovers, providing ample time and warning, it is still very important to examine how drivers behave when they are subjected to an unstructured emergency transition of control. In this study, we observed how participants (N=30) in a driving simulator performed after they experienced a loss of automation. We tested three transition time conditions, with an unstructured transition of control occurring 2 seconds, 5 seconds, or 8 seconds before the participants encountered a road hazard that required the drivers' intervention. Participants were given a passive distraction (watching a video) while the automated driving mode was enabled, so they needed to disengage from the task and regain control of the car when the transition occurred. Few drivers in the 2-second condition were able to safely negotiate the road hazard situation, while the majority of drivers in the 5- and 8-second conditions were able to navigate the hazard safely. Similarly, drivers in the 2-second condition rated the vehicle as less trustworthy than drivers in the 5- and 8-second conditions. From the study results, we are able to narrow down the minimum amount of time in which drivers can take over control of the vehicle safely and comfortably from the automated system in the event of an impending road hazard.

Keywords: automobiles; digital simulation; driver information systems; traffic engineering computing; automated driving systems; automated vehicles; distracted drivers; driving simulator; passive distraction; road hazard situation; transition time conditions; unstructured emergency transition; unstructured transition timing; vehicle control; Automation; Hazards; Poles and towers; Roads; Standards; Vehicles; Wheels; Autonomous Driving Simulation; Controlled Study; Driving Performance; Human Factors; Transition of Control (ID#: 16-9435)



M. Shiomi and N. Hagita, "Social Acceptance of a Childcare Support Robot System," Robot and Human Interactive Communication (RO-MAN), 2015 24th IEEE International Symposium on, Kobe, 2015, pp. 13-18. doi: 10.1109/ROMAN.2015.7333658

Abstract: This paper investigates people's social acceptance of a childcare support robot system and compares their attitudes toward two existing childcare technologies: anesthesia during labor and baby food (processed food and formula milk), which includes powdered milk and instant food for babies and toddlers. To investigate social acceptance, we developed scales from three points of view: safety and trustworthiness, diligence, and decreased workload. Our participants comprised 412 people reached through a web-based survey and 14 people who experienced the prototype of our childcare support robot system. They answered questionnaires covering our three developed scales and an intention-to-use scale to investigate their social acceptance of childcare support technologies. The web-based survey results indicate that our system's concept was evaluated lower than current childcare support technologies, but people who experienced our system prototype evaluated it higher than those who filled out web-based surveys.

Keywords: human factors; human-robot interaction; medical robotics; service robots; anesthesia; baby food; childcare support robot system; childcare support technologies; childcare technologies; instant food for babies; people social acceptance; powdered milk; social acceptance; toddlers; Anesthesia; Dairy products; Pediatrics; Prototypes; Robot sensing systems; Safety (ID#: 16-9436)



L. Gu, M. Zhou, Z. Zhang, M. C. Shan, A. Zhou and M. Winslett, "Chronos: An Elastic Parallel Framework for Stream Benchmark Generation and Simulation," Data Engineering (ICDE), 2015 IEEE 31st International Conference on, Seoul, 2015, pp. 101-112. doi: 10.1109/ICDE.2015.7113276

Abstract: In the coming big data era, stress testing IT systems under extreme data volume is crucial to the adoption of computing technologies in every corner of the cyber world. Appropriately generated benchmark datasets make it possible for administrators to evaluate the capacity of systems when real datasets are hard to obtain or do not contain extreme cases. Traditional benchmark data generators, however, mainly target producing relation tables of arbitrary size following fixed distributions. The output of such generators is insufficient when it is used to measure the stability of the architecture under extremely dynamic and heavy workloads, caused by complicated or hidden factors in the generation mechanism of the real world, e.g. dependency between stocks in the trading market and collaborative human behaviors on the social network. In this paper, we present a new framework, called Chronos, to support new demands on streaming data benchmarking, by generating and simulating realistic and fast data streams in an elastic manner. Given a small group of samples with timestamps, Chronos reproduces new data streams with similar characteristics to the samples, preserving column-wise correlations, temporal dependency and order statistics of the snapshot distributions at the same time. To achieve such realistic requirements, we propose 1) a column decomposition optimization technique to partition the original relation table into small sub-tables with minimal correlation information loss, 2) a generative and extensible model based on Latent Dirichlet Allocation to capture temporal dependency while preserving order statistics of the snapshot distribution, and 3) a new generation and assembling method to efficiently build tuples following the expected distribution on the snapshots. To fulfill the vision of elasticity, we also present a new parallel stream data generation mechanism, facilitating distributed nodes to collaboratively generate tuples with minimal synchronization overhead and excellent load balancing. Our extensive experimental studies on real-world data domains confirm the efficiency and effectiveness of Chronos on stream benchmark generation and simulation.

Keywords: Big Data; optimisation; parallel processing; program assemblers; resource allocation; Big Data; Chronos framework; IT systems; assembling method; benchmark data generators; collaborative human behaviors; column decomposition optimization technique; column-wise correlations; cyber world; elastic parallel framework; extreme data volume; fast data streams; latent Dirichlet allocation; load balancing; minimal correlation information loss; minimal synchronization overhead; order statistics; parallel stream data generation mechanism; snapshot distributions; social network; stream benchmark generation; temporal dependency; timestamps; trading market; Benchmark testing; Complexity theory; Computational modeling; Correlation; Distributed databases; Generators (ID#: 16-9437)
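In miniature, benchmark-stream generation of this kind takes a small timestamped sample and emits a larger stream with similar statistics. The sketch below preserves only the marginal distribution of each column; Chronos additionally preserves cross-column correlation, temporal dependency, and order statistics via column decomposition and Latent Dirichlet Allocation. The sample data is invented:

```python
import random

# A tiny sample of timestamped (ts, symbol, price) tuples.
sample = [(1, "AAPL", 101.2), (2, "MSFT", 55.1), (3, "AAPL", 99.8)]

def generate_stream(sample, n, seed=0):
    """Emit n tuples with fresh ordered timestamps, resampling each
    non-timestamp column independently from its empirical values."""
    rng = random.Random(seed)
    cols = list(zip(*sample))[1:]            # drop the original timestamps
    stream = []
    for ts in range(n):
        row = tuple(rng.choice(col) for col in cols)
        stream.append((ts,) + row)
    return stream

print(generate_stream(sample, 5)[:2])
```

Independent per-column resampling deliberately ignores the correlations the paper works hard to keep, which is exactly why techniques like column decomposition (grouping correlated columns) are needed in a real generator.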



R. A. Earnshaw, M. D. Silva and P. S. Excell, "Ten Unsolved Problems with the Internet of Things," 2015 International Conference on Cyberworlds (CW), Visby, 2015, pp. 1-7. doi: 10.1109/CW.2015.28

Abstract: It is estimated that by 2020 the Internet of Things, with embedded and wearable computing, will have a major impact on society and be providing beneficial services in a wide variety of applications. It has already accomplished some cost savings in operations and increases in asset efficiency. Increased connectivity and improved safety, security, and reliability are expected to increase potential value in a range of applications from healthcare to transportation. As much of the technology is embedded and invisible, it can take the role of a smart assistant working away autonomously and unobtrusively in the background. However, automatic monitoring of activities brings increased potential invasion of personal spaces and personal data. There are significant issues in a number of areas which still have to be addressed in order to ensure safe, reliable and trustworthy systems. A prototype funded by the UK Technology Strategy Board demonstrates the value and advantages of business-to-business collaboration via the Internet. It also emphasizes the benefits of connectivity and its contribution to sustainability.

Keywords: Internet of Things; business data processing; data privacy; security of data; trusted computing; UK Technology Strategy Board; Internet; Internet of Things; asset efficiency; automatic activity monitoring; business-to-business collaboration; connectivity improvement; cost saving; personal data; personal spaces; reliability improvement; safety improvement; security improvement; smart assistant; Collaboration; Companies; Internet of things; Monitoring; Real-time systems; big data; business to business collaboration; embedded systems; implanted devices; interoperability; latency; open standards; predictive analytics; privacy; real-time monitoring; security; trust; visual analytics (ID#: 16-9438)



T. Fadai, S. Schrittwieser, P. Kieseberg and M. Mulazzani, "Trust me, I'm a Root CA! Analyzing SSL Root CAs in Modern Browsers and Operating Systems," Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 174-179. doi: 10.1109/ARES.2015.93

Abstract: The security and privacy of our online communications heavily rely on the entity authentication mechanisms provided by SSL. Those mechanisms in turn heavily depend on the trustworthiness of a large number of companies and governmental institutions for attestation of the identity of SSL service providers. In order to offer wide and unobstructed availability of SSL-enabled services, and to remove the need for their users to make a large number of trust decisions, operating systems and browser manufacturers include lists of certification authorities which are trusted for SSL entity authentication by their products. This has the problematic effect that users of such browsers and operating systems implicitly trust those certification authorities with the privacy of their communications while they might not even realize it. The problem is further complicated by the fact that different software vendors trust different companies and governmental institutions, from a variety of countries, which leads to an obscure distribution of trust. To give insight into the trust model used by SSL, this paper explains the various entities and technical processes involved in establishing trust when using SSL communications. It furthermore analyzes the number and origin of companies and governmental institutions trusted by various operating systems and browser vendors and correlates the gathered information to a variety of indexes to illustrate that some of these trusted entities are far from trustworthy. Furthermore, it points out that the number of entities we trust with the security of our SSL communications keeps growing over time, discusses the negative effects this might have, and shows that the trust model of SSL is fundamentally broken.

Keywords: certification; cryptographic protocols; data privacy; message authentication; online front-ends; operating systems (computers); trusted computing; CAs; SSL communications; SSL entity authentication; SSL root; SSL-enabled services; browsers; certification authorities; entity authentication mechanisms; online communications; operating systems; privacy; root certificate programs; security; trust model; Browsers; Companies; Government; Indexes; Internet; Operating systems; Security; CA; PKI; trust (ID#: 16-9439)



Jingyao Fan, Qinghua Li and Guohong Cao, "Privacy-Aware and Trustworthy Data Aggregation in Mobile Sensing," Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 31-39. doi: 10.1109/CNS.2015.7346807

Abstract: With the increasing capabilities of mobile devices such as smartphones and tablets, there are more and more mobile sensing applications such as air pollution monitoring and healthcare. These applications usually aggregate the data contributed by mobile users to infer about people's activities or surroundings. Mobile sensing can only work properly if the data provided by users is adequate and trustworthy. However, mobile users may not be willing to submit data due to privacy concerns, and they may be malicious and submit forged data to cause damage to the system. To address these problems, this paper proposes a novel privacy-aware and trustworthy data aggregation protocol for mobile sensing. Our protocol allows the server to aggregate the data submitted by mobile users without knowing the data of individual user. At the same time, if malicious users submit invalid data, they will be detected or the polluted aggregation result will be rejected by the server. In this way, the malicious users' effect on the aggregation result is effectively limited. The detection of invalid data works even if multiple malicious users collude. Security analysis shows that our scheme can achieve the trustworthy and privacy preserving goals, and experimental results show that our scheme has low computation cost and low power consumption.

Keywords: data privacy; mobile handsets; protocols; telecommunication security; trusted computing; invalid data detection; mobile device; mobile sensing; power consumption; privacy-aware data aggregation protocol; security analysis; trustworthy data aggregation protocol; Aggregates; Data privacy; Manganese; Mobile communication; Protocols; Sensors; Servers (ID#: 16-9440)
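The privacy goal here, a server that learns the aggregate but not any individual reading, can be illustrated with additive blinding. This is a sketch of the goal only, with invented values; the paper's protocol additionally detects forged submissions and tolerates colluding malicious users:

```python
import random

def make_zero_sum_noise(n, modulus, rng):
    """n random blinding values that sum to 0 mod `modulus`, so they cancel
    exactly when the server adds all blinded submissions together."""
    noise = [rng.randrange(modulus) for _ in range(n - 1)]
    noise.append((-sum(noise)) % modulus)
    return noise

def aggregate(readings, modulus=2**32, seed=42):
    """Each user submits (reading + noise_i) mod M; the server sums the
    blinded values and recovers only the total, never an individual value."""
    rng = random.Random(seed)
    noise = make_zero_sum_noise(len(readings), modulus, rng)
    blinded = [(r + k) % modulus for r, k in zip(readings, noise)]
    return sum(blinded) % modulus

print(aggregate([12, 30, 7, 51]))   # -> 100, without exposing any single reading
```

In a deployed scheme the noise shares would be derived from pairwise keys between users rather than a shared seed, which is one of the problems the paper's key-management design addresses.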



J. Singh, T. F. J. M. Pasquier and J. Bacon, "Securing Tags to Control Information Flows within the Internet of Things," Recent Advances in Internet of Things (RIoT), 2015 International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/RIOT.2015.7104903

Abstract: To realise the full potential of the Internet of Things (IoT), IoT architectures are moving towards open and dynamic interoperability, as opposed to closed application silos. This is because functionality is realised through the interactions, i.e. the exchange of data, between a wide range of 'things'. Data sharing requires management. Towards this, we are exploring distributed, decentralised Information Flow Control (IFC) to enable controlled data flows, end-to-end, according to policy. In this paper we make the case for IFC, as a data-centric control mechanism, for securing IoT architectures. Previous research on IFC focuses on a particular system or application, e.g. within an operating system, with little concern for wide-scale, dynamic systems. To render IFC applicable to IoT, we present a certificate-based model for secure, trustworthy policy specification, that also reflects real-world IoT concerns such as 'thing' ownership. This approach enables decentralised, distributed, verifiable policy specification, crucial for securing the wide-ranging, dynamic interactions of future IoT applications.

Keywords: Internet of Things; telecommunication control; telecommunication security; Internet of Things; data-centric control mechanism; information flow control; IoT architectures; Production; Reliability; Certificates; Distributed Systems; Information Flow Control; Internet of Things; PKI; Privacy; Security (ID#: 16-9441)
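At its core, the secrecy side of IFC reduces to a label-dominance check on every data exchange. The fragment below is a minimal, generic sketch of that standard rule, with made-up tag names; the paper's actual contribution — the certificate-based, decentralised way such labels are specified and trusted — is not modelled here:

```python
def can_flow(data_tags: frozenset, receiver_tags: frozenset) -> bool:
    """Secrecy rule: data may flow only to a receiver holding all of its tags."""
    return data_tags <= receiver_tags

# A 'thing' tagged {medical} may send to a service cleared for {medical, billing},
# but once the extra 'billing' tag is attached, the data cannot flow back down.
assert can_flow(frozenset({"medical"}), frozenset({"medical", "billing"}))
assert not can_flow(frozenset({"medical", "billing"}), frozenset({"medical"}))
```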



S. Chaudhary and R. Nath, "A New Template Protection Approach for Iris Recognition," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, Noida, 2015, pp. 1-6. doi: 10.1109/ICRITO.2015.7359306

Abstract: Biometric recognition systems, which rely on physical and behavioral features of the human body to recognize a human being, are used in various areas that require a high degree of security. Among different biometric recognition methods, iris recognition is regarded as the most trustworthy, distinct, consistent, and stable option. Template security is an important aspect of a biometric system. It necessitates the development of an approach that ensures user security and privacy. In this paper, an approach based on steganography is proposed to protect the iris template. Random number based embedding is used in LSB (Least Significant Bit) steganography to enhance security. To provide more security, bits are embedded into the LSBs of blue pixels only. The IrisCode bits are embedded randomly across the three least significant bits. The resulting template is more secure, as the original biometric data is not stored in the database; rather, it is stored after embedding in a cover image. The performance of the proposed approach is evaluated and found to be better in terms of Peak Signal to Noise Ratio (PSNR) value, histogram plot and Receiver Operating Characteristic (ROC) curve plot.

Keywords: biometrics (access control); embedded systems; iris recognition; sensitivity analysis; steganography; LSB steganography; PSNR value; ROC curve; behavioral feature; biometric recognition system; iris recognition; least significant bit steganography; peak signal-to-noise ratio value; random number-based embedding; receiver operating characteristic curve; template protection approach; template security; Databases; Feature extraction; Image color analysis; Iris recognition; Security; Transforms; Hamming distance; Iris recognition; IrisCode; Least significant bit substitution; Receiver operating characteristic curve; Steganography (ID#: 16-9442)
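The embedding scheme the abstract describes — IrisCode bits placed at key-driven random positions across the three least significant bits of the blue channel only — can be sketched as follows. This is an illustrative reconstruction from the abstract's description, not the authors' code; the PRNG choice, key handling and channel layout are assumptions:

```python
import random

def embed_bits(blue_channel, bits, key):
    """Hide bits in one of the three lowest bit planes of randomly chosen blue pixels.

    blue_channel: list of 0-255 ints; bits: list of 0/1; key seeds the PRNG so
    the same key reproduces the same (pixel, bit-plane) positions on extraction.
    """
    pixels = list(blue_channel)          # leave the original untouched
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), len(bits))
    planes = [rng.randrange(3) for _ in bits]   # which of the 3 LSBs to use
    for pos, plane, bit in zip(positions, planes, bits):
        pixels[pos] = (pixels[pos] & ~(1 << plane)) | (bit << plane)
    return pixels

def extract_bits(blue_channel, n_bits, key):
    """Recover the embedded bits by replaying the same key-seeded PRNG."""
    rng = random.Random(key)
    positions = rng.sample(range(len(blue_channel)), n_bits)
    planes = [rng.randrange(3) for _ in range(n_bits)]
    return [(blue_channel[p] >> pl) & 1 for p, pl in zip(positions, planes)]
```

Round-tripping with the same key recovers the bits; without the key, an observer cannot tell which pixels or bit planes carry payload, which is the security argument behind randomizing the embedding positions.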



S. Benabied, A. Zitouni and M. Djoudi, "A Cloud Security Framework Based on Trust Model and Mobile Agent," Cloud Technologies and Applications (CloudTech), 2015 International Conference on, Marrakech, 2015, pp. 1-8. doi: 10.1109/CloudTech.2015.7336962

Abstract: Cloud computing as a potential paradigm offers tremendous advantages to enterprises. With cloud computing, time to market is reduced, computing capabilities are augmented, and computing power is virtually limitless. Usually, to use the full power of cloud computing, cloud users have to rely on an external cloud service provider to manage their data. Nevertheless, the management of data and services is probably not fully trustworthy. Hence, data owners are uncomfortable placing their sensitive data outside their own system, i.e., in the cloud. Bringing transparency, trustworthiness and security into the cloud model, in order to fulfill clients' requirements, is still an ongoing effort. To achieve this goal, our paper introduces a two-level security framework: Cloud Service Provider (CSP) and Cloud Service User (CSU). Each level is responsible for a particular security task. The CSU level includes a proxy agent and a trust agent, dealing with the first verification. Then a second verification is performed at the CSP level. The framework incorporates a trust model to monitor users' behaviors. The use of mobile agents exploits their intrinsic features such as mobility, deliberate localization and secure communication channel provision. This model aims to protect users' sensitive information from other internal or external users and hackers. Moreover, it can detect policy breaches and notify users so that they can take the necessary actions when malicious access or malicious activity occurs.

Keywords: cloud computing; mobile agents; security of data; trusted computing; CSP; CSU; cloud computing; cloud security framework; cloud service provider; cloud service user; data management; mobile agent; policy breach detection; proxy agent; trust agent; trust model; two levels security framework; Cloud computing; Companies; Computational modeling; Mobile agents; Monitoring; Security; Servers; Cloud Computing Security; Cloud computing; Mobile Agent; Security and Privacy; Trust; Trust Model; cloud service provider (ID#: 16-9443)



F. Akram and R. P. Rustagi, "An Efficient Approach Towards Privacy Preservation and Collusion Resistance Attendance System," MOOCs, Innovation and Technology in Education (MITE), 2015 IEEE 3rd International Conference on, Amritsar, 2015, pp. 41-45. doi: 10.1109/MITE.2015.7375285

Abstract: Every organization, whether an enterprise or an academic institute, should maintain proper documentation of staff or student attendance for its effective functioning. Generally there are two methods of taking attendance in a classroom: calling out students' names one by one, or taking students' signatures on paper. However, each of these strategies is time consuming and inefficient; moreover, proxies can easily be given by students if the class is large, since it is not possible to verify each student in person. A good amount of time is wasted in taking attendance, and even then attendance is often not marked properly. Proxy attendance in the classroom is a perpetual problem that has to be addressed. Therefore an attendance management system is required which not only authenticates the student but also detects proxies. In this paper we combine privacy preservation and collusion resistance to propose an attendance system which uses an Android application to mark students' attendance in the classroom and detect proxies given by students. It exploits Bluetooth technology to ensure the student is present in the class itself rather than in the canteen or the library. Based on the attendance given by students, a trust factor is assigned to each student to determine how trustworthy that student is.

Keywords: Bluetooth; data privacy; educational institutions; Bluetooth technology; android application; collusion resistance attendance management system; perpetual problem; privacy preservation; student attendance marking; student authentication; student proxy detection; student trustworthiness; trust factor; Bluetooth; Databases; Fingerprint recognition; Organizations; Privacy; Servers; Smart phones; Bluetooth Scanning; Collusion Avoidance; Privacy; Trust Factor (ID#: 16-9444)



J. Classen, J. Braun, F. Volk, M. Hollick, J. Buchmann and M. Muhlhauser, "A Distributed Reputation System for Certification Authority Trust Management," Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 1349-1356. doi: 10.1109/Trustcom.2015.529

Abstract: In the current Web Public Key Infrastructure (Web PKI), few central instances have the power to make trust decisions. From a system's perspective, it has the side effect that every Certification Authority (CA) becomes a single point of failure (SPOF). In addition, trust is not an individual matter per user, which makes trust decisions hard to revise. Hence, we propose a method to leverage Internet users and thus distribute CA trust decisions. However, the average user is unable to manually decide which incoming TLS connections are trustworthy and which are not. Therefore, we overcome this issue with a distributed reputation system that facilitates sharing trust opinions while preserving user privacy. We assess our methodology using real-world browsing histories. Our results exhibit a significant attack surface reduction with respect to the current Web PKI, and at the same time we only introduce a minimal overhead.

Keywords: Internet; data privacy; decision making; public key cryptography; trusted computing; CA trust decision; Internet users; SPOF; TLS connections; Web PKI; Web public key infrastructure; attack surface reduction; certification authority trust management; distributed reputation system; single point of failure; trust decision making; trust opinion sharing; user privacy preservation; History; Internet; Peer-to-peer computing; Privacy; Protocols; Routing; Security; Web PKI; distributed system; trust management (ID#: 16-9445)



C. L. Claiborne, C. Ncube and R. Dantu, "Random Anonymization of Mobile Sensor Data: Modified Android Framework," Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 182-184. doi: 10.1109/ISI.2015.7165968

Abstract: With the increasing ability to accurately classify activities of mobile users from what was once viewed as innocuous mobile sensor data, the risk of users compromising their privacy has risen exponentially. Currently, mobile owners cannot control how various applications handle the privacy of their sensor data, or even determine if a service provider is adversarial or trustworthy. To address these privacy concerns, third party applications have been designed to allow mobile users to have control over the data that is sent to service providers. However, these applications require users to set flags and parameters that place restrictions on the anonymized or real sensor data that is sent to the requestor. Therefore, in this paper, we introduce a new framework, RANDSOM, that moves the decision-making from the application level to the operating system level.

Keywords: data privacy; mobile computing; smart phones; telecommunication security; RANDSOM framework; application level; mobile sensor data; mobile users; modified Android framework; operating system level; privacy concerns; random anonymization; sensor data privacy; service providers; third party applications; Accelerometers; Data models; Data privacy; Hidden Markov models; Mobile communication; Privacy; Smart phones; Android; RANDSOM; anonymized; pervasive; privacy; provider; smart phone (ID#: 16-9446)



K. Thakker, Chung-Horng Lung and P. Morde, "Secure and Optimal Content-centric Networking Caching Design," Trustworthy Systems and Their Applications (TSA), 2015 Second International Conference on, Hualien, 2015, pp. 36-43. doi: 10.1109/TSA.2015.17

Abstract: The growing demand for content, and the growing size of that content, make today's Internet architecture inefficient. This host-centric model does not seem effective at meeting current communication needs, where users focus on desired content. As a result, a translation between content information and the networking domain must take place, typically consisting of the establishment of a delivery path between the content provider and the content consumer. This translation is generally an inefficient constraint, as data location and data popularity are neglected, which leads to over-consumption of network resources. The increasing demand for highly scalable and efficient distribution of content has motivated the development of future Internet architectures based on named data objects. Currently, Content Centric Networking (CCN) is gaining attention as the future Internet architecture, where the contents themselves are the primary focus rather than the location of the content. This paper provides an insight into the caching management policies currently used for large-file caching, along with our proposed approach and the justification and validation behind the idea of designing the best caching strategy in CCN. However, caching policies can be misused if attackers use the cache as storage to make their own content available for attacks or privacy leaks. We conclude with the need for security mechanisms for protecting the cache and the security measures to prevent any misuse of it.

Keywords: Internet; cache storage; data privacy; security of data; CCN; Internet architecture; host centric model; named data objects; network resource over-consumption; optimal content-centric networking caching design; privacy leaks; secure content-centric networking caching design; Computer architecture; Computers; Internet; Mathematical model; Privacy; Routing protocols; Security; Content delivery networking (CDN); Content-centric networking (CCN); caching; peer-assisted content delivery; software defined networking (SDN) (ID#: 16-9447)



J. Bushey, "Trustworthy Citizen-Generated Images and Video on Social Media Platforms," System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 1553-1564. doi: 10.1109/HICSS.2015.189

Abstract: The convergence of digital cameras into mobile phones with Internet connectivity and the proliferation of social media platforms for accessing, sharing, managing and storing digital images and videos has transformed news reportage and provided the opportunity for citizen journalists to capture and disseminate visual documentation, which shapes contemporary events, informs future decisions and over time, becomes part of the historical record and societal memory. Or does it? What are the obstacles to ongoing access and long-term preservation of citizen-generated content in social media platforms? This paper provides a multidisciplinary approach to exploring the trustworthiness of online content, utilizing literature from the fields of journalism and the law, as well as findings from archival studies on record-making and recordkeeping in the digital environment. Recommendations to citizen journalists are provided to assist in the capture and storage of trustworthy digital images in social media platforms.

Keywords: Internet; cameras; mobile handsets; social networking (online); trusted computing; Internet connectivity; digital cameras; mobile phones; social media platforms; trustworthy citizen-generated images; trustworthy citizen-generated video; visual documentation; Digital images; Law; Media; Privacy; Reliability; Visualization; Authenticity; Citizen Journalism; Digital Images; Legal Evidence; Social Media; Trustworthiness (ID#: 16-9448)



M. Javanmard, M. A. Salehi and S. Zonouz, "TSC: Trustworthy and Scalable Cytometry," High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1356-1360. doi: 10.1109/HPCC-CSS-ICESS.2015.125

Abstract: Accurate flow cytometry analyses for disease diagnosis purposes require powerful computational and storage resources that are rarely available in clinical settings. The emerging high-performance cloud computing technologies could potentially address the above-mentioned scalability challenge; however, potentially untrusted cloud infrastructures increase the security and privacy concerns significantly, as attackers may gain knowledge about the patient identity and medical information and affect the consequent course of treatment. In this paper, we present TSC, a trustworthy, scalable cloud-based solution to provide remote cytometry analysis capabilities. TSC enables medical laboratories to upload the acquired high-frequency raw measurements to the cloud for remote cytometry analysis with high-confidence data security guarantees. In particular, using fundamental cryptographic security solutions, such as the trusted platform module framework, TSC eliminates any possibility of unauthorized sensitive patient data exfiltration to untrusted parties, e.g., malicious or compromised cloud providers. Our evaluation results show that TSC effectively facilitates scalable and efficient disease diagnoses while preserving patient privacy and treatment correctness.

Keywords: cloud computing; cryptography; data privacy; diseases; medical information systems; patient diagnosis; patient treatment; trusted computing; TSC; computational resources; disease diagnosis; flow cytometry analysis; fundamental cryptographic security solutions; high-confidence data security; high-frequency raw measurements; high-performance cloud computing technologies; medical information; patient privacy; patient treatment correctness; remote cytometry analysis; remote cytometry analysis capabilities; scalability challenge; storage resources; trusted platform module framework; trustworthy scalable cloud-based solution; unauthorized sensitive patient data exfiltration; untrusted cloud infrastructures; untrusted parties; Cloud computing; Cryptography; Diseases; Electrodes; Proteins; Sensors; Cloud computing; Cytometry; Security; Trusted Platform Module (TPM) (ID#: 16-9449)



F. Schuster et al., "VC3: Trustworthy Data Analytics in the Cloud Using SGX," Security and Privacy (SP), 2015 IEEE Symposium on, San Jose, CA, 2015, pp. 38-54. doi: 10.1109/SP.2015.10

Abstract: We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop: VC3's average runtime overhead is negligible for its base security guarantees, 4.5% with write integrity and 8% with read/write integrity.

Keywords: cloud computing; data analysis; data integrity; trusted computing; SGX; TCB; VC3; average runtime overhead; base security guarantees; cloud; hypervisor; memory regions; read-write integrity; region self-integrity invariants; secure distributed MapReduce computations; trustworthy data analytics; unmodified Hadoop; Encryption; Operating systems; Program processors; Protocols; Virtual machine monitors (ID#: 16-9450)



Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.