International Conferences: Cloud Engineering (IC2E), 2015 Arizona

SoS Newsletter- Advanced Book Block




The 2015 IEEE International Conference on Cloud Engineering (IC2E) was held 9-13 March 2015 in Tempe, Arizona. The conference describes cloud computing as

“a new paradigm for the use and delivery of information technology (IT), including on-demand access, economies of scale, and dynamic sourcing options. In the cloud context, a wide range of IT resources and capabilities, including servers, networking, storage, middleware, data, security, applications, and business processes, are available as services enabled for rapid provisioning, flexible pricing, elastic scaling, and resilience. These new forms of IT services are challenging conventional wisdom and practices. Fully reaping the benefits of cloud computing calls for holistic treatment of key technical and business issues, as well as for engineering methodology that draws upon innovations from diverse areas of computer science and business informatics.”

The conference home page is available at:

Articles cited here are deemed of interest to the Cyber Physical Systems Science of Security community.


Youngchoon Park, "Connected Smart Buildings, a New Way to Interact with Buildings," Cloud Engineering (IC2E), 2015 IEEE International Conference on, p. 5, 9-13 March 2015. doi: 10.1109/IC2E.2015.57
Abstract: Summary form only given. Devices, people, information and software applications rarely live in isolation in modern building management. For example, networked sensors that monitor the performance of a chiller are common, and collected data are delivered to building automation systems to optimize energy use. Detected possible failures are also handed to facility management staff for repairs. Physical and cyber security services have to be incorporated to prevent improper access to not only HVAC (Heating, Ventilation, Air Conditioning) equipment but also control devices. Harmonizing these connected sensors, control devices, equipment and people is key to providing more comfortable, safe and sustainable buildings. Nowadays, devices with embedded intelligence and communication capabilities can interact with people directly. Traditionally, a few selected people (e.g., facility managers in the building industry) have access to and program the device with a fixed operating schedule, while the device has very limited connectivity to its operating environment and context. Modern connected devices will learn and interact with users and other connected things. This is a fundamental shift in communication, from unidirectional to bidirectional. A manufacturer will learn how its products and features are being accessed and utilized. An end user, or a device acting on behalf of a user, can interact and communicate with a service provider or a manufacturer without going through a distributor, on an almost real-time basis. This will require different business strategies and product development behaviors to serve connected customers' demands. Connected things produce enormous amounts of data that raise many questions and technical challenges in data management, analysis and associated services. In this talk, we brief some of the challenges that we have encountered in developing connected building solutions and services. More specifically, we present (1) semantic interoperability requirements among smart sensors, actuators, lighting, security and control and business applications, (2) engineering challenges in managing massively large time-sensitive multimedia data in a cloud at global scale, and (3) security and privacy concerns.
Keywords: HVAC; building management systems; intelligent sensors; HVAC; actuators; building automation systems; building management; business strategy; chiller performance; connected smart buildings; control devices; cyber security services; data management; facility management staffs; heating-ventilation-air conditioning equipment; lighting; networked sensors; product development behaviors; service provider; smart sensors; time sensitive multimedia data; Building automation; Business; Conferences; Intelligent sensors; Security; Building Management; Cloud; Internet of Things (ID#: 15-5429)


Singh, J.; Pasquier, T.F.J.-M.; Bacon, J.; Eyers, D., "Integrating Messaging Middleware and Information Flow Control," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 54-59, 9-13 March 2015. doi: 10.1109/IC2E.2015.13
Abstract: Security is an ongoing challenge in cloud computing. Currently, cloud consumers have few mechanisms for managing their data within the cloud provider's infrastructure. Information Flow Control (IFC) involves attaching labels to data, to govern its flow throughout a system. We have worked on kernel-level IFC enforcement to protect data flows within a virtual machine (VM). This paper makes the case for, and demonstrates the feasibility of an IFC-enabled messaging middleware, to enforce IFC within and across applications, containers, VMs, and hosts. We detail how such middleware can integrate with local (kernel) enforcement mechanisms, and highlight the benefits of separating data management policy from application/service-logic.
Keywords: cloud computing; data protection; middleware; security of data; virtual machines; VM; application logic; cloud computing; cloud consumers; cloud provider infrastructure; data flow protection; data management policy; information flow control; kernel enforcement mechanisms; kernel-level IFC enforcement; local enforcement mechanisms; messaging middleware integration; service-logic; virtual machine; Cloud computing; Context; Kernel; Runtime; Security; Servers; Information Flow Control; cloud computing; distributed systems; middleware; policy; security (ID#: 15-5430)
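For readers new to label-based IFC, the core check such middleware enforces can be sketched in a few lines. This is a simplified illustration of subset-based secrecy-label checking, not the authors' implementation; all names are ours.

```python
# Simplified label-based IFC check: a message may flow from sender to
# receiver only if every secrecy tag on the data is also held by the
# receiver, so tagged data cannot leak to unauthorized components.

def can_flow(sender_secrecy: set, receiver_secrecy: set) -> bool:
    """Flow is permitted when the sender's secrecy labels are a
    subset of the receiver's."""
    return sender_secrecy <= receiver_secrecy

def deliver(message, sender_labels: set, receiver_labels: set):
    """Middleware-level enforcement: suppress messages whose labels
    the receiver is not entitled to read."""
    if can_flow(sender_labels, receiver_labels):
        return message
    return None  # flow denied, message dropped
```

For example, data tagged {"patient-A"} may reach a service holding {"patient-A", "audit"}, but not an untagged analytics service.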


Routray, R., "Cloud Storage Infrastructure Optimization Analytics," Cloud Engineering (IC2E), 2015 IEEE International Conference on, p. 92, 9-13 March 2015. doi: 10.1109/IC2E.2015.83
Abstract: Summary form only given. Emergence and adoption of cloud computing have become widely prevalent given the value proposition it brings to an enterprise in terms of agility and cost effectiveness. Big data analytical capabilities (specifically, treating storage/system management as a big data problem for a service provider) using cloud delivery models are defined as Analytics as a Service or Software as a Service. This service simplifies obtaining useful insights from an operational enterprise data center, leading to cost and performance optimizations. Software-defined environments decouple the control planes from the data planes that were often vertically integrated in traditional networking or storage systems. The decoupling between the control planes and the data planes enables opportunities for improved security, resiliency and IT optimization in general. This talk describes our novel approach of hosting the systems management platform (a.k.a. control plane) in the cloud, offered to enterprises in a Software as a Service (SaaS) model. Specifically, this presentation focuses on the analytics layer, with the SaaS paradigm enabling data centers to visualize, optimize and forecast infrastructure via a simple capture, analyze and govern framework. At the core, it uses big data analytics to extract actionable insights from system management metrics data. Our system is developed in research and deployed across customers, where the core focus is on agility, elasticity and scalability of the analytics framework. We demonstrate a few system/storage management analytics case studies showing cost and performance optimization for both the cloud consumer and the service provider. Actionable insights generated from the analytics platform are implemented in an automated fashion via an OpenStack-based platform.
Keywords: cloud computing; data analysis; optimisation; Analytics as a Service; OpenStack based platform; SaaS model; Software as a Service; cloud computing; cloud delivery models; cloud storage infrastructure optimization analytics; data analytical capabilities; data analytics; data planes; management metric data system; management platform system; operational enterprise data center; performance optimizations; software defined environments; value proposition; Big data; Cloud computing; Computer science; Conferences; Optimization; Software as a service; Storage management (ID#: 15-5431)


Strizhov, M.; Ray, I., "Substring Position Search over Encrypted Cloud Data Using Tree-Based Index," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 165-174, 9-13 March 2015. doi: 10.1109/IC2E.2015.33
Abstract: Existing Searchable Encryption (SE) solutions are able to handle simple boolean search queries, such as single or multi-keyword queries, but cannot handle substring search queries over encrypted data that also involve identifying the position of the substring within the document. These types of queries are relevant in areas such as searching DNA data. In this paper, we propose a tree-based Substring Position Searchable Symmetric Encryption (SSP-SSE) to overcome the existing gap. Our solution efficiently finds occurrences of a substring over encrypted cloud data. We formally define the leakage functions and security properties of SSP-SSE. Then, we prove that the proposed scheme is secure against chosen-keyword attacks that involve an adaptive adversary. Our analysis demonstrates that SSP-SSE introduces very low overhead on computation and storage.
Keywords: cloud computing; cryptography; query processing; trees (mathematics); DNA data; SSP-SSE; adaptive adversary; boolean search queries; chosen-keyword attacks; cloud data; leakage functions; multikeyword queries; security properties; single keyword queries; substring position search; substring position searchable symmetric encryption; tree-based index; Cloud computing; Encryption; Indexes; Keyword search; Probabilistic logic; cloud computing; position heap tree; searchable symmetric encryption; substring position search (ID#: 15-5432)
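The flavor of position-aware substring search can be illustrated with a plaintext k-gram position index; SSP-SSE builds a comparable tree-based index over encrypted data, and the encryption layer is omitted here entirely. This sketch and its names are ours, not the paper's.

```python
# Plaintext stand-in for a position index: map every k-gram of the
# document to the offsets where it occurs, then answer substring
# queries with their positions (as needed, e.g., for DNA search).

from collections import defaultdict

def build_position_index(text: str, k: int = 3):
    index = defaultdict(list)
    for i in range(len(text) - k + 1):
        index[text[i:i + k]].append(i)
    return index

def find_positions(index, text: str, query: str, k: int = 3):
    """Look up candidate offsets via the query's first k-gram,
    then verify the full match at each candidate."""
    candidates = index.get(query[:k], [])
    return [p for p in candidates if text.startswith(query, p)]
```

On the DNA-style string "GATTACAGATT", searching for "GATT" returns the offsets 0 and 7.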


Qingji Zheng; Shouhuai Xu, "Verifiable Delegated Set Intersection Operations on Outsourced Encrypted Data," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 175-184, 9-13 March 2015. doi: 10.1109/IC2E.2015.38
Abstract: We initiate the study of the following problem: Suppose Alice and Bob would like to outsource their encrypted private data sets to the cloud, and they also want to conduct the set intersection operation on their plaintext data sets. The straightforward solution for them is to download their outsourced cipher texts, decrypt the cipher texts locally, and then execute a commodity two-party set intersection protocol. Unfortunately, this solution is not practical. We therefore motivate and introduce the novel notion of Verifiable Delegated Set Intersection on outsourced encrypted data (VDSI). The basic idea is to delegate the set intersection operation to the cloud, while (i) not giving the decryption capability to the cloud, and (ii) being able to hold the misbehaving cloud accountable. We formalize security properties of VDSI and present a construction. In our solution, the computational and communication costs on the users are linear to the size of the intersection set, meaning that the efficiency is optimal up to a constant factor.
Keywords: cryptographic protocols; set theory; VDSI; encrypted private data sets; intersection protocol; outsourced cipher texts; outsourced encrypted data; plaintext data sets; set intersection operation; verifiable delegated set intersection operations; Cloud computing; Encryption; Gold; Polynomials; Protocols; outsourced encrypted data; verifiable outsourced computing; verifiable set intersection (ID#: 15-5433)


Berger, S.; Goldman, K.; Pendarakis, D.; Safford, D.; Valdez, E.; Zohar, M., "Scalable Attestation: A Step Toward Secure and Trusted Clouds," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 185-194, 9-13 March 2015. doi: 10.1109/IC2E.2015.32
Abstract: In this work we present Scalable Attestation, a method which combines both secure boot and trusted boot technologies and extends them up into the host, its programs, and the guest's operating system and workloads, to both detect and prevent integrity attacks. Anchored in hardware, this integrity appraisal and attestation protects persistent data (files) from remote attack, even if the attack is root privileged. As an added benefit of a hardware-rooted attestation, we gain a simple hardware-based geolocation attestation to help enforce regulatory requirements. This design is implemented in multiple cloud test beds based on the QEMU/KVM hypervisor, OpenStack, and OpenAttestation, and is shown to provide significant additional integrity protection at negligible cost.
Keywords: cloud computing; operating systems (computers);security of data; trusted computing; Open Attestation; Open Stack; QEMU/KVM hypervisor; cloud test beds; guest operating system; hardware based geolocation attestation; hardware rooted attestation; integrity attack detection; integrity attack prevention; integrity protection; regulatory requirements; scalable attestation; secure boot; secure clouds; trusted boot technologies; trusted clouds; Appraisal; Hardware; Kernel; Linux; Public key; Semiconductor device measurement; Attestation; Integrity; Security (ID#: 15-5434)
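The measurement chain that anchors such attestation in hardware follows the TPM's extend operation, which can be sketched as follows. This is a simplified model: real TPMs use fixed-size platform configuration registers and bank-specific hash algorithms, and the function names here are ours.

```python
# TPM-style measurement chain: each loaded component is hashed and
# folded into a running register value, so the final value commits
# to the exact sequence of components.

import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """Extend operation: new PCR = H(old PCR || H(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def attest(components) -> bytes:
    pcr = b"\x00" * 32  # registers start zeroed at boot
    for c in components:
        pcr = extend(pcr, c)
    return pcr
```

Changing any component, or merely reordering them, yields a different final value, which a remote verifier compares against a known-good reference.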


Kanstren, T.; Lehtonen, S.; Savola, R.; Kukkohovi, H.; Hatonen, K., "Architecture for High Confidence Cloud Security Monitoring," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 195-200, 9-13 March 2015. doi: 10.1109/IC2E.2015.21
Abstract: Operational security assurance of a networked system requires providing constant and up-to-date evidence of its operational state. In a cloud-based environment we deploy our services as virtual guests running on external hosts. As this environment is not under our full control, we have to find ways to provide assurance that the security information provided from this environment is accurate, and our software is running in the expected environment. In this paper, we present an architecture for providing increased confidence in measurements of such cloud-based deployments. The architecture is based on a set of deployed measurement probes and trusted platform modules (TPM) across both the host infrastructure and guest virtual machines. The TPM are used to verify the integrity of the probes and measurements they provide. This allows us to ensure that the system is running in the expected environment, the monitoring probes have not been tampered with, and the integrity of measurement data provided is maintained. Overall this gives us a basis for increased confidence in the security of running parts of our system in an external cloud-based environment.
Keywords: cloud computing; security of data; virtual machines; TPM; external cloud-based environment; external hosts; guest virtual machines; high confidence cloud security monitoring; host infrastructure; measurement probes; networked system; operational security assurance; operational state; trusted platform modules; Computer architecture; Cryptography; Monitoring; Probes; Servers; Virtual machining; TPM; cloud; monitoring; secure element; security assurance (ID#: 15-5435)
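One ingredient of the architecture — checking that a monitoring probe has not been tampered with before trusting its measurements — can be sketched as a hash comparison against a reference value. In the paper this reference is protected by a TPM; here a plain dictionary stands in for illustration, and all names are ours.

```python
# Probe integrity check: compare the hash of a probe's binary
# against a reference value recorded at deployment time.

import hashlib

# Reference store (TPM-protected in the real architecture).
REFERENCE_HASHES = {
    "net-probe": hashlib.sha256(b"probe-binary-v1").hexdigest(),
}

def probe_is_trusted(name: str, binary: bytes) -> bool:
    """Trust a probe only if its binary hashes to the recorded value."""
    expected = REFERENCE_HASHES.get(name)
    actual = hashlib.sha256(binary).hexdigest()
    return expected is not None and expected == actual
```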


Calyam, P.; Seetharam, S.; Homchaudhuri, B.; Kumar, M., "Resource Defragmentation Using Market-Driven Allocation in Virtual Desktop Clouds," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 246-255, 9-13 March 2015. doi: 10.1109/IC2E.2015.37
Abstract: Similar to memory or disk fragmentation in personal computers, emerging "virtual desktop cloud" (VDC) services experience the problem of data center resource fragmentation, which occurs due to on-the-fly provisioning of virtual desktop (VD) resources. Irregular resource holes due to fragmentation lead to sub-optimal VD resource allocations and cause: (a) decreased user quality of experience (QoE), and (b) increased operational costs for VDC service providers. In this paper, we address this problem by developing a novel, optimal "Market-Driven Provisioning and Placement" (MDPP) scheme that is based upon distributed optimization principles. The MDPP scheme channels the inherent distributed nature of the resource allocation problem by capturing VD resource bids via a virtual market to explore soft spots in the problem space, and consequently defragments a VDC through cost-aware utility-maximal VD re-allocations or migrations. Through extensive simulations of VD request allocations to multiple data centers for diverse VD application and user QoE profiles, we demonstrate that our MDPP scheme outperforms existing schemes that are largely based on centralized optimization principles. Moreover, the MDPP scheme can achieve high VDC performance and scalability, measurable in terms of a 'Net Utility' metric, even when VD resource location constraints are imposed to meet orthogonal security objectives.
Keywords: cloud computing; computer centres; microcomputers; quality of experience; resource allocation; MDPP scheme; VD request allocation simulations; VD resource on-the-fly provisioning; VDC service providers; centralized optimization principles; cost-aware utility-maximal VD re-allocations; data center resource fragmentation; disk fragmentation; distributed optimization principles; irregular resource holes; market-driven allocation; market-driven provisioning and placement scheme; memory fragmentation; multiple data centers; net utility metric; operational costs; orthogonal security; personal computers; sub-optimal VD resource allocation; user QoE profiles; user quality of experience; virtual desktop clouds services; Bandwidth; Joints; Measurement; Optimization; Resource management; Scalability; Virtual machining (ID#: 15-5436)


Pasquier, T.F.J.-M.; Singh, J.; Bacon, J., "Information Flow Control for Strong Protection with Flexible Sharing in PaaS," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 279-282, 9-13 March 2015. doi: 10.1109/IC2E.2015.64
Abstract: The need to share data across applications is becoming increasingly evident. Current cloud isolation mechanisms focus solely on protection, such as containers that isolate at the OS-level, and virtual machines that isolate through the hypervisor. However, by focusing rigidly on protection, these approaches do not provide for controlled sharing. This paper presents how Information Flow Control (IFC) offers a flexible alternative. As a data-centric mechanism it enables strong isolation when required, while providing continuous, fine grained control of the data being shared. An IFC-enabled cloud platform would ensure that policies are enforced as data flows across all applications, without requiring any special sharing mechanisms.
Keywords: cloud computing; data protection; operating systems (computers); virtual machines; IFC-enabled cloud platform; OS-level; PaaS; cloud isolation mechanisms; data-centric mechanism; fine grained data control; flexible data sharing mechanism; hypervisor; information flow control; virtual machines; Cloud computing; Computers; Containers; Context; Kernel; Security (ID#: 15-5437)


Tawalbeh, L.; Haddad, Y.; Khamis, O.; Aldosari, F.; Benkhelifa, E., "Efficient Software-Based Mobile Cloud Computing Framework," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 317-322, 9-13 March 2015. doi: 10.1109/IC2E.2015.48
Abstract: This paper proposes an efficient software-based data possession framework for mobile cloud computing. The proposed design utilizes the characteristics of two frameworks. The first is a provable data possession design built for resource-constrained mobile devices that takes advantage of trusted computing technology; the second is a lightweight resilient storage outsourcing design for mobile cloud computing systems. Our software-based framework combines the strengths of both frameworks to gain better performance and security. The evaluation and comparison results showed that our design has better flexibility and efficiency than other related frameworks.
Keywords: cloud computing; data handling; mobile computing; outsourcing; resource constrained mobile devices; software based data possession mobile cloud computing framework; software based framework; storage outsourcing design; trusted computing technology; Cloud computing; Computational modeling; Encryption; Mobile communication; Mobile handsets; Servers; Mobile Cloud Computing; Security; Software Defined Storage; Software Defined Systems; Trusted Cloud Computing (ID#: 15-5438)


Slominski, A.; Muthusamy, V.; Khalaf, R., "Building a Multi-tenant Cloud Service from Legacy Code with Docker Containers," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 394-396, 9-13 March 2015. doi: 10.1109/IC2E.2015.66
Abstract: In this paper we address the problem of migrating a legacy Web application to a cloud service. We develop a reusable architectural pattern to do so and validate it with a case study of the Beta release of the IBM Bluemix Workflow Service [1] (herein referred to as the Beta Workflow service). It uses Docker [2] containers and a Cloudant [3] persistence layer to deliver a multi-tenant cloud service by re-using a legacy codebase. We are not aware of any literature that addresses this problem by using containers. The Beta Workflow service provides a scalable, stateful, highly available engine to compose services with REST APIs. The composition is modeled as a graph but authored in a Javascript-based domain specific language that specifies a set of activities and control flow links among these activities. The primitive activities in the language can be used to respond to HTTP REST requests, invoke services with REST APIs, and execute Javascript code to, among other uses, extract and construct the data inputs and outputs to external services, and make calls to these services. Examples of workflows that have been built using the service include distributing surveys and coupons to customers of a retail store [1], the management of sales requests between a salesperson and their regional managers, managing the staged deployment of different versions of an application, and the coordinated transfer of jobs among case workers.
Keywords: Java; application program interfaces; cloud computing; specification languages; Beta Workflow service; Cloudant persistence layer; HTTP REST requests;IBM Bluemix Workflow Service; Javascript code; Javascript-based domain specific language; REST API; docker containers; legacy Web application; legacy codebase; multitenant cloud service; reusable architectural pattern; Browsers; Cloud computing; Containers; Engines; Memory management; Organizations; Security (ID#: 15-5439)
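The activities-and-links control-flow model described in the abstract can be illustrated with a toy executor that runs each activity once all its predecessors have completed. This is a minimal sketch under our own naming, not the Bluemix Workflow engine.

```python
# Toy workflow executor: activities are callables, links are
# (predecessor, successor) pairs; an activity runs only once all of
# its predecessors have completed. Cyclic link sets are rejected.

from collections import defaultdict

def run_workflow(activities, links):
    """activities: name -> callable; links: list of (pred, succ)."""
    preds = defaultdict(set)
    for a, b in links:
        preds[b].add(a)
    done, results = set(), {}
    while len(done) < len(activities):
        ready = [n for n in activities if n not in done and preds[n] <= done]
        if not ready:
            raise ValueError("cycle in workflow links")
        for n in ready:
            results[n] = activities[n]()
            done.add(n)
    return results
```

For instance, linking a "survey" activity before a "coupon" activity guarantees the survey step runs first, mirroring the retail-store example above.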


Paul, M.; Collberg, C.; Bambauer, D., "A Possible Solution for Privacy Preserving Cloud Data Storage," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 397-403, 9-13 March 2015. doi: 10.1109/IC2E.2015.103
Abstract: Despite the economic advantages of cloud data storage, many corporations have not yet migrated to this technology. While corporations in the financial sector cite data security as a reason, corporations in other sectors cite privacy concerns for this reluctance. In this paper, we propose a possible solution for this problem inspired by the HIPAA safe harbor methodology for data anonymization. The proposed technique involves using a hash function that uniquely identifies the data and then splitting data across multiple cloud providers. We propose that such a "Good Enough" approach to privacy-preserving cloud data storage is both technologically feasible and financially advantageous. Following this approach addresses concerns about privacy harms resulting from accidental or deliberate data spills from cloud providers. The "Good Enough" method will enable firms to move their data into the cloud without incurring privacy risks, enabling them to realize the economic advantages provided by the pay-per-use model of cloud data storage.
Keywords: cloud computing; data privacy; security of data; HIPAA safe harbor methodology; data anonymization; data security; data splitting; financial sector; good enough approach; multiple cloud providers; pay-per-use model; privacy concerns; privacy preserving cloud data storage; Cloud computing; Data privacy; Indexes; Memory; Privacy; Security; Data Privacy; Cloud; Obfuscation (ID#: 15-5440)
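The hash-and-split idea is concrete enough to sketch: identify a record by its hash, then stripe its bytes across several providers so that no single provider holds the whole record. Function names and the striping scheme are illustrative, not from the paper.

```python
# "Good Enough" sketch: a record is identified by its SHA-256 hash
# and split into interleaved byte fragments, one per cloud provider.

import hashlib

def split_record(record: bytes, n_providers: int):
    """Return a content-derived identifier plus one fragment per
    provider (fragment i holds bytes i, i+n, i+2n, ...)."""
    record_id = hashlib.sha256(record).hexdigest()
    fragments = [record[i::n_providers] for i in range(n_providers)]
    return record_id, fragments

def reassemble(fragments):
    """Interleave the fragments back into the original record."""
    n = len(fragments)
    out = bytearray(sum(len(f) for f in fragments))
    for i, frag in enumerate(fragments):
        out[i::n] = frag
    return bytes(out)
```

A spill at any single provider exposes only a non-contiguous slice of the bytes; real deployments would also encrypt each fragment.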


Mutkoski, S., "National Cloud Computing Principles: Guidance for Public Sector Authorities Moving to the Cloud," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 404-409, 9-13 March 2015. doi: 10.1109/IC2E.2015.104
Abstract: Governments around the world are actively seeking to leverage the many benefits of cloud computing while also ensuring that they manage risks that deployment of the new technologies can raise. While laws and regulations related to the privacy and security of government data may already exist, many were drafted in the "pre-cloud" era and could therefore benefit from an update and revision. This paper explores some of the concepts that should be incorporated into new or amended laws that seek to guide public sector entities as they move their data and workloads to the cloud.
Keywords: cloud computing; legislation; government data; national cloud computing legislation principles; precloud era; public sector authorities; Certification; Cloud computing; Computational modeling; Data privacy; Government; Legislation; Security; Cloud Computing; Public Sector; Regulation and Legislation; Risk Management; Security (ID#: 15-5441)


Pasquier, T.F.J.-M.; Powles, J.E., "Expressing and Enforcing Location Requirements in the Cloud Using Information Flow Control," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 410-415, 9-13 March 2015. doi: 10.1109/IC2E.2015.71
Abstract: The adoption of cloud computing is increasing and its use is becoming widespread in many sectors. As cloud service provision increases, legal and regulatory issues become more significant. In particular, the international nature of cloud provision raises concerns over the location of data and the laws to which they are subject. In this paper we investigate Information Flow Control (IFC) as a possible technical solution to expressing, enforcing and demonstrating compliance of cloud computing systems with policy requirements inspired by data protection and other laws. We focus on geographic location of data, since this is the paradigmatic concern of legal/regulatory requirements on cloud computing and, to date, has not been met with robust technical solutions and verifiable data flow audit trails.
Keywords: cloud computing; data protection; geography; law; IFC; cloud computing; cloud service provision; data protection; geographic data location; information flow control; legal issues; legal/regulatory requirements; location requirement enforcement; location requirement expression; policy requirements; regulatory issues; verifiable data flow audit trails; Cloud computing; Companies; Context; Europe; Law; Security (ID#: 15-5442)
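The geographic policy the authors target can be phrased as a simple label check: data carries the set of jurisdictions in which it may reside, and placement on a host is allowed only if the host's region is among them. A minimal sketch, with names of our own choosing; the paper's IFC label model is considerably richer.

```python
# Location labels as an IFC-style check: data is tagged with the
# jurisdictions it may reside in; placement is allowed only on hosts
# located in one of those jurisdictions.

def placement_allowed(permitted_regions: set, host_region: str) -> bool:
    """Data may be placed on a host only if the host's region is in
    the data's permitted set."""
    return host_region in permitted_regions

def eligible_hosts(permitted_regions: set, hosts: dict):
    """hosts maps host name -> region; return the hosts on which the
    data may lawfully be stored."""
    return [h for h, region in hosts.items()
            if placement_allowed(permitted_regions, region)]
```

For example, data restricted to the EU would be routed only to EU-located hosts, and the denied placements form an auditable trail.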


D'Errico, M.; Pearson, S., "Towards a Formalised Representation for the Technical Enforcement of Privacy Level Agreements," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 422-427, 9-13 March 2015. doi: 10.1109/IC2E.2015.72
Abstract: Privacy Level Agreements (PLAs) are likely to be increasingly adopted as a standardized way for cloud providers to describe their data protection practices. In this paper we propose an ontology-based model to represent the information disclosed in the agreement, turning it into a means that allows software tools to use and further process that information for different purposes, including automated service offering discovery and comparison. A specific usage of the PLA ontology is presented, showing how to link high-level policies to operational policies that are then enforced and monitored. Through this established link, cloud users gain greater assurance that what is expressed in such agreements is actually being met, and can take this information into account when choosing cloud service providers. Furthermore, the created link can be used to enable policy enforcement tools to add semantics to the evidence they produce; this mainly takes the form of logs associated with the specific policy whose execution they evidence. Finally, the ontology model enables interoperability among the tools in charge of enforcing the agreement and monitoring possible violations of its terms.
Keywords: data protection; ontologies (artificial intelligence); open systems; software tools; PLA ontology; cloud providers; data protection practices; formalised representation; high level policies; interoperability; ontology-based model; operational policies; policy enforcement tools; privacy level agreements; software tools; technical enforcement; Data models; Data privacy; Engines; Monitoring; Ontologies; Privacy; Programmable logic arrays; privacy policy; assurance; policy enforcement; Privacy Level Agreement (ID#: 15-5443)


Adelyar, S.H., "Towards Secure Agile Agent-Oriented System Design," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 499-501, 9-13 March 2015. doi: 10.1109/IC2E.2015.95
Abstract: Agile methods are criticized as inadequate for developing secure digital services. Currently, the software research community only partially studies security for agile practices. Our more holistic approach identifies the security challenges and benefits of agile practices that relate to the core "embrace-changes" principle. For this case-study-based research, we consider eXtreme Programming (XP) for a holistic security integration into agile practices.
Keywords: object-oriented programming; security of data; software agents; software prototyping; XP; embrace-change principle; extreme programming; holistic security integration; secure agile agent-oriented system design; secure digital services; software research community; Agile software development; Cloud computing; Context; Planning; Programming; Security; Agile; Embrace-changes; Security; Challenges; Benefits (ID#: 15-5444)


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.