Data Deletion, 2014

SoS Newsletter - Advanced Book Block



The problem of "forgetting," that is, eliminating links and references on the Internet that point to a specific topic or person, is an important issue related to privacy. "Forgetting" is essentially a problem in data deletion, and it has many implications for security and for data structures, including distributed file structures. Of particular interest is the problem of data deletion in the cloud. Articles published in 2014 are cited here.

Reardon, J.; Basin, D.; Capkun, S., "On Secure Data Deletion," Security & Privacy, IEEE, vol. 12, no. 3, pp. 37-44, May-June 2014. doi: 10.1109/MSP.2013.159
Abstract: Secure data deletion is the task of deleting data from a physical medium, such as a hard drive, phone, or blackboard, so that the data is irrecoverable. This irrecoverability distinguishes secure deletion from regular file deletion, which deletes unneeded data only to reclaim resources. Users securely delete data to prevent adversaries from gaining access to it. In this article, we explore approaches to securely delete digital data, describe different adversaries' capabilities, and show how secure deletion approaches can be integrated into systems at different interface levels to protect against specific adversaries.
Keywords: data protection; security of data; adversary access prevention; regular file deletion; secure data deletion; Computer security; Data processing; File systems; Flash memories; Forensics; Hardware; Media; Privacy; Security (ID#: 15-5660)
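
The interface-level point in Reardon et al.'s survey can be illustrated with a minimal user-level sketch (the function name and structure below are ours, not from the article, and assume POSIX file semantics): overwriting a file before unlinking it only achieves irrecoverability when the medium updates data in place.

```python
import os

def overwrite_and_unlink(path, passes=1):
    """Overwrite a file with random bytes, flush it to the medium, then unlink it.

    A user-level approach like this only works when the file system updates
    data in place; journaling, log-structured, and flash media may keep old
    copies, which is why the article examines lower interface levels too.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite past the OS cache
    os.remove(path)
```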


Zhen Mo; Qingjun Xiao; Yian Zhou; Shigang Chen, "On Deletion of Outsourced Data in Cloud Computing," Cloud Computing (CLOUD), 2014 IEEE 7th International Conference on, pp. 344-351, June 27-July 2, 2014. doi: 10.1109/CLOUD.2014.54
Abstract: Data security is a major concern in cloud computing. After clients outsource their data to the cloud, will they lose control of the data? Prior research has proposed various schemes for clients to confirm the existence of their data on the cloud servers, and the goal is to ensure data integrity. This paper investigates a complementary problem: When clients delete data, how can they be sure that the deleted data will never resurface in the future if the clients do not perform the actual data removal themselves? How can they confirm the non-existence of their data when the data is not in their possession? One obvious solution is to encrypt the outsourced data, but this solution has a significant technical challenge because a huge amount of key material may have to be maintained if we allow fine-grained deletion. In this paper, we explore the feasibility of relieving clients from such a burden by outsourcing keys (after encryption) to the cloud. We propose a novel multi-layered key structure, called Recursively Encrypted Red-black Key tree (RERK), that ensures no key material will be leaked, yet the client is able to manipulate keys by performing tree operations in collaboration with the servers. We implement our solution on Amazon EC2. The experimental results show that our solution can efficiently support the deletion of outsourced data in cloud computing.
Keywords: cloud computing; cryptography; data integrity; trees (mathematics); Amazon EC2; RERK; data security; encryption; fine-grained deletion; multilayered key structure; outsourced data deletion; recursively encrypted red-black key tree; Data privacy; Materials; Polynomials; Servers (ID#: 15-5661)
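
The general idea behind key-based assured deletion, of which RERK is a sophisticated instance, can be sketched as "crypto-shredding": encrypt each block under its own key, keep keys apart from ciphertext, and delete by discarding the key. The sketch below is a toy illustration under our own naming; its hash-based keystream is not a vetted cipher, and none of RERK's tree structure or key outsourcing is attempted.

```python
import hashlib
import secrets

def keystream(key, length):
    # Toy counter-mode keystream built from SHA-256; illustration only,
    # not a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class CryptoShredStore:
    """Per-block keys kept by the client; ciphertext lives on the untrusted
    store. 'Deleting' a block means discarding its key, after which the
    ciphertext left behind (even in backups) is irrecoverable."""

    def __init__(self):
        self.keys = {}         # client side
        self.ciphertexts = {}  # stands in for the cloud store

    def put(self, name, data):
        key = secrets.token_bytes(32)
        self.keys[name] = key
        ks = keystream(key, len(data))
        self.ciphertexts[name] = bytes(a ^ b for a, b in zip(data, ks))

    def get(self, name):
        ct = self.ciphertexts[name]
        ks = keystream(self.keys[name], len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

    def assured_delete(self, name):
        del self.keys[name]  # ciphertext may linger; without the key it is useless
```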


Li Chaoling; Chen Yue; Zhou Yanzhou, "A Data Assured Deletion Scheme In Cloud Storage," Communications, China, vol. 11, no. 4, pp. 98-110, April 2014. doi: 10.1109/CC.2014.6827572
Abstract: In order to provide a practicable solution to data confidentiality in cloud storage services, a data assured deletion scheme is proposed that achieves fine-grained access control, resistance to hopping and sniffing attacks, data dynamics, and deduplication. In our scheme, data blocks are encrypted by a two-level encryption approach, in which the control keys are generated from a key derivation tree, encrypted by an All-Or-Nothing algorithm, and then distributed into a DHT network after being partitioned by secret sharing. This guarantees that only authorized users can recover the control keys and then decrypt the outsourced data within an owner-specified data lifetime. Besides confidentiality, data dynamics and deduplication are achieved separately by adjustment of the key derivation tree and by convergent encryption. The analysis and experimental results show that our scheme satisfies its security goal and performs assured deletion at low cost.
Keywords: authorisation; cloud computing; cryptography; storage management; DHT network; all-or-nothing algorithm; cloud storage; convergent encryption; data assured deletion scheme; data confidentiality; data deduplication; data dynamics; fine grained access control; key derivation tree; owner-specified data lifetime; sniffing attack resistance; two-level encryption approach; Artificial neural networks; Encryption; cloud storage; data confidentiality; data dynamics; secure data assured deletion (ID#: 15-5662)
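
The key-distribution step above relies on secret sharing. The simplest variant to illustrate is n-of-n XOR sharing, shown below under our own naming; the paper uses threshold sharing into a DHT network, which this sketch does not attempt.

```python
import secrets

def split_xor(secret, n):
    """n-of-n XOR secret sharing: all n shares are needed to reconstruct;
    any n-1 of them are statistically independent of the secret."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine_xor(shares):
    # XOR all shares together to recover the secret.
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

Discarding any single share makes the shared control key unrecoverable, which is the deletion lever such schemes rely on.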


Zhen Mo; Yan Qiao; Shigang Chen, "Two-Party Fine-Grained Assured Deletion of Outsourced Data in Cloud Systems," Distributed Computing Systems (ICDCS), 2014 IEEE 34th International Conference on, pp. 308-317, June 30-July 3, 2014. doi: 10.1109/ICDCS.2014.39
Abstract: With clients losing direct control of their data, this paper investigates an important problem of cloud systems: When clients delete data, how can they be sure that the deleted data will never resurface in the future if the clients do not perform the actual data removal themselves? How to guarantee inaccessibility of deleted data when the data is not in their possession? Using a novel key modulation function, we design a solution for two-party fine-grained assured deletion. The solution does not rely on any third-party server. Each client only keeps one or a small number of keys, regardless of how big its file system is. The client is able to delete any individual data item in any file without causing significant overhead, and the deletion is permanent - no one can recover already-deleted data, not even after gaining control of both the client device and the cloud server. We validate our design through experimental evaluation.
Keywords: cloud computing; file servers; outsourcing; storage management; already-deleted data; client device; cloud server; cloud systems; data removal; modulation function; outsourced data; third-party server; two-party fine-grained assured deletion; Cryptography; Distributed databases; Modulation; Outsourcing; Radio frequency; Servers (ID#: 15-5663)


Zhangjie Fu; Xinyue Cao; Jin Wang; Xingming Sun, "Secure Storage of Data in Cloud Computing," Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2014 Tenth International Conference on, pp. 783-786, 27-29 Aug. 2014. doi: 10.1109/IIH-MSP.2014.199
Abstract: Cloud storage brings convenient storage of data, but it also brings hidden security issues. Data storage security includes legitimate access to the data stored in the cloud, namely access authorization and authentication; secure data sharing; encryption of stored data to ensure confidentiality, meaning that effective cryptographic data remains accessible while deleted cryptographic data does not; tamper-proof technology to ensure the integrity of the data; and tracking technology to ensure data traceability. This paper focuses on file systems with secure data deletion. We design a file system that supports secure deletion of data. It uses CP-ABE, which supports fine-grained access policies, to encrypt files.
Keywords: cloud computing; cryptography; data integrity; storage management; CP-ABE; access authorization; authentication security data sharing; cloud computing; cloud storage; cryptographic data; data confidentiality; data integrity; data storage security; data traceability; file encryption; file systems; fine-grained access policy; legal access; secure data deletion; stored data encryption; tamper-proof technology; Access control; Cloud computing; Encryption; File systems; Secure storage; access control; data integrity; key manage; secure storage of data; tracking technology (ID#: 15-5664)


Luo Yuchuan; Fu Shaojing; Xu Ming; Wang Dongsheng, "Enable Data Dynamics For Algebraic Signatures Based Remote Data Possession Checking In The Cloud Storage," Communications, China, vol. 11, no. 11, pp. 114-124, Nov. 2014. doi: 10.1109/CC.2014.7004529
Abstract: Cloud storage is one of the main applications of cloud computing. With data services in the cloud, users are able to outsource their data to the cloud, and to access and share their outsourced data from the cloud server anywhere and anytime. However, this new paradigm of data outsourcing services also introduces new security challenges, among which is how to ensure the integrity of the outsourced data. Although cloud storage providers commit to a reliable and secure environment, the integrity of data can still be damaged by human carelessness, hardware/software failures, or attacks from external adversaries. Therefore, it is of great importance for users to audit the integrity of the data they outsource to the cloud. In this paper, we first design an auditing framework for cloud storage and propose an algebraic signature based remote data possession checking protocol, which allows a third party to audit the integrity of the outsourced data on behalf of the users and supports an unlimited number of verifications. We then extend our auditing protocol to support dynamic data operations, including data update, data insertion and data deletion. The analysis and experimental results demonstrate that our proposed schemes are secure and efficient.
Keywords: cloud computing; data integrity; outsourcing; protocols; storage management; algebraic signature based remote data possession checking protocol; auditing framework; auditing protocol; cloud computing; cloud server; cloud storage providers; data deletion; data dynamic operations; data insertion; data outsourcing services; outsourced data integrity; Cloud computing; Data models; Data storage; Galois fields; Protocols; Security; Servers; algebraic signatures; cloud computing; cloud storage; data dynamics; data integrity (ID#: 15-5665)
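
Algebraic signatures of the kind this protocol builds on are linear over the data: the signature of the XOR of two blocks equals the XOR of their signatures, which is what lets an auditor check a combined server response without holding the blocks themselves. The toy version below over GF(2^8) is our own construction for illustration (generator g chosen arbitrarily), not the paper's protocol.

```python
def gf256_mul(a, b):
    # Carry-less multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1.
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def algebraic_signature(block, g=0x03):
    """Toy algebraic signature over GF(2^8): sig(B) = XOR_i (b_i * g^i).
    Because GF multiplication distributes over XOR, the signature is linear:
    sig(B1 xor B2) == sig(B1) xor sig(B2)."""
    sig, power = 0, 1
    for byte in block:
        sig ^= gf256_mul(byte, power)
        power = gf256_mul(power, g)
    return sig
```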


Vanitha, M.; Kavitha, C., "Secured Data Destruction In Cloud Based Multi-Tenant Database Architecture," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-6, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921774
Abstract: Cloud computing falls into two general categories: applications delivered as services, and the hardware and data centers that provide those services [1]. Cloud storage has evolved from a simple storage model into a new service model in which data is managed, maintained, and stored in multiple remote servers for back-up reasons. Cloud platform server clusters run in a network environment and may contain multiple users' data, which may be scattered across different virtual data centers. In a multi-user shared cloud computing platform, users are only logically isolated; data of different users may be stored on the same physical equipment. This equipment can be rapidly provisioned, implemented, scaled up or down, and decommissioned. Current cloud providers do not give their customers control over, or even knowledge of, the provided resources. The data in the cloud is encrypted at rest, in transit and in back-up in multi-tenant storage, and the encryption keys are managed per customer. The data life cycle has several stages: Create, Store, Use, Share, Archive and Destruct. The final stage is often overlooked [2], and it is the most complex stage of data in the cloud. Data retention assurance may be easier for the cloud provider to demonstrate, while data destruction is extremely difficult. When the SLA between the customer and the cloud provider ends, there is today no assurance that the particular customer's data has been completely destroyed in the cloud provider's storage. The proposed method identifies a way to track individual customers' data and their encryption keys, and provides a solution to completely delete the data from the cloud provider's multi-tenant storage architecture. It also ensures deletion of data copies, as more than one copy of the data may be maintained for back-up purposes. A data destruction proof is also provided to the customer, making sure that the owner's data is completely removed.
Keywords: cloud computing; contracts; database management systems; file organisation; private key cryptography; public key cryptography; SLA; cloud computing; data copy deletion; encryption keys; multitenant database architecture; multitenant storage architecture; secured data destruction; Cloud computing; Computer architecture; Computers; Encryption; Informatics; Public key; attribute based encryption; data retention; encryption; file policy (ID#: 15-5666)


Alnemr, R.; Pearson, S.; Leenes, R.; Mhungu, R., "COAT: Cloud Offerings Advisory Tool," Cloud Computing Technology and Science (CloudCom), 2014 IEEE 6th International Conference on, pp. 95-100, 15-18 Dec. 2014. doi: 10.1109/CloudCom.2014.100
Abstract: There is a pressing need to make the differences between cloud offerings more transparent to cloud customers. Examples of properties that vary across cloud service providers (and that are reflected in cloud contracts) include subcontracting, location of data centres, use restriction, applicable law, data backup, encryption, remedies, storage period, monitoring/audits, breach notification, demonstration of compliance, dispute resolution, data portability, law enforcement access and data deletion from servers. In this paper we present our Cloud Offerings Advisory Tool (COAT), which matches user requirements to cloud offers and performs a comparison of these cloud offerings. It makes the non-functional requirements listed above more transparent to cloud customers, offering advice and guidance about the implications and thereby helping the cloud customers choose what is most appropriate.
Keywords: cloud computing; COAT; cloud customers; cloud offerings advisory tool; cloud service providers; nonfunctional requirements; Cloud computing; Contracts; Data privacy; Encryption; Law; Privacy; Accountability; Cloud Computing; Contracts; Legal; Non-functional Requirements; Privacy; Security; Transparency (ID#: 15-5667)
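
The matching step COAT performs can be pictured as filtering offers by their advertised contract properties. The sketch below is a drastic simplification of the tool's matching and advisory logic; the property names and example data are invented, and real COAT also provides advice and guidance rather than a bare filter.

```python
def match_offers(requirements, offers):
    """Keep only the cloud offers whose advertised contract properties
    satisfy every user requirement (exact-match semantics)."""
    return [name for name, props in offers.items()
            if all(props.get(k) == v for k, v in requirements.items())]

# Invented example data: property names mirror contract terms listed above.
offers = {
    "ProviderA": {"data_centre_location": "EU", "encryption": True,
                  "data_deletion_from_servers": True},
    "ProviderB": {"data_centre_location": "US", "encryption": True,
                  "data_deletion_from_servers": False},
}
requirements = {"data_centre_location": "EU", "data_deletion_from_servers": True}
```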


Corena, J.C.; Basu, A.; Nakano, Y.; Kiyomoto, S.; Miyake, Y., "Data Storage on the Cloud under User Control," Cloud Computing Technology and Science (CloudCom), 2014 IEEE 6th International Conference on, pp. 739-742, 15-18 Dec. 2014. doi: 10.1109/CloudCom.2014.113
Abstract: Cloud services provide advantages in terms of service scalability and availability of users' data, but increase concerns about the control that a user has over her own data. These concerns include not just issues related to access to the information itself, but issues about the effective deletion of the information by the cloud in compliance with the user's right to deletion. In this on-going work, we present a mechanism that allows users to control access to and deletion of their information stored on the cloud. Our construction separates the user's content into several encoded pieces most of which are stored by a cloud provider. The remaining encoded pieces are stored by the user and are served directly from the user's infrastructure to the persons interested in viewing the content. The encoding must satisfy the property that without the pieces stored in the user's infrastructure none of the data is revealed. This property is found in several constructions related to secret sharing. We evaluate the practical feasibility of our proposal by developing an image sharing mechanism and simulating the user infrastructure using a single-board computer connected to the home Internet connection of one of the authors.
Keywords: authorisation; cloud computing; data privacy; storage management; cloud services; data storage; image sharing mechanism; secret sharing; user access control; user data privacy; user infrastructure simulation; Cloud computing; Cryptography; Facebook; Manganese; Proposals; Transforms; cloud; privacy; security; storage (ID#: 15-5668)


Jiansheng Wei; Hong Jiang; Ke Zhou; Dan Feng, "Efficiently Representing Membership for Variable Large Data Sets," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 4, pp. 960-970, April 2014. doi: 10.1109/TPDS.2013.66
Abstract: Cloud computing has raised new challenges for the membership representation scheme of storage systems that manage very large data sets. This paper proposes DBA, a dynamic Bloom filter array aimed at representing membership for variable large data sets in storage systems in a scalable way. DBA consists of dynamically created groups of space-efficient Bloom filters (BFs) to accommodate changes in set sizes. Within a group, BFs are homogeneous and the data layout is optimized at the bit level to enable parallel access and thus achieve high query performance. DBA can effectively control its query accuracy by partially adjusting the error rate of the constructing BFs, where each BF only represents an independent subset to help locate elements and confirm membership. Further, DBA supports element deletion by introducing a lazy update policy. We prototype and evaluate our DBA scheme as a scalable fast index in the MAD2 deduplication storage system. Experimental results reveal that DBA (with 64 BFs per group) shows significantly higher query performance than the state-of-the-art approach while scaling up to 160 BFs. DBA is also shown to excel in scalability, query accuracy, and space efficiency by theoretical analysis and experimental evaluation.
Keywords: cloud computing; data handling; data structures; query processing; BF; MAD2 deduplication storage system; data layout; dynamic Bloom filter; membership representation scheme; query accuracy; query performance; storage systems; variable large data sets; Arrays; Distributed databases; Error analysis; Indexes; Peer-to-peer computing; Random access memory; Servers; Bloom filter; Data management; fast index; membership representation (ID#: 15-5669)
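
A single space-efficient Bloom filter, the building block that DBA groups into arrays, can be sketched as follows (class design and parameter choices are ours and purely illustrative). Note that a plain Bloom filter cannot delete elements, which is why DBA introduces a lazy update policy for deletion.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash probes into an m-bit array. Answers are
    'definitely not present' or 'possibly present' (false positives only)."""

    def __init__(self, m_bits=1024, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _probes(self, item):
        # Derive k probe positions from SHA-256 of item plus a probe index.
        for i in range(self.k):
            h = hashlib.sha256(item + i.to_bytes(2, "big")).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._probes(item))
```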


Zhou Lei; Zhaoxin Li; Yu Lei; Yanling Bi; Luokai Hu; Wenfeng Shen, "An Improved Image File Storage Method Using Data Deduplication," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 638-643, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.82
Abstract: Recent years have seen a rapid growth in the number of virtual machines and virtual machine images that are managed to support infrastructure as a service (IaaS). For example, Amazon Elastic Compute Cloud (EC2) has 6,521 public virtual machine images. This creates several challenges in the management of image files in a cloud computing environment. In particular, the large amount of duplicate data that exists in image files consumes significant storage space. To address this problem, we propose an effective image file storage technique using data deduplication with a modified fixed-size block scheme. When a user requests to store an image file, this technique first calculates the fingerprint for the image file, and then compares the fingerprint with the fingerprints in a fingerprint library. If the fingerprint of the image is already in the library, a pointer to the existing fingerprint is used to store this image. Otherwise the image is processed using the fixed-size block image segmentation method. We design a metadata format for image files to organize image file blocks, and a new MD5 index table of image files to reduce their retrieval time. The experiments show that our technique can significantly reduce the transmission time of image files that already exist in storage. The deduplication rate for image groups that have the same operating system version but different versions of software applications reaches about 58%.
Keywords: cloud computing; image segmentation; meta data; visual databases; data deduplication; fingerprint library; fixed-size block image segmentation method; image file blocks; image file fingerprint; image file storage method; metadata format; modified fixed-size block scheme; transmission time reduction; Educational institutions; Fingerprint recognition; Image storage; Libraries; Operating systems; Servers; Virtual machining; cloud computing; data deduplication; image files (ID#: 15-5670)
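
The fingerprint-library lookup described above amounts to content-addressed storage. A minimal sketch (class and attribute names are ours; the paper additionally segments non-duplicate images into fixed-size blocks, which this omits):

```python
import hashlib

class DedupImageStore:
    """Content-addressed image storage: files are indexed by a fingerprint of
    their content, so identical images are stored once and later uploads of
    the same content only add a catalog pointer."""

    def __init__(self):
        self.library = {}  # fingerprint -> content (the fingerprint library)
        self.catalog = {}  # image name -> fingerprint

    def put(self, name, content):
        # MD5 as in the paper: adequate as a dedup index, not a security primitive.
        fp = hashlib.md5(content).hexdigest()
        if fp not in self.library:
            self.library[fp] = content  # genuinely new content: store it
        self.catalog[name] = fp         # duplicates cost only a pointer

    def get(self, name):
        return self.library[self.catalog[name]]
```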


Zhangjie Fu; Lin Xin; Jin Wang; Xingming Sun, "Data Access Control for Multi-authority Cloud Storage Systems," Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2014 Tenth International Conference on, pp. 714-717, 27-29 Aug. 2014. doi: 10.1109/IIH-MSP.2014.184
Abstract: Ciphertext-Policy Attribute-Based Encryption (CP-ABE) is one of the most suitable technologies for data access control in cloud storage systems. This paper first presents Attribute-Based Encryption (ABE), secure deletion, and secret-sharing schemes. We then construct a CP-ABE model using secret-sharing methods to ensure its security. Finally, we propose an improved scheme for Data Access Control for Multi-Authority Cloud Storage Systems (DAC-MACS) to ensure the security of the central authority (CA).
Keywords: authorisation; cloud computing; cryptography; storage management; CP-ABE model; DAC-MACS; central authority; ciphertext-policy attribute-based encryption; data access control for multiauthority cloud storage systems; secret-sharing methods; secret-sharing schemes; secure deletion schemes; Access control; Cloud computing; Computers; Encryption; Sun; CP-ABE; Secret-sharing; Secure deletion (ID#: 15-5671)


Jinbo Xiong; Ximeng Liu; Zhiqiang Yao; Jianfeng Ma; Qi Li; Kui Geng; Chen, P.S., "A Secure Data Self-Destructing Scheme in Cloud Computing," Cloud Computing, IEEE Transactions on, vol. 2, no. 4, pp. 448-458, Oct.-Dec. 2014. doi: 10.1109/TCC.2014.2372758
Abstract: With the rapid development of versatile cloud services, it becomes increasingly common to use cloud services to share data within a friend circle in the cloud computing environment. Since it is not feasible to implement full lifecycle privacy security, access control becomes a challenging task, especially when we share sensitive data on cloud servers. In order to tackle this problem, we propose a key-policy attribute-based encryption with time-specified attributes (KP-TSABE), a novel secure data self-destructing scheme in cloud computing. In the KP-TSABE scheme, every ciphertext is labeled with a time interval while the private key is associated with a time instant. The ciphertext can only be decrypted if both the time instant is in the allowed time interval and the attributes associated with the ciphertext satisfy the key's access structure. KP-TSABE is able to solve some important security problems by supporting a user-defined authorization period and by providing fine-grained access control during the period. The sensitive data will be securely self-destructed after a user-specified expiration time. The KP-TSABE scheme is proved to be secure under the decision l-bilinear Diffie-Hellman inversion (l-Expanded BDHI) assumption. Comprehensive comparisons of the security properties indicate that the KP-TSABE scheme satisfies the security requirements and is superior to other existing schemes.
Keywords: authorisation; cloud computing; data privacy; inverse problems; public key cryptography; access control; cloud computing environment; data self-destructing scheme security; decision l-bilinear Diffie-Hellman inversion; key-policy attribute-based encryption with time-specified attribute KP-TSABE; l-expanded BDHI assumption; lifecycle privacy security; user-defined authorization period; Authorization; Cloud computing; Computer security; Data privacy; Encryption; Sensitive data; assured deletion; cloud computing; fine-grained access control; privacy-preserving; secure self-destructing (ID#: 15-5672)
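
Only the time-interval policy of KP-TSABE can be conveyed without the underlying pairing-based cryptography. The sketch below (class and names are ours) enforces the interval with a plain access check; the real scheme enforces it cryptographically inside the attribute-based encryption, but the intended behavior is the same: outside the authorized interval, the data is effectively self-destructed.

```python
import time

class TimeBoxedItem:
    """Illustrates only the time-interval policy of KP-TSABE: an item carries
    an authorized [not_before, not_after] interval and access is refused
    outside it. This plain check is a stand-in for the scheme's cryptographic
    enforcement, not a description of it."""

    def __init__(self, payload, not_before, not_after):
        self.payload = payload
        self.not_before = not_before
        self.not_after = not_after

    def access(self, now=None):
        now = time.time() if now is None else now
        if not (self.not_before <= now <= self.not_after):
            raise PermissionError("outside the authorized interval")
        return self.payload
```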


Rui Wang; Qimin Peng; Xiaohui Hu, "A Hypergraph-Based Service Dependency Model For Cloud Services," Multisensor Fusion and Information Integration for Intelligent Systems (MFI), 2014 International Conference on, pp. 1-6, 28-29 Sept. 2014. doi: 10.1109/MFI.2014.6997658
Abstract: Cloud computing is a computing paradigm that utilizes existing cloud services as fundamental elements for developing distributed applications, in the so-called "use, not own" manner. A dependency is a relation between services wherein a change to one of the services implies a potential change to the others. In this paper, services are classified into three layers in accordance with different business requirements. Services reside in the static domain, while user applications reside in the dynamic domain; user applications are implemented by choosing services in the business layer and the application layer. A hypergraph-based service model is used to represent the architecture of multi-tenancy applications. Using the properties of hypergraphs, we can address service addition, deletion, replacement, migration and related problems. The model supports extensible software architecture and the adaptive evolution of large-scale complex software systems.
Keywords: business data processing; cloud computing; software architecture; adaptive evolution; application layer; business layer; business requirements; cloud computing paradigm; cloud services; dynamic domain; extensible software architecture; hypergraph-based service dependency model; large scale complex software systems; multitenancy applications; static domain; user applications; Adaptation models; Business; Computational modeling; Computer architecture; Software architecture; Software as a service; Cloud Computing; hypergraph-based service model; software architecture (ID#: 15-5673)


Higai, A.; Takefusa, A.; Nakada, H.; Oguchi, M., "A Study of Effective Replica Reconstruction Schemes at Node Deletion for HDFS," Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on, pp. 512-521, 26-29 May 2014. doi: 10.1109/CCGrid.2014.31
Abstract: Distributed file systems, which manage large amounts of data over multiple commercially available machines, have attracted attention as a management and processing system for big data applications. A distributed file system consists of multiple data nodes and provides reliability and availability by holding multiple replicas of data. Due to system failure or maintenance, a data node may be removed from the system and the data blocks the removed data node held are lost. If data blocks are missing, the access load of the other data nodes that hold the lost data blocks increases, and as a result the performance of data processing over the distributed file system decreases. Therefore, replica reconstruction is an important issue to reallocate the missing data blocks in order to prevent such performance degradation. The Hadoop Distributed File System (HDFS) is a widely used distributed file system. In the HDFS replica reconstruction process, source and destination data nodes for replication are selected randomly. We found that this replica reconstruction scheme is inefficient because data transfer is biased. Therefore, we propose two more effective replica reconstruction schemes that aim to balance the workloads of replication processes. Our proposed replication scheduling strategy assumes that nodes are arranged in a ring and data blocks are transferred based on this one-directional ring structure to minimize the difference of the amount of transfer data of each node. Based on this strategy, we propose two replica reconstruction schemes, an optimization scheme and a heuristic scheme. We have implemented the proposed schemes in HDFS and evaluated them on an actual HDFS cluster. From the experiments, we confirm that the replica reconstruction throughput of the proposed schemes shows a 45% improvement compared to that of the default scheme. We also verify that the heuristic scheme is effective because it shows performance comparable to the optimization scheme and can be more scalable than the optimization scheme.
Keywords: Big Data; file organisation; optimisation; HDFS; Hadoop distributed file system; access load; data transfer; heuristic scheme; node deletion; optimization scheme; replica reconstruction scheme; replication scheduling strategy; Availability; Big data; Data transfer; Distributed databases; Optimization; Structural rings; Throughput; HDFS; distributed file system; heuristic; optimization; reconstruction; replica (ID#: 15-5674)
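
The one-directional ring idea can be sketched by assigning replication tasks to consecutive ring edges instead of random source/destination pairs. This is a toy scheduler under our own naming; the paper's optimization and heuristic schemes balance actual transfer volumes, which this round-robin assignment only approximates.

```python
def ring_schedule(nodes, missing_blocks):
    """Toy version of the one-directional ring strategy: replication tasks go
    to consecutive ring edges (node i sends to node i+1), so transfer load is
    spread evenly rather than chosen at random as in stock HDFS."""
    n = len(nodes)
    schedule = []
    for j, block in enumerate(missing_blocks):
        src = nodes[j % n]
        dst = nodes[(j + 1) % n]  # successor on the one-directional ring
        schedule.append((block, src, dst))
    return schedule
```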


Yusoh, Z.I.M.; Maolin Tang, "Composite SaaS Scaling In Cloud Computing Using A Hybrid Genetic Algorithm," Evolutionary Computation (CEC), 2014 IEEE Congress on, pp. 1609-1616, 6-11 July 2014. doi: 10.1109/CEC.2014.6900614
Abstract: A Software-as-a-Service, or SaaS, can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled, that is, replicated or deleted, to accommodate the user's load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances. On the other hand, when the load is low, some of the instances may need to be deleted to avoid resource underutilisation. Thus, it is important to determine which components are to be scaled such that the performance of the SaaS is still maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. Therefore, a hybrid genetic algorithm is proposed that utilises problem-specific knowledge and explores the best combination of scaling plans for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
Keywords: cloud computing; genetic algorithms; resource allocation; SaaS resource management; application component; cloud computing; composite SaaS component deletion; composite SaaS component replication; composite SaaS component scaling; data component; higher-level functional software; hybrid genetic algorithm; resource underutilisation avoidance; software-as-a-service; user load; Biological cells; Scalability; Servers; Sociology; Software as a service; Statistics; Time factors; Cloud Computing; Clustering; Composite SaaS; Grouping Genetic Algorithm (ID#: 15-5675)


Abdulsalam, S.; Lakomski, D.; Qijun Gu; Tongdan Jin; Ziliang Zong, "Program Energy Efficiency: The Impact Of Language, Compiler And Implementation Choices," Green Computing Conference (IGCC), 2014 International, pp. 1-6, 3-5 Nov. 2014. doi: 10.1109/IGCC.2014.7039169
Abstract: Today, reducing the energy usage of computing systems has become a paramount task, whether they are lightweight mobile devices, complex cloud computing platforms, or large-scale supercomputers. Many existing studies in green computing focus on making the hardware more energy efficient. This is understandable, because software running on low-power hardware will automatically consume less energy. Little work has been done to explore how software developers can play a more proactive role in saving energy by writing greener code. In fact, very few programmers consider energy efficiency when writing code, and even fewer know how to evaluate and improve the energy efficiency of their code. In this paper, we quantitatively study the impact of languages (C/C++/Java/Python), compiler optimization (GNU C/C++ compiler with O1, O2, and O3 flags) and implementation choices (e.g. using malloc instead of new to create dynamic arrays, and using vector vs. array for Quicksort) on the energy efficiency of three well-known programs: Fast Fourier Transform, Linked List Insertion/Deletion and Quicksort. Our experiments show that by carefully selecting an appropriate language, optimization flag and data structure, significant energy can be conserved for solving the same problem with identical input size.
Keywords: data structures; fast Fourier transforms; green computing; power aware computing; program compilers; programming languages; sorting; Quicksort; code energy-efficiency; compiler choices; compiler optimization; complex cloud computing platforms; computing system energy usage reduction; data structure; dynamic arrays; fast Fourier transform; green computing; greener code writing; implementation choices; language choices; large-scale supercomputers; light-weight mobile devices; linked list insertion-deletion; optimization flag; program energy efficiency; software developers; Arrays; Java; Libraries; Optimization; Resource management; Software; Vectors; energy-efficient programming; green computing; software optimization (ID#: 15-5676)
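
The paper's central point, that implementation choice matters even for the same problem and input, can be approximated with runtime as a rough proxy for energy on fixed hardware (the authors measured energy directly; this comparison and code are ours, not theirs):

```python
import timeit

# Two implementations of the same sorting task. On fixed hardware, runtime is
# used here as a rough proxy for energy; the paper measured energy directly.
def sort_builtin(data):
    return sorted(data)  # C-level sort inside the interpreter

def quicksort(data):     # pure-Python recursive implementation
    if len(data) <= 1:
        return data
    pivot = data[len(data) // 2]
    return (quicksort([x for x in data if x < pivot])
            + [x for x in data if x == pivot]
            + quicksort([x for x in data if x > pivot]))

data = list(range(2000, 0, -1))
t_builtin = timeit.timeit(lambda: sort_builtin(data), number=20)
t_python = timeit.timeit(lambda: quicksort(data), number=20)
```

Both produce identical output, yet the builtin path does far less interpreter work per element, illustrating why language and implementation choices translate into energy differences.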


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.