
SoS Newsletter- Advanced Book Block



Locking 2015


In computer science, a lock is a synchronization mechanism designed to enforce a mutual-exclusion or access-control policy on a shared resource. Locks offer some advantages and many disadvantages; to be efficient, they typically require hardware support. For the Science of Security community, locking is relevant to policy-based governance, resilience, cyber-physical systems, and composability. This research was presented in 2015.
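The mutual-exclusion role of a lock can be shown with a minimal sketch using Python's standard threading module (the shared-counter example is ours, for illustration only): four threads increment a shared counter, and holding the lock around each read-modify-write prevents lost updates.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # Without the lock, the read-modify-write below could interleave
        # across threads and lose updates.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no updates are lost while the lock is held
```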

I. Singh, K. Mishra, A. M. Alberti, A. Jara and D. Singh, "A Novel Privacy and Security Framework for the Cloud Network Services," Advanced Communication Technology (ICACT), 2015 17th International Conference on, Seoul, 2015, pp. 363-367. doi: 10.1109/ICACT.2015.7224820

Abstract: This paper presents an overview of security and its issues in cloud computing. Nowadays cloud computing has tremendous usage in many fields such as financial management, communications and collaboration, office productivity suites, accounting applications, customer relationship management, online storage management, and human resources and employment. Owing to the increase in the use of these services by companies, several security issues have emerged, and this challenges cloud computing architectures to secure, protect and process users' data. These services have certain cons such as security, lock-in, lack of control, and reliability. Privacy and security are the major concerns in cloud computing services. In this paper, we have designed a novel secure framework for cloud services, as well as presented a critical analysis of the CCMP (Counter with Cipher Block Chaining Message Authentication Code Protocol) protocol for secure data management of cloud services.

Keywords: cloud computing; computer network security; cryptographic protocols; data privacy; CCMP protocol; cloud computing; cloud network services; counter with cipher block message authentication code protocol; privacy framework; secure data management; security framework; Cloud computing; Encryption; Payloads; Radiation detectors; Servers; CCMP; Cloud Computing; Security; Services (ID#: 16-9471)



I. Singh, K. N. Mishra, A. Alberti, D. Singh and A. Jara, "A Novel Privacy and Security Framework for the Cloud Network Services," Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 301-305. doi: 10.1109/IMIS.2015.93

Abstract: This paper presents an overview of security and its issues in cloud computing. Nowadays cloud computing has tremendous usage in many fields such as financial management, communications and collaboration, office productivity suites, accounting applications, customer relationship management, online storage management, and human resources and employment. Owing to the increase in the use of these services by companies, several security issues have emerged, and this challenges cloud computing architectures to secure, protect and process users' data. These services have certain cons such as security, lock-in, lack of control, and reliability. Privacy and security are the major concerns in cloud computing services. In this paper, we have designed a novel secure framework for cloud services, as well as presented a critical analysis of the CCMP (Counter with Cipher Block Chaining Message Authentication Code Protocol) protocol for secure data management of cloud services.

Keywords: cloud computing; computer network reliability; computer network security; cryptographic protocols; data privacy; message authentication; software architecture; CCMP Protocol; accounting applications; cloud computing architectures; cloud network services; communications-and-collaboration; counter-with-cipher block message authentication code protocol; customer relationship management; employment; financial management; human resource; office productivity suits; online storage management; privacy framework; secure data management; security framework; user data processing; user data protection; Cloud computing; Encryption; Payloads; Radiation detectors; Servers; CCMP; Cloud Computing; Security; Services (ID#: 16-9472)



S. Oh, J.-S. Yang, A. Bianchi and H. Kim, "Devil in a Box: Installing Backdoors in Electronic Door Locks," Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 139-144. doi: 10.1109/PST.2015.7232965

Abstract: Electronic door locks must be carefully designed to allow valid users to open (or close) a door and prevent unauthorized people from opening (or closing) the door. However, lock manufacturers have often ignored the fact that door locks can be modified by attackers in the real world. In this paper, we demonstrate that the most popular electronic door locks can easily be compromised by inserting a malicious hardware backdoor to perform unauthorized operations on the door locks. Attackers can replay a valid DC voltage pulse to open (or close) the door in an unauthorized manner or capture the user's personal identification number (PIN) used for the door lock.

Keywords: electronic engineering computing; electronic products; keys (locking); security of data; DC voltage pulse; PIN; backdoors installation; electronic door locks; lock manufacturers; malicious hardware backdoor; personal identification number; Batteries; Bluetooth; Central Processing Unit; Consumer electronics; Solenoids; Voltage measurement; Wires (ID#: 16-9473)



N. W. Lo, C. K. Yu and C. Y. Hsu, "Intelligent Display Auto-Lock Scheme for Mobile Devices," Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 48-54. doi: 10.1109/AsiaJCIS.2015.30

Abstract: In recent years, people in modern societies have come to rely heavily on intelligent mobile devices such as smartphones and tablets for personal services and improved work efficiency. In consequence, quick and simple authentication mechanisms, along with energy-saving considerations, are generally adopted by these smart handheld devices, for example screen auto-lock schemes. When a smart device activates its screen lock mode to protect user privacy and data security on the device, its screen auto-lock scheme is executed at the same time. The device user can set the length of the time period that controls when the screen lock mode of a smart device is activated. However, a short time period causes inconvenience for device users by invoking the screen auto-lock too often. How to balance security and convenience for individual users of smart devices has therefore become an interesting issue. In this paper, an intelligent display (screen) auto-lock scheme is proposed for mobile users. It can dynamically adjust the unlock time period of the auto-lock scheme based on knowledge derived from past user behavior.

Keywords: authorisation; data protection; display devices; human factors; mobile computing; smart phones; authentication mechanisms; data security; energy saving; intelligent display auto-lock scheme; intelligent mobile devices; mobile users; personal services; screen auto-lock schemes; smart handheld devices; smart phones; tablets; unlock time period; user behaviors; user convenience; user privacy protection; user security; work efficiency improvement; Authentication; IEEE 802.11 Standards; Mathematical model; Smart phones; Time-frequency analysis; Android platform; display auto-lock; smartphone (ID#: 16-9474)
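The core idea, adapting the auto-lock timeout to observed usage, might be sketched roughly as follows. The exponential-moving-average rule, parameter names, and bounds here are our assumptions for illustration, not the authors' actual algorithm.

```python
def adjust_timeout(current_timeout, observed_idle_gaps, alpha=0.3,
                   min_timeout=15, max_timeout=300):
    """Adapt a screen auto-lock timeout (seconds) toward recent idle behaviour.

    observed_idle_gaps: idle durations (seconds) after which the user resumed
    using the device; locking before such gaps end is inconvenient.
    NOTE: this moving-average rule is a hypothetical sketch, not the
    paper's scheme.
    """
    if not observed_idle_gaps:
        return current_timeout
    target = sum(observed_idle_gaps) / len(observed_idle_gaps)
    # Move part of the way toward the observed behaviour, clamped to bounds.
    new_timeout = (1 - alpha) * current_timeout + alpha * target
    return max(min_timeout, min(max_timeout, new_timeout))
```

A longer history of idle gaps pulls the timeout up (convenience); an empty or short history leaves it near its current, safer value.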



S. Sengupta, K. M. Annervaz, A. Saxena and S. Paul, "Data Vaporizer - Towards a Configurable Enterprise Data Storage Framework in Public Cloud," Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, New York City, NY, 2015, pp. 73-80. doi: 10.1109/CLOUD.2015.20

Abstract: We propose a novel cloud-based data storage solution framework named Data Vaporizer (DV). The proposed framework provides many unique features, such as storing data over multiple clouds or storage zones, resistance against organized vendor attacks, maintenance of data integrity and confidentiality through client-side processing, fault tolerance against the failure of one or more cloud storage locations, and avoidance of vendor lock-in of data. Data Vaporizer is highly configurable to meet various client data encryption requirements, compliance with industry standards, and fault tolerance constraints depending on the nature and sensitivity of the data. To enhance the level of security and reliability, especially to protect data against malicious attacks and to secure key management in the cloud, DV uses advanced techniques for secret sharing of the keys. The architecture, optimal data placement, and efficient key management algorithm of DV ensure that the solution is highly scalable. The data footprint, and the subsequent cost incurred by our storage solution, is minimal considering the benefits provided. The initial response to the adoption of DV in actual client scenarios is promising.

Keywords: cloud computing; data integrity; security of data; storage management; client data encryption; cloud storage; confidentiality through client-side processing; configurable enterprise data storage framework; data integrity; data vaporizer; key management algorithm; keys secret sharing; public cloud; resistance against organized vendor attacks; storage zones; Cloud computing; Encoding; Encryption; Fault tolerance; Fault tolerant systems; Industries; cloud storage; data archival; enterprise data; fault-tolerance; integrity; optimal storage; privacy; secret key sharing; secure multi-party computation (ID#: 16-9475)
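DV's use of secret sharing to protect keys can be illustrated with the simplest possible scheme, n-of-n XOR splitting (a generic sketch of the technique; the paper's actual scheme is a threshold scheme, which additionally tolerates missing shares):

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; all n are needed to reconstruct (n-of-n)."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        # XOR the key with each random share; the final share is the residue.
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key
```

Any subset of fewer than n shares is statistically independent of the key, so a single compromised storage location reveals nothing.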



S. R. Bandre, "Design and Implementation of Smartphone Authentication System Based on Color-Code," Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-5. doi: 10.1109/PERVASIVE.2015.7087038

Abstract: Smartphones are used as a communication channel for exchanging user data and coordinating business work. Users are concerned about the private data stored on their portable devices, but unfortunately these devices are prone to attacks by malicious users. The objective of this paper is to provide a new authentication system based on Color-Code that preserves user privacy and improves smartphone security. Every individual has their own preference when selecting colors; a color sequence can be one of many unique color combinations, and such sequences are easy to remember. A user specifies a desired Color-Code sequence as a passkey to authenticate on the device. To fortify smartphone security against malicious users, the system uses random colors to increase the difficulty of a brute-force attack. The system is based on a multi-phase security schema which authenticates users and safeguards their privacy on a smartphone.

Keywords: data privacy; mobile computing; smart phones; authenticate user; authentication system; brute force attack; business work; color code sequence; color sequence; communication channel; exchanging user data; malicious user attack; malicious users; multiphase security schema; portable devices; private data; smartphone authentication system; smartphone security; Authentication; Bipartite graph; Color; Graphical user interfaces; Image color analysis; Privacy; Authentication; Color-code; Lock-screen; Mobile Device; Smartphone (ID#: 16-9476)
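The role of randomized colors in resisting observation of the lock screen can be sketched as follows. The challenge/verify split and all names here are our illustration, not the paper's actual schema.

```python
import secrets

COLORS = ["red", "green", "blue", "yellow", "purple", "orange"]

def make_challenge():
    """Lay the colors out in a fresh random screen order for each unlock
    attempt, so observed touch positions alone do not reveal the passkey."""
    layout = COLORS[:]
    secrets.SystemRandom().shuffle(layout)
    return layout

def verify(passkey, layout, touched_positions):
    """Map the touched positions back to colors and compare with the passkey."""
    entered = [layout[p] for p in touched_positions]
    return entered == passkey
```

Because the layout changes per attempt, replaying the previously observed touch positions enters a different color sequence and fails.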



A. Tashkandi and I. Al-Jabri, "Cloud Computing Adoption by Higher Education Institutions in Saudi Arabia: Analysis Based on TOE," Cloud Computing (ICCC), 2015 International Conference on, Riyadh, 2015, pp. 1-8. doi: 10.1109/CLOUDCOMP.2015.7149634

Abstract: (1) Background, Motivation and Objective: Academic study of cloud computing within Saudi Arabia is an emerging research field. Saudi Arabia represents the largest economy in the Arabian Gulf region, which positions it as a potential market for cloud computing technologies. Adoption of new innovations should be preceded by analysis of the added value, challenges and adequacy from technological, organizational and environmental perspectives. (2) Statement of Contribution/Method: This cross-sectional exploratory empirical research is based on the Technology, Organization and Environment model, targeting higher education institutions. In this study, the factors that influence adoption by higher education institutions were analyzed and tested using Partial Least Squares. (3) Results, Discussion and Conclusions: Three factors were found significant in this context: Relative Advantage, Data Privacy and Complexity are the most significant factors. The model explained 43% of the total variation in the adoption measure. Significant differences in the areas of cloud computing compatibility, complexity, vendor lock-in and peer pressure between large and small institutions were revealed. Items for future cloud computing research were explored through open-ended questions. Adoption of cloud services by higher education institutions has started, and the adoption rate among large universities is higher than among small higher education institutions. Improving the network and Internet infrastructure in Saudi Arabia at an affordable cost is a prerequisite for cloud computing adoption. Cloud service providers should address the privacy and complexity concerns raised by non-adopters. Future information systems that are candidates for hosting in the cloud were prioritized.

Keywords: cloud computing; computer aided instruction; data privacy; educational institutions; further education; Arabian Gulf region; Internet infrastructure; Saudi Arabia; TOE model; cloud computing; data privacy; higher education institutions; partial least square; technology, organization and environment model; universities; Cloud computing; Complexity theory; Computational modeling; Context; Education; Organizations; Technological innovation (ID#: 16-9477)



C. Rathgeb, J. Wagner, B. Tams and C. Busch, "Preventing the Cross-Matching Attack in Bloom Filter-Based Cancelable Biometrics," Biometrics and Forensics (IWBF), 2015 International Workshop on, Gjovik, 2015, pp. 1-6. doi: 10.1109/IWBF.2015.7110226

Abstract: Deployments of biometric technologies are already widely disseminated, hence the protection of biometric reference data becomes vital in order to safeguard individuals' privacy. Biometric template protection techniques are designed to protect biometric templates in an irreversible and unlinkable manner (ISO/IEC IS 24745). In addition, these schemes are required to maintain key system properties, e.g. biometric performance or authentication speed. Recently, template protection schemes based on Bloom filters have been introduced and applied to various biometric characteristics, such as iris or face. While a Bloom filter-based representation of biometric templates is irreversible, the originally proposed system has been shown to be vulnerable to cross-matching attacks. In this paper we address this issue and demonstrate that any kind of Bloom filter-based representation of biometric templates can be transformed to an unordered set of integer values, which enables a locking of irreversible templates in the fuzzy vault scheme of Dodis et al. that can be secured against known cross-matching attacks. In addition, experiments carried out on a publicly available iris database show that the proposed scheme retains the biometric performance of the original system.

Keywords: data protection; data structures; fuzzy set theory; iris recognition; Bloom filter-based biometric template representation; Bloom filter-based cancelable biometrics; biometric reference data protection; biometric template protection techniques; cross-matching attack; fuzzy vault scheme; iris database; Feature extraction; Indexes; Iris recognition; Security; Bloom filter; Template protection; cross-matching; fuzzy vault; iris biometrics (ID#: 16-9478)
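The Bloom filter representation underlying such template-protection schemes can be sketched generically (this is the standard data structure, not the biometric-specific construction from the paper): insertion sets k hashed bit positions, so the filter supports membership queries while the inserted items themselves cannot be read back out.

```python
import hashlib

class BloomFilter:
    def __init__(self, m: int = 1024, k: int = 4):
        self.m, self.k = m, k          # m bits, k hash functions
        self.bits = [0] * m

    def _positions(self, item: bytes):
        # Derive k positions by salting a single hash with the index i.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item: bytes):
        # No false negatives; false positives possible. The bit array alone
        # does not reveal which items were inserted (irreversibility).
        return all(self.bits[p] for p in self._positions(item))
```

The cross-matching weakness the paper fixes arises when two such filters derived from the same biometric can be linked by comparing their bit arrays; the fuzzy vault locking removes that linkability.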



M. Tanwar, R. Duggal and S. K. Khatri, "Unravelling Unstructured Data: A Wealth of Information in Big Data," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2015 4th International Conference on, Noida, 2015, pp. 1-6. doi: 10.1109/ICRITO.2015.7359270

Abstract: Big Data is data of high volume and high variety being produced or generated at high velocity, which cannot be stored, managed, processed or analyzed using existing traditional software tools, techniques and architectures. Big data brings challenges such as scale, heterogeneity, speed and privacy, but there are opportunities as well. Potential information is locked in big data which, if properly leveraged, will make a huge difference to business. With the help of big data analytics, meaningful insights can be extracted from big data, which is heterogeneous in nature, comprising structured, unstructured and semi-structured content. One prime challenge in big data analytics is that nearly 95% of data is unstructured. This paper describes what big data and big data analytics are. A review of different techniques and approaches to analyze unstructured data is given. The paper emphasizes the importance of analyzing unstructured data along with structured data in business to extract holistic insights. The need for appropriate and efficient analytical methods for knowledge discovery from huge volumes of heterogeneous data in unstructured formats is highlighted.

Keywords: Big Data; data mining; software architecture; software tools; text analysis; Big Data analytics; heterogeneous data; knowledge discovery; semistructured content; software architectures; software techniques; software tools; unstructured data analysis; Audio Analytics; Big Data; Social Media Analytics; Text Analytics; Unstructured data; Video Analytics (ID#: 16-9479)



A. M. Khan, F. Freitag and L. Rodrigues, "Current Trends and Future Directions in Community Edge Clouds," Cloud Networking (CloudNet), 2015 IEEE 4th International Conference on, Niagara Falls, ON, 2015, pp. 239-241. doi: 10.1109/CloudNet.2015.7335315

Abstract: Cloud computing promises access to computing resources that is cost-effective, elastic and easily scalable. With few key cloud providers in the field, despite the benefits, there are issues like vendor lock-in, privacy and control over data. In this paper we focus on alternative models of cloud computing, like the community clouds at the edge which are built collaboratively using the resources contributed by the users, either through solely relying on users' machines, or using them to augment existing cloud infrastructures. We study community network clouds in the context of other initiatives in community cloud computing, mobile cloud computing, social cloud computing, and volunteer computing, and analyse how the context of community networks can support the community clouds.

Keywords: cloud computing; mobile computing; volunteer computing; cloud infrastructure; community edge cloud; community network cloud computing; mobile cloud computing; social cloud computing; user machine; volunteer computing; Cloud computing; Computational modeling; Computer architecture; Context; Mobile communication; Resource management; cloud computing; community clouds (ID#: 16-9480)



A. Dabrowski, I. Echizen and E. R. Weippl, "Error-Correcting Codes as Source for Decoding Ambiguity," Security and Privacy Workshops (SPW), 2015 IEEE, San Jose, CA, 2015, pp. 99-105. doi: 10.1109/SPW.2015.28

Abstract: Data decoding, format, or language ambiguities have long been known for amusement purposes. Only recently has it come to attention that they also pose a security risk. In this paper, we present decoder manipulations based on deliberately caused ambiguities that exploit the error correction mechanisms used in several popular applications. This can be used to encode data in multiple formats, or even in the same format with different content. Implementation details of the decoder or environmental differences decide which data the decoder locks onto. This leads to different users receiving different content based on a language decoding ambiguity. In general, ambiguity is not desired; however, in special cases it can be particularly harmful. Format dissectors can make wrong decisions: e.g., a firewall scans based on one format, but the user decodes different, harmful content. We demonstrate this behavior with popular barcodes and argue that it can be used to deliver exploits based on the software installed, or to use probabilistic effects to divert a small percentage of users to fraudulent sites.

Keywords: bar codes; decoding; encoding; error correction codes; fraud; security of data; barcodes; data decoding; data encoding; decoder manipulations; error correction mechanisms; error-correcting codes; format dissectors; fraudulent sites; language decoding ambiguity; security risk; Decoding; Error correction codes; Security; Software; Standards; Synchronization; Visualization; Barcode; Error Correcting Codes; LangSec; Language Security; Packet-in-Packet; Protocol decoding ambiguity; QR; Steganography (ID#: 16-9481)



P. C. Prokopiou, P. E. Caines and A. Mahajan, "An Estimation Based Allocation Rule with Super-Linear Regret and Finite Lock-On Time for Time-Dependent Multi-Armed Bandit Processes," Electrical and Computer Engineering (CCECE), 2015 IEEE 28th Canadian Conference on, Halifax, NS, 2015, pp. 1299-1306. doi: 10.1109/CCECE.2015.7129466

Abstract: The multi-armed bandit (MAB) problem has been an active area of research since the early 1930s. The majority of the literature restricts attention to i.i.d. or Markov reward processes. In this paper, the finite-parameter MAB problem with time-dependent reward processes is investigated. An upper confidence bound (UCB) based index policy, where the index is computed from the maximum-likelihood estimate of the unknown parameter, is proposed. This policy locks on to the optimal arm in finite expected time but has super-linear regret. As an example, the proposed index policy is used for minimizing prediction error when each arm is an auto-regressive moving average (ARMA) process.

Keywords: Markov processes; autoregressive moving average processes; maximum likelihood estimation; resource allocation; ARMA process; Markov reward process; UCB based index policy; auto-regressive moving average process; estimation based allocation rule; finite expected time; finite lock-on time; finite-parameter MAB problem; maximum-likelihood estimation; prediction error minimization; superlinear regret; time-dependent multiarmed bandit process; time-dependent reward process; upper confidence bound based index policy; Computers; Indexes; Markov processes; Maximum likelihood estimation; Resource management; Technological innovation (ID#: 16-9482)
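The UCB family of index policies the paper builds on can be sketched with classic UCB1 for i.i.d. arms (the paper's contribution replaces the empirical mean below with a maximum-likelihood parameter estimate to handle time-dependent reward processes):

```python
import math
import random

def ucb1(pull, n_arms: int, horizon: int):
    """Classic UCB1: play each arm once, then always play the arm
    maximizing empirical mean + sqrt(2 ln t / n_i)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    history = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: pull every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(arm)
        counts[arm] += 1
        sums[arm] += r
        history.append(arm)
    return history, counts

# Example: Bernoulli arms with means 0.2 and 0.8; the policy should
# lock on to arm 1 and pull it far more often.
random.seed(0)
means = [0.2, 0.8]
history, counts = ucb1(lambda a: 1.0 if random.random() < means[a] else 0.0,
                       n_arms=2, horizon=500)
```

The confidence bonus shrinks as an arm is sampled, which is what produces the "lock-on" behaviour the abstract refers to.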



P. Lindgren, M. Lindner, A. Lindner, D. Pereira and L. M. Pinho, "RTFM-Core: Language and Implementation," Industrial Electronics and Applications (ICIEA), 2015 IEEE 10th Conference on, Auckland, 2015, pp. 990-995. doi: 10.1109/ICIEA.2015.7334252

Abstract: Robustness, real-time properties and resource efficiency are key properties for embedded devices of the CPS/IoT era. In this paper we propose a language approach, RTFM-core, and show its potential to facilitate the development process and provide highly efficient and statically verifiable implementations. Our programming model is reactive, based on the familiar notions of concurrent tasks and (single-unit) resources. The language is kept minimalistic, capturing the static task, communication and resource structure of the system. Whereas C source can be arbitrarily embedded in the model and/or externally referenced, the step into mainstream development is minimal, and a smooth transition of legacy code is possible. A prototype compiler implementation for RTFM-core is presented. The compiler generates C-code output that, compiled together with the RTFM-kernel primitives, runs on bare metal. The RTFM-kernel guarantees deadlock-free execution and efficiently exploits the underlying interrupt hardware for static priority scheduling and resource management under the Stack Resource Policy. This allows a plethora of well-known methods for static verification (response time analysis, stack memory analysis, etc.) to be readily applied. The proposed language and supporting tool-chain are demonstrated by showing the complete process from RTFM-core source code to bare metal executables for a lightweight ARM Cortex-M3 target.

Keywords: C language; operating system kernels; program compilers; resource allocation; scheduling; ARM Cortex-M3 target; C-code output generation; C-source; RTFM-core language; RTFM-core source code; RTFM-kernel primitives; bare metal executables; deadlock-free execution; interrupt hardware; legacy code transition; prototype compiler implementation; reactive programming model; resource management; stack resource policy; static priority scheduling; static verification; Grammar; Hardware; Instruction sets; Job shop scheduling; Metals; Programming; Synchronization (ID#: 16-9483)



P. Lindgren, M. Lindner, A. Lindner, D. Pereira and L. M. Pinho, "Well-formed Control Flow for Critical Sections in RTFM-core," Industrial Informatics (INDIN), 2015 IEEE 13th International Conference on, Cambridge, 2015, pp. 1438-1445. doi: 10.1109/INDIN.2015.7281944

Abstract: The mainstream of embedded software development today is dominated by C programming. To aid development, hardware abstractions, libraries, kernels and lightweight operating systems are commonplace. Such kernels and operating systems typically impose a thread-based abstraction for concurrency. However, thread-based programming is in general hard, plagued by race conditions and deadlocks. For this paper we take an alternative outset in terms of a language abstraction, RTFM-core, where the system is modelled directly in terms of tasks and resources. In compliance with the Stack Resource Policy (SRP) model, the language enforces (well-formed) LIFO nesting of claimed resources, so SRP-based analysis and scheduling can be readily applied. For execution on bare-metal single-core architectures, the rtfm-core compiler performs SRP analysis on the model and renders an executable that is deadlock free and (through RTFM-kernel primitives) exploits the underlying interrupt hardware for efficient scheduling. The RTFM-core language embeds C code and links to C object files and libraries, and is thus applicable to the mainstream of embedded development. However, while the language enforces well-formed resource management, control flow in the embedded C code may violate the LIFO nesting requirement. In this paper we address this issue by lifting a subset of C into the RTFM-core language, allowing arbitrary control flow at the model level. In this way well-formed LIFO nesting can be enforced, and models are ensured to be correct by construction. We demonstrate the feasibility by means of a prototype implementation in the rtfm-core compiler. Additionally, we develop a set of running examples and show in detail how control flow is handled at compile time and during run-time execution.

Keywords: C language; embedded systems; program compilers; program control structures; scheduling; C programming; C-object files; C-object libraries; LIFO nesting requirement; RTFM-core compiler; SRP model; bare-metal single core architectures; control flow; embedded C-code; embedded software development; general thread based programming; language abstraction; last-in-first-out nesting requirement; lightweight operating systems; resource management; stack resource policy model; thread based abstraction; Concurrent computing; Hardware; Kernel; Libraries; Programming; Switches; Synchronization (ID#: 16-9484)
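The well-formed LIFO nesting of resource claims that the language enforces can be checked with a simple stack discipline. This is our illustration of the SRP-style nesting rule, not the rtfm-core compiler's actual analysis.

```python
def well_formed_lifo(events):
    """events: sequence of ('claim', r) / ('release', r) pairs.
    Returns True iff every resource is released, and releases occur in
    reverse order of claims (LIFO nesting)."""
    stack = []
    for action, resource in events:
        if action == "claim":
            stack.append(resource)
        elif action == "release":
            if not stack or stack[-1] != resource:
                return False  # released out of claim order
            stack.pop()
    return not stack  # every claim must eventually be released

# claim A, claim B, release B, release A  -> well-formed nesting
# claim A, claim B, release A, release B  -> violates LIFO nesting
```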



B. Zhang, J. Hwang, L. Ma and T. Wood, "Towards Security-Aware Virtual Server Migration Optimization to the Cloud," Autonomic Computing (ICAC), 2015 IEEE International Conference on, Grenoble, 2015, pp. 71-80. doi: 10.1109/ICAC.2015.45

Abstract: Cloud computing, featuring shared servers and location-independent services, has been widely adopted by various businesses to increase computing efficiency and reduce operational costs. Despite significant benefits and interest, enterprises have a hard time deciding whether or not to migrate thousands of servers into the cloud, for various reasons such as the lack of holistic migration (planning) tools, concerns about data security, and cloud vendor lock-in. In particular, cloud security has become the major concern for decision makers due to an inherent weakness of virtualization: the fact that the cloud allows multiple users to share resources through Internet-facing interfaces can easily be taken advantage of by hackers. Therefore, setting up a secure environment for resource migration becomes the top priority for both enterprises and cloud providers. To achieve the goal of security, security policies such as firewalls and access control have been widely adopted, leading to significant cost as additional resources need to be employed. In this paper, we address the challenge of security-aware virtual server migration and propose a migration strategy that minimizes the migration cost while satisfying the security needs of enterprises. We prove that the proposed security-aware cost minimization problem is NP-hard and that our solution can achieve an approximation factor of 2. We perform an extensive simulation study to evaluate the performance of the proposed solution under various settings. Our simulation results demonstrate that our approach can save 53% of the moving cost for a single-enterprise case, and 66% for a multiple-enterprise case, compared to a random migration strategy.

Keywords: cloud computing; cost reduction; resource allocation; security of data; virtualisation; Internet-facing interfaces; NP hard problem; cloud computing; cloud security; cloud vendor lock-in; data security; moving cost savings; resource migration; resource sharing; security policy; security-aware cost minimization problem; security-aware virtual server migration optimization; virtualization; Approximation algorithms; Approximation methods; Cloud computing; Clustering algorithms; Home appliances; Security; Servers; Cloud Computing; Cloud Migration; Cloud Security; Cost Minimization (ID#: 16-9485)



A. Atalar, A. Gidenstam, P. Renaud-Goud and P. Tsigas, "Modeling Energy Consumption of Lock-Free Queue Implementations," Parallel and Distributed Processing Symposium (IPDPS), 2015 IEEE International, Hyderabad, 2015, pp. 229-238. doi: 10.1109/IPDPS.2015.31

Abstract: This paper considers the problem of modelling the energy behaviour of lock-free concurrent queue data structures. Our main contribution is a way to model the energy behaviour of lock-free queue implementations and the parallel applications that use them. Focusing on steady-state behaviour, we decompose energy behaviour into throughput and power dissipation, which can be modelled separately and later recombined into several useful metrics, such as energy per operation. Based on our models, instantiated from synthetic benchmark data, and using only a small amount of additional application-specific information, energy and throughput predictions can be made for parallel applications that use the respective data structure implementation. To model throughput we propose a generic model of lock-free queue throughput behaviour, based on a combination of the dequeuers' throughput and the enqueuers' throughput. To model power dissipation we split the contributions from the various computer components into static, activation and dynamic parts, where only the dynamic part depends on the actual instructions being executed. To instantiate the models, a synthetic benchmark explores each queue implementation over the dimensions of processor frequency and number of threads. Finally, we show how to make predictions of application throughput and power dissipation for a parallel application using a lock-free queue, requiring only a limited amount of information about the application work done between queue operations. Our case study on a Mandelbrot application shows convincing prediction results.

Keywords: data structures; energy consumption; parallel processing; power aware computing; queueing theory; Mandelbrot application; computer components; data structure implementation; dynamic parts; energy behavior; energy consumption modeling; lock-free concurrent queue data structures; lock-free queue implementations; lock-free queue throughput behavior; parallel applications; power dissipation; steady state behavior; synthetic benchmark data; Benchmark testing; Computational modeling; Data models; Data structures; Instruction sets; Power dissipation; Throughput; analysis; concurrent data structures; energy; lock-free; modeling; power; queue; throughput (ID#: 16-9486)
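The decomposition the abstract describes — model throughput and power separately, then recombine them into energy per operation — can be sketched in a few lines. The function names, the linear power model, and all coefficients below are illustrative assumptions, not the paper's instantiated models:

```python
# Hypothetical sketch of the throughput/power decomposition: total power is
# split into static, activation, and dynamic parts, then divided by
# throughput to obtain energy per operation. All coefficients are made up.

def power_model(freq_ghz: float, n_threads: int,
                p_static: float, p_activation: float, p_dynamic: float) -> float:
    """Total power (W) as static + per-thread activation + dynamic parts.
    Only the dynamic part scales with the instructions actually executed."""
    return p_static + n_threads * p_activation + n_threads * freq_ghz * p_dynamic

def energy_per_op(throughput_ops_per_s: float, power_w: float) -> float:
    """Joules per queue operation: power dissipation divided by throughput."""
    return power_w / throughput_ops_per_s

# Example: 8 threads at 2.0 GHz sustaining 5e6 queue operations per second.
p = power_model(2.0, 8, p_static=20.0, p_activation=2.5, p_dynamic=1.5)
print(round(p, 1))            # 64.0 W total
print(energy_per_op(5e6, p))  # 1.28e-05 J per operation
```

In the paper the two factors are instantiated from synthetic benchmark sweeps over frequency and thread count; this sketch only shows how the pieces recombine.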



S. Kolb, J. Lenhard and G. Wirtz, "Application Migration Effort in the Cloud - The Case of Cloud Platforms," Cloud Computing (CLOUD), 2015 IEEE 8th International Conference on, New York City, NY, 2015, pp. 41-48. doi: 10.1109/CLOUD.2015.16

Abstract: Over the last few years, the utilization of cloud resources has been steadily rising, and an increasing number of enterprises are moving applications to the cloud. A leading trend is the adoption of Platform as a Service (PaaS) to support rapid application deployment. By providing a managed environment, cloud platforms remove much of the complex configuration effort required to build scalable applications. However, application migrations to and between clouds cost development effort and open up new risks of vendor lock-in. This is problematic because frequent migrations may be necessary in the dynamic and fast-changing cloud market. So far, the effort of application migration in PaaS environments and the typical issues experienced in this task are hardly understood. To improve this situation, we present a cloud-to-cloud migration of a real-world application to seven representative cloud platforms. In this case study, we analyze the feasibility of the migrations in terms of portability, as well as their effort. We present a Docker-based deployment system that enables isolated and reproducible measurement of deployments to platform vendors, thus enabling the comparison of platforms for a particular application. Using this system, the study identifies key problems during migrations and quantifies the differences with distinctive metrics.

Keywords: cloud computing; software cost estimation; software metrics; Docker-based deployment system; PaaS; Platform as a Service; application migration; cloud cost development effort; cloud market; cloud resource utilisation; cloud-to-cloud migration; complex configuration; distinctive metrics; portability; rapid application deployment; scalable applications; Automation; Containers; Measurement; Pricing; Rails; Case Study; Cloud Computing; Metrics; Migration; Platform as a Service; Portability (ID#: 16-9487)



A. Del Giudice, G. Graditi, A. Pietrosanto and V. Paciello, "Power Quality in Smart Distribution Grids," Measurements & Networking (M&N), 2015 IEEE International Workshop on, Coimbra, 2015, pp. 1-6. doi: 10.1109/IWMN.2015.7322967

Abstract: Demand Side Management requires both an adequate architecture to observe and control the status of the power grid and precise, real-time measurements on which to rely. In the case of frequency fluctuations, precision is no longer guaranteed, so without adding hardware the authors exploit FFT interpolation to estimate the actual frequency of the electrical signals. The discovery phase is followed by the measurement phase, in which a low-cost Smart Meter computes the metrics specified in the following sections. Finally, a comparison between measurements taken by a reference instrument and the proposed meter is reported.

Keywords: distribution networks; fast Fourier transforms; interpolation; power supply quality; power system measurement; smart meters; FFT interpolation; electrical signal frequency; frequency fluctuations; power grid; power quality; smart distribution grids; smart meter; Current measurement; Frequency estimation; Harmonic analysis; Phasor measurement units; Power measurement; Voltage measurement; Demand Side Management; FFT; Frequency lock; Power Quality; Smart Grid; Smart Metering (ID#: 16-9488)
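The FFT-interpolation idea the abstract relies on — refining a coarse FFT peak to an off-grid frequency without extra hardware — can be illustrated with a standard three-bin estimator. This is a generic sketch (here Jacobsen's interpolation), not the specific algorithm or parameters used in the paper:

```python
import cmath
import math

def dft_bin(x, k):
    """Single DFT bin X[k] of the sampled signal x."""
    N = len(x)
    return sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

def refine_frequency(x, k, fs):
    """Three-bin DFT interpolation (Jacobsen's estimator): refines the coarse
    peak bin k to a sub-bin frequency estimate in Hz."""
    xm, x0, xp = dft_bin(x, k - 1), dft_bin(x, k), dft_bin(x, k + 1)
    delta = -((xp - xm) / (2 * x0 - xm - xp)).real
    return (k + delta) * fs / len(x)

# A 50.3 Hz tone sampled at 1 kHz for 1 s: the raw FFT grid (1 Hz bins)
# would report 50 Hz; interpolation recovers the off-grid frequency.
fs, f_true, N = 1000.0, 50.3, 1000
x = [math.sin(2 * math.pi * f_true * n / fs) for n in range(N)]
print(round(refine_frequency(x, 50, fs), 2))  # ≈ 50.3
```

This is exactly the situation a fluctuating grid frequency creates: the true frequency falls between FFT bins, and interpolation restores precision without a longer capture or more hardware.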



J. Dworak and A. Crouch, "A Call to Action: Securing IEEE 1687 and the Need for an IEEE Test Security Standard," VLSI Test Symposium (VTS), 2015 IEEE 33rd, Napa, CA, 2015, pp. 1-4. doi: 10.1109/VTS.2015.7116256

Abstract: Today's chips often contain a wealth of embedded instruments, including sensors, hardware monitors, built-in self-test (BIST) engines, etc. They may process sensitive data that requires encryption or obfuscation, and may contain encryption keys and ChipIDs. Unfortunately, unauthorized access to internal registers or instruments through test and debug circuitry can turn design-for-testability (DFT) logic into a backdoor for data theft, reverse engineering, counterfeiting, and denial-of-service attacks. A compromised chip also poses a security threat to any board or system that includes that chip, and boards have their own security issues. We will provide an overview of some chip and board security concerns as they relate to DFT hardware and will briefly review several ways in which the new IEEE 1687 standard can be made more secure. We will then discuss the need for an IEEE security standard that can provide solutions and metrics for appropriate security matched to the needs of a real-world environment.

Keywords: built-in self test; cryptography; design for testability; reverse engineering; BIST; ChipID; DFT hardware; DFT logic; IEEE 1687; IEEE test security standard; built-in self-test; data theft; denial-of-service attacks; design for testability; embedded instruments; encryption keys; hardware monitors; internal registers; reverse engineering; Encryption; Instruments; Microprogramming; Ports (Computers); Registers; Standards; BIST; DFT; IEEE Standard; IJTAG; JTAG; LSIB; P1687; lock; scan; security; trap (ID#: 16-9489)



L. Yuan, H. Chen, B. Ren and H. Zhao, "Model Predictive Slip Control for Electric Vehicle with Four In-Wheel Motors," Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 7895-7900. doi: 10.1109/ChiCC.2015.7260894

Abstract: In order to solve the problem that electric vehicle wheels may lock up when braking or spin out when accelerating on low-friction-coefficient (low-μ) roads, this paper presents a nonlinear model predictive controller for slip control of an electric vehicle equipped with four in-wheel motors. The advantage of the proposed nonlinear-model-predictive-control-based (NMPC) slip controller is that it can act not only as an anti-lock braking system (ABS), by preventing the tires from locking up when braking, but also as a traction control system (TCS), by preventing the tires from spinning out when accelerating. In addition, the proposed slip controller is capable of assisting the vehicle's hydraulic brake system by automatically distributing the braking torque between the wheels using the available braking torque of the in-wheel motors. In this regard, the proposed NMPC slip controller guarantees the optimal traction or braking torque on each wheel under low-μ road conditions by individually controlling the slip ratio of each tire within the stable zone with a much faster response time, while considering actuator limitations, wheel-slip constraints, and performance metrics. The performance of the proposed controller is confirmed by running an electric vehicle model with four individually driven in-wheel motors, built in AMESim, through several test maneuvers in the co-simulation environment of AMESim and Simulink.

Keywords: braking; electric vehicles; nonlinear control systems; optimal control; predictive control; torque control; traction; ABS; AMESim; NMPC slip controller; TCS; actuator limitations; anti-lock braking system; braking torque; electric vehicle wheels; hydraulic brake system; in-wheel motors; low-friction coefficient roads; nonlinear model predictive control based slip controller; optimal traction; slip ratio; traction control system; wheel slip constraints; Acceleration; Roads; Tires; Torque; Traction motors; Vehicles; Wheels; Electric vehicle; NMPC; co-simulation; constraint; in-wheel motor; slip control (ID#: 16-9490)
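The quantity being regulated here is the longitudinal slip ratio of each wheel. The sketch below shows the slip computation and a deliberately simplified proportional torque back-off — not the paper's NMPC controller — just to make the control objective concrete; all gains and thresholds are illustrative assumptions:

```python
def slip_ratio(v: float, omega: float, r: float) -> float:
    """Longitudinal slip during braking: 0 = free rolling, 1 = locked wheel.
    v: vehicle speed (m/s), omega: wheel angular speed (rad/s), r: radius (m)."""
    return (v - omega * r) / max(v, 1e-3)

def limit_brake_torque(t_request: float, slip: float,
                       slip_target: float = 0.15, k_p: float = 2000.0) -> float:
    """Illustrative proportional back-off (NOT the paper's NMPC): reduce the
    requested braking torque when slip exceeds the stable-zone target."""
    excess = max(0.0, slip - slip_target)
    return max(0.0, t_request - k_p * excess)

# Wheel about to lock: v = 20 m/s, omega = 50 rad/s, r = 0.3 m.
lam = slip_ratio(20.0, 50.0, 0.3)
print(round(lam, 3))                            # 0.25, above the stable zone
print(round(limit_brake_torque(800.0, lam), 1))  # torque backed off to 600.0
```

The NMPC controller in the paper replaces this naive proportional rule with a constrained optimization over a prediction horizon, which is what lets it respect actuator limits while keeping each tire's slip in the stable zone.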



A. Ashraf, K. O. Davis, K. Ogutman, W. V. Schoenfeld and M. D. Eisaman, "Hyperspectral Laser Beam Induced Current System for Solar Cell Characterization," Photovoltaic Specialist Conference (PVSC), 2015 IEEE 42nd, New Orleans, LA, 2015, pp. 1-4. doi: 10.1109/PVSC.2015.7356129

Abstract: We introduce a novel hyperspectral laser beam induced current (LBIC) system that uses a supercontinuum laser tunable from 400 nm to 1200 nm with a diffraction-limited spot size. The solar cell is light-biased while simultaneously being illuminated by a chopped laser beam at a given wavelength. Current-voltage measurements, performed by measuring the current perturbation due to the laser using a lock-in amplifier, allow us to extract performance metrics at a specific lateral position and depth (by tuning the wavelength of the laser) while the device is at operating conditions. These parameters are simultaneously compared to material deformations as determined from the doping density and the built-in voltage. Concurrently, we also probe lateral recombination variation by measuring the activation energy, thereby providing a comprehensive and unique analysis.

Keywords: OBIC; solar cells; supercontinuum generation; activation energy; chopped laser beam; diffracted limited spot size; doping density; hyperspectral laser beam induced current system; lateral recombination variation; lock-in amplifier; solar cell characterization; supercontinuum laser; Current measurement; Laser beams; Measurement by laser beam; Photovoltaic cells; Resistance; Temperature measurement; Wavelength measurement; hyperspectral; lbic; photovoltaic (ID#: 16-9491)
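The lock-in detection that makes this measurement possible — recovering a tiny chopped laser-induced current riding on a large light-bias current — can be sketched as a digital quadrature demodulator. This is a generic illustration of the technique, with made-up signal levels and a chopper frequency chosen arbitrarily, not the instrument described in the paper:

```python
import math

def lock_in(signal, fs, f_ref):
    """Digital lock-in detection: mix the signal with quadrature references at
    the chopper frequency f_ref and average, rejecting DC and off-frequency
    components. Returns the recovered amplitude at f_ref."""
    N = len(signal)
    i = sum(s * math.cos(2 * math.pi * f_ref * n / fs) for n, s in enumerate(signal))
    q = sum(s * math.sin(2 * math.pi * f_ref * n / fs) for n, s in enumerate(signal))
    # Factor 2/N converts the averaged mixer outputs back to signal amplitude.
    return 2 * math.hypot(i, q) / N

# A 1 mV perturbation chopped at 137 Hz, sampled at 10 kHz for 1 s, sitting
# on a 0.5 V DC offset (the steady light-bias contribution).
fs, f_chop, N = 10_000.0, 137.0, 10_000
x = [0.001 * math.sin(2 * math.pi * f_chop * n / fs) + 0.5 for n in range(N)]
print(round(lock_in(x, fs, f_chop) * 1000, 3))  # recovered amplitude ≈ 1.0 (mV)
```

Because the mixing references average to zero against the DC bias and any component not at the chopper frequency, the small laser-induced perturbation is extracted even though it is hundreds of times smaller than the background.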



Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.