Resiliency 2015

SoS Newsletter- Advanced Book Block






Resiliency is one of the five hard problems of the Science of Security. Research work in this area has been growing. The work cited here was presented in 2015.

J. Rajamäki, “Cyber Security Education as a Tool for Trust-Building in Cross-Border Public Protection and Disaster Relief Operations,” Global Engineering Education Conference (EDUCON), 2015 IEEE, Tallinn, 2015, pp. 371-378. doi: 10.1109/EDUCON.2015.7095999
Abstract: Public protection and disaster relief (PPDR) operations are increasingly dependent on networks and data processing infrastructure. Incidents such as natural hazards and organized crime do not respect national boundaries. As a consequence, there is an increased need for European collaboration and information sharing related to public safety communications (PSC) and information exchange technologies and procedures - and trust is the keyword here. According to our studies, the topic “trust-building” could be seen as the most important issue with regard to multi-agency PPDR cooperation. Cyber security should be seen as a key enabler for the development and maintenance of trust in the digital world. It is important to complement the currently dominating “cyber security as a barrier” perspective by emphasizing the role of “cyber security as an enabler” of new business, interactions, and services - and recognizing that trust is a positive driver for growth. Public safety infrastructure is becoming more exposed to unpredictable cyber risks. Computing is now present everywhere, which means that PPDR agencies do not know when they are using dependable devices or services, and there are chain reactions of unpredictable risks. If cyber security risks are not prepared for, PPDR agencies, like all organizations, will face severe disasters over time. Investing in systems that improve confidence and trust can significantly reduce costs and improve the speed of interaction. From this perspective, cyber security should be seen as a key enabler for the development and maintenance of trust in the digital world, and it has the following themes: security technology, situation awareness, security management and resiliency. Education is the main driver for complementing the currently dominating “cyber security as a barrier” perspective by emphasizing the role of “cyber security as an enabler”.
Keywords: computer aided instruction; computer science education; emergency management; trusted computing; PPDR operation; PSC; cross-border public protection operation; cyber security education; cybersecurity-as-a-barrier perspective; cybersecurity-as-an-enabler perspective; disaster relief operation; information exchange; multiagency PPDR cooperation; public safety communications; resiliency theme; security management theme; security technology theme; situation awareness theme; trust building; Computer security; Education; Europe; Organizations; Safety; Standards organizations; cyber security; education; public protection and disaster relief; trust-building (ID#: 16-9574)


T. Aoyama, H. Naruoka, I. Koshijima, W. Machii and K. Seki, “Studying Resilient Cyber Incident Management from Large-Scale Cyber Security Training,” Control Conference (ASCC), 2015 10th Asian, Kota Kinabalu, 2015, pp. 1-4. doi: 10.1109/ASCC.2015.7244713
Abstract: The study on human contribution to cyber resilience is unexplored terrain in the field of critical infrastructure security. So far cyber resilience has been discussed as an extension of the IT security research. The current discussion is focusing on technical measures and policy preparation to mitigate cyber security risks. In this human-factor based study, the methodology to achieve high resiliency of the organization by better management is discussed. A field observation was conducted in the large-scale cyber security hands-on training at ENCS (European Network for Cyber Security, The Hague, NL) to determine management challenges that could occur in a real-world cyber incident. In this paper, the possibility to extend resilience-engineering framework to assess organization's behavior in cyber crisis management is discussed.
Keywords: human factors; risk management; security of data; ENCS; European Network for Cyber Security; NL; The Hague; cyber crisis management; cyber incident management; cyber resilience; cyber security risk management human-factor; large-scale cyber security hands-on training; resilience-engineering framework; Computer security; Games; Monitoring; Organizations; Resilience; Training; critical infrastructure; cyber security; management; resilience engineering (ID#: 16-9575)


T. Hayajneh, T. Zhang and B. J. Mohd, “Security Issues in WSNs with Cooperative Communication,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 451-456. doi: 10.1109/CSCloud.2015.78
Abstract: Cooperative communication is a technique that helps to improve the communication performance in wireless networks. It allows the nodes to rely on their neighbors when transmitting packets, providing some diversity gain. Wireless sensor networks (WSNs) can also benefit from cooperative communication, as other researchers in the field have shown. In this paper we consider security issues in WSNs with cooperative communications. We study such issues at each of the main protocol layers: physical layer, data link layer, network layer, services (topology) layer, and application layer. For each layer, we clarify the main task, enumerate the main attacks and threats, specify the primary security approaches and techniques, if any, and discuss possible new attacks and problems that may arise with the use of cooperative communications. Further, we show for some attacks (e.g. jamming, packet dropping, and wormhole) that using cooperative communication improves the network resiliency and reliability. This paper builds the foundations and clarifies the specifications for a needed security protocol in WSNs with cooperative communications that can enhance their performance and resiliency against cyber-attacks.
Keywords: cooperative communication; protocols; telecommunication network reliability; telecommunication security; wireless sensor networks; WSN; application layer; cyber-attack; data link layer; network layer; physical layer; security issue; service layer; wireless sensor network reliability; Cooperative communication; Jamming; Protocols; Relays; Security; Sensors; Wireless sensor networks; Cooperative Communication; Security attacks; resiliency (ID#: 16-9576)


L. Kypus, L. Vojtech and J. Hrad, “Security of ONS Service for Applications of the Internet of Things and Their Pilot Implementation in Academic Network,” Carpathian Control Conference (ICCC), 2015 16th International, Szilvasvarad, 2015, pp. 271-276. doi: 10.1109/CarpathianCC.2015.7145087
Abstract: The aim of the Object Name Services (ONS) project was to find a robust and stable way of automated communication to utilize name and directory services to support the radio-frequency identification (RFID) ecosystem, mainly in a way that can leverage open source and standardized services and the capability to be secured. All this work contributed to the presentation of new RFID services and of the capabilities of heterogeneous Internet of Things (IoT) environments. There is an increasing demand in transferred data volumes associated with each and every IP or non-IP discoverable object, for example RFID-tagged objects and sensors, as well as a need to bridge the remaining communication compatibility issues between these two independent worlds. RFID and IoT ecosystems require sensitive implementation of security approaches and methods. There are still significant risks associated with their operations due to the nature of the content. One of the reasons for past failures could be the lack of security as an integral part of the design of each particular product that is supposed to build ONS systems. Although we focused mainly on availability and confidentiality concerns in this paper, there are still some remaining areas to be researched. We tried to identify the hardening impact by metrics evaluating the operational status, resiliency, responsiveness and performance of the managed ONS solution design. The design of a redundant and hardened testing environment gave us visibility into the assurance of internal communication security and showed the behavior of the components under load in such a complex information service, with respect to the overall quality of the delivered ONS service.
Keywords: Internet of Things; radiofrequency identification; telecommunication security; ONS service; RFID; academic network; object name services; radio-frequency identification; Operating systems; Protocols; Radiofrequency identification; Security; Servers; Standards; Virtual private networks; IPv6; ONS; security hardening (ID#: 16-9577)


A. Fressancourt and M. Gagnaire, “A SDN-Based Network Architecture for Cloud Resiliency,” Consumer Communications and Networking Conference (CCNC), 2015 12th Annual IEEE, Las Vegas, NV, 2015, pp. 479-484. doi: 10.1109/CCNC.2015.7158022
Abstract: In spite of their commercial success, Cloud services are still subject to two major weak points: data security and infrastructure resiliency. In this paper, we propose an original Cloud network architecture aiming at improving the resiliency of Cloud network infrastructures interconnecting remote data centers. The main originality of this architecture consists in exploiting the principles of Software Defined Networking (SDN) in order to adapt the rerouting strategies in case of network failure according to a set of requirements. In existing Cloud network configurations, network recovery after a fiber cut is achieved by means of redundant bandwidth capacity preplanned through backup links. Such an approach has two drawbacks. First, it induces at a large scale a non-negligible additional cost for the Cloud Service Providers (CSP). Second, the pre-computation of the rerouting strategy may not be suited to the specific quality of service requirements of the various data flows that were transiting on the failing link. To prevent these two drawbacks, we propose that CSPs deploy their services in several redundant data centers and make sure that those data centers are properly interconnected via the Internet. For that purpose, we propose that a CSP may use the services of multiple (typically two) Internet Service Providers to interconnect its data centers via the Internet. In practice, we propose that a set of “routing inflection points” may form an overlay network exploiting a specific routing strategy. We propose that this overlay is coordinated by a Software Defined Networking-based centralized controller. Thus, such a CSP may choose the network path between two data centers that is most suited to the underlying traffic QoS requirement. The proposed approach gives this CSP a certain independence from its network providers. In this paper, we present this new Cloud architecture. We outline how our approach mixes concepts taken from both SDN and Segment Routing. Unlike the protection techniques used by existing CSPs, we explain how this approach can be used to implement a fast rerouting strategy for inter-data center data exchanges.
Keywords: cloud computing; computer network security; quality of service; software defined networking; telecommunication network routing; telecommunication traffic; CSP; Internet service providers; SDN based network architecture; cloud network architecture; cloud network infrastructures; cloud networks configurations; cloud resiliency; cloud service providers; cloud services; data centers; data security; fiber cut; network failure; remote data centers; rerouting strategy; routing strategy; traffic QoS requirement; Computer architecture; Internet; Multiprotocol label switching; Peer-to-peer computing; Routing; Routing protocols; Servers; Overlay; Resiliency; Segment Routing; Software-Defined Networks (ID#: 16-9578)
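The paper's overlay of “routing inflection points” coordinated by a centralized SDN controller suggests per-flow path selection between data centers. The sketch below is purely illustrative: a hypothetical two-ISP overlay topology (none of the node names or link attributes come from the paper) and a generic Dijkstra search whose cost function can be swapped per traffic class, so latency-sensitive and loss-sensitive flows get different paths.

```python
import heapq

def best_path(links, src, dst, metric):
    """Dijkstra over an overlay graph of routing inflection points.
    `links[u]` maps neighbor -> attribute dict; `metric` turns the
    attributes into a cost, so each flow class can weigh latency vs loss."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, attrs in links.get(u, {}).items():
            nd = d + metric(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Hypothetical overlay: two ISPs offer two ways between data centers A and B.
overlay = {
    "A":    {"isp1": {"latency": 10, "loss": 0.02},
             "isp2": {"latency": 30, "loss": 0.001}},
    "isp1": {"B": {"latency": 10, "loss": 0.02}},
    "isp2": {"B": {"latency": 5, "loss": 0.001}},
}
latency_first = best_path(overlay, "A", "B", lambda a: a["latency"])
loss_first = best_path(overlay, "A", "B", lambda a: a["loss"])
```

With these made-up numbers the controller routes delay-sensitive traffic through isp1 and loss-sensitive traffic through isp2, which is the kind of requirement-driven rerouting the abstract describes.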


J. D. Ansilla, N. Vasudevan, J. JayachandraBensam and J. D. Anunciya, “Data Security in Smart Grid with Hardware Implementation Against DoS Attacks,” Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, Nagercoil, 2015, pp. 1-7. doi: 10.1109/ICCPCT.2015.7159274
Abstract: The Smart Grid is being cultivated briskly and ingeniously, and the attacks breeding against it sow damage on a massive scale. This state of affairs makes security a sapling that must incessantly be irrigated with research and analysis. In this work, cyber security is endowed with resiliency against the SYN-flooding-induced denial-of-service attack. The proposed secure web server algorithm, embedded in the LPC1768 processor, ensures that smart resources are protected from the attack.
Keywords: Internet; computer network security; power engineering computing; smart power grids; DoS attacks; LPC1768 processor; SYN flooding; cybersecurity; data security; denial of service attack; secure Web server algorithm; smart grid; smart resources; Computer crime; Computers; Floods; IP networks; Protocols; Servers; ARM Processor; DoS; Hardware Implementation; SYN flooding; Smart Grid (ID#: 16-9579)
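The paper embeds its secure web server algorithm in the LPC1768 and does not publish the algorithm itself. As a purely illustrative sketch of one common SYN-flood mitigation, the class below caps half-open handshakes per source address within a sliding time window; the threshold, window, and class names are all assumptions, not the paper's method.

```python
import time
from collections import defaultdict

class SynFloodGuard:
    """Track half-open connections per source IP and drop new SYNs
    from sources that exceed a threshold within a sliding window."""

    def __init__(self, max_half_open=10, window=5.0):
        self.max_half_open = max_half_open
        self.window = window
        self.pending = defaultdict(list)  # src_ip -> timestamps of un-ACKed SYNs

    def on_syn(self, src_ip, now=None):
        """Return True if the SYN should be accepted, False if dropped."""
        now = time.monotonic() if now is None else now
        # Expire SYNs older than the window (handshake presumed abandoned).
        self.pending[src_ip] = [t for t in self.pending[src_ip]
                                if now - t < self.window]
        if len(self.pending[src_ip]) >= self.max_half_open:
            return False  # likely flooding: refuse to allocate more state
        self.pending[src_ip].append(now)
        return True

    def on_ack(self, src_ip):
        """Handshake completed: release one pending slot."""
        if self.pending[src_ip]:
            self.pending[src_ip].pop(0)
```

On a constrained embedded target the same idea is usually combined with SYN cookies, which avoid per-connection state entirely.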


A. S. Prasad, D. Koll and X. Fu, “On the Security of Software-Defined Networks,” Software Defined Networks (EWSDN), 2015 Fourth European Workshop on, Bilbao, 2015, pp. 105-106. doi: 10.1109/EWSDN.2015.70
Abstract: To achieve a widespread deployment of Software-Defined Networks (SDNs) these networks need to be secure against internal and external misuse. Yet, currently, compromised end hosts, switches, and controllers can be easily exploited to launch a variety of attacks on the network itself. In this work we discuss several attack scenarios, which — although they have a serious impact on SDN — have not been thoroughly addressed by the research community so far. We evaluate currently existing solutions against these scenarios and formulate the need for more mature defensive means.
Keywords: computer network security; software defined networking; SDNs; software-defined network security; Computer crime; Fabrication; Network topology; Switches; Topology; Resiliency; SDN; Security; Software-defined Networks (ID#: 16-9580)


F. Machida, M. Fujiwaka, S. Koizumi and D. Kimura, “Optimizing Resiliency of Distributed Video Surveillance System for Safer City,” Software Reliability Engineering Workshops (ISSREW), 2015 IEEE International Symposium on, Gaithersburg, MD, 2015, pp. 17-20. doi: 10.1109/ISSREW.2015.7392029
Abstract: Real-time video surveillance is becoming an important function for keeping a city safe by monitoring places that attract crowds. The system needs to be resilient so that it can detect abnormal events in a timely manner and swiftly deliver alerts to security agencies whenever events occur. In this paper, we present an architecture for a resilient video surveillance system that persists even when the workload of video analysis surges due to changes in a target physical area. The proposed architecture is based on a distributed computing platform which can allocate virtual machines dynamically in response to local demand increases. In order to estimate the necessary amount of resources for video analysis, we propose a socio-ICT model that consists of a system dynamics model and a queueing model. A simulation study on the model demonstrates how our platform can adapt to changes in the target physical area and improve resiliency.
Keywords: distributed processing; queueing theory; video signal processing; video surveillance; virtual machines; distributed computing platform; distributed video surveillance system; information and communications technology; queueing model; resilient video surveillance system; socio-ICT model; system dynamics; video analysis; virtual machines; Cities and towns; Computational modeling; Face; Streaming media; System dynamics; Video surveillance; distributed system; resiliency (ID#: 16-9581)
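The socio-ICT model pairs system dynamics with a queueing model to size the analysis VM pool. As an illustrative sketch of the queueing half only (the paper's actual model and parameters are not reproduced here), the Erlang C formula below estimates how many VMs keep the probability that an arriving video-analysis job must queue below a target.

```python
import math

def erlang_c(servers, offered_load):
    """Probability that an arriving job must wait in an M/M/c queue
    (Erlang C); offered_load = arrival_rate / service_rate."""
    a = offered_load
    if servers <= a:
        return 1.0  # unstable regime: every job eventually waits
    summation = sum(a ** k / math.factorial(k) for k in range(servers))
    top = a ** servers / math.factorial(servers) * servers / (servers - a)
    return top / (summation + top)

def vms_needed(arrival_rate, service_rate, max_wait_prob=0.05):
    """Smallest number of analysis VMs keeping P(wait) below the target."""
    a = arrival_rate / service_rate
    c = max(1, math.ceil(a))
    while erlang_c(c, a) > max_wait_prob:
        c += 1
    return c
```

For example, if a crowd event triples stream arrivals from 2 to 6 per second against a service rate of 1 per VM, `vms_needed` reports how many extra VMs the platform should spin up to hold the same waiting-probability target.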


O. Popescu and D. C. Popescu, “Sub-Band Precoded OFDM for Enhanced Physical Layer Resiliency in Wireless Communication Systems,” Communications and Networking (BlackSeaCom), 2015 IEEE International Black Sea Conference on, Constanta, 2015, pp. 1-4. doi: 10.1109/BlackSeaCom.2015.7185074
Abstract: Orthogonal Frequency Division Multiplexing (OFDM) is the preferred modulation scheme for current and future broadband wireless systems and has been incorporated in many standards. In spite of its many benefits, which include robust performance in noise, fading channels, and uncorrelated interference, OFDM performs poorly in the presence of jamming. In this paper we study the use of sub-band precoding to protect OFDM systems against jamming attacks and enhance their physical layer security.
Keywords: OFDM modulation; fading channels; radiofrequency interference; telecommunication security; broadband wireless systems; fading channels; jamming attacks; modulation scheme; orthogonal frequency division multiplexing; physical layer resiliency; physical layer security; subband precoded OFDM; uncorrelated interference; wireless communication systems; Bandwidth; Bit error rate; Discrete Fourier transforms; Jamming; OFDM; Physical layer; Wireless communication; jamming environment; precoding (ID#: 16-9582)
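The abstract does not specify the precoder, but a unitary DFT spread over a sub-band is one standard choice. The sketch below (an assumption, not the paper's design) shows the intuition: after precoding, a jammer that wipes out one subcarrier inflicts a small error on every symbol in the sub-band rather than destroying one symbol outright.

```python
import cmath

def dft_matrix(n):
    """Unitary n x n DFT matrix used as a sub-band precoder."""
    w = cmath.exp(-2j * cmath.pi / n)
    s = n ** -0.5
    return [[s * w ** (i * k) for k in range(n)] for i in range(n)]

def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(len(v))) for i in range(len(m))]

def conj_transpose(m):
    return [[m[i][k].conjugate() for i in range(len(m))] for k in range(len(m[0]))]

n = 4
W = dft_matrix(n)
symbols = [1+0j, -1+0j, 1+0j, 1+0j]   # BPSK symbols for one sub-band

precoded = matvec(W, symbols)          # spread each symbol over all n subcarriers
precoded[0] = 0j                       # jammer wipes out subcarrier 0
decoded = matvec(conj_transpose(W), precoded)

# Without precoding the jammer would destroy symbol 0 entirely; with
# precoding the loss appears as a small equal error on every symbol.
errors = [abs(d - s) for d, s in zip(decoded, symbols)]
```

In this toy case each symbol's error magnitude is 0.5 instead of one symbol being lost completely, so all four BPSK symbols remain decodable by sign.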


R. Garcia and C. E. Chow, “Identity Considerations for Public Sector Hybrid Cloud Computing Solutions,” Computer Communication and Informatics (ICCCI), 2015 International Conference on, Coimbatore, 2015, pp. 1-8. doi: 10.1109/ICCCI.2015.7218091
Abstract: Cloud computing is a relatively new paradigm that provides increased flexibility and resiliency in information technology service delivery. The inherent elasticity and cost savings in public cloud computing have attracted many in the private and ever-cautious public sectors. Presuming the construct of a hybrid cloud offering composed of a combined public and private cloud solution, the public sector (government) has a daunting dilemma in balancing resilience and security. Key to the public sector requirement is continuity of mission critical and mission support operations. When involved in national security, disaster response, defense, or homeland security missions, the criticality of service availability is elevated. Delays in service provisioning and collaboration are often as a result of identity management issues within participating organizations. This paper details the public sector cloud security and usability needs, draws on previous definitions and models for cloud security, and provides an identity management architectural model for use in some of the most critical of public sector mission sets.
Keywords: cloud computing; security of data; combined public-private cloud solution; cost savings; defense security missions; disaster response; homeland security missions; identity management; identity management architectural model; information technology service delivery; mission critical continuity; mission critical operations; mission support operations; national security; public sector cloud security; public sector hybrid cloud computing solutions; resilience balancing; security balancing; service availability criticality; service provisioning; Cloud computing; Computational modeling; Government; Mission critical systems; Mobile communication; Security; availability; confidentiality; hybrid cloud; information security management systems; integrity; public sector; security (ID#: 16-9583)


T. Xu and M. Potkonjak, “Digital PUF Using Intentional Faults,” Quality Electronic Design (ISQED), 2015 16th International Symposium on, Santa Clara, CA, 2015, pp. 448-451. doi: 10.1109/ISQED.2015.7085467
Abstract: Digital systems have numerous advantages over analog systems, including robustness and resiliency against operational variations. However, one of the most popular hardware security primitives, the PUF, has been an analog component. In this paper, we propose the concept of a digital PUF, whose core idea is to intentionally use high-risk synthesis to induce defects in circuits. Due to the effect of process variation, each manufactured digital implementation is unique with high probability. Compared to the traditional delay-based PUF, the induced circuit defects are permanent, which guarantees that the fault-based digital PUF is resilient against operational variations. Meanwhile, our proposed design takes advantage of the digital functionality of the circuits and is thus easy to integrate with digital logic. We experiment on a standard array multiplier module. Our standard security analysis indicates ideal security properties of the digital PUF.
Keywords: copy protection; integrated circuit design; logic design; fault based digital PUF; hardware security primitive; high risk synthesis; induced circuit defect; intentional fault; operational variation; physical unclonable function; process variation; standard array multiplier module; Adders; Bridge circuits; Circuit faults; Hamming distance; Logic gates; Security; Wires; Intentional Faults; Physical Unclonable Function (PUF); Security; Testing (ID#: 16-9584)
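Standard PUF security analysis typically starts from inter-chip Hamming distance; the abstract mentions such an analysis without giving formulas. The sketch below computes the usual uniqueness metric on hypothetical 16-bit responses (the bit strings are invented, not measurements from the paper's multiplier design).

```python
from itertools import combinations

def hamming(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def inter_chip_uniqueness(responses, n_bits):
    """Average pairwise fractional Hamming distance; ~0.5 is ideal,
    meaning any two chips' PUF responses look uncorrelated."""
    pairs = list(combinations(responses, 2))
    return sum(hamming(a, b) for a, b in pairs) / (len(pairs) * n_bits)

# Hypothetical 16-bit responses from three manufactured instances of
# the same fault-induced design.
chips = ["1011001110001011", "0110110010110100", "1100101101011100"]
u = inter_chip_uniqueness(chips, 16)
```

The companion metric, intra-chip Hamming distance across repeated reads of one device, should be near 0; for the fault-based digital PUF the paper argues it is exactly 0 because the induced defects are permanent.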


J. Rajamäki and R. Pirinen, “Critical Infrastructure Protection: Towards a Design Theory for Resilient Software-Intensive Systems,” Intelligence and Security Informatics Conference (EISIC), 2015 European, Manchester, 2015, pp. 184-184. doi: 10.1109/EISIC.2015.32
Abstract: Modern societies are highly dependent on the different critical software-intensive information systems that support them. Designing security for these information systems has been particularly challenging, given the technologies that make up these systems. Revolutionary advances in hardware, networking, information and human interface technologies require new ways of thinking about how these resilient software-intensive systems (SIS) are conceptualized, built and evaluated. Our research in this area is to develop a design theory (DT) for resilient SISs so that communities developing and operating different information technologies can share knowledge and best practices using a common frame of reference.
Keywords: critical infrastructures; data protection; safety-critical software; security of data; software fault tolerance; DT; SIS; critical infrastructure protection; critical software-intensive information system; design theory; software-intensive system resiliency; Computer security; Information security; Privacy; Software; Systematics; cyber security; design theory; software-intensive systems; trust-building (ID#: 16-9585)


M. S. Bruno, “Resilience Engineering: A Report on the Needs of the Stakeholder Communities and the Prospects for Responsive Educational Programs,” Interactive Collaborative Learning (ICL), 2015 International Conference on, Florence, 2015, pp. 699-702. doi: 10.1109/ICL.2015.7318113
Abstract: Recent natural and man-made disruptions in major urban areas around the globe have over the last decade spurred widespread interest in the improvement of community resilience. We here define “community” in general terms ranging from local neighborhoods to a nation (and beyond). Resilience as articulated in this manner is not easily quantified, standardized, measured, and modeled. Success will require the integration of seemingly disparate disciplines (e.g., behavioral psychology and software engineering), the involvement of widely diverse stakeholders (e.g., power authorities and the insurance industry), and perhaps even the invention of new fields of study (e.g., measurement science). Given the vast scope of this domain, and the numerous activities in the area already planned or underway around the world, it is essential that a careful assessment be conducted with the aim of identifying the applications of Resilience Engineering; the gaps in our ability to understand, communicate and improve community resilience; and the potential need for — and design of — an academic program aimed at the development of resilience professionals. Stevens Institute of Technology has since 1992 been working with Federal, State and local government officials and industry representatives to improve the resiliency of coastal communities to threats posed by natural hazards including tropical and extra-tropical storms, and flooding. These activities have included the development and delivery of a number of different coastal hazards educational programs, some tailored to engineers and planners, others to government and industry decision-makers and policy makers, and still others to the general public. Over the last 8 years, in large part due to Stevens' leadership of the National Center for Maritime Security, our work in hazard mitigation and resiliency has evolved into an All Hazards approach that includes threats posed by both natural hazards and man-made events. 
Recently, and in partnership with Lloyd's Register Foundation (LRF), Stevens hosted an international workshop to examine the role of Resilience Engineering in improving the resilience of communities and engineered systems in the range of sectors of interest to the LRF. The workshop included the participation of experts from around the world representing a diverse array of disciplines relevant to resiliency. A report summarizing the workshop findings was prepared that identified the research and education areas most needed to effectively enhance community resilience. In the present paper, we are taking this examination to the logical next step - does Resilience Engineering merit consideration as a new field of study? If yes, at what level and in what format should it be delivered in order to ensure the essential multi-disciplinary treatment of this complex topic? We examine a few of the emerging programs elsewhere around the world to provide a framework of possible approaches. Important international initiatives such as the Rockefeller Foundation's 100 Resilient Cities are opening doors to the creation of new government positions (e.g., Chief Resilience Officer); the same development has been occurring in private industry for several years. Our experience at Stevens suggests strongly that any such academic program must be tied to research, preferably research that is applications-oriented and impactful. Strong collaboration with government and private sector stakeholders is essential to success.
Keywords: education; social sciences; National Center for Maritime Security; Stevens Institute of Technology; all hazards approach; coastal communities; community resilience; hazard mitigation; hazard resiliency; resilience engineering; responsive educational programs; stakeholder communities; Conferences; Education; Hazards; Hurricanes; Industries; Resilience; Stakeholders; community; curriculum; engineering; hazards; infrastructure; resiliency; systems (ID#: 16-9586)


P. R. Vamsi and K. Kant, “A Taxonomy of Key Management Schemes of Wireless Sensor Networks,” Advanced Computing & Communication Technologies (ACCT), 2015 Fifth International Conference on, Haryana, 2015, pp. 690-696. doi: 10.1109/ACCT.2015.109
Abstract: Research on secure key management in Wireless Sensor Networks (WSNs) using cryptography has gained significant attention in the research community. However, ensuring security with public key cryptography is a challenging task due to resource limitation of sensor nodes in WSNs. In recent years, numerous researchers have proposed efficient lightweight key management schemes to establish secure communication. In this paper, the authors provide a study on several key management schemes developed for WSNs and their taxonomy with respect to various network and security metrics.
Keywords: public key cryptography; telecommunication security; wireless sensor networks; WSN; network metrics; resource limitation; secure communication; secure key management schemes; security metrics; sensor nodes; Authentication; Cryptography; Peer-to-peer computing; Polynomials; Protocols; Wireless sensor networks; Key distribution; Key management; Resiliency; Security (ID#: 16-9587)
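Taxonomies of WSN key management commonly compare schemes on connectivity and resiliency metrics. As an illustrative sketch (the survey's own metric definitions are not reproduced here), the functions below compute two standard quantities for random key predistribution in the style of Eschenauer and Gligor: the chance two nodes share a key, and the fraction of other links exposed when nodes are captured.

```python
import math

def shared_key_prob(pool_size, ring_size):
    """Probability two nodes share at least one key under random
    predistribution: 1 - C(P-k, k) / C(P, k)."""
    return 1 - math.comb(pool_size - ring_size, ring_size) / math.comb(pool_size, ring_size)

def links_compromised(pool_size, ring_size, captured_nodes):
    """Expected fraction of other links exposed when nodes are captured,
    the standard resiliency metric for probabilistic key predistribution:
    1 - (1 - k/P)^x."""
    return 1 - (1 - ring_size / pool_size) ** captured_nodes
```

The trade-off these formulas make visible is exactly the one such taxonomies organize around: a bigger key ring raises connectivity but also raises the damage each captured node does.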


R. Rastgoufard, I. Leevongwat and P. Rastgoufard, “Impact of Hurricanes on Gulf Coast Electric Grid Islanding of Industrial Plants,” Power Systems Conference (PSC), 2015 Clemson University, Clemson, SC, 2015, pp. 1-5. doi: 10.1109/PSC.2015.7101692
Abstract: The purpose of this study is to determine the impact of seasonal hurricanes and tropical storms on the security and delivery of electricity in Gulf Coast states' electric grid to industrial customers. The Gulf Coast in general and the state of Louisiana in particular include a relatively high number of industrial plants that are connected to the electric grid, and continuity of electricity to these industrial plants is of vital importance in flow of energy from Gulf Coast states to the rest of the nation. The purpose of this paper is to identify the tropical storms and their characteristics in the last fifty years, to determine the existing industrial plants, including their products and services, and to develop an algorithm that results in the impact of tropical storms and hurricanes on continuity of electricity to any specific industrial plant in the Gulf Coast geographical area. The study determines “islanding” of part of the grid that may result from the impact of the simulated tropical storms and would provide sufficient information to plant managers for continuing or halting their plant operation in time prior to the landing of tropical storms or hurricanes. This paper includes a summary of simulation results of 50 historical and 26 hypothetical tropical storms on selected industrial plants in the geographical area. To display the usefulness and practicality of the developed algorithm, we will include results of one case study in the state of Louisiana. Further work on use of the developed algorithm in hardening and development of resilient transmission system in Gulf Coast states will be reported in future publications.
Keywords: distributed power generation; industrial plants; power distribution faults; storms; Louisiana; electric grid islanding; geographical area; gulf coast; industrial plants; resilient transmission system; seasonal hurricanes; tropical storms; Biological system modeling; Hurricanes; Industrial plants; Power transmission lines; Storms; Tropical cyclones; Wind speed; power system analysis; power system resiliency; power system security and reliability; transmission system hardening (ID#: 16-9588)


E. W. Fulp, H. D. Gage, D. J. John, M. R. McNiece, W. H. Turkett and X. Zhou, “An Evolutionary Strategy for Resilient Cyber Defense,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417814
Abstract: Many cyber attacks can be attributed to poorly configured software, where administrators are often unaware of insecure settings due to the configuration complexity or the novelty of an attack. A resilient configuration management approach would address this problem by updating configuration settings based on current threats while continuing to render useful services. This responsive and adaptive behavior can be obtained using an evolutionary algorithm, where security measures of current configurations are employed to evolve new configurations. Periodically, these configurations are applied across a collection of computers, changing the systems' attack surfaces and reducing their exposure to vulnerabilities. The effectiveness of this evolutionary strategy for defending RedHat Linux Apache web-servers is analyzed experimentally through a study of configuration fitness, population diversity, and resiliency observations. Configuration fitness reflects the level of system confidentiality, integrity and availability; whereas, population diversity gauges the heterogeneous nature of the configuration sets. The computers' security depends upon the discovery of a diverse set of highly fit parameter configurations. Resilience reflects the evolutionary algorithm's adaptability to new security threats. Experimental results indicate the approach is able to determine and maintain secure parameter settings when confronted with a variety of simulated attacks over time.
Keywords: Internet; Linux; evolutionary computation; security of data; RedHat Linux Apache web-servers; adaptive behavior; configuration complexity; configuration management approach; cyber attacks; evolutionary algorithm; evolutionary strategy; resilient cyber defense; security threats; Biological cells; Computers; Guidelines; Security; Sociology; Software; Statistics (ID#: 16-9589)
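The evolutionary configuration strategy the abstract describes can be sketched in a few lines. The bit-vector configuration encoding, the fitness function, and all parameters below are illustrative assumptions made for this sketch, not the authors' actual Apache parameter model.

```python
import random

# Toy sketch of an evolutionary configuration strategy: evolve boolean
# configuration settings away from values a simulated attack exploits.
N_PARAMS = 8          # number of boolean configuration settings (assumed)
POP_SIZE = 20
GENERATIONS = 30

def fitness(config, insecure_mask):
    """Score a configuration: one point per setting that avoids the
    currently-attacked (insecure) value."""
    return sum(1 for c, bad in zip(config, insecure_mask) if c != bad)

def evolve(insecure_mask, rng=random.Random(0)):
    pop = [[rng.randint(0, 1) for _ in range(N_PARAMS)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=lambda c: fitness(c, insecure_mask), reverse=True)
        survivors = pop[:POP_SIZE // 2]          # keep the fittest half
        children = []
        while len(survivors) + len(children) < POP_SIZE:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_PARAMS)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # occasional mutation
                i = rng.randrange(N_PARAMS)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, insecure_mask))

# Simulated threat: an attack succeeds when settings 0-3 are enabled.
best = evolve(insecure_mask=[1, 1, 1, 1, 0, 0, 0, 0])
```

In the paper, fitness instead reflects measured confidentiality, integrity, and availability of live web-server configurations; the mask here merely simulates a current threat.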


B. Bhargava, P. Angin, R. Ranchal and S. Lingayat, “A Distributed Monitoring and Reconfiguration Approach for Adaptive Network Computing,” Reliable Distributed Systems Workshop (SRDSW), 2015 IEEE 34th Symposium on, Montreal, QC, 2015, pp. 31-35. doi: 10.1109/SRDSW.2015.16
Abstract: The past decade has witnessed immense developments in the field of network computing thanks to the rise of the cloud computing paradigm, which enables shared access to a wealth of computing and storage resources without needing to own them. While cloud computing facilitates on-demand deployment, mobility and collaboration of services, mechanisms for enforcing security and performance constraints when accessing cloud services are still at an immature state. The highly dynamic nature of networks and clouds makes it difficult to guarantee any service level agreements. On the other hand, providing quality of service guarantees to users of mobile and cloud services that involve collaboration of multiple services is contingent on the existence of mechanisms that give accurate performance estimates and security features for each service involved in the composition. In this paper, we propose a distributed service monitoring and dynamic service composition model for network computing, which provides increased resiliency by adapting service configurations and service compositions to various types of changes in context. We also present a greedy dynamic service composition algorithm to reconfigure service orchestrations to meet user-specified performance and security requirements. Experiments with the proposed algorithm and the ease-of-deployment of the proposed model on standard cloud platforms show that it is a promising approach for agile and resilient network computing.
Keywords: cloud computing; quality of service; security of data; software fault tolerance; software prototyping; agile network computing; distributed service monitoring; dynamic service composition model; greedy dynamic service composition algorithm; quality of service; security requirement; service orchestration reconfiguration; Cloud computing; Context; Heuristic algorithms; Mobile communication; Monitoring; Quality of service; Security; adaptability; agile computing; monitoring; resilience; service-oriented computing (ID#: 16-9590)
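The greedy composition step can be illustrated with a toy catalogue: at each stage of the composition, pick the candidate service with the lowest estimated latency among those meeting a minimum security level. The service names, latency figures, and security scores below are invented for this sketch.

```python
# Illustrative greedy service composition under performance and
# security constraints (a simplified stand-in for the paper's algorithm).

def greedy_compose(stages, min_security):
    """stages: list of candidate lists; each candidate is a tuple
    (name, latency_ms, security_level). Returns (chosen names, total latency)."""
    chosen, total_latency = [], 0.0
    for candidates in stages:
        ok = [c for c in candidates if c[2] >= min_security]
        if not ok:
            raise ValueError("no candidate meets the security requirement")
        best = min(ok, key=lambda c: c[1])   # fastest admissible service
        chosen.append(best[0])
        total_latency += best[1]
    return chosen, total_latency

catalogue = [
    [("auth-A", 12.0, 3), ("auth-B", 8.0, 1)],    # auth-B is fast but insecure
    [("store-A", 30.0, 2), ("store-B", 25.0, 3)],
]
plan, latency = greedy_compose(catalogue, min_security=2)
```

A reconfiguration event (a node failure, a degraded latency estimate) would simply rerun the same selection over the updated catalogue.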


L. F. Cómbita, J. Giraldo, A. A. Cárdenas and N. Quijano, “Response and Reconfiguration of Cyber-Physical Control Systems: A Survey,” Automatic Control (CCAC), 2015 IEEE 2nd Colombian Conference on, Manizales, 2015, pp. 1-6. doi: 10.1109/CCAC.2015.7345181
Abstract: The integration of physical systems with distributed embedded computing and communication devices offers advantages on reliability, efficiency, and maintenance. At the same time, these embedded computers are susceptible to cyber-attacks that can harm the performance of the physical system, or even drive the system to an unsafe state; therefore, it is necessary to deploy security mechanisms that are able to automatically detect, isolate, and respond to potential attacks. Detection and isolation mechanisms have been widely studied for different types of attacks; however, automatic response to attacks has attracted considerably less attention. Our goal in this paper is to identify trends and recent results on how to respond and reconfigure a system under attack, and to identify limitations and open problems. We have found two main types of attack protection: (i) preventive, which identifies the vulnerabilities in a control system and then increases its resiliency by modifying either control parameters or the redundancy of devices; (ii) reactive, which responds as soon as the attack is detected (e.g., modifying the non-compromised controller actions).
Keywords: embedded systems; game theory; redundancy; security of data; attack protection; cyber-physical control system; detection mechanism; distributed embedded computing; embedded computer; isolation mechanism; reliability; security mechanism; Actuators; Game theory; Games; Security; Sensor systems (ID#: 16-9591)


S. Trajanovski, F. A. Kuipers, Y. Hayel, E. Altman and P. Van Mieghem, “Designing Virus-Resistant Networks: A Game-Formation Approach,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 294-299. doi: 10.1109/CDC.2015.7402216
Abstract: Forming, in a decentralized fashion, an optimal network topology while balancing multiple, possibly conflicting objectives like cost, high performance, security and resiliency to viruses is a challenging endeavor. In this paper, we take a game-formation approach to network design where each player, for instance an autonomous system in the Internet, aims to collectively minimize the cost of installing links, of protecting against viruses, and of assuring connectivity. In the game, minimizing virus risk as well as connectivity costs results in sparse graphs. We show that the Nash Equilibria are trees that, according to the Price of Anarchy (PoA), are close to the global optimum, while the worst-case Nash Equilibrium and the global optimum may significantly differ for small infection rate and link installation cost. Moreover, the types of trees, in both the Nash Equilibria and the optimal solution, depend on the virus infection rate, which provides new insights into how viruses spread: for high infection rate τ, the path graph is the worst- and the star graph is the best-case Nash Equilibrium. However, for small and intermediate values of τ, trees different from the path and star graphs may be optimal.
Keywords: computer network security; computer viruses; game theory; telecommunication network topology; Internet; Nash equilibria; autonomous system; decentralized fashion; game-formation approach; global optimum; optimal network topology; price of anarchy; virus resiliency; virus security; virus-resistant network design; worst-case Nash equilibrium; Games; Nash equilibrium; Network topology; Peer-to-peer computing; Security; Stability analysis; Viruses (medical) (ID#: 16-9592)


C. Aduba and C.-h. Won, “Resilient Cumulant Game Control for Cyber-Physical Systems,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287422
Abstract: In this paper, we investigate the resilient cumulant game control problem for a cyber-physical system. The cyber-physical system is modeled as a linear hybrid stochastic system with full-state feedback. We are interested in a 2-player cumulant Nash game for a linear Markovian system with a quadratic cost function, where the players optimize their system performance by shaping the distribution of their cost function through cost cumulants. The controllers are optimally resilient against control feedback gain variations. We formulate and solve the coupled first and second cumulant Hamilton-Jacobi-Bellman (HJB) equations for the dynamic game. In addition, we derive the optimal players' strategy for the second cost cumulant function. The efficiency of our proposed method is demonstrated by solving a numerical example.
Keywords: Markov processes; game theory; optimisation; security of data; HJB equation; Hamilton-Jacobi-Bellman equation; Nash game; control feedback gain variation; cumulant game control resiliency; cyber-physical system; full-state feedback; linear Markovian system; linear hybrid stochastic system; quadratic cost function optimization; security vulnerability; Cost function; Cyber-physical systems; Games; Mathematical model; Nash equilibrium; Trajectory (ID#: 16-9593)


S. Nabavi and A. Chakrabortty, “An Intrusion-Resilient Distributed Optimization Algorithm for Modal Estimation in Power Systems,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 39-44. doi: 10.1109/CDC.2015.7402084
Abstract: In this paper we present an intrusion-resilient distributed algorithmic approach to estimate the electro-mechanical oscillation modes of a large power system using Synchrophasor measurements. For this, we first show how to distribute the centralized Prony method over a network consisting of several computational areas using a distributed variant of alternating direction method of multipliers (D-ADMM). We then add a cross-verification step to show the resiliency of this algorithm against the cyber-attacks that may happen in the form of data manipulation. We illustrate the robustness of our method in face of intrusion for a case study on IEEE 68-bus power system.
Keywords: optimisation; phasor measurement; power engineering computing; power system state estimation; security of data; D-ADMM; IEEE 68-bus power system; centralized Prony method; cross-verification step; cyber-attacks; distributed variant of alternating direction method of multipliers; electro-mechanical oscillation modes; intrusion-resilient distributed optimization algorithm; large power system; modal estimation; power systems; synchrophasor measurements; Computer architecture; Damping; Distributed databases; Estimation; Handheld computers; Phasor measurement units; Power systems (ID#: 16-9594)
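The centralized Prony step that the paper distributes via ADMM can be illustrated on a synthetic single-mode ringdown: fit a linear predictor to the measured samples, then read the damping and oscillation frequency off the predictor's roots. The second-order model and signal parameters below are assumptions made for this sketch.

```python
# Minimal Prony-style modal estimation on a synthetic ringdown signal.
import cmath, math

def prony_order2(y):
    """Least-squares fit of y[k+2] = a1*y[k+1] + a2*y[k];
    returns the two discrete-time modes (roots of z^2 - a1*z - a2)."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for k in range(len(y) - 2):
        s11 += y[k+1] * y[k+1]; s12 += y[k+1] * y[k]; s22 += y[k] * y[k]
        b1 += y[k+1] * y[k+2]; b2 += y[k] * y[k+2]
    det = s11 * s22 - s12 * s12          # solve the 2x2 normal equations
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    disc = cmath.sqrt(a1 * a1 + 4 * a2)
    return (a1 + disc) / 2, (a1 - disc) / 2

dt = 0.02                                # 50 samples/s, PMU-like rate
sigma, omega = -0.3, 2 * math.pi * 0.8   # damping and a 0.8 Hz swing mode
y = [math.exp(sigma * k * dt) * math.cos(omega * k * dt) for k in range(200)]

z1, z2 = prony_order2(y)
freq_hz = abs(cmath.log(z1).imag) / (2 * math.pi * dt)
damping = cmath.log(z1).real / dt
```

The distributed variant in the paper splits these least-squares sums across computational areas and reconciles them with ADMM iterations, adding a cross-verification step against manipulated data.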


G. Torres et al., “Distributed StealthNet (D-SN): Creating a Live, Virtual, Constructive (LVC) Environment for Simulating Cyber-Attacks for Test and Evaluation (T&E),” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 1284-1291. doi: 10.1109/MILCOM.2015.7357622
Abstract: The Services have become increasingly dependent on their tactical networks for mission command functions, situational awareness, and target engagements (terminal weapon guidance). While the network brings an unprecedented ability to project force by all echelons in a mission context, it also brings the increased risk of cyber-attack on the mission operation. With both this network use and vulnerability in mind, it is necessary to test new systems (and networked Systems of Systems (SoS)) in a cyber-vulnerable network context. A new test technology, Distributed-StealthNet (D-SN), has been created by the Department of Defense Test Resource Management Center (TRMC) to support SoS testing with cyber-attacks against mission threads. D-SN is a simulation/emulation based virtual environment that can provide a representation of a full scale tactical network deployment (both Radio Frequency (RF) segments and wired networks at command posts). D-SN has models of real world cyber threats that affect live tactical systems and networks. D-SN can be integrated with live mission Command and Control (C2) hardware and then a series of cyber-attacks using these threat models can be launched against the virtual network and the live hardware to determine the SoS's resiliency to sustain the tactical mission. This paper describes this new capability and the new technologies developed to support this capability.
Keywords: command and control systems; computer network security; military communication; wide area networks; C2 hardware; Command and Control hardware; D-SN; LVC environment; T&E; TRMC; cyberattack simulation; cybervulnerable network context; department of defense test resource management center; distributed stealthnet; live, virtual, constructive environment; tactical network; test and evaluation; Computational modeling; Computer architecture; Computers; Hardware; Ports (Computers); Real-time systems; Wide area networks (ID#: 16-9595)


Y. Wang, I. R. Chen and J. H. Cho, “Trust-Based Task Assignment in Autonomous Service-Oriented Ad Hoc Networks,” Autonomous Decentralized Systems (ISADS), 2015 IEEE Twelfth International Symposium on, Taichung, 2015, pp. 71-77. doi: 10.1109/ISADS.2015.19
Abstract: We propose and analyze a trust management protocol for autonomous service-oriented mobile ad hoc networks (MANETs) populated with service providers (SPs) and service requesters (SRs). We demonstrate the resiliency and convergence properties of our trust protocol design for service-oriented MANETs in the presence of malicious nodes performing opportunistic service attacks and slandering attacks. Further, we consider a situation in which a mission comprising dynamically arriving tasks must achieve multiple conflicting objectives, including maximizing the mission reliability, minimizing the utilization variance, and minimizing the delay to task completion. We devise a trust-based heuristic algorithm to solve this multi-objective optimization problem with a linear runtime complexity, thus allowing dynamic node-to-task assignment to be performed at runtime. Through extensive simulation, we demonstrate that our trust-based node-to-task assignment algorithm outperforms a non-trust-based counterpart using blacklisting techniques while performing close to the ideal solution quality with perfect knowledge of node reliability over a wide range of environmental conditions.
Keywords: access protocols; mobile ad hoc networks; optimisation; telecommunication security; autonomous service-oriented mobile ad hoc networks MANET; blacklisting techniques; dynamic node-to-task assignment; linear runtime complexity; malicious nodes; multiobjective optimization problem; opportunistic service attacks; service providers; service requesters; slandering attacks; trust management protocol; trust-based heuristic algorithm; trust-based task assignment; Ad hoc networks; Heuristic algorithms; Mobile computing; Peer-to-peer computing; Protocols; Reliability; Runtime; multi-objective optimization; performance analysis; service-oriented mobile ad hoc networks; task assignment; trust (ID#: 16-9596)
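A drastically simplified, greedy flavor of trust-based node-to-task assignment can be sketched as follows: each arriving task goes to the most-trusted node, with load used as a tiebreaker so utilization stays balanced. The node names and trust values are invented, and the paper's heuristic balances multiple mission objectives rather than this single lexicographic rule.

```python
# Toy trust-based task assignment: prefer high trust, then low load.

def assign_tasks(tasks, nodes):
    """tasks: list of task names; nodes: dict name -> {'trust': t, 'load': n}.
    Returns a task -> node mapping; mutates node loads as tasks arrive."""
    assignment = {}
    for task in tasks:
        node = max(nodes, key=lambda n: (nodes[n]['trust'], -nodes[n]['load']))
        assignment[task] = node
        nodes[node]['load'] += 1
    return assignment

nodes = {
    'n1': {'trust': 0.9, 'load': 0},
    'n2': {'trust': 0.9, 'load': 0},
    'n3': {'trust': 0.2, 'load': 0},   # low-trust (possibly malicious) node
}
result = assign_tasks(['t1', 't2', 't3', 't4'], nodes)
```

The low-trust node is starved of tasks, which mirrors how the protocol's trust scores shield the mission from opportunistic and slandering attackers, and the linear cost per assignment reflects the runtime complexity the abstract highlights.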


S. Subha and U. G. Sankar, “Message Authentication and Wormhole Detection Mechanism in Wireless Sensor Network,” Intelligent Systems and Control (ISCO), 2015 IEEE 9th International Conference on, Coimbatore, 2015, pp. 1-4. doi: 10.1109/ISCO.2015.7282382
Abstract: Message authentication is one of the most effective ways to prevent unauthorized and corrupted messages from being forwarded in a wireless sensor network. However, existing schemes suffer from high computational and communication overhead, in addition to a lack of scalability and resilience to node compromise attacks. To address these issues, a polynomial-based scheme was recently introduced. However, this scheme and its extensions all have the weakness of a built-in threshold determined by the degree of the polynomial: when the number of messages transmitted is larger than this threshold, the adversary can fully recover the polynomial. In the existing system, an unconditionally secure and efficient source anonymous message authentication (SAMA) scheme is presented, based on the optimal modified Elgamal signature (MES) scheme on elliptic curves. The MES scheme is secure against adaptive chosen-message attacks in the random oracle model. The scheme enables intermediate nodes to authenticate messages so that corrupted messages can be detected and dropped to conserve sensor power. While achieving compromise resiliency, flexible-time authentication, and source identity protection, the scheme does not have the threshold problem and allows any node to transmit an unlimited number of messages. This method detects black hole and gray hole attacks, but it does not detect wormhole attacks, which are among the most harmful attacks because they degrade network performance. In the proposed system, therefore, an efficient wormhole detection mechanism for wireless sensor networks is introduced. The method considers the RTT between two successive nodes and those nodes' neighbor counts, which are compared against the values of other successive node pairs. The identification of wormhole attacks rests on two observations. The first is that the transmission time between two nodes affected by a wormhole attack is considerably higher than that between two normal neighbor nodes. The second is that, by introducing new links into the network, the adversary increases the number of neighbors of the nodes within its radius. Experimental results show that the proposed method achieves high network performance.
Keywords: polynomials; telecommunication security; wireless sensor networks; MES scheme; SAMA; adaptive chosen message attacks; black hole attacks; corrupted message; elliptic curves; gray hole attacks; message authentication; modified Elgamal signature; node compromise attacks; polynomial based scheme; random oracle model; source anonymous message authentication; unauthorized message; unlimited number; wireless sensor network; wormhole detection mechanism; Computational modeling; Cryptography; Scalability; Terminology; Hop-by-hop authentication; public-key cryptosystem; source privacy (ID#: 16-9597)
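The two-sided wormhole check described in the abstract can be sketched directly: flag a link when its RTT is far above the median link RTT and both endpoints report an inflated neighbor count. The data layout and thresholds below are illustrative assumptions, not the paper's calibrated values.

```python
# Sketch of RTT-plus-neighbor-count wormhole detection.

def detect_wormhole(links, rtt_factor=2.0, nbr_factor=1.5):
    """links: list of dicts with 'rtt' (between successive nodes) and
    'nbrs_a'/'nbrs_b' (neighbor counts of the two endpoints).
    Returns indices of suspected wormhole-affected links."""
    rtts = sorted(l['rtt'] for l in links)
    median_rtt = rtts[len(rtts) // 2]
    all_nbrs = [n for l in links for n in (l['nbrs_a'], l['nbrs_b'])]
    avg_nbrs = sum(all_nbrs) / len(all_nbrs)
    suspects = []
    for i, l in enumerate(links):
        high_rtt = l['rtt'] > rtt_factor * median_rtt        # observation 1
        high_deg = (l['nbrs_a'] > nbr_factor * avg_nbrs and  # observation 2
                    l['nbrs_b'] > nbr_factor * avg_nbrs)
        if high_rtt and high_deg:
            suspects.append(i)
    return suspects

links = [
    {'rtt': 1.0, 'nbrs_a': 4, 'nbrs_b': 5},
    {'rtt': 1.2, 'nbrs_a': 5, 'nbrs_b': 4},
    {'rtt': 1.1, 'nbrs_a': 4, 'nbrs_b': 4},
    {'rtt': 6.0, 'nbrs_a': 11, 'nbrs_b': 12},   # tunneled (wormhole) link
]
flagged = detect_wormhole(links)
```

Requiring both signals to fire keeps a single noisy RTT sample from raising a false alarm.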


G. Patounas, Y. Zhang and S. Gjessing, “Evaluating Defence Schemes Against Jamming in Vehicle Platoon Networks,” Intelligent Transportation Systems (ITSC), 2015 IEEE 18th International Conference on, Las Palmas, 2015, pp. 2153-2158. doi: 10.1109/ITSC.2015.348
Abstract: This paper studies Intelligent Transportation Systems (ITS), Vehicular Ad hoc Networks (VANETs) and their role in future transport. It focuses on prevention, detection and mitigation of denial of service attacks in a vehicle platoon. We evaluate the physical workings of a vehicle platoon as well as the wireless communication between the vehicles and the possibility of malicious interference. Defence methods against jamming attacks are implemented and tested. These include methods for interference reduction, data redundancy and warning systems based on on-board vehicle sensors. The results presented are positive, and the defences are successful in increasing a vehicle platoon's resiliency to attacks.
Keywords: intelligent transportation systems; jamming; radiofrequency interference; telecommunication security; vehicular ad hoc networks; ITS; VANET; data redundancy; defence methods; denial of service attacks; evaluating defence schemes; interference reduction; jamming attacks; malicious interference; on-board vehicle sensors; vehicle platoon networks; vehicular ad hoc networks; warning systems; wireless communication; Array signal processing; Global Positioning System; Interference; Jamming; Sensors; Vehicles; Wireless communication (ID#: 16-9598)


C. Lee, H. Shim and Y. Eun, “Secure and Robust State Estimation Under Sensor Attacks, Measurement Noises, and Process Disturbances: Observer-Based Combinatorial Approach,” Control Conference (ECC), 2015 European, Linz, 2015, pp. 1872-1877. doi: 10.1109/ECC.2015.7330811
Abstract: This paper presents a secure and robust state estimation scheme for continuous-time linear dynamical systems. The method is secure in that it correctly estimates the states under sensor attacks by exploiting sensing redundancy, and it is robust in that it guarantees a bounded estimation error despite measurement noises and process disturbances. In this method, an individual Luenberger observer (of possibly smaller size) is designed from each sensor. Then, the state estimates from each of the observers are combined through a scheme motivated by error correction techniques, which results in estimation resiliency against sensor attacks under a mild condition on the system observability. Moreover, in the state estimates combining stage, our method reduces the search space of a minimization problem to a finite set, which substantially reduces the required computational effort.
Keywords: continuous time systems; error correction; linear systems; observers; redundancy; robust control; security; Luenberger observer; bounded estimation error; continuous-time linear dynamical system; error correction technique; observer-based combinatorial approach; robust state estimation; search space; secure state estimation; sensor attack; Indexes; Minimization; Noise measurement; Observers; Redundancy; Robustness (ID#: 16-9599)
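The resilience mechanism here, combining redundant per-sensor estimates so that a minority of attacked sensors cannot bias the result, can be illustrated with a coordinate-wise median. This is a simplified stand-in for the paper's observer-based combinatorial search, with invented numbers; it assumes each sensor's Luenberger observer has already produced a state estimate.

```python
# Combine redundant state estimates so a minority of compromised
# sensors cannot corrupt the final estimate.

def median(values):
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def combine_estimates(estimates):
    """estimates: list of state vectors, one per sensor's observer.
    Returns the coordinate-wise median estimate."""
    dim = len(estimates[0])
    return [median([e[i] for e in estimates]) for i in range(dim)]

# Five observers tracking a true state of roughly [1.0, -2.0]:
# four are honest (small measurement noise), one is under attack.
estimates = [
    [1.01, -2.02],
    [0.98, -1.97],
    [1.02, -2.01],
    [0.99, -2.03],
    [50.0, 40.0],     # compromised sensor injects a large bias
]
x_hat = combine_estimates(estimates)
```

With five observers, the median tolerates up to two arbitrarily corrupted estimates per coordinate, which parallels the redundancy condition on observability the abstract mentions.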


K. Futamura et al., “vDNS Closed-Loop Control: A Framework for an Elastic Control Plane Service,” Network Function Virtualization and Software Defined Network (NFV-SDN), 2015 IEEE Conference on, San Francisco, CA, 2015, pp. 170-176. doi: 10.1109/NFV-SDN.2015.7387423
Abstract: Virtual Network Functions (VNFs) promise great efficiencies in deploying and operating new services, in terms of performance, resiliency and cost. However, today most operational VNF clouds are still generally static after their initial instantiation, thus not realizing many of the potential benefits of virtualization and enhanced orchestration. In this paper, we explore a large-scale operational instantiation of a virtual Domain Name System (vDNS) and present an analytical framework and platform to improve its efficiency during normal and adverse network traffic conditions, such as those caused by Distributed Denial-of-Service (DDoS) attacks and site failures. Using dynamic virtual machine instantiation, we show that under normal daily cycles we can run vDNS resolvers at higher target load, increasing the transactional efficiency of the underlying hardware by more than 10%, and improving client latency due to lower recursion rates. We demonstrate a method of reducing reaction time and service impacts due to malicious network traffic, such as during a DDoS event, by automatically redeploying virtual resources at selected nodes in the network. We quantify the tradeoff between spare hardware costs and latency under site failures, taking advantage of SDN controller-based flow redirection. This work is part of AT&T's ongoing network transformation through network function virtualization (NFV), software-defined networking (SDN), and enhanced orchestration.
Keywords: cloud computing; computer network security; software defined networking; virtualisation; AT&T; DDoS event; SDN; SDN controller-based flow redirection; VNF clouds; distributed denial-of-service attacks; dynamic virtual machine instantiation; elastic control plane service; large-scale operational instantiation; malicious network traffic; network function virtualization; network traffic conditions; network transformation; reaction time; recursion rates; site failures; software-defined networking; transactional efficiency; vDNS closed-loop control; vDNS resolvers; virtual domain name system; virtual network functions; Computer crime; Conferences; Hardware; Servers; Software defined networking; Telemetry (ID#: 16-9600)
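The closed-loop elasticity idea can be sketched as a simple controller: pick the VM count that keeps per-VM load near a utilization target, scaling out under traffic surges (such as a DDoS ramp) and scaling in afterwards. The capacity, target load, and floor values below are illustrative assumptions, not AT&T's operational figures.

```python
# Toy closed-loop VM scaling in the spirit of the vDNS framework.
import math

def desired_vms(queries_per_sec, vm_capacity_qps=10000, target_load=0.8,
                min_vms=2):
    """VM count that keeps each VM near target_load utilization,
    never dropping below a resiliency floor of min_vms."""
    needed = queries_per_sec / (vm_capacity_qps * target_load)
    return max(min_vms, math.ceil(needed))

def control_step(current_vms, queries_per_sec):
    """One loop iteration: returns (new_vm_count, action)."""
    target = desired_vms(queries_per_sec)
    if target > current_vms:
        return target, "scale-out"
    if target < current_vms:
        return target, "scale-in"
    return current_vms, "hold"

vms, action = control_step(current_vms=3, queries_per_sec=50000)
```

Running resolvers at a higher target load is exactly the transactional-efficiency gain the abstract quantifies; the floor keeps spare capacity for site failures.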


J. Schneider, C. Romanowski, R. K. Raj, S. Mishra and K. Stein, “Measurement of Locality Specific Resilience,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225332
Abstract: Resilience has been defined at the local, state, and national levels, and subsequent attempts to refine the definition have added clarity. Quantitative measurements, however, are crucial to a shared understanding of resilience. This paper reviews the evolution of resiliency indicators and metrics and suggests extensions to current indicators to measure functional resilience at a jurisdictional or community level. Using a management systems approach, an input/output model may be developed to demonstrate abilities, actions, and activities needed to support a desired outcome. Applying systematic gap analysis and an improvement cycle with defined metrics, the paper proposes a model to evaluate a community's operational capability to respond to stressors. As each locality is different, with unique risks, strengths, and weaknesses, the model incorporates these characteristics and calculates a relative measure of maturity for that community. Any community can use the resulting model output to plan and improve its resiliency capabilities.
Keywords: emergency management; social sciences; community operational capability; functional resilience measurement; locality specific resilience measurement; quantitative measurement; resiliency capability; resiliency indicators; resiliency metrics; systematic gap analysis; Economics; Emergency services; Hazards; Measurement; Resilience; Standards; Training; AHP; community resilience; operational resilience modeling; resilience capability metrics (ID#: 16-9601)
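A maturity score in the spirit of the paper's gap-analysis model can be sketched as a weighted aggregate of capability indicators, normalized so that communities of different sizes are comparable. The indicator names, scales, and weights below are invented for this sketch; the paper's model derives weights from locality-specific characteristics (e.g., via AHP).

```python
# Illustrative community resilience maturity score.

def maturity_score(indicators):
    """indicators: list of (score on a 0-5 scale, locality weight).
    Returns a relative maturity measure in [0, 1]."""
    total_weight = sum(w for _, w in indicators)
    return sum(s * w for s, w in indicators) / (5 * total_weight)

community = [
    (4, 3.0),   # emergency services capacity (high local importance)
    (2, 2.0),   # critical-infrastructure redundancy
    (3, 1.0),   # public-communication readiness
]
score = maturity_score(community)
```

Rescoring after each improvement cycle turns the gap analysis into the measurable feedback loop the abstract proposes.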


K. Kaminska, “Tapping into Social Media and Digital Humanitarians for Building Disaster Resilience in Canada,” Humanitarian Technology Conference (IHTC2015), 2015 IEEE Canada International, Ottawa, ON, 2015, pp. 1-4. doi: 10.1109/IHTC.2015.7274444
Abstract: Social media offers the opportunity to connect with the public, improve situational awareness, and to reach people quickly with alerts, warnings and preparedness messages. However, the ever increasing popularity of social networking can also lead to 'information overload' which can prevent disaster management organizations from processing and using social media information effectively. This limitation can be overcome through collaboration with 'digital humanitarians' — tech savvy volunteers, who are leading the way in crisis-mapping and crowdsourcing of disaster information. Since the 2010 earthquake in Haiti, their involvement has become an integral part of the international community's response to major disasters. For example, the United Nations Office for the Coordination of Humanitarian Affairs (UNOCHA) activated the Digital Humanitarian Network during the 2013 response to typhoon Haiyan/Yolanda [1]. Our previous research has shown that Canada's disaster management community has not yet fully taken advantage of all the opportunities that social media offers, including the potential of collaboration with digital humanitarians [2]. This finding has led to the development of an experiment designed to test how social media aided collaboration can enable enhanced situational awareness and improve recovery outcomes. The experiment took place in November 2014 as a part of the third Canada-US Enhanced Resiliency Experiment (CAUSE III), which is an experiment series that focuses on enhancing resilience through situational awareness interoperability. This paper describes the results of the experiment and Canadian efforts to facilitate effective information exchange between disaster management officials, digital humanitarians as well as the public at large, so as to improve situational awareness and build resilience, both at the community and the national level.
Keywords: emergency management; social networking (online); CAUSE III; Canada-US enhanced resiliency experiment; UNOCHA; United Nations Office for the Coordination of Humanitarian Affairs; community level; digital humanitarians; disaster management organizations; disaster resilience; information overload; national level; recovery outcomes; situational awareness; social media information; social networking; Collaboration; Disaster management; Media; Organizations; Safety; Twitter; disaster management; resilience; social media; social network analysis (ID#: 16-9602)


A. Alzahrani and R. F. DeMara, “Hypergraph-Cover Diversity for Maximally-Resilient Reconfigurable Systems,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1086-1092. doi: 10.1109/HPCC-CSS-ICESS.2015.294
Abstract: Scaling trends of reconfigurable hardware (RH) and their design flexibility have proliferated their use in dependability-critical embedded applications. Although their reconfigurability can enable significant fault tolerance, due to the complexity of execution time in their design flow, in-field reconfigurability can be infeasible and thus limit such potential. This need is addressed by developing a graph and set theoretic approach, named hypergraph-cover diversity (HCD), as a preemptive design technique to shift the dominant costs of resiliency to design-time. In particular, union-free hypergraphs are exploited to partition the reconfigurable resources pool into highly separable subsets of resources, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Two novel scalable algorithms to construct union-free hypergraphs are proposed and described. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform demonstrates a statistically significant increase in fault tolerance and area efficiency when using proposed work compared to commonly-used modular redundancy approaches.
Keywords: data compression; embedded systems; field programmable gate arrays; graph theory; image coding; motion estimation; reconfigurable architectures; HCD; Motion-JPEG image compression core; RH; Xilinx 7-series-based FPGA hardware platform; area efficiency; dependability-critical embedded applications; design flexibility; execution time; fault tolerance; hypergraph-cover diversity; in-field reconfigurability; maximally-resilient reconfigurable systems; preemptive design technique; reconfigurable hardware; reconfigurable resource partitioning; reconfiguration-based resilience; resiliency costs; routing phases; runtime placement; separable resource subsets; set theoretic approach; statistical analysis; synthesized application netlist; union-free hypergraphs; Circuit faults; Embedded systems; Fault tolerance; Fault tolerant systems; Field programmable gate arrays; Hardware; Runtime; Area Efficiency; Design Diversity; FPGAs; Fault Tolerance; Hypergraphs; Reconfigurable Systems; Reliability (ID#: 16-9603)
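The core idea of shifting resiliency cost to design time can be illustrated with a toy partition: split the resource pool into disjoint subsets at design time, each large enough to host the synthesized design, so a faulty resource invalidates at most one pre-built alternative. The pool and design sizes below are invented; the paper's union-free hypergraph construction yields far richer (overlapping but separable) covers than this disjoint sketch.

```python
# Toy design-time diversity: disjoint resource subsets as alternatives.

def partition_pool(n_resources, design_size):
    """Split resource ids 0..n_resources-1 into disjoint subsets,
    each of size design_size (one per pre-built implementation)."""
    subsets = []
    for start in range(0, n_resources - design_size + 1, design_size):
        subsets.append(set(range(start, start + design_size)))
    return subsets

def surviving_alternatives(subsets, faulty):
    """Alternatives untouched by the faulty resources remain deployable
    without runtime placement and routing."""
    return [s for s in subsets if not (s & faulty)]

subsets = partition_pool(n_resources=100, design_size=20)   # 5 alternatives
alive = surviving_alternatives(subsets, faulty={7, 55})
```

Because every alternative was placed and routed before deployment, recovery is a bitstream swap rather than an in-field re-synthesis.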


P. P. Hung and E. N. Huh, “A Cost and Contention Conscious Scheduling for Recovery in Cloud Environment,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 26-31. doi: 10.1109/HPCC-CSS-ICESS.2015.325
Abstract: The Cloud Computing (CC) model plays an important role in the growth of the contemporary IT industry, where stability, availability and partition tolerance of computational resources mean a great deal. It is of utmost significance that cloud services not only be provided with satisfactory performance but also be able to minimize, and resiliently recover from, potential damage when cloud infrastructures are subject to changes and/or disasters. This study discusses a method, based on the well-known genetic algorithm, that results in potentially better resiliency and faster recovery from failures. Moreover, we aim to achieve globally optimized performance as well as a service solution that can remain financially and operationally balanced according to customer preferences. The proposed methodology has undergone varied and intensive evaluations that demonstrate its effectiveness and efficiency, even under close comparison with other existing work on relevant aspects.
Keywords: DP industry; cloud computing; costing; genetic algorithms; scheduling; system recovery; CC model; cloud computing model; cloud environment; computational resources; contemporary IT industry; contention conscious scheduling; cost; failure recovery; genetic algorithm; Arrays; Cloud computing; Genetic algorithms; Processor scheduling; Program processors; Schedules; Servers; Task scheduling; parallel computing; recovery time; big data (ID#: 16-9604)


N. Dutt, A. Jantsch and S. Sarma, “Self-Aware Cyber-Physical Systems-on-Chip,” Computer-Aided Design (ICCAD), 2015 IEEE/ACM International Conference on, Austin, TX, 2015, pp. 46-50. doi: 10.1109/ICCAD.2015.7372548
Abstract: Self-awareness has a long history in biology, psychology, medicine, and more recently in engineering and computing, where self-aware features are used to enable adaptivity to improve a system's functional value, performance and robustness. With complex many-core Systems-on-Chip (SoCs) facing the conflicting requirements of performance, resiliency, energy, heat, cost, security, etc. — in the face of highly dynamic operational behaviors coupled with process, environment, and workload variabilities — there is an emerging need for self-awareness in these complex SoCs. Unlike traditional MultiProcessor Systems-on-Chip (MPSoCs), self-aware SoCs must deploy an intelligent co-design of the control, communication, and computing infrastructure that interacts with the physical environment in real-time in order to modify the system's behavior so as to adaptively achieve desired objectives and Quality-of-Service (QoS). Self-aware SoCs require a combination of ubiquitous sensing and actuation, health-monitoring, and statistical model-building to enable the SoC's adaptation over time and space. After defining the notion of self-awareness in computing, this paper presents the Cyber-Physical System-on-Chip (CPSoC) concept as an exemplar of a self-aware SoC that intrinsically couples on-chip and cross-layer sensing and actuation using a sensor-actuator rich fabric to enable self-awareness.
Keywords: actuators; cyber-physical systems; sensors; statistical analysis; system-on-chip; SoC adaptation; communication infrastructure; computing infrastructure; control infrastructure; cross layer sensing; cross-layer actuation; dynamic operational behaviors; health monitoring; intelligent codesign; physical environment; quality-of-service; self-aware SoC; self-aware cyber-physical system-on-chip; sensor-actuator rich fabric; statistical model-building; system behavior; ubiquitous actuation; ubiquitous sensing; workload variabilities; Computational modeling; Computer architecture; Context; Predictive models; Sensors; Software; System-on-chip
(ID#: 16-9605)


A. Sajadi, R. M. Kolacinski and K. A. Loparo, “Impact of Wind Turbine Generator Type in Large-Scale Offshore Wind Farms on Voltage Regulation in Distribution Feeders,” Innovative Smart Grid Technologies Conference (ISGT), 2015 IEEE Power & Energy Society, Washington, DC, 2015, pp. 1-5. doi: 10.1109/ISGT.2015.7131785
Abstract: The goal of the U.S. Department of Energy (DOE) roadmap [1] is a 20% penetration of wind energy into the generation mix by 2030. Attaining this objective will help protect the environment and reduce fossil fuel dependency, thus improving energy security and independence. This paper discusses how the technology used in large-scale offshore wind farms impacts voltage regulation in distribution feeders. Although offshore wind farms are integrated into an interconnected power system through transmission lines, system constraints can cause stability, resiliency and reliability issues. The major types of machines used in offshore wind farms are modeled using a generic model of General Electric (GE) wind machines. The transmission and distribution system models are based on the actual existing regional FirstEnergy/PJM power grid in the Midwestern United States. In addition, the impact of installing Static VAR Compensators (SVCs) at Points of Interconnection (POI) on voltage regulation is investigated.
Keywords: offshore installations; power distribution control; static VAr compensators; voltage control; wind power plants; wind turbines; GE wind machines; General Electric wind machines; POI; SVC; distribution feeders; distribution system models; interconnected power system; large-scale offshore wind farms; points of interconnection; static VAr compensator; transmission system models; voltage regulation; wind turbine generator; Generators; Indexes; Power system stability; Static VAr compensators; Voltage control; Wind farms; Wind power generation; DFIG; Induction Machine; Offshore Wind; Synchronous Machine; Voltage Regulation; Wind Integration (ID#: 16-9606)


A. Malikopoulos, Tao Zhang, K. Heaslip and W. Fehr, “Panel 2: Connected Electrified Vehicles and Cybersecurity,” Transportation Electrification Conference and Expo (ITEC), 2015 IEEE, Dearborn, MI, 2015, pp. 1-1. doi: 10.1109/ITEC.2015.7165725
Abstract: Summary form only given, as follows. The complete presentation was not made available for publication as part of the conference proceedings. The development and deployment of a fully connected transportation system that makes the most of multi-modal, transformational applications requires a robust, underlying technological platform. The platform is a combination of well-defined technologies, interfaces, and processes that, combined, ensure safe, stable, interoperable, reliable system operations that minimize risk and maximize opportunities. The primary application area of connected vehicles is vehicle safety. These applications are designed to increase situational awareness and reduce or eliminate crashes through vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) data transmission that supports driver advisories, driver warnings, and vehicle and/or infrastructure controls. These technologies may potentially address a great majority of crash scenarios with unimpaired drivers, preventing tens of thousands of automobile crashes every year. Since V2V and V2I communications and significant data processing are involved, the connected-vehicle concept also requires resiliency and immunity to cyber security threats. This panel session will discuss technology, applications, dedicated short range communications (DSRC) technology and capabilities, policy and institutional issues, and international research on the subject matter.
Keywords: (not provided) (ID#: 16-9607)


R. Routray, “Cloud Storage Infrastructure Optimization Analytics,” Cloud Engineering (IC2E), 2015 IEEE International Conference on, Tempe, AZ, 2015, pp. 92-92. doi: 10.1109/IC2E.2015.83
Abstract: Summary form only given. The emergence and adoption of cloud computing have become widely prevalent given the value proposition it brings to an enterprise in terms of agility and cost effectiveness. Big data analytical capabilities (specifically, treating storage/system management as a big data problem for a service provider) delivered using cloud delivery models are defined as Analytics as a Service or Software as a Service. This service simplifies obtaining useful insights from an operational enterprise data center, leading to cost and performance optimizations. Software defined environments decouple the control planes from the data planes that were often vertically integrated in traditional networking or storage systems. The decoupling between the control planes and the data planes enables opportunities for improved security, resiliency and IT optimization in general. This talk describes our novel approach to hosting the systems management platform (a.k.a. control plane) in the cloud, offered to enterprises in a Software as a Service (SaaS) model. Specifically, this presentation focuses on the analytics layer, with the SaaS paradigm enabling data centers to visualize, optimize and forecast infrastructure via a simple capture, analyze and govern framework. At its core, it uses big data analytics to extract actionable insights from system management metrics data. Our system was developed in research and deployed across customers, with a core focus on the agility, elasticity and scalability of the analytics framework. We present a few system/storage management analytics case studies to demonstrate cost and performance optimization for both the cloud consumer and the service provider. Actionable insights generated from the analytics platform are implemented in an automated fashion via an OpenStack based platform.
Keywords: cloud computing; data analysis; optimisation; Analytics as a Service; OpenStack based platform; SaaS model; Software as a Service; cloud delivery models; cloud storage infrastructure optimization analytics; data analytical capabilities; data analytics; data planes; management metric data system; management platform system; operational enterprise data center; performance optimizations; software defined environments; value proposition; Big data; Cloud computing; Computer science; Conferences; Optimization; Software as a service; Storage management (ID#: 16-9608)


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.