Flow Control Integrity 2015

SoS Newsletter- Advanced Book Block



Flow Control Integrity


Control-flow attacks are pervasive. The research cited in this bibliography looks at control-flow integrity (CFI) in the context of cyber physical systems, the Smart Grid, and a variety of web applications. For the Science of Security community, CFI research has implications for resilience, composability, and governance. The work presented here was published in 2015.

Ryutov, T.; Almajali, A.; Neuman, C., “Modeling Security Policies for Mitigating the Risk of Load Altering Attacks on Smart Grid Systems,” in Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, vol., no., pp. 1–6, 13–13 April 2015. doi:10.1109/MSCPES.2015.7115393
Abstract: While demand response programs achieve energy efficiency and quality objectives, they bring potential security threats into the Smart Grid. An ability to influence load in the system provides the capability for an attacker to cause system failures and impacts the quality and integrity of the power delivered to customers. This paper presents a security mechanism that monitors and controls load according to security policies during normal system operation. The mechanism monitors, detects, and responds to load altering attacks. The authors examined security requirements of Smart Grid stakeholders and constructed a set of load control policies enforced by the mechanism. A proof of concept prototype was implemented and tested using the simulation environment. By enforcing the proposed policies in this prototype, the system is maintained in a safe state in the presence of load drop attacks.
Keywords: power system security; risk management; smart power grids; demand response programs; load altering attacks; load drop attacks; risk mitigation; security policies modeling; smart grid stakeholders; smart grid systems; Load flow control; Load modeling; Power quality; Safety; Security; Servers; Smart grids; cyber-physical; smart grid; security policy; simulation
(ID#: 15-7571)
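As a rough illustration of the kind of load-control policy the mechanism above enforces, the following minimal sketch rejects demand-response commands whose cumulative load drop inside a sliding window would exceed a safety cap. The class name, the 10%-per-5-minute policy, and all numbers are illustrative assumptions, not the paper's values.

```python
# Hypothetical load-drop policy monitor (illustrative values, not the paper's):
# authorize a demand-response command only if the cumulative load dropped
# inside a sliding time window stays under a fixed fraction of total load.

from collections import deque

class LoadDropMonitor:
    def __init__(self, total_load_mw, max_drop_fraction=0.10, window_s=300):
        self.limit_mw = total_load_mw * max_drop_fraction
        self.window_s = window_s
        self.events = deque()  # (timestamp, dropped_mw) of authorized commands

    def authorize(self, timestamp, drop_mw):
        """Return True and record the command if the policy allows it."""
        # Discard events that have aged out of the window.
        while self.events and timestamp - self.events[0][0] > self.window_s:
            self.events.popleft()
        pending = sum(mw for _, mw in self.events) + drop_mw
        if pending > self.limit_mw:
            return False          # would violate the load-drop policy
        self.events.append((timestamp, drop_mw))
        return True

monitor = LoadDropMonitor(total_load_mw=1000)   # 10% cap -> 100 MW per 5 min
print(monitor.authorize(0, 60))    # True: 60 MW is within the cap
print(monitor.authorize(10, 60))   # False: 120 MW in-window exceeds 100 MW
print(monitor.authorize(400, 60))  # True: the first event has aged out
```

A real deployment would sit between the demand-response front end and the actuators; the sketch only conveys the safe-state invariant the policies maintain.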


Hedin, D.; Bello, L.; Sabelfeld, A., “Value-Sensitive Hybrid Information Flow Control for a JavaScript-Like Language,” in Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, vol., no., pp. 351–365, 13–17 July 2015. doi:10.1109/CSF.2015.31
Abstract: Secure integration of third-party code is one of the prime challenges for securing today’s web. Recent empirical studies give evidence of pervasive reliance on and excessive trust in third-party JavaScript, with no adequate security mechanism to limit the trust or the extent of its abuse. Information flow control is a promising approach for controlling the behavior of third-party code and enforcing confidentiality and integrity policies. While much progress has been made on static and dynamic approaches to information flow control, only recently their combinations have received attention. Purely static analysis falls short of addressing dynamic language features such as dynamic objects and dynamic code evaluation, while purely dynamic analysis suffers from inability to predict side effects in non-performed executions. This paper develops a value-sensitive hybrid mechanism for tracking information flow in a JavaScript-like language. The mechanism consists of a dynamic monitor empowered to invoke a static component on the fly. This enables us to achieve a sound yet permissive enforcement. We establish formal soundness results with respect to the security policy of non-interference. In addition, we demonstrate permissiveness by proving that we subsume the precision of purely static analysis and by presenting a collection of common programming patterns that indicate that our mechanism has potential to provide more permissiveness than dynamic mechanisms in practice.
Keywords: Java; program diagnostics; security of data; JavaScript-like language; common programming patterns; confidentiality policies; dynamic code evaluation; dynamic language features; dynamic objects; integrity policies; pervasive reliance; purely static analysis; security policy; third-party code; value-sensitive hybrid information flow control; Context; Monitoring; Performance analysis; Reactive power; Runtime; Security; Semantics; information flow; language-based security (ID#: 15-7572)
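The hybrid idea above can be conveyed in a few lines: a dynamic monitor tracks labels at run time and, on a branch whose guard is secret, consults a static component so that side effects on the untaken path are also accounted for. This is a toy sketch, not the paper's formal mechanism; the static component is reduced to a precomputed over-approximation of the variables each branch arm may write.

```python
# Toy hybrid information-flow monitor. Labels: False = public, True = secret.
labels = {"h": True, "x": False, "y": False}
state  = {"h": 1,    "x": 0,     "y": 0}

# "Static analysis" result (assumed precomputed): the variables each arm of
# the branch below may write.
write_set_then = {"x"}
write_set_else = {"y"}

def run_branch():
    guard_secret = labels["h"]
    if state["h"]:
        state["x"] = 1
    else:
        state["y"] = 1
    if guard_secret:
        # Hybrid step: taint everything either arm might write, taken or not,
        # so the implicit flow through the non-performed execution is covered.
        for var in write_set_then | write_set_else:
            labels[var] = True

run_branch()
print(labels["x"], labels["y"])  # True True: the implicit flow into y is caught
```

A purely dynamic monitor would miss the flow into `y` here, since the `else` arm never executes; the static write set is what restores soundness.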


Bhardwaj, C., “Systematic Information Flow Control in mHealth Systems,” in Communication Systems and Networks (COMSNETS), 2015 7th International Conference on, vol., no., pp. 1–6, 6–10 Jan. 2015. doi:10.1109/COMSNETS.2015.7098736
Abstract: This paper argues that the security and integrity requirements of mHealth systems are best addressed by end-to-end information flow control (IFC). The paper extends proposals of decentralized IFC to a distributed smartphone-based mHealth system, identifying the basic threat model and the necessary trusted computing base. We show how the proposed framework can be integrated into an existing communication stack between a phalanx of sensors and an Android smartphone. The central idea of the framework involves systematically and automatically labelling data and metadata collected during medical encounters with security and integrity tags. The mechanisms provided can then be used for enforcing a wide variety of complex information flow control policies in diverse applications. The chief novelty over existing DIFC approaches is that users are relieved of having to create tags for each class of data and metadata that is collected in the system, thus making it user-friendly and scalable.
Keywords: Android (operating system); data integrity; human computer interaction; medical information systems; mobile computing; security of data; Android smart phone; communication stack; complex information flow control policies; data class; data labelling; decentralized IFC; distributed smart phone-based m-Health system; end-to-end information flow control; integrity tags; m-health system integrity requirements; m-health system security requirements; medical encounters; meta data collection; scalable system; security tags; sensor phalanx; systematic information flow control; threat model; trusted computing base; user-friendly system (ID#: 15-7573)


Zibordi de Paiva, O.; Ruggiero, W.V., “A Survey on Information Flow Control Mechanisms in Web Applications,” in High Performance Computing & Simulation (HPCS), 2015 International Conference on, vol., no., pp. 211–220, 20–24 July 2015. doi:10.1109/HPCSim.2015.7237042
Abstract: Web applications are nowadays ubiquitous channels that provide access to valuable information. However, web application security remains problematic, with Information Leakage, Cross-Site Scripting and SQL-Injection vulnerabilities - which all present threats to information - standing among the most common ones. On the other hand, Information Flow Control is a mature and well-studied area, providing techniques to ensure the confidentiality and integrity of information. Thus, numerous works were made proposing the use of these techniques to improve web application security. This paper provides a survey on some of these works that propose server-side only mechanisms, which operate in association with standard browsers. It also provides a brief overview of the information flow control techniques themselves. At the end, we draw a comparative scenario between the surveyed works, highlighting the environments for which they were designed and the security guarantees they provide, also suggesting directions in which they may evolve.
Keywords: Internet; SQL; security of data; SQL-injection vulnerability; Web application security; cross-site scripting; information confidentiality; information flow control mechanisms; information integrity; information leakage; server-side only mechanisms; standard browsers; ubiquitous channels; Browsers; Computer architecture; Context; Security; Standards; Web servers; Cross-Site Scripting; Information Flow Control; Information Leakage; SQL Injection; Web Application Security (ID#: 15-7574)


Arthur, W.; Mehne, B.; Das, R.; Austin, T., “Getting in Control of Your Control Flow with Control-Data Isolation,” in Code Generation and Optimization (CGO), 2015 IEEE/ACM International Symposium on, vol., no., pp. 79–90, 7–11 Feb. 2015. doi:10.1109/CGO.2015.7054189
Abstract: Computer security has become a central focus in the information age. Though enormous effort has been expended on ensuring secure computation, software exploitation remains a serious threat. The software attack surface provides many avenues for hijacking; however, most exploits ultimately rely on the successful execution of a control-flow attack. This pervasive diversion of control flow is made possible by the pollution of control flow structure with attacker-injected runtime data. Many control-flow attacks persist because the root of the problem remains: runtime data is allowed to enter the program counter. In this paper, we propose a novel approach: Control-Data Isolation. Our approach provides protection by going to the root of the problem and removing all of the operations that inject runtime data into program control. While previous work relies on CFG edge checking and labeling, these techniques remain vulnerable to attacks such as heap spray, read, or GOT attacks and in some cases suffer high overheads. Rather than addressing control-flow attacks by layering additional complexity, our work takes a subtractive approach; subtracting the primary cause of contemporary control-flow attacks. We demonstrate that control-data isolation can assure the integrity of the programmer’s CFG at runtime, while incurring average performance overheads of less than 7% for a wide range of benchmarks.
Keywords: computer crime; program control structures; CFG integrity; average performance overheads; computer security; contemporary control flow attacks; control-data isolation; hijacking; information age; program control; program counter; secure computation; software exploitation; software vulnerabilities; subtractive approach; Data models; Libraries; Process control; Radiation detectors; Runtime; Security; Software (ID#: 15-7575)
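The subtractive approach above replaces indirect control transfers with direct dispatch over targets the programmer's CFG permits, so runtime data never becomes a code address. The paper operates on native code; the following is only a high-level Python analogue with invented function names.

```python
# Illustrative analogue of control-data isolation: an "indirect call" through
# runtime data is rewritten as a dispatch over a fixed, CFG-derived table of
# direct targets, so attacker-controlled data cannot enter the program counter.

def open_file():  return "open"
def save_file():  return "save"

# The only targets the programmer's CFG permits at this call site.
VALID_TARGETS = {"open": open_file, "save": save_file}

def dispatch(selector):
    """Direct-dispatch sled: runtime data merely selects among vetted targets."""
    target = VALID_TARGETS.get(selector)
    if target is None:
        raise RuntimeError("control-flow violation: no such target")
    return target()

print(dispatch("open"))           # open
```

Unlike CFG edge labeling, nothing here checks a runtime address against a label: there is no runtime code address derived from data in the first place, which is the paper's point.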


Chao Zhang; Niknami, M.; Chen, K.Z.; Chengyu Song; Zhaofeng Chen; Song, D., “JITScope: Protecting Web Users from Control-Flow Hijacking Attacks,” in Computer Communications (INFOCOM), 2015 IEEE Conference on, vol., no., pp. 567–575, 26 April–1 May 2015. doi:10.1109/INFOCOM.2015.7218424
Abstract: Web browsers are one of the most important enduser applications to browse, retrieve, and present Internet resources. Malicious or compromised resources may endanger Web users by hijacking web browsers to execute arbitrary malicious code in the victims’ systems. Unfortunately, the widely-adopted Just-In-Time compilation (JIT) optimization technique, which compiles source code to native code at runtime, significantly increases this risk. By exploiting JIT compiled code, attackers can bypass all currently deployed defenses. In this paper, we systematically investigate threats against JIT compiled code, and the challenges of protecting JIT compiled code. We propose a general defense solution, JITScope, to enforce Control-Flow Integrity (CFI) on both statically compiled and JIT compiled code. Our solution furthermore enforces the W⊕X policy on JIT compiled code, preventing the JIT compiled code from being overwritten by attackers. We show that our prototype implementation of JITScope on the popular Firefox web browser introduces a reasonably low performance overhead, while defeating existing real-world control flow hijacking attacks.
Keywords: Internet; data protection; online front-ends; source code (software); CFI; Firefox Web browser; Internet resources; JIT compiled code; JIT optimization technique; JITScope; W⊕X policy; Web user protection; arbitrary malicious code; control-flow hijacking attacks; control-flow integrity; just-in-time compilation; source code compilation; Browsers; Engines; Instruments; Layout; Runtime; Safety; Security (ID#: 15-7576)


Bichhawat, A., “Post-Dominator Analysis for Precisely Handling Implicit Flows,” in Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on, vol. 2, pp. 787–789, 16–24 May 2015. doi:10.1109/ICSE.2015.250
Abstract: Most web applications today use JavaScript for including third-party scripts, advertisements etc., which pose a major security threat in the form of confidentiality and integrity violations. Dynamic information flow control helps address this issue of information stealing. Most of the approaches over-approximate when unstructured control flow comes into picture, thereby raising a lot of false alarms. We utilize the post-dominator analysis technique to determine the context of the program at a given point and prove that this approach is the most precise technique to handle implicit flows.
Keywords: Java; authoring languages; program diagnostics; security of data; JavaScript; Web applications; confidentiality violations; dynamic information flow control; implicit flow handling; integrity violations; post-dominator analysis technique; security threat; unstructured control flow; Computer languages; Conferences; Context; Lattices; Programmable logic arrays; Security; Software engineering (ID#: 15-7577)
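The key computation behind the approach above is finding post-dominators: the immediate post-dominator of a branch on secret data marks the program point where its security context can be popped. A standard iterative dataflow computation (assumed here rather than taken from the paper) on a small CFG:

```python
# Post-dominator computation by iterative dataflow. The immediate
# post-dominator of a branch node is where the implicit-flow context ends.

def post_dominators(cfg, exit_node):
    nodes = set(cfg)
    pdom = {n: set(nodes) for n in nodes}
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            new = {n} | set.intersection(*(pdom[s] for s in cfg[n]))
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

# CFG of: "if (h) x = 1 else y = 1; sink(x)" -- branch at B, join at D.
cfg = {"B": ["T", "F"], "T": ["D"], "F": ["D"], "D": ["E"], "E": []}
pd = post_dominators(cfg, "E")
print(sorted(pd["B"]))   # ['B', 'D', 'E'] -> D is B's nearest post-dominator
```

For unstructured control flow (e.g. `break` out of a tainted branch), the post-dominator is exactly the join point that cruder syntactic scoping over-approximates, which is where the precision gain comes from.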


Davi, L.; Hanreich, M.; Paul, D.; Sadeghi, A.-R.; Koeberl, P.; Sullivan, D.; Arias, O.; Jin, Y., “HAFIX: Hardware-Assisted Flow Integrity eXtension,” in Design Automation Conference (DAC), 2015 52nd ACM/EDAC/IEEE, vol., no., pp. 1–6, 8–12 June 2015. doi:10.1145/2744769.2744847
Abstract: Code-reuse attacks like return-oriented programming (ROP) pose a severe threat to modern software on diverse processor architectures. Designing practical and secure defenses against code-reuse attacks is highly challenging and currently subject to intense research. However, no secure and practical system-level solutions exist so far, since a large number of proposed defenses have been successfully bypassed. To tackle this attack, we present HAFIX (Hardware-Assisted Flow Integrity Extension), a defense against code-reuse attacks exploiting backward edges (returns). HAFIX provides fine-grained and practical protection, and serves as an enabling technology for future control-flow integrity instantiations. This paper presents the implementation and evaluation of HAFIX for the Intel® Siskiyou Peak and SPARC embedded system architectures, and demonstrates its security and efficiency in code-reuse protection while incurring only 2% performance overhead.
Keywords: data protection; software reusability; HAFIX; Intel Siskiyou Peak; ROP; SPARC embedded system architectures; backward edges; code-reuse attacks; code-reuse protection; control-flow integrity instantiations; hardware-assisted flow integrity extension; processor architectures; return-oriented programming; Benchmark testing; Computer architecture; Hardware; Pipelines; Program processors; Random access memory; Registers (ID#: 15-7578)
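HAFIX enforces backward-edge integrity in hardware; the invariant it protects can be conveyed by the classic software shadow-stack idea, sketched here purely for illustration: every return must go to the address recorded by the matching call.

```python
# Software shadow-stack sketch of backward-edge protection (illustrative only;
# HAFIX itself uses dedicated hardware state, not this mechanism).

shadow_stack = []

def call(return_addr):
    """Model a call instruction: record the legitimate return address."""
    shadow_stack.append(return_addr)

def ret(actual_return_addr):
    """Model a return: the target must match the recorded address."""
    expected = shadow_stack.pop()
    if actual_return_addr != expected:
        raise RuntimeError("backward-edge violation (possible ROP)")
    return actual_return_addr

call(0x400123)
print(hex(ret(0x400123)))       # legitimate return
call(0x400200)
try:
    ret(0x7fffdead)             # attacker-corrupted return address
except RuntimeError as e:
    print(e)
```

A ROP chain works precisely by supplying return addresses the program never pushed, so each gadget transition trips this check.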


Evans, I.; Fingeret, S.; Gonzalez, J.; Otgonbaatar, U.; Tang, T.; Shrobe, H.; Sidiroglou-Douskos, S.; Rinard, M.; Okhravi, H., “Missing the Point(er): On the Effectiveness of Code Pointer Integrity,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 781–796, 17–21 May 2015. doi:10.1109/SP.2015.53
Abstract: Memory corruption attacks continue to be a major vector of attack for compromising modern systems. Numerous defenses have been proposed against memory corruption attacks, but they all have their limitations and weaknesses. Stronger defenses such as complete memory safety for legacy languages (C/C++) incur a large overhead, while weaker ones such as practical control flow integrity have been shown to be ineffective. A recent technique called code pointer integrity (CPI) promises to balance security and performance by focusing memory safety on code pointers thus preventing most control-hijacking attacks while maintaining low overhead. CPI protects access to code pointers by storing them in a safe region that is protected by instruction level isolation. On x86-32, this isolation is enforced by hardware, on x86-64 and ARM, isolation is enforced by information hiding. We show that, for architectures that do not support segmentation in which CPI relies on information hiding, CPI’s safe region can be leaked and then maliciously modified by using data pointer overwrites. We implement a proof-of-concept exploit against Nginx and successfully bypass CPI implementations that rely on information hiding in 6 seconds with 13 observed crashes. We also present an attack that generates no crashes and is able to bypass CPI in 98 hours. Our attack demonstrates the importance of adequately protecting secrets in security mechanisms and the dangers of relying on difficulty of guessing without guaranteeing the absence of memory leaks.
Keywords: data protection; security of data; ARM; C-C++; CPI safe region; code pointer integrity effectiveness; code pointer protection; control flow integrity; control-hijacking attacks; data pointer overwrites; information hiding; instruction level isolation; legacy languages; memory corruption attacks; memory safety; security mechanisms; time 98 hour; Computer crashes; Delays; Libraries; Safety; Security (ID#: 15-7579)


Andriesse, D.; Bos, H.; Slowinska, A., “Parallax: Implicit Code Integrity Verification Using Return-Oriented Programming,” in Dependable Systems and Networks (DSN), 2015 45th Annual IEEE/IFIP International Conference on, vol., no., pp. 125–135, 22–25 June 2015. doi:10.1109/DSN.2015.12
Abstract: Parallax is a novel self-contained code integrity verification approach, that protects instructions by overlapping Return-Oriented Programming (ROP) gadgets with them. Our technique implicitly verifies integrity by translating selected code (verification code) into ROP code which uses gadgets scattered over the binary. Tampering with the protected instructions destroys the gadgets they contain, so that the verification code fails, thereby preventing the adversary from using the modified binary. Unlike prior solutions, Parallax does not rely on code checksumming, so it is not vulnerable to instruction cache modification attacks which affect checksumming techniques. Further, unlike previous algorithms which withstand such attacks, Parallax does not compute hashes of the execution state, and can thus protect code with non-deterministic state. Parallax limits performance overhead to the verification code, while the protected code executes at its normal speed. This allows us to protect performance-critical code, and confine the slowdown to other code regions. Our experiments show that Parallax can protect up to 90% of code bytes, including most control flow instructions, with a performance overhead of under 4%.
Keywords: object-oriented programming; program verification; software performance evaluation; Parallax; ROP code; ROP gadgets; code checksumming technique; control flow instructions; implicit code integrity verification; instruction cache modification attacks; nondeterministic state; performance-critical code protection; return-oriented programming; self-contained code integrity verification approach; verification code; Debugging; Detectors; Programming; Registers; Runtime; Semantics; Software; Tamperproofing; code verification; reverse engineering (ID#: 15-7580)


Li, W.; Zhang, W.; Gu, D.; Cao, Y.; Tao, Z.; Zhou, Z.; Liu, Y.; Liu, Z., “Impossible Differential Fault Analysis on the LED Lightweight Cryptosystem in the Vehicular Ad-hoc Networks,” in Dependable and Secure Computing, IEEE Transactions on, vol. 13, no. 1, pp. 84–92, Jan.–Feb. 2016. doi:10.1109/TDSC.2015.2449849
Abstract: With the advancement and deployment of leading-edge telecommunication technologies for sensing and collecting traffic related information, the vehicular ad-hoc networks (VANETs) have emerged as a new application scenario that is envisioned to revolutionize the human driving experiences and traffic flow control systems. To avoid any possible malicious attack and resource abuse, employing lightweight cryptosystems is widely recognized as one of the most effective approaches for the VANETs to achieve confidentiality, integrity and authentication. As a typical substitution-permutation network lightweight cryptosystem, LED supports 64-bit and 128-bit secret keys, which are flexible to provide security for the RFID and other highly-constrained devices in the VANETs. Since its introduction, some research of fault analysis has been devoted to attacking the last three rounds of LED. It is an open problem to know whether provoking faults at a former round of LED allows recovering the secret key. In this paper, we give an answer to this problem by showing a novel impossible differential fault analysis on one round earlier of all LED keysize variants. Mathematical analysis and simulating experiments show that the attack could recover the 64-bit and 128-bit secret keys of LED by introducing 48 faults and 96 faults in average, respectively. The result in this study describes that LED is vulnerable to a half byte impossible differential fault analysis. It will be beneficial to the analysis of the same type of other iterated lightweight cryptosystems in the VANETs.
Keywords: Ciphers; Circuit faults; Encryption; Light emitting diodes; Schedules; LED; RFID; VANET; Vehicular Ad-hoc Networks; impossible differential fault analysis; lightweight cryptosystems (ID#: 15-7581)


Jamshidifar, A.A.; Jovcic, D., “3-Level Cascaded Voltage Source Converters Controller with Dispatcher Droop Feedback for Direct Current Transmission Grids,” in Generation, Transmission & Distribution, IET, vol. 9, no. 6, pp. 571–579, 20 Apr. 2015. doi:10.1049/iet-gtd.2014.0348
Abstract: The future direct current (DC) grids will require additional control functions on voltage source converters (VSC) in order to ensure stability and integrity of DC grids under wide range of disturbances. This study proposes a 3-level cascaded control topology for all the VSC and DC/DC converters in DC grids. The inner control level regulates local current which prevents converter overload. The middle control level uses fast proportional integral feedback control of local DC voltage on each terminal which is essential for the grid stability. The hard limits (suggested ±5%) on voltage reference will ensure that DC voltage at all terminals is kept within narrow band under all contingencies. At the highest level, each station follows power reference which is received from the dispatcher. It is proposed to locate voltage droop power reference adjustment at a central dispatcher, to maintain average DC voltage in the grid and to ensure optimal power flow in the grid. This slow control function has minimal impact on stability. Performance of the proposed control is tested on PSCAD/EMTDC model of the CIGRE B4 DC grid test system. A number of severe outages are simulated and both steady-state variables and transient responses are observed and compared against conventional droop control method. The comparison verifies superior performance of the proposed control topology.
Keywords: DC-DC power convertors; HVDC power transmission; PI control; electric current control; feedback; load flow; power grids; power system stability; voltage control; 3-level cascaded control topology; 3-level cascaded voltage source converter controller; CIGRE B4 DC grid test system; DC grids; DC-DC converters; PSCAD-EMTDC model; VSC converters; central dispatcher; control function; control functions; converter overload; direct current transmission grids; dispatcher droop feedback; droop control method; fast proportional integral feedback control; grid stability; local DC voltage control; local current control optimal power flow; steady-state variables; transient responses; voltage droop power reference adjustment (ID#: 15-7582)
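The dispatcher-level droop idea above can be sketched numerically: each station follows its dispatched power reference, adjusted by a droop term on the average DC voltage error, with the voltage reference clamped to the ±5% hard band the study suggests. The gain and all numbers below are invented for illustration.

```python
# Illustrative droop adjustment of a station's power reference (assumed gain
# and sign convention; the paper's controller is a full 3-level cascade).

def droop_power_reference(p_dispatch_mw, v_avg_pu, v_ref_pu=1.0, k_droop=400.0):
    # Hard limit: the voltage reference stays within +/-5% of nominal.
    v_ref_pu = min(max(v_ref_pu, 0.95), 1.05)
    # Slow dispatcher-side correction toward the average-voltage target.
    return p_dispatch_mw + k_droop * (v_ref_pu - v_avg_pu)

print(round(droop_power_reference(500.0, 1.00), 1))   # 500.0: no voltage error
print(round(droop_power_reference(500.0, 0.98), 1))   # 508.0: corrects the sag
print(round(droop_power_reference(500.0, 1.00, v_ref_pu=2.0), 1))  # clamped ref
```

Because this correction lives at the dispatcher, it can be slow without harming stability; the fast PI voltage loop at each terminal (not modeled here) does the stabilizing work.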


Sedghi, H.; Jonckheere, E., “Statistical Structure Learning to Ensure Data Integrity in Smart Grid,” in Smart Grid, IEEE Transactions on, vol. 6, no. 4, pp. 1924–1933, July 2015. doi:10.1109/TSG.2015.2403329
Abstract: Robust control and management of the grid relies on accurate data. Both phasor measurement units and remote terminal units are prone to false data injection attacks. Thus, it is crucial to have a mechanism for fast and accurate detection of tampered data—both for preventing attacks that may lead to blackouts, and for routine monitoring and control of current and future grids. We propose a decentralized false data injection detection scheme based on the Markov graph of the bus phase angles. We utilize the conditional covariance test CMIT to learn the structure of the grid. Using the dc power flow model, we show that, under normal circumstances, the Markov graph of the voltage angles is consistent with the power grid graph. Therefore, a discrepancy between the calculated Markov graph and learned structure should trigger the alarm. Our method can detect the most recent stealthy deception attack on the power grid that assumes knowledge of the bus-branch model of the system and is capable of deceiving the state estimator; hence damaging power network control, monitoring, demand response, and pricing scheme. Specifically, under the stealthy deception attack, the Markov graph of phase angles changes. In addition to detecting a state of attack, our method can detect the set of attacked nodes. To the best of our knowledge, our remedy is the first to comprehensively detect this sophisticated attack and it does not need additional hardware. Moreover, it is successful no matter the size of the attacked subset. Simulation of various power networks confirms our claims.
Keywords: Markov processes; control engineering computing; data integrity; graph theory; learning (artificial intelligence); phasor measurement; power engineering computing; power system control; power system management; power system security; robust control; security of data; smart power grids; statistical testing; CMIT conditional covariance test; Markov graph; bus phase angles; bus-branch model; decentralized false data injection detection scheme; demand response; false data injection attacks; grid management; phasor measurement units; power grid graph; power network control; power networks; pricing scheme; remote terminal units; robust control; routine monitoring; smart grid; state estimator; statistical structure learning; stealthy deception attack; tampered data detection; Measurement uncertainty; Monitoring; Phasor measurement units; Power grids; Random variables; Vectors; Bus phase angles; conditional covariance test; false data injection detection; structure learning (ID#: 15-7583)
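The structure-learning idea above rests on conditional-independence tests between bus phase angles. As a flavor of it, the pure-Python sketch below uses a partial-correlation test (standing in for the paper's CMIT test) on a synthetic three-bus chain: bus 1 and bus 3 angles are correlated, but conditionally independent given bus 2, so no spurious 1–3 edge is learned.

```python
# Partial-correlation conditional-independence test on synthetic phase angles
# (a stand-in for the CMIT test; data and thresholds are illustrative).

import math, random

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def partial_corr(a, b, given):
    """Correlation of a and b after regressing out `given` (least squares)."""
    n = len(a)
    mg = sum(given) / n
    def residual(v):
        mv = sum(v) / n
        beta = (sum((g - mg) * (x - mv) for g, x in zip(given, v))
                / sum((g - mg) ** 2 for g in given))
        return [x - mv - beta * (g - mg) for x, g in zip(v, given)]
    return corr(residual(a), residual(b))

random.seed(1)
theta1 = [random.gauss(0, 1) for _ in range(2000)]
theta2 = [t + random.gauss(0, 1) for t in theta1]   # bus 2 coupled to bus 1
theta3 = [t + random.gauss(0, 1) for t in theta2]   # bus 3 coupled to bus 2

print(corr(theta1, theta3) > 0.3)                      # True: marginally correlated
print(abs(partial_corr(theta1, theta3, theta2)) < 0.15)  # True: no direct edge
```

A stealthy injection that alters the joint distribution of the angles would change which conditional independencies hold, making the learned graph diverge from the grid graph and triggering the alarm.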


Zhong, Wenbin; Chang, Wenlong; Rubio, Luis; Luo, Xichun, “Reconfigurable Software Architecture for a Hybrid Micro Machine Tool,” in Automation and Computing (ICAC), 2015 21st International Conference on, vol., no., pp. 1–4, 11–12 Sept. 2015. doi:10.1109/IConAC.2015.7313994
Abstract: Hybrid micro machine tools are increasingly in demand for manufacturing microproducts made of hard-to-machine materials, such as ceramic air bearing, bio-implants and power electronics substrates etc. These machines can realize hybrid machining processes which combine one or two non-conventional machining techniques such as EDM, ECM, laser machining, etc. and conventional machining techniques such as turning, grinding, milling on one machine bed. Hybrid machine tool developers tend to mix and match components from multiple vendors for the best value and performance. System integrity is usually a second priority in the initial design phase, which generally leads to a very complex and inflexible system. This paper proposes a reconfigurable control software architecture for a hybrid micro machine tool, which combines laser-assisted machining and 5-axis micro-milling as well as incorporating a material handling system and advanced on-machine sensors. The architecture uses finite state machine (FSM) for hardware control and data flow. FSM simplifies the system integration and allows a flexible architecture that can be easily ported to similar applications. Furthermore, component-based technology is employed to encapsulate changes for different modules to realize “plug-and-play”. The benefits of using the software architecture include reduced lead time and lower cost of development.
Keywords: component-based technology; finite state machine; hybrid micro machine tool; reconfigurable software architecture (ID#: 15-7584)
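An FSM of the kind the architecture uses for hardware control fits in a few lines; the states and events below are invented for illustration, not taken from the machine tool described.

```python
# Minimal finite state machine for sequencing hybrid machining steps
# (hypothetical states/events): illegal event orderings are rejected.

class MachineFSM:
    TRANSITIONS = {
        ("idle",         "load_part"):   "loaded",
        ("loaded",       "start_laser"): "laser_assist",
        ("laser_assist", "start_mill"):  "milling",
        ("milling",      "finish"):      "idle",
    }

    def __init__(self):
        self.state = "idle"

    def fire(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

fsm = MachineFSM()
for ev in ("load_part", "start_laser", "start_mill", "finish"):
    print(fsm.fire(ev))   # loaded, laser_assist, milling, idle
```

Because each module's behavior is confined to its transition table, swapping a module (say, replacing the laser stage) means editing one table rather than control logic scattered through the code, which is the "plug-and-play" benefit the paper claims.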


de Amorim, A.A.; Dénès, M.; Giannarakis, N.; Hritcu, C.; Pierce, B.C.; Spector-Zabusky, A.; Tolmach, A., “Micro-Policies: Formally Verified, Tag-Based Security Monitors,” in Security and Privacy (SP), 2015 IEEE Symposium on, vol., no., pp. 813–830, 17–21 May 2015. doi:10.1109/SP.2015.55
Abstract: Recent advances in hardware design have demonstrated mechanisms allowing a wide range of low-level security policies (or micro-policies) to be expressed using rules on metadata tags. We propose a methodology for defining and reasoning about such tag-based reference monitors in terms of a high-level “symbolic machine,” and we use this methodology to define and formally verify micro-policies for dynamic sealing, compartmentalization, control-flow integrity, and memory safety. In addition, we show how to use the tagging mechanism to protect its own integrity. For each micro-policy, we prove by refinement that the symbolic machine instantiated with the policy’s rules embodies a high-level specification characterizing a useful security property. Last, we show how the symbolic machine itself can be implemented in terms of a hardware rule cache and a software controller.
Keywords: cache storage; inference mechanisms; meta data; security of data; formally verified security monitors; hardware design; hardware rule cache; metadata tags; micro-policies; reasoning; software controller; tag-based reference monitors; tag-based security monitors; Concrete; Hardware; Monitoring; Registers; Safety; Transfer functions (ID#: 15-7585)
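The memory-safety micro-policy above can be conveyed by a toy software model (the paper's machines are formally specified; this sketch only shows the rule shape): every pointer and memory cell carries a color tag, and an access is allowed only when the colors match.

```python
# Toy tag-based monitor for the memory-safety micro-policy: each cell holds
# (color, value); a load through a pointer requires matching colors.

memory = {0: ("A", 10), 1: ("A", 20), 2: ("B", 99)}   # addr -> (color, value)

def load(ptr_addr, ptr_color):
    """Rule check performed on every load, as a hardware rule cache would."""
    cell_color, value = memory[ptr_addr]
    if ptr_color != cell_color:
        raise RuntimeError("micro-policy violation: color mismatch")
    return value

print(load(1, "A"))            # 20: in-bounds access with matching color
try:
    load(2, "A")               # pointer colored for region A reaching region B
except RuntimeError as e:
    print(e)
```

A buffer overflow is exactly a pointer walking from one color region into another, so this single rule stops it; other micro-policies (CFI, sealing, compartmentalization) differ only in what the tags mean and which rule table is loaded.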


Garg, G.; Garg, R., “Detecting Anomalies Efficiently in SDN Using Adaptive Mechanism,” in Advanced Computing & Communication Technologies (ACCT), 2015 Fifth International Conference on, vol., no., pp. 367–370, 21–22 Feb. 2015. doi:10.1109/ACCT.2015.98
Abstract: Monitoring and measurement of network traffic flows in SDN is a key requirement for maintaining the integrity of data in the network, and it plays a vital role in the SDN controller's traffic-management task. Anomaly detection is considered one of the important issues in traffic monitoring: the more efficiently we detect anomalies, the easier it becomes to manage the traffic. However, we have to consider the workload, response time, and overhead on the network while applying monitoring policies, so that the network performs with similar efficiency. To reduce the overhead, it is necessary to analyze only a portion of the traffic instead of every packet in the network. This paper presents an adaptive mechanism for dynamically updating the policies for aggregation of flow entries and anomaly detection, so that monitoring overhead can be reduced and anomalies can be detected with greater accuracy. In previous work, rules for expansion and contraction of aggregation policies according to adaptive behavior were defined. This paper presents work towards reducing the complexity of the dynamic algorithm for updating flow-counting rules for anomaly detection.
Keywords: computer network security; software defined networking; telecommunication traffic; SDN; adaptive mechanism; anomaly detection; dynamic algorithm complexity reduction; flow counting rules; flow entry aggregation; network traffic monitoring; overhead monitoring; overhead reduction; Aggregates; Algorithm design and analysis; Complexity theory; Contracts; Heuristic algorithms; Monitoring; Telecommunication traffic; Anomaly detection; Network management; Network traffic monitoring; flow-counting; traffic-aggregation (ID#: 15-7586)
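The expand/contract idea above can be sketched concretely: count traffic per coarse /16 aggregate, and when an aggregate's rate crosses a threshold, expand it into finer /24 rules so the anomaly can be localized. The threshold and addresses below are invented.

```python
# Adaptive flow aggregation sketch (assumed threshold): hot coarse prefixes
# are expanded into fine-grained rules on the next monitoring interval.

from collections import Counter

EXPAND_THRESHOLD = 100   # packets per interval, an assumed value

def aggregate(packets, expanded_prefixes):
    counts = Counter()
    for ip in packets:
        a, b, c, _ = ip.split(".")
        p16 = f"{a}.{b}.0.0/16"
        if p16 in expanded_prefixes:
            counts[f"{a}.{b}.{c}.0/24"] += 1   # fine-grained rule
        else:
            counts[p16] += 1                   # coarse rule
    return counts

packets = ["10.1.5.9"] * 150 + ["10.2.7.3"] * 20
coarse = aggregate(packets, expanded_prefixes=set())
hot = {p for p, n in coarse.items() if n > EXPAND_THRESHOLD}
fine = aggregate(packets, expanded_prefixes=hot)
print(hot)                      # {'10.1.0.0/16'}: expanded for closer analysis
print(fine["10.1.5.0/24"])      # 150
```

Contraction works symmetrically (merging /24 rules back into the /16 once counts fall), which keeps the number of installed flow rules, and hence the monitoring overhead, bounded.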


Yaghini, P.M.; Eghbal, A.; Khayambashi, M.; Bagherzadeh, N., “Coupling Mitigation in 3-D Multiple-Stacked Devices,” in Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 23, no. 12, pp. 2931–2944, Dec. 2015. doi:10.1109/TVLSI.2014.2379263
Abstract: 3-D multiple-stacked ICs have been proposed to support energy efficiency for data center operations as dynamic RAM (DRAM) scaling improves annually. A 3-D multiple-stacked IC is a single package containing multiple dies, stacked together, using through-silicon via (TSV) technology. Despite the advantages of 3-D design, the fault occurrence rate increases with the feature-size reduction of logic devices, and this gets worse for 3-D stacked designs. TSV coupling is one of the main reliability issues for the data TSVs of a 3-D multiple-stacked IC; it has large disruptive effects on signal integrity and transmission delay. In this paper, we first characterize the inductance parasitics in contemporary TSVs, and then we analyze and present a classification of inductive coupling cases. Next, we devise a coding algorithm to mitigate TSV-to-TSV inductive coupling. The coding method controls the current flow direction in TSVs by adjusting the data bit streams at run time to minimize inductive coupling effects. After formal analyses of the efficiency and scalability of the devised algorithm, an enhanced approach supporting larger bus sizes is proposed. Our experimental results show that the proposed coding algorithm yields significant improvements, while its hardware-implemented encoder incurs tangible latency, power-consumption, and area costs.
Keywords: Capacitance; Couplings; Encoding; Inductance; Reliability; Through-silicon vias; 3-D; 3-D multiple-stacked IC; coupling; reliability; signal integrity (SI); through-silicon via (TSV) (ID#: 15-7587)
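The run-time bit-stream adjustment the abstract describes is in the same family as bus-invert coding. The sketch below is a hedged stand-in, not the authors' method: it uses a simplified coupling metric (adjacent lines switching in the same direction couple most strongly inductively) and chooses between a data word and its complement, signaling the choice with one extra invert bit.

```python
# Hedged sketch of a bus-invert-style encoder in the spirit of the paper's
# idea (steer TSV current directions by adjusting data bits at run time).
# NOT the authors' algorithm; the cost metric is a simplified proxy for
# inductive coupling between adjacent TSVs.

def transitions(prev, word, width):
    """Per-line transition direction: +1 rising, -1 falling, 0 stable."""
    return [((word >> i) & 1) - ((prev >> i) & 1) for i in range(width)]

def coupling_cost(prev, word, width):
    """Count adjacent TSV pairs switching in the SAME direction."""
    t = transitions(prev, word, width)
    return sum(1 for i in range(width - 1) if t[i] != 0 and t[i] == t[i + 1])

def encode(prev, word, width=8):
    """Transmit word or its complement (plus a 1-bit invert flag),
    whichever yields the lower modeled coupling cost."""
    inverted = word ^ ((1 << width) - 1)
    if coupling_cost(prev, inverted, width) < coupling_cost(prev, word, width):
        return inverted, 1
    return word, 0

# All-rising bus (0x00 -> 0xFF) is the worst case; sending the complement
# (no transitions at all) removes the coupling entirely:
print(encode(0x00, 0xFF))
```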


Rekha, P.M.; Dakshayini, M., “Dynamic Network Configuration and Virtual Management Protocol for Open Switch in Cloud Environment,” in Advance Computing Conference (IACC), 2015 IEEE International, vol., no., pp. 143–148, 12–13 June 2015. doi:10.1109/IADCC.2015.7154687
Abstract: Cloud data centers have to accommodate many users with isolated, independent networks in a distributed environment to support multi-tenancy and integrity. User applications are stored separately in virtual networks. To support heavy network traffic, data center networks require software that turns physically connected devices into virtual networks. Software-defined networking is a new paradigm that makes networks easily programmable and helps in controlling and managing virtual network devices. With software-defined networking, flow decisions are made based on real-time analysis of network-consumption statistics. Managing these virtual networks is the real challenge for network administrators. In this paper, we propose a novel network-management approach between the controller and virtual switches and provide QoS for virtual LANs in a distributed cloud environment. The approach provides a protocol to deploy in cloud data center network environments using the OpenFlow architecture in switches. It delivers two features: dynamic network configuration and a virtual management protocol between the controller and Open vSwitch. This technique helps cloud network administrators quickly deliver better network services to multi-tenant users in the cloud data center.
Keywords: cloud computing; computer centres; protocols; software defined networking; switching networks; virtualisation; OpenFlow architecture; QoS; cloud data center network environments; cloud network administrators; data center networks; distributed cloud environment; dynamic network configuration; independent networks; multitenant network; network consumption statistics; network management approach; open switch; real-time analysis; software defined networking; user applications; virtual LAN; virtual management protocol; virtual network devices; virtual switches; Ports (Computers); Protocols; Quality of service; Software defined networking; Switches; Virtual machining; OpenvSwitch; Virtual management protocol; dynamic network configuration; virtual networking traffic (ID#: 15-7588)
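The abstract does not specify the message format of the proposed controller-to-switch protocol, so the following is only a minimal sketch with hypothetical field names, showing the general shape of a flow-modification request a controller might push to a virtual switch in an OpenFlow-style design.

```python
# A minimal, hypothetical sketch of a controller -> virtual switch
# flow-modification message. Field names ("dpid", "match", "actions") echo
# common OpenFlow terminology but are NOT the paper's actual protocol.

import json

def make_flow_mod(dpid, match, actions, priority=100):
    """Build a JSON flow-modification request for a given switch (dpid)."""
    return json.dumps({
        "dpid": dpid,          # datapath id of the target virtual switch
        "priority": priority,  # higher priority wins on overlapping matches
        "match": match,        # e.g. {"vlan_vid": 10, "ipv4_dst": "10.0.0.5"}
        "actions": actions,    # e.g. [{"type": "OUTPUT", "port": 2}]
    })

# Example: steer VLAN 10 traffic on switch 1 out of port 2.
msg = make_flow_mod(1, {"vlan_vid": 10}, [{"type": "OUTPUT", "port": 2}])
print(msg)
```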


Carpenter, D.R.; Willingham, J., “Objectionable Currents Associated with Shock, Fire and Destruction of Equipment,” in Electrical Safety Workshop (ESW), 2015 IEEE IAS, pp. 1–11, 26–30 Jan. 2015. doi:10.1109/ESW.2015.7094952
Abstract: This paper identifies objectionable currents that result from improper wiring methods and lead to fire, shock, and the destruction of equipment. This information will assist those responsible for electrical safety, reliability, and production. It also explains the sources and misapplications that are often associated with objectionable currents. The information contained in this document should be useful in three ways: 1) it shows how to control the objectionable currents responsible for the destruction of sensitive electronic equipment, shock hazards, and fire hazards within a premise wiring system, based on testing models discovered through experiments; 2) it assesses the reliability of existing accepted wiring methods, using experiments and tests designed to prove or disprove existing best practices, codes, standards, and theorems; and 3) it serves as a tutorial on properly applying theory, codes, and standards, with proven reasons for why, where, and how best practices and standards are applied.
Keywords: electric shocks; electronic equipment testing; fires; hazards; wiring; electrical production; electrical reliability; electrical safety; equipment destruction; equipment fire; equipment shock; fire hazards; improper wiring methods; objectionable currents; sensitive electronic equipment; shock hazards; Bonding; Conductors; Current measurement; Fasteners; Grounding; Metals; Wiring; Equipment Ground, Protective Ground or Grounding Conductor — the conductor required to facilitate an overcurrent protection device when a ground fault occurs. Adapted from NFPA 70 section 100 Definitions; Neutral or Grounded Circuit Conductor — a system or circuit conductor that is intentionally grounded. Adapted from NFPA 70 section 100 Definitions; Objectionable Current — the terms objectionable current and stray current are used interchangeably in most nomenclature. Currents are considered objectionable when they flow in a conductive path that is unintended and undesirable. This does not include electrical noise such as differential, transverse, or common mode noise; Premise Wiring System — wiring on the secondary side of service equipment. Adapted from NFPA 70 section 100 Definitions; Stray Current — the term stray current is similar to the term objectionable current because it refers to currents that flow in unintended paths. Stray currents include electrical noise. (ID#: 15-7589)
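A back-of-the-envelope illustration of why improper wiring produces objectionable current (not taken from the paper; the resistance values are hypothetical): when the neutral is improperly bonded to ground downstream of the service, load return current divides between the intended neutral and the unintended grounding path in inverse proportion to their resistances.

```python
# Current-divider illustration: return current splits between two parallel
# paths (intended neutral vs. unintended grounding conductor). Values are
# hypothetical and chosen only to make the split easy to see.

def stray_current(i_total, r_neutral, r_ground):
    """Current diverted onto the grounding conductor (parallel paths)."""
    return i_total * r_neutral / (r_neutral + r_ground)

# 10 A of return current, 0.1-ohm neutral, 0.4-ohm ground path:
print(stray_current(10.0, 0.1, 0.4))  # 2.0 A flows where it should not
```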


Sathya, R.; Thangarajan, R., “Efficient Anomaly Detection and Mitigation in Software Defined Networking Environment,” in Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, vol., no., pp. 479–484, 26–27 Feb. 2015. doi:10.1109/ECS.2015.7124952
Abstract: A computer network, or data communication network, is a telecommunication network that allows computers to exchange data. Computer networks are typically built from a large number of network devices such as routers, switches, and numerous types of middleboxes, with many complex protocols implemented on them. They need to accomplish very complex tasks with access to very limited tools; as a result, network management and performance tuning are quite challenging. Software-Defined Networking (SDN) is an emerging architecture that aims to be adaptable, cost-effective, dynamic, and manageable, making it suitable for the high-bandwidth, changing nature of today's applications. SDN architectures decouple the network control and forwarding functions, making network control directly programmable and abstracting the underlying infrastructure from applications and network services. Network security is a prominent requirement, ensuring accountability, confidentiality, integrity, and protection against many external and internal threats. An Intrusion Detection System (IDS) is a type of security software designed to automatically alert administrators when someone or something tries to compromise an information system through malicious activities or security policy violations. Security violations in an SDN environment need to be identified to protect the system from attack. The proposed work aims to detect attacks on the SDN environment, where detecting anomalies is more manageable and efficient.
Keywords: computer network management; computer network security; software defined networking; IDS; SDN architectures; anomaly detection; anomaly mitigation; complex protocols; computer networks; data communication network; external threats; forwarding functions; internal threats; intrusion detection system; malicious activities; network accountability; network confidentiality; network control; network control functions; network devices; network integrity; network management; network performance tuning; network protection; network security; network services; security policy violations; security software; software defined networking environment; telecommunication network; Classification algorithms; Computer architecture; Computer networks; Control systems; Entropy; Feature extraction; Protocols; Entropy based detection; Feature Selection; Flow Table; Intrusion Detection System; Software Defined Networking (ID#: 15-7590)
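The keywords mention entropy-based detection, which can be sketched briefly. This is an illustrative example, not the authors' detector: it computes the Shannon entropy of the destination-address distribution over a window of flows and flags the window when entropy drops sharply (traffic concentrating on one target, as in a DDoS). The threshold and window size are made up.

```python
# Illustrative entropy-based anomaly check (not the paper's implementation):
# low destination-address entropy in a flow window suggests traffic is
# concentrating on a few targets, a common attack signature.

import math
from collections import Counter

def shannon_entropy(items):
    counts = Counter(items)
    n = len(items)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def is_anomalous(dst_ips, threshold=1.0):
    """Flag the window if destination entropy falls below the threshold."""
    return shannon_entropy(dst_ips) < threshold

normal = ["10.0.0.%d" % (i % 8) for i in range(64)]    # traffic spread evenly
attack = ["10.0.0.1"] * 60 + ["10.0.0.2"] * 4          # traffic concentrated
print(is_anomalous(normal), is_anomalous(attack))
```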


Di Hu; Yongxue Yu; Ferrario, Antonio; Bayet, Olivier; Lin Shen; Nimmagadda, Ravi; Bonardi, Felice; Matus, Francis, “System Power Noise Analysis Using Modulated CPM,” in Electromagnetic Compatibility and Signal Integrity, 2015 IEEE Symposium on, vol., no., pp. 265–270, 15–21 March 2015. doi:10.1109/EMCSI.2015.7107697
Abstract: As the semiconductor industry advances to ever smaller technology nodes, the power distribution network (PDN) is becoming an essential design factor in ensuring system performance and reliability. Time domain simulations typically utilize the chip power model (CPM), generated by Ansys RedHawk, as the current load. The typical CPM only includes current consumption over a few clock cycles, capturing the high-frequency components (several hundred MHz) but losing the mid-to-low frequencies. This paper describes a modulated CPM (MCPM) design and signoff process for the PDN. The first step is a frequency domain analysis of the PDN to identify the die-package resonance frequency. Then a chip gate-level simulation is performed over an extended period of time to generate the VPD (Value Change Dump plus) file, with realistic low-to-mid-frequency current components. This information is then used to modulate the CPM as the current load for system-level time domain noise simulations. The PI analysis flow was validated using a set of three test cases, with reasonable simulation-measurement correlation achieved. This analysis flow enables more effective power/ground plane layout optimization and capacitor optimization in a timely manner.
Keywords: capacitors; chip scale packaging; circuit optimisation; distribution networks; frequency-domain analysis; integrated circuit layout; power integrated circuits; semiconductor industry; time-domain analysis; Ansys RedHawk; PDN; VPD; capacitor optimization; chip gate level simulation; chip power model; clock cycles; current consumption; die-package resonance frequency; frequency current components; frequency domain analysis; modulated CPM; power distribution network; power-ground plane layout optimization; signoff process; system power noise analysis; time domain noise simulations; time domain simulations; value change dump plus; Impedance; Load modeling; Noise; Noise measurement; Time-domain analysis; Voltage control; Voltage measurement; MCPM; PCB; correlation; measurement; package; power integrity; simulation (ID#: 15-7591)
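The first step of the flow, locating the die-package resonance, can be approximated with the familiar LC formula; the modulation step then wraps the few-cycle CPM current in a slow envelope at that frequency. The numbers below are illustrative, not taken from the paper, and the single-pole LC estimate is a simplification of a real PDN impedance profile.

```python
# Illustrative only: estimate the die-package anti-resonance as
# f = 1 / (2*pi*sqrt(L*C)) for package inductance L and on-die decap C,
# then scale a fast CPM current sample with a slow envelope at that
# frequency. Component values are hypothetical.

import math

def resonance_hz(l_henry, c_farad):
    """Single-pole LC estimate of the die-package resonance frequency."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

def modulated_cpm(cpm_sample, t, f_res):
    """Scale a fast CPM current sample by a slow envelope at f_res."""
    envelope = 0.5 * (1.0 + math.sin(2.0 * math.pi * f_res * t))
    return cpm_sample * envelope

# 100 pH package inductance, 100 nF on-die capacitance -> ~50 MHz region:
print(round(resonance_hz(100e-12, 100e-9) / 1e6, 1), "MHz")
```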


Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.