Science of Security (SoS) Newsletter



Science of Security (SoS) Newsletters


The SoS newsletters showcase research programs of interest to the Science of Security (SoS) community. All of these materials are based on open sources, and in many cases they link to the original work or to the web page for a specific program. A great deal of good work is going on in this community, and this page is a way to share information about colleagues, research projects, and opportunities.

Table of Contents

Science of Security (SoS) Newsletter (2016 - Issue 4)

Science of Security (SoS) Newsletter (2016 - Issue 3)

Science of Security (SoS) Newsletter (2016 - Issue 2)

Science of Security (SoS) Newsletter (2016 - Issue 1)

Science of Security (SoS) Newsletter (2015 - Issue 10)

Science of Security (SoS) Newsletter (2015 - Issue 9)

Science of Security (SoS) Newsletter (2015 - Issue 8)

Science of Security (SoS) Newsletter (2015 - Issue 7)


Science of Security (SoS) Newsletter (2015 - Issue 6)

Science of Security (SoS) Newsletter (2015 - Issue 5)

Science of Security (SoS) Newsletter (2015 - Issue 4) 

Science of Security (SoS) Newsletter (2015 - Issue 3)

Science of Security (SoS) Newsletter (Vol 2015 - Issue 2)

Science of Security (SoS) Newsletter (Vol 2015 - Issue 1)

Science of Security (SoS) Newsletter (Vol 2014 - Issue 6)

Science of Security (SoS) Newsletter (Vol 2014 - Issue 5)

Science of Security (SoS) Newsletter (Vol 2014 - Issue 4)

Science of Security (SoS) Newsletter (Vol 2014 - Issue 3)

Science of Security (SoS) Newsletter (Vol 2014 - Issue 2)

Science of Security (SoS) Newsletter (Vol 2014 - Issue 1)



Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. To request removal of links or modifications to specific citations, email SoS.Project (at) and include the ID# of the specific citation in your correspondence.



Index of Newsletter Topics


The following is a hyperlinked index of all topics included in the SoS Newsletters to date.

3rd Annual NSA Cybersecurity Paper Contest Winners
6LoWPAN 2015

Academic SoS Research Highlights
Academic SoS Research Programs
Acoustic Fingerprints
Acoustic Fingerprints (2014 Year in Review)
Acoustic Fingerprints 2015
Actuator Security
Actuator Security 2015
Ad Hoc Network Security
Adaptive Filtering
Adaptive Filtering 2015
Adoption of Cybersecurity Technology Workshop
Adversary Models and Privacy 2014
Agents 2015
Analogical Transfer
Analogical Transfer, 2014
Android and iOS Encryption
Android Encryption 2015
Anonymity and Privacy 2015
Anonymity and Privacy in Wireless Networks
Anonymity in Wireless Networks 2015
APIs 2015 part 1
APIs 2015 part 2
Artificial Intelligence
Artificial Intelligence and Privacy, 2014
Artificial Neural Networks and Security 2015
Asymmetric Encryption 2015
Attack Graphs and Privacy, 2014
Attribution (2014 Year in Review) Part 1
Attribution (2014 Year in Review) Part 2
Attribution 2015
Authentication and Authorization
Authentication and Authorization (2014 Year in Review) Part 1
Authentication and Authorization (2014 Year in Review) Part 2
Authentication and Authorization with Privacy 2015 Part 1
Authentication and Authorization with Privacy 2015 Part 2
Automated Response Actions
Automated Response Actions (2014 Year in Review)
Automated Response Actions 2015
Autonomic Security
Autonomic Security 2015


Best Paper Competition Award Ceremony
Best Scientific Cybersecurity Paper
Big Data
Big Data Security Issues (2014 Year in Review)
Big Data Security Issues in the Cloud 2015
Big Data Security Metrics, 2014
Biometric Encryption and Privacy 2014
Black Box Cryptography 2015
Botnets 2015
Browser Security
Building secure and resilient software from the start

Carnegie Mellon Lablet
Channel Coding
Channel Coding 2015
Citations for Hard Problems (2014 Year in Review)
Clean Slate
Clean Slate (2014 Year in Review)
Clean Slate 2015
Cloud Security
CMU – Carnegie Mellon University
CMU fields cloud-based sandbox
CMU Lablet Recent Activities
Coding Theory
Coding Theory and Security 2015
Coding Theory and Security, 2014, Part 1
Coding Theory and Security, 2014, Part 2
Coding Theory and Security, 2014, Part 3
Cognitive Radio Security
Cognitive Radio Security 2015
Command Injection Attacks
Communications Security
Compendium of Science of Security Articles of Interest
Compiler Security
Compiler Security 2015
Composability 2015
Compositional Security 2014
Compressive Sampling
Compressive Sampling 2015
Computational Cybersecurity in Compromised Environments Workshop
Computational Intelligence
Computational Intelligence 2015
Computer Science
Computer Science as a Theoretical Science
Computing Theory and Composability, 2014
Computing Theory and Privacy, 2014
Computing Theory and Security Metrics, 2014
Computing Theory and Security Resilience, 2014
Confinement 2015
Consumer Privacy in the Smart Grid 2014
Control Theory
Control Theory and Privacy 2014 Part 1
Control Theory and Privacy 2014 Part 2
Cooperative SoS Activities
Covert Channels
Covert Channels 2015
Cross Layer Security
Cross Layer Security (2014 Year in Review)
Cross Layer Security 2015
Cross Site Scripting
Cross Site Scripting (2014 Year in Review)
Cross Site Scripting 2015
Cryptanalysis 2015
Cryptography and Data Security 2015
Cryptography and Security
Cryptography with Photons 2015
Cryptology and Data Security, 2014
Cyber Aptitude and the Science of Intellectual Assessment
Cyber Physical Expert Security Systems 2015
Cyber Physical Systems and Metrics 2015
Cyber Physical Systems and Privacy 2015
Cyber Physical Systems Resiliency 2015
Cyber Scene (2016 - Issue 4)
Cyber Security, Cyber Warfare, and Digital Forensics (CyberSec) - Beirut, Lebanon
Cyber-crime Analysis
Cyber-physical System Security and Privacy, 2014, Part 1
Cyber-physical System Security and Privacy, 2014, Part 2
Cyber-Physical Systems
Cyber-physical Systems Security
Cybercrime Analysis 2015
Cybersecurity Conference Publications, Early 2015
Cybersecurity Education
Cybersecurity Education 2015

Data at Rest - Data in Motion
Data Deletion 2015
Data Deletion and Forgetting
Data Deletion, 2014
Data in Motion Data at Rest 2015
Data Race Vulnerabilities 2015
Data Sanitization
Data Sanitization 2015
Decomposition and Security 2015
Deep Packet Inspection 2014
Deploying the Security Behavior Observatory: An Infrastructure for Long-term Monitoring of Client Machines
Deterrence 2015
Deterrence, 2014 (ACM Publications)
Developing Security Metrics
Differential Privacy, 2014, Part 1
Differential Privacy, 2014, Part 2
Digital Signature Security
Digital Signatures
Digital Signatures and Privacy 2015
Discrete and Continuous Optimization
Discrete and Continuous Optimization 2015
Distributed Denial of Service Attack Detection 2015
Distributed Denial of Service Attack Mitigation 2015
Distributed Denial of Service Attack Prevention 2015
Distributed Denial of Service Attacks (DDoS Attacks)
DNA Cryptography 2015
DNA Cryptography, 2014
Dynamic Execution
Dynamic Network Services and Security 2015
Dynamical Systems
Dynamical Systems 2015
Dynamical Systems, 2014

Edge Detection
Edge Detection and Metrics 2015
Edge Detection and Security 2015
Effectiveness and Work Factor Metrics
Effectiveness and Work Factor Metrics 2015 Part 1
Effectiveness and Work Factor Metrics 2015 Part 2
Efficient Encryption
Efficient Encryption 2015
Elliptic Curve Cryptography (2014 Year in Review), Part 1
Elliptic Curve Cryptography (2014 Year in Review), Part 2
Elliptic Curve Cryptography (2014 Year in Review), Part 3
Elliptic Curve Cryptography (2014 Year in Review), Part 4
Elliptic Curve Cryptography (2014 Year in Review), Part 5
Elliptic Curve Cryptography 2015
Elliptic Curve Cryptography from ACM, 2014, Part 1
Elliptic Curve Cryptography from ACM, 2014, Part 2
Embedded System Security
Embedded System Security 2015
Encryption Audits 2014
End to End Computing
End to End Security and IPv6, 2014
End to End Security and the Internet of Things 2015
End to End Security and the Internet of Things, 2014
Expandability 2015
Expert Systems
Expert Systems Security 2015

Facial Recognition
Facial Recognition 2015
False Data Injection Attacks 2015
Flow Control Integrity 2015
Fog Computing Security
Fog Computing Security 2015
Forward Error Correction
Forward Error Correction 2015
Fuzzy Logic and Security
Fuzzy Logic and Security 2015

Game Theoretic Approaches
Game Theoretic Security 2015
Game Theoretic Security, 2014
General Topics of Interest (2014 - Issue 4)
General Topics of Interest (2015 - Issue 2)
General Topics of Interest (Vol 2014 - Issue 1)
General Topics of Interest (Vol 2014 - Issue 6)

Hard Problems Human Behavior 2015 Part 1
Hard Problems Human Behavior 2015 Part 2
Hard Problems Resiliency 2015
Hard Problems Security Metrics 2015
Hard Problems: Predictive Security Metrics (ACM)
Hard Problems: Predictive Security Metrics (IEEE)
Hard Problems: Resilient Security Architectures (ACM)
Hard Problems: Scalability and Composability (2014 Year in Review)
Hardware Trojan Horse Detection
Hardware Trojan Horse Detection 2015
Hash Algorithms
Hash Algorithms 2015
Homomorphism 2014  (Homomorphic Encryption)
Homomorphism 2015
Honey Pots 2015
Host-based IDS 2015
Host-based Intrusion Detection
Hot Research in Cybersecurity Fuels 2016
HotSoS 2015 - Interest in Cybersecurity Science and Research Heats Up
HotSoS 2015 - Research Presentations
HotSoS 2015 - Tutorials
HotSoS 2016 Paper, Posters and Tutorials
Human Factors
Human Trust
Human Trust 2015
I-O System Security 2014

Identity Management
Identity Management 2015
Immersive Systems
Immersive Systems 2015
Improving power grid cybersecurity
In the News (2014 - Issue 1)
In the News (2014 - Issue 2)
In the News (2014 - Issue 3)
In the News (2014 - Issue 4)
In the News (2014 - Issue 5)
In the News (2014 - Issue 6)
In the News (2015 - Issue 1)
In the News (2015 - Issue 2)
In the News (2015 - Issue 4)
In the News (2015 - Issue 6)
In the News (2015 - Issue 7)
In the News (2015 - Issue 8)
In the News (2016 - Issue 1)
In the News (2016 - Issue 2)
In the News (2016 - Issue 3)
In the News (2016 - Issue 4)
In the News (2016 - Issue 5)
Information Assurance
Information Assurance and Cyber Security (CIACS) - Pakistan
Information Flow Analysis and Security 2014
Information Forensics and Security 2015
Information Security for South Africa
Information Theoretic Security
Information Theoretic Security 2015
Insider Threat
Insider Threats (2014 Year in Review)
Insider Threats 2015
Insights into Composability from Lablet Research
Integrated Security
Integrated Security Technologies in CPS 2015
Integrity of Outsourced Databases 2015
Interdisciplinary SoS Activities
International Conferences ICCCT India 2015
International Conferences ICCSP India 2015
International Conferences SSIC China 2015
International Conferences: 10th International Conference on Security and Privacy in Communication Networks - Beijing, China
International Conferences: 15th International Conference on Information & Communications Security (ICICS 2013) - Beijing, China
International Conferences: 2014 Iran Workshop on Communication and Information Theory (IWCIT) - Iran
International Conferences: 6th International Conference on New Technologies, Mobility & Security (NTMS) - Dubai
International Conferences: ACM CHI Conference on Human Factors in Computing Systems - Toronto, Canada
International Conferences: ACM Symposium on InformAtion, Computer and Communications Security (ASIACCS) 2015, Singapore
International Conferences: AINA Korea 2015
International Conferences: AsiaJCIS 2015
International Conferences: China Summit & International Conference on Signal and Information Processing (ChinaSIP) - Xi'an, China
International Conferences: Chinese Control and Decision Conference (CCDC)
International Conferences: Cloud Engineering (IC2E), 2015 Arizona
International Conferences: CODASPY 15, San Antonio, Texas
International Conferences: Communication, Information & Computing Technology (ICCICT), 2015
International Conferences: Computer Communication and Informatics (ICCCI) - Coimbatore, India
International Conferences: Computer Science and Information Systems (FedCSIS), Warsaw, Poland
International Conferences: Computer Security Applications Conference, December 2014, New Orleans, Part 1
International Conferences: Computer Security Applications Conference, December 2014, New Orleans, Part 2
International Conferences: Conference on Advanced Communication Technology - Korea
International Conferences: Conference on Networking Systems & Security (NSysS), Dhaka, Bangladesh
International Conferences: Cryptography and Security in Computing Systems, 2015, Amsterdam
International Conferences: Cryptography and Security in Computing Systems, Amsterdam, 2015
International Conferences: CYBCONF 2015 Poland
International Conferences: Cyber and Information Security Research, Oak Ridge, TN
International Conferences: CyberSA 2015  London
International Conferences: Dependable Systems and Networks (2014) - USA
International Conferences: eCrime, Spain 2015
International Conferences: EuroSec 15, Bordeaux, France
International Conferences: Human Computer Interaction (CHI 15), Korea
International Conferences: IACC India 2015
International Conferences: IBCAST 2015  Islamabad
International Conferences: ICCPS Seattle 2015
International Conferences: ICISCE, China 2015
International Conferences: IEEE Information Theory Workshop, Hobart, Australia
International Conferences: IEEE Security and Privacy Workshops, San Jose, California
International Conferences: IEEE World Congress on Services, Anchorage, Alaska
International Conferences: IH&MMSec Oregon 2015
International Conferences: IMF Germany 2015
International Conferences: INFOCOM Kowloon China 2015
International Conferences: Information Hiding and Multimedia Security Workshop, Salzburg, Austria
International Conferences: Information Networking (ICOIN), 2015, Cambodia
International Conferences: Information Theory Workshop (ITW), 2014, Hobart, Tasmania
International Conferences: Innovations in Theoretical Computer Science, Israel, 2015
International Conferences: Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 Singapore
International Conferences: International Science of Security Research: China Communications 2013
International Conferences: International Science of Security Research: China Communications 2014
International Conferences: International Security Research
International Conferences: ISSNIP, Singapore 2015
International Conferences: MobiCom Paris 2015
International Conferences: MobiHoc 2015
International Conferences: NSysS 2015  Dhaka
International Conferences: Online Social Networks, 2014, Dublin, Ireland
International Conferences: Privacy and Security of Big Data, Shanghai, China, 2014
International Conferences: PST Turkey 2015
International Conferences: SACMAT 2015
International Conferences: Service Oriented System Engineering, 2014, Oxford, U.K.
International Conferences: SIGMETRICS 2015
International Conferences: SIGMIS-CPR 2015
International Conferences: Signal Processing and Integrated Networks (SPIN), 2015
International Conferences: Signal Processing, Informatics, Communication & Energy Systems (SPICES), India, 2015
International Conferences: Software Analysis, Evolution and Reengineering (SANER) Quebec, Canada
International Conferences: Software Security and Reliability (SERE), San Francisco, CA
International Conferences: Software Testing, Verification and Validation Workshops (ICSTW), Graz, Austria
International Conferences: Symposium on Resilient Control Systems (ISRCS)
International Conferences: The Third International Conference on Computer, Communication, Control and Information Technology (C3IT), 2015, India
International Conferences: WiSec 2015
International Conferences: Workshop on IoT Privacy, Trust, and Security, 2015, Singapore
International Conferences: Workshop on Security and Privacy Analytics (IWSPA) '15, San Antonio, Texas
International Conferences: Workshop on Visualization for Cyber Security (VizSec 2014), Paris, France
International News (2014 - Issue 5)
International News (2014 - Issue 6)
International News (2015 - Issue 1)
International News (2015 - Issue 2)
International News (2015 - Issue 4)
International Security Related Conferences
International Security Related Conferences (2015 - Issue 3)
International Security Related Conferences (2015 - Issue 7)
International Security Related Conferences (2015 - Issue 8)
International Security Related Conferences (2015 - Issue 9)
International Security Related Conferences (2016 - Issue 2)
International Security Related Conferences (Vol 2015 - Issue 4)
International Security Related Conferences (Vol 2015 - Issue 5)
International Security Research Conferences (2015 - Issue 2)
International Security Research Conferences (Vol 2015 - Issue 1)
Internet of Things (Part 1)
Internet of Things Security 2015
Internet of Things Security Problems
Internet of Vehicles 2015
Intrusion Tolerance
Intrusion Tolerance (2014 Year in Review)
Intrusion Tolerance 2015
iOS Encryption
IoT and NSA Research
IP Piracy
IP Protection 2015
IPv6 and Other Protocols
IPv6 Security 2015

Journal: IEEE Transactions on Information Forensics and Security, March 2015

Kerberos 2015
Key Management
Key Management 2015
Keystroke Analysis
Keystroke Analysis 2015

Lablet Activities (2015 - Issue 1)
Lablet Activities (2015 - Issue 2)
Lablet Presentations—Quarterly Lablet Meeting at CMU, July 2015
Lablet Quarterly Meeting at NCSU - Feb 2-3, 2016
Lablet Quarterly Meeting at UMD 201510 SoS Hard Problems
Lablet Research on Policy-Governed Secure Collaboration
Lablet Research: Human Behavior & Cybersecurity
Lablet Research: Resilient Architecture
Language Based Security
Language Based Security 2015
Lightweight Ciphers 2015
Lightweight Cryptography
Location based Services 2015
Location Privacy 2014 Authentication-based
Location Privacy 2014 Cloaking-based
Location Privacy in Wireless Networks 2015
Locking 2015

Machine Learning
Machine Learning 2015
Magnetic Remanence 2015
Malware Analysis, 2014, Part 1 (ACM)
Malware Analysis, 2014, Part 2 (ACM)
Malware Analysis, Part 1
Malware Analysis, Part 2
Malware Analysis, Part 3
Malware Analysis, Part 4
Malware Analysis, Part 5
MANET Security and Privacy 2014
Measurement and Metrics: Testing, 2014
Measurement of Security Weaknesses, 2014
Meet the Leaders in SoS Research
Meet the SoS Research Lablets
Metadata Discovery Problem
Metadata Discovery Problem 2015
Microelectronics Security
Microelectronics Security 2015
Middleware Security
Middleware Security 2015
Mobile Computing
Mobile Computing and Security
Moving Target Defense
Moving Target Defense (2014 Year in Review)
Moving Target Defense 2015
Multicore Computing and Security, 2014
Multicore Computing Security (Update)
Multicore Computing Security 2015
Multidimensional Signal Processing
Multidimensional Signal Processing 2015 Part 1
Multidimensional Signal Processing 2015 part 2
Multifactor Authentication 2015
Multiple Fault Diagnosis
Multiple Fault Diagnosis 2015

Natural Language Processing
Natural Language Processing 2015
NC State Lablet Recent Activities
NCSU – North Carolina State University
Network Accountability
Network Accountability 2015
Network Coding
Network Coding 2015
Network Intrusion Detection
Network Intrusion Detection 2015
Network Layer Security for the Smart Grid 2015
Network Reconnaissance 2015
Network Security Architecture
Network Security Architecture and Resilience 2015
Networked Control Systems 2015
Neural Networks
Neural Networks 2015

Open Systems
Open Systems and Security, 2014
Operating System Security 2015
Operating Systems
Operating Systems Security (2014 Year in Review), Part 1
Operating Systems Security (2014 Year in Review), Part 2
Operating Systems Security (2014 Year in Review), Part 3
Oscillating Behavior
Oscillating Behavior 2015

Panel Presentations—Quarterly Lablet Meeting at CMU, July 2015
Paper Presentations—Quarterly Lablet Meeting at CMU, July 2015
Pattern Locks, 2014
Peer to Peer 2015
Peer to Peer Security, 2014
Peer to Peer Systems
Pervasive Computing
Pervasive Computing 2015
Phishing (ACM) (2014 Year in Review)
Phishing (IEEE) (2014 Year in Review), Part 1
Phishing (IEEE) (2014 Year in Review), Part 2
Phishing 2015
Physical Layer Security
Physical Layer Security 2015
PKI Trust Models 2014
Policy Analysis
Policy Analysis 2015
Policy-based Governance 2015
Pollution Attacks 2015
Polymorphic Worms, 2014
Power Grid Security
Power Grid Vulnerability Analysis 2015
Predictive Metrics 2015
Privacy Models, 2014
Privacy Models, 2015
Protocol Verification
Protocol Verification 2015
Provenance 2015
Publications of Interest (2014 - Issue 1)
Publications of Interest (2014 - Issue 2)
Publications of Interest (2014 - Issue 3)
Publications of Interest (2014 - Issue 4)
Publications of Interest (2014 - Issue 5)
Publications of Interest (2014 - Issue 6)
Publications of Interest (2015 - Issue 1)
Publications of Interest (2015 - Issue 2)
Publications of Interest (2015 - Issue 3)
Publications of Interest (2015 - Issue 4)
Publications of Interest (2015 - Issue 5)
Publications of Interest (2015 - Issue 6)
Publications of Interest (2015 - Issue 7)
Publications of Interest (2015 - Issue 8)
Publications of Interest (2015 - Issue 9)
Publications of Interest (2015 - Issue 10)
Publications of Interest (2015 - Issue 11)
Publications of Interest (2016 - Issue 2)
Publications of Interest (2016 - Issue 3)
Publications of Interest (2016 - Issue 4)
Publications of Interest (2016 - Issue 5)

QR Code 2015
Quantum Computing
Quantum Computing (Update)
Quantum Computing Security (2014 Year in Review)
Quantum Computing Security 2015

Radio Frequency Identification
Radio Frequency Identification 2015
Ransomware 2016
Recent NSF Research Grants 2012-2013
Remind me Tomorrow: Human Behaviors and Cyber Vulnerabilities
Research Grant Opportunities
Resilience Indicators
Resilience Metrics 2015
Resiliency and Security, 2014
Resilient Security Architectures (IEEE)
Resilient Security Architectures 2015
Risk Estimation 2015
Risk Estimations
Router System Security 2014
Router Systems Security
Routing Anomalies 2015

Safe Coding
Safe Coding Guidelines 2015
Sandboxing 2015
Sandboxing for Mobile Apps 2015
Scalability and Compositionality 2015
Science of Secure and Resilient Cyber-Physical Systems
Science of Security (2014 Year in Review)
Science of Security (SoS) Newsletter (Vol 2014 - Issue 1)
Science of Security (SoS) Newsletter (Vol 2014 - Issue 2)
Science of Security (SoS) Newsletter (Vol 2014 - Issue 3)
Science of Security (SoS) Newsletter (Vol 2014 - Issue 4)
Science of Security (SoS) Newsletter (Vol 2014 - Issue 5)
Science of Security (SoS) Newsletter (Vol 2014 - Issue 6)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 1)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 2)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 3)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 4)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 6)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 7)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 8)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 9)
Science of Security (SoS) Newsletter (Vol 2015 - Issue 10)
Science of Security (SoS) Newsletter (Vol 2016 - Issue 1)
Science of Security (SoS) Newsletter (Vol 2016 - Issue 2)
Science of Security (SoS) Summer Internships
Science of Security Quarterly Lablet Meeting (UMD - Oct 2014)
Scientific Computing
Scientific Computing 2015
Searchable Encryption 2014
Searchable Encryption 2015
Secret Life of Passwords
Secure File Sharing
Secure File Sharing 2015
SecUrity and Resilience (SURE) Review
Security by Default 2015
Security Conference Publications, Early 2015
Security Measurement and Metric Methods, 2014
Security of Networked Cyber-Physical Systems: Challenges and Some Promising Approaches
Security Scalability and Big Data, 2014
Selection of Android graphic pattern passwords
Self-healing Networks 2015
Signal Processing 2015
Signal Propagation and Computer Technology (ICSPCT) - India
Signals Processing
Signature-based Defenses, 2014
Situational Awareness
Situational Awareness 2015
Situational Awareness and Security - Part 1
Situational Awareness and Security - Part 2
Smart Grid Security
Smart Grid Security 2015
Software Assurance
Software Assurance 2014 Part 1
Software Assurance 2014 Part 2
Software Assurance and Metrics 2015
Software Security, 2014 (ACM), Part 1
Software Security, 2014 (ACM), Part 2
Software Security, 2014 (IEEE), Part 1
Software Security, 2014 (IEEE), Part 2
Software Tamper Resistance
SoS Academic Survey
SoS and Resilience for Cyber-Physical Systems project
SoS Lablet Publications
SoS Lablet Quarterly Meeting - CMU
SoS Lablet Quarterly Meeting - NCSU
SoS Newsletter (2016 - Issue 3)
SoS Newsletter (2016 - Issue 4)
SoS Newsletter (2016 - Issue 5)
SoS VO Member Contributions
Spotlight on Current Lablet Activities
Spotlight on Lablet Activities
Spotlight on Research Activities Outside of the US
SQL Injections
SQL Injections 2015
Static Dynamic Analysis of Security Metrics for Cyberphysical Systems
Steganography 2015
SURE Meeting
SURE Meeting Presentations 2015 March 17-18
SURE Presentations
Survey on Resilience
Swarm Intelligence Security
Swarm Intelligence Security 2015
Sybil Attacks 2015
Synopses of Research Presentations UMD 201510
System Recovery
System Recovery 2015
System Science of SecUrity and REsilience (SURE)
System Science of SecUrity and REsilience (SURE) - Kickoff Meeting

Taint Analysis 2014
Tamper Resistance 2015
Text Analytics
Text Analytics 2015
Text Analytics Techniques (2014 Year in Review)
Theoretical Cryptography
Theoretical Cryptography 2015
Theoretical Foundations for Software
Threat Vector Metrics and Privacy, 2014
Threat Vectors
Time Frequency Analysis
Time Frequency Analysis and Security 2015
Trust and Trustworthiness
Trust and Trustworthiness 2015 Part 1
Trust and Trustworthiness 2015 Part 2
Trust and Trustworthiness, 2014
Trusted Platform Modules (TPMs)
Trusted Platform Modules 2015
Trustworthy Systems 2015
Trustworthy Systems, Part 1
Trustworthy Systems, Part 2

UIUC – University of Illinois at Urbana-Champaign
UIUC Lablet Recent Activities
UMD - University of Maryland
UMD Lablet Recent Activities
Upcoming Events of Interest
Upcoming Events of Interest (2014 - Issue 1)
Upcoming Events of Interest (2014 - Issue 3)
Upcoming Events of Interest (2014 - Issue 5)
Upcoming Events of Interest (2014 - Issue 6)
Upcoming Events of Interest (2015 - Issue 1)
Upcoming Events of Interest (2015 - Issue 2)
Upcoming Events of Interest (2015 - Issue 3)
Upcoming Events of Interest (2015 - Issue 4)
Upcoming Events of Interest (2015 - Issue 6)
Upcoming Events of Interest (2015 - Issue 7)
Upcoming Events of Interest (2015 - Issue 8)
Upcoming Events of Interest (2016 - Issue 1)
Upcoming Events of Interest (2016 - Issue 2)
Upcoming Events of Interest (2016 - Issue 3)
Upcoming Events of Interest (2016 - Issue 4)
Upcoming Events of Interest (2016 - Issue 5)
US News (2014 - Issue 5)
US News (2014 - Issue 6)
US News (2015 - Issue 2)
US News (Vol 2015 - Issue 1)
US News (Vol 2015 - Issue 4)
User Privacy in the Cloud, 2014

Video Surveillance
Video Surveillance 2015
Virtual Machines (1)
Virtual Machines (2)
Virtual Machines 2015
Virtual Machines, 2015
Virtualization Privacy Auditing
Visible Light Communication
Visible Light Communications Security 2015
Vulnerability Detection (2014 Year in Review), Part 1
Vulnerability Detection (2014 Year in Review), Part 2
Vulnerability Detection (2014 Year in Review), Part 3
Vulnerability Detection (2014 Year in Review), Part 4
Vulnerability Detection 2015

Weaknesses 2015
Web Browser Security 2015
Web Browsers
Web Caching
Web Caching 2015
White Box Cryptography
Wireless Mesh Network Security
Wireless Mesh Network Security 2015
Work Factor Metrics 2015
Wyvern Programming Language
Wyvern programming language builds secure apps



Zero Day Attacks
Zero Day Exploits part 1
Zero Day Exploits part 2




Science of Security (SoS) Newsletter (2016 - Issue 8)


Each issue of the SoS Newsletter highlights achievements in current research conducted by members of the global Science of Security (SoS) community. All materials presented are based on open sources and, where possible, link to the original work or to the web page for the respective program. The Newsletter showcases the exciting work going on in the security community and aims to serve as a portal connecting colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter; each link will bring you to its corresponding subsection:

Publications of Interest

The Publications of Interest section provides available abstracts and links for suggested academic and industry literature on specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS mailing list.




Cyber Scene #4




Post-BREXIT: Transatlantic Cyber Defense Issues

Barring the unlikely event of what the Economist on 2 July dubbed “...the possibility of an inelegant, humiliating, and yet welcome, Breversal,” the United Kingdom (UK) ushered in Prime Minister Theresa May and wrestled with its planned withdrawal from the European Union (EU), while the North Atlantic Treaty Organization (NATO) nations addressed cyber defense issues at the Warsaw Summit on 8–9 July 2016. The New York Times reported: “Europe, the anchor of the trans-Atlantic alliance, is battling centrifugal forces unleashed by Britain’s vote to leave the European Union.” As a counterweight, President Obama told the NATO Summit, “We haven't simply reaffirmed the alliance; we're moving forward with the most significant reinforcement of collective defense any time since the Cold War.” Former Prime Minister David Cameron recently affirmed that, despite its decision to leave the EU, the UK is not turning its back on Europe or on European security. NATO Secretary General Jens Stoltenberg, a former Norwegian Prime Minister, added that NATO is undergoing the biggest reinforcement of its collective defense in a generation.

Just prior to the Summit, NATO agreed to elevate cyberspace to a conflict domain alongside ground, air, sea, and space operations. This follows the February 2016 NATO Technical Arrangement on cyber defence cooperation with the EU, which states that international law applies to cyberspace. The US, for its part, published a list on 8 July entitled “U.S. Assurance and Deterrence Efforts in Support of NATO Allies” to underscore this direction. See the Economist special edition “Anarchy in the UK” of 2 July 2016, particularly “Adrift” (p. 10).

Jamie Shea, NATO Deputy Assistant Secretary General for Emerging Security Challenges, provides a forceful summary of the ascendancy of cyber as a tool of warfare in NATO, particularly regarding cost-effectiveness, the ease of using proxies and anonymity, and the question of impact versus cost, in a recent video clip. The recent creation of his post is itself testimony to the shift in emphasis, even prior to BREXIT.

As the EU contracts, NATO expands: Montenegro officially joined the accession process in July 2016, following Albania and Croatia’s joining in 2009. Moreover, in addition to the existing NATO Cyber Defence Centre (sic) of Excellence in Estonia and the NATO Intelligence Fusion Centre in the UK, NATO Secretary General Stoltenberg announced the standing up of an intelligence fusion center in Tunisia, focused on special forces training for the anti-terrorism issues that have drawn in NATO members and partners.

Reinforcing NATO’s growth spurt in contrast to the EU, the Secretary General underscored that beyond the addition of the 29th NATO nation, NATO welcomes the nations that have chosen to be strong partners, such as Sweden, Finland, Austria, and Serbia. Some of these partners bring sophistication, experience, and geography to the cyber table. Interoperability, an enduring technical challenge for NATO, was tested just prior to the summit at a meeting attended virtually by 53 nations deemed “crucial security partners” by NATO Deputy Secretary General Alexander Vershbow, a former ambassador and Pentagon Assistant Secretary for International Security. See: 

For the latest on NATO’s cyber defense initiatives, see the NATO JULY 2016 FACT SHEET ON CYBER DEFENCE at

N.B. NATO officially uses British spelling. 

(ID#: 16-11373)

In the News 2016 - Issue 8


SoS Logo

In the News


This section features topical, current news items of interest to the international cybersecurity community. These articles and highlights were selected from various popular science and security magazines, newspapers, and online sources.

US News     

“Experts Say Cybercriminals Are Trying to Manipulate the US Election,” CNBC, 11 August 2016. [Online]
A study conducted by Tripwire found that most cybersecurity professionals agree that criminals are attempting to influence the election. The most visible of these efforts so far has been the breach of the Democratic National Committee's computer network and the subsequent release of sensitive and controversial information that the criminals recovered. An overwhelming 82% of those surveyed “believed that state-sponsored attacks around elections should be considered acts of cyberwar.”


“Ransomware Spam Campaign Targets US Government and Educational Institutions,” International Business Times, 10 August 2016. [Online]
CryptFile2 ransomware has recently begun targeting state and local government agencies and educational institutions. A large surge of emails distributing the ransomware was disguised as convincing messages from American Airlines advertising free flights and discounts. According to a Softpedia report, CryptFile2 belongs to the CrypBoss family of ransomware. Unlike other versions of CrypBoss, a decryption code has not yet been found. See:


“Apple Offers Big Cash Rewards for Help Finding Security Bugs,” Reuters, 5 August 2016. [Online]
Apple is the latest player to toss its hat into the bug bounty ring. The tech giant said that it will award up to $200,000 for finding critical security bugs. The program will not initially be open to the public, with invitations going out to only two dozen researchers. The researchers will search for flaws in five specific categories, including Apple's “secure boot,” which carries the largest reward. See:


International News 

“Video Game Cybersecurity Startup Wins Spot at TechCrunch Disrupt Expo,” Bizjournals, 11 August 2016. [Online]
Panopticon, a startup company specializing in stopping credit card fraud and identity theft in online video games, beat out seven other companies for a spot at the upcoming TechCrunch Disrupt Expo. One of Panopticon’s clients reported losing nearly 40% of its revenue to in-game theft.


“Microsoft Accidentally Leaks Golden Keys that Unlock Every Windows Device,” International Business Times, 11 August 2016. [Online]
Microsoft accidentally released several keys that have the capacity to unlock any device running Windows. The keys allow a user to bypass Secure Boot and run other operating systems or install rootkits and bootkits. Microsoft has released several patches to address the issue; however, it is unknown whether they will fully correct it.


“Hacker Steals Nearly Two Million Accounts from Dota 2 Developer Forum,” International Business Times, 10 August 2016. [Online]
A hacker successfully breached the official developer forum for the popular online game Dota 2 and stole information including usernames, passwords, emails, and IP addresses. The security flaw that allowed the breach to occur has since been patched, but not before nearly two million accounts were compromised. The breach is reported to have taken place on 10 July. A forum administrator said that all passwords have been reset and assured users that no Steam credentials or payment information was stored or taken.


“Turnbull Warns ‘Heads will Roll’ After DDoS Attacks Cause Chaos in Australian Digital Census,” International Business Times, 11 August 2016. [Online] 
Australian Prime Minister Malcolm Turnbull said that “heads will roll” following a DDoS attack on the country’s census system. Turnbull said that fault lies with the Australian Bureau of Statistics. Many anticipate the fallout will also impact IBM, which was awarded the contract to manage the census website.


“Keyless Systems of Many VW Group Cars Can Be Hacked: Researchers,” Reuters, 11 August 2016. [Online]
According to a group of researchers, tens of millions of vehicles sold over the last 20 years are vulnerable to a bug in the keyless entry system. Computer experts at the University of Birmingham published a paper detailing the hack. The list of vulnerable vehicles includes almost every model from Volkswagen, Audi, Seat, and Skoda produced since 1995.

(ID#: 16-11370)



Publications of Interest



Publications of Interest


The Publications of Interest section contains bibliographical citations, abstracts if available, links on specific topics, and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research on topics that have been presented or published within the past year. Some represent updates from work presented in previous years; others are new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to:

(ID#: 16-11191)



APIs 2015 (Part 1)




APIs 2015 (Part 1)


Application Programming Interfaces (APIs) are definitions of interfaces to systems or modules. As code is reused, more and more APIs are adapted from earlier code. For the Science of Security community, the problems of compositionality and resilience are direct. The research work cited here was presented in 2015.

A. Masood and J. Java, “Static Analysis for Web Service Security — Tools & Techniques for a Secure Development Life Cycle,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225337
Abstract: In this ubiquitous IoT (Internet of Things) era, web services have become a vital part of today's critical national and public sector infrastructure. With the industry wide adaptation of service-oriented architecture (SOA), web services have become an integral component of enterprise software eco-system, resulting in new security challenges. Web services are strategic components used by wide variety of organizations for information exchange on the internet scale. The public deployments of mission critical APIs opens up possibility of software bugs to be maliciously exploited. Therefore, vulnerability identification in web services through static as well as dynamic analysis is a thriving and interesting area of research in academia, national security and industry. Using OWASP (Open Web Application Security Project) web services guidelines, this paper discusses the challenges of existing standards, and reviews new techniques and tools to improve services security by detecting vulnerabilities. Recent vulnerabilities like Shellshock and Heartbleed has shifted the focus of risk assessment to the application layer, which for majority of organization means public facing web services and web/mobile applications. RESTFul services have now become the new service development paradigm normal; therefore SOAP centric standards such as XML Encryption, XML Signature, WS-Security, and WS-SecureConversation are nearly not as relevant. In this paper we provide an overview of the OWASP top 10 vulnerabilities for web services, and discuss the potential static code analysis techniques to discover these vulnerabilities. The paper reviews the security issues targeting web services, software/program verification and security development lifecycle.
Keywords: Web services; program diagnostics; program verification; security of data; Heartbleed; Internet of Things; Internet scale; OWASP; Open Web Application Security Project; RESTFul services; SOAP centric standards; Shellshock; WS-SecureConversation; WS-security; Web applications; Web service security; Web services guidelines; XML encryption; XML signature; critical national infrastructure; dynamic analysis; enterprise software ecosystem; information exchange; mission critical API; mobile applications; national security and industry; program verification; public deployments; public sector infrastructure; risk assessment; secure development life cycle; security challenges; service development paradigm; service-oriented architecture; services security; software bugs; software verification; static code analysis; strategic components; ubiquitous IoT; vulnerabilities detection; vulnerability identification; Computer crime; Cryptography; Simple object access protocol; Testing; XML; Cyber Security; Penetration Testing; RESTFul API; SOA; SOAP; Secure Design; Secure Software Development; Security Code Review; Service Oriented Architecture; Source Code Analysis; Static Analysis Tool; Static Code Analysis; Web Application security; Web Services; Web Services Security (ID#: 16-10020)
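The static-analysis approach surveyed above can be illustrated with a toy check. The sketch below, written for this newsletter and not drawn from the paper, walks a Python syntax tree looking for string literals assigned to credential-like variable names, one of the classic OWASP-style findings (the variable-name list is an illustrative assumption):

```python
import ast

# Minimal static-analysis sketch: flag assignments of string literals to
# variables whose names suggest credentials (a hardcoded-secret check).
SUSPECT_NAMES = {"password", "passwd", "secret", "api_key", "token"}

def find_hardcoded_secrets(source: str):
    """Return (line, variable) pairs for suspicious literal assignments."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and target.id.lower() in SUSPECT_NAMES
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append((node.lineno, target.id))
    return findings

sample = "password = 'hunter2'\nretries = 3\napi_key = 'abc123'\n"
print(find_hardcoded_secrets(sample))  # → [(1, 'password'), (3, 'api_key')]
```

Production tools of the kind the paper reviews add data-flow tracking and far richer rule sets, but the shape is the same: parse, walk, match, report.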


L. Tang, Liubo Ouyang and W. T. Tsai, “Multi-factor Web API Security for Securing Mobile Cloud,” Fuzzy Systems and Knowledge Discovery (FSKD), 2015 12th International Conference on, Zhangjiajie, 2015, pp. 2163-2168. doi: 10.1109/FSKD.2015.7382287
Abstract: Mobile Cloud Computing is gaining more popularity in both mobile users and enterprises. With mobile-first becoming enterprise IT strategy and more enterprises exposing their business services to mobile cloud through Web API, the security of mobile cloud computing becomes a main concern and key successful factor as well. This paper shows the security challenges of mobile cloud computing and defines an end-to-end secure mobile cloud computing reference architecture. Then it shows Web API security is a key to the end-to-end security stack and specifies traditional API security mechanism and two multi-factor Web API security strategy and mechanism. Finally, it compares the security features provided by ten API gateway providers.
Keywords: application program interfaces; cloud computing; mobile computing; security of data; API gateway providers; API security mechanism; business services; end-to-end secure mobile cloud computing; enterprise IT strategy; mobile cloud computing; mobile users; multifactor Web API security; securing mobile cloud; Authentication; Authorization; Business; Cloud computing; Mobile communication; end-to-end; mobile cloud; security mechanism; web API (ID#: 16-10021)
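One common multi-factor Web API mechanism of the kind the paper compares pairs an API key with a request signature. This is a minimal sketch (the key store and message format are assumptions for illustration): the caller presents an API key as the first factor and an HMAC-SHA256 signature over the request as the second, proving possession of a shared secret without transmitting it.

```python
import hmac
import hashlib

# Hypothetical server-side key store: api_key -> shared secret.
SECRETS = {"client-42": b"s3cr3t-shared-key"}

def sign(api_key: str, method: str, path: str, timestamp: str) -> str:
    """Compute an HMAC-SHA256 signature over the canonical request."""
    msg = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(SECRETS[api_key], msg, hashlib.sha256).hexdigest()

def verify(api_key: str, method: str, path: str,
           timestamp: str, signature: str) -> bool:
    if api_key not in SECRETS:                       # factor 1: recognized key
        return False
    expected = sign(api_key, method, path, timestamp)
    return hmac.compare_digest(expected, signature)  # factor 2: valid signature

ts = "2015-08-15T12:00:00Z"
sig = sign("client-42", "GET", "/v1/orders", ts)
print(verify("client-42", "GET", "/v1/orders", ts, sig))  # True
print(verify("client-42", "GET", "/v1/admin", ts, sig))   # False
```

Including a timestamp in the signed message is what lets a gateway reject replayed requests; real API gateways layer token expiry and rate limiting on top.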


M. F. F. Khan and K. Sakamura, “Tamper-Resistant Security for Cyber-Physical Systems with eTRON Architecture,” 2015 IEEE International Conference on Data Science and Data Intensive Systems, Sydney, NSW, 2015, pp. 196-203. doi: 10.1109/DSDIS.2015.98
Abstract: This article posits tamper-resistance as a necessary security measure for cyber-physical systems (CPS). With omnipresent connectivity and pervasive use of mobile devices, software security alone is arguably not sufficient to safeguard sensitive digital information we use everyday. As a result, utilization of a variety of tamper-resistant devices - including smartcards, secure digital cards with integrated circuits, and mobile phones with subscriber identity module - has become standard industry practice. Recognizing the need for effective hardware security alongside software security, in this paper, we present the eTRON architecture - at the core of which lies the tamper-resistant eTRON chip, equipped with functions for mutual authentication, encrypted communication and access control. Besides the security features, the eTRON architecture also offers a wide range of functionalities through a coherent set of application programming interfaces (API) leveraging tamper-resistance. In this paper, we discuss various features of the eTRON architecture, and present two representative eTRON-based applications with a view to evaluating its effectiveness by comparing with other existing applications.
Keywords: authorisation; cyber-physical systems; electronic commerce; smart cards; ubiquitous computing; API; CPS; application programming interfaces; cyber-physical systems; eTRON architecture; hardware security; integrated circuits; mobile phones; secure digital cards; security features; smartcards; software security; subscriber identity module; tamper-resistant devices; tamper-resistant security; Access control; Authentication; Computer architecture; Cryptography; Hardware; Libraries; CPS; Tamper-resistance; access control; authentication; e-commerce; secure filesystem; smartcards (ID#: 16-10022)
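The mutual authentication the eTRON chip provides can be sketched, at a much-simplified level, as a symmetric challenge-response in both directions. This toy (the shared-key setup and HMAC construction are illustrative assumptions, not the eTRON protocol) shows why each side must answer a fresh challenge from the other before either is trusted:

```python
import hmac
import hashlib
import secrets

# Toy shared key between a tamper-resistant chip and a reader.
SHARED_KEY = b"chip-and-reader-shared-key"

def respond(key: bytes, challenge: bytes) -> bytes:
    """Answer a challenge by keyed-hashing it with the party's key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth(chip_key: bytes, reader_key: bytes) -> bool:
    c1 = secrets.token_bytes(16)  # reader challenges chip
    if not hmac.compare_digest(respond(chip_key, c1), respond(reader_key, c1)):
        return False
    c2 = secrets.token_bytes(16)  # chip challenges reader
    return hmac.compare_digest(respond(reader_key, c2), respond(chip_key, c2))

print(mutual_auth(SHARED_KEY, SHARED_KEY))  # True: both hold the key
print(mutual_auth(SHARED_KEY, b"wrong"))    # False: impostor reader fails
```

The point of the tamper-resistant hardware is precisely that `chip_key` never leaves the chip; only challenge responses cross the wire.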


Y. Sun, S. Nanda and T. Jaeger, “Security-as-a-Service for Microservices-Based Cloud Applications,” 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), Vancouver, BC, 2015, pp. 50-57. doi: 10.1109/CloudCom.2015.93
Abstract: Microservice architecture allows different parts of an application to be developed, deployed and scaled independently, therefore becoming a trend for developing cloud applications. However, it comes with challenging security issues. First, the network complexity introduced by the large number of microservices greatly increases the difficulty in monitoring the security of the entire application. Second, microservices are often designed to completely trust each other therefore compromise of a single microservice may bring down the entire application. The problems are only exacerbated by the cloud, since applications no longer have complete control over their networks. In this paper, we propose a design for security-as-a-service for microservices-based cloud applications. By adding a new API primitive FlowTap for the network hypervisor, we build a flexible monitoring and policy enforcement infrastructure for network traffic to secure cloud applications. We demonstrate the effectiveness of our solution by deploying the Bro network monitor using FlowTap. Results show that our solution is flexible enough to support various kinds of monitoring scenarios and policies and it incurs minimal overhead (~6%) for real world usage. As a result, cloud applications can leverage our solution to deploy network security monitors to flexibly detect and block threats both external and internal to their network.
Keywords: application program interfaces; cloud computing; security of data; trusted computing; API primitive FlowTap; Bro network monitor; microservice-based cloud applications; network hypervisor; policy enforcement infrastructure; security-as-a-service; Cloud computing; Complexity theory; Computer architecture; DVD; Electronic mail; Monitoring; Security; microservices; network monitoring; security (ID#: 16-10023)
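The FlowTap idea of steering inter-microservice traffic to a monitor can be pictured as a first-match policy table. The sketch below is an invented illustration of that decision step (the rule format and service names are assumptions, not the paper's API):

```python
# First-match policy table: decide, per flow, whether traffic between two
# microservices is mirrored to a monitor ("tap"), dropped, or passed through.
POLICIES = [
    {"src": "frontend", "dst": "payments", "action": "tap"},
    {"src": "*",        "dst": "payments", "action": "block"},
    {"src": "*",        "dst": "*",        "action": "allow"},
]

def decide(src: str, dst: str) -> str:
    for rule in POLICIES:  # first matching rule wins
        if rule["src"] in ("*", src) and rule["dst"] in ("*", dst):
            return rule["action"]
    return "block"  # default-deny if no rule matches

print(decide("frontend", "payments"))  # tap
print(decide("search", "payments"))    # block
print(decide("frontend", "search"))    # allow
```

In the paper's design the analogous decision happens in the network hypervisor, so the policy holds even when the application does not control its own cloud network.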


W. You et al., “Promoting Mobile Computing and Security Learning Using Mobile Devices,” Integrated STEM Education Conference (ISEC), 2015 IEEE, Princeton, NJ, 2015, pp. 205-209. doi: 10.1109/ISECon.2015.7119924
Abstract: It is of vital importance to provide mobile computing and security education to students in the computing fields. As the mobile applications become increasingly popular and inexpensive ways for people to communicate, share information and take advantage of convenient functionality in people's daily lives, they also regularly attract the interests of malicious attackers. Malware and spyware that may damage smart phones or steal sensitive information are also growing in every aspect of people's lives. Another concern lies in insecure mobile application development. This kind of programming makes mobile devices more vulnerable. For example, some insecure exposures of the APIs or the abuse of some components while developing apps will make the applications suffer from potential threats. Although many academic institutions have started to or planned to offer mobile computing courses, there is a shortage of hands-on lab modules and resources that can be integrated into multiple existing computing courses. In this paper, we present our development on mobile computing and security hands-on Labs and share our experiences in teaching courses on mobile computing and security with students' learning feedback using Android mobile devices.
Keywords: Android (operating system); computer science education; invasive software; mobile computing; smart phones; teaching; API; Android mobile devices; hands-on lab modules; malicious attackers; malware; mobile application development; mobile computing and security education; mobile computing courses; mobile computing hands-on labs; mobile security hands-on labs; sensitive information; spyware; student learning feedback; teaching; Mobile communication; Mobile computing; Programming; Security; Smart phones; Android development; Mobile security education; Secure programming (ID#: 16-10024)


S. Hosseinzadeh, S. Rauti, S. Hyrynsalmi and V. Leppanen, “Security in the Internet of Things Through Obfuscation and Diversification,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-5. doi: 10.1109/CCCS.2015.7374189
Abstract: Internet of Things (IoT) is composed of heterogeneous embedded and wearable sensors and devices that collect and share information over the Internet. This may contain private information of the users. Thus, securing the information and preserving the privacy of the users are of paramount importance. In this paper we look into the possibility of applying the two techniques, obfuscation and diversification, in IoT. Diversification and obfuscation techniques are two outstanding security techniques used for proactively protecting the software and code. We propose obfuscating and diversifying the operating systems and APIs on the IoT devices, and also some communication protocols enabling the external use of IoT devices. We believe that the proposed ideas mitigate the risk of unknown zero-day attacks, large-scale attacks, and also the targeted attacks.
Keywords: Internet of Things; application program interfaces; operating systems (computers); security of data; API; IoT; diversification techniques; obfuscation techniques; operating systems; security techniques; Apertures; Feeds; Impedance; Radar antennas; Substrates; Wireless LAN; diversification; obfuscation; privacy; security (ID#: 16-10025)
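The diversification idea, giving every device instance a unique variant of the same interface, can be sketched in a few lines. The API names and seed-derived renaming below are purely illustrative assumptions; the point is only that a script written against one instance's names fails on another:

```python
import random

# Hypothetical base API of an IoT device.
BASE_API = ["read_sensor", "send_packet", "write_config"]

def diversify(seed: int) -> dict:
    """Derive a per-device renaming of the base API from a seed."""
    rng = random.Random(seed)
    return {name: f"{name}_{rng.randrange(10**6):06d}" for name in BASE_API}

dev_a = diversify(1)
dev_b = diversify(2)
# Same device (same seed) always gets the same names; different devices differ.
print(dev_a["read_sensor"] != dev_b["read_sensor"])  # True
```

Real deployments diversify at the binary or system-call level rather than renaming source symbols, but the security argument is the same: the attacker's knowledge of one instance stops transferring to the rest of the fleet.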


M. A. Saied, O. Benomar, H. Abdeen and H. Sahraoui, “Mining Multi-level API Usage Patterns,” 2015 IEEE 22nd International Conference on Software Analysis, Evolution, and Reengineering (SANER), Montreal, QC, 2015, pp. 23-32. doi: 10.1109/SANER.2015.7081812
Abstract: Software developers need to cope with complexity of Application Programming Interfaces (APIs) of external libraries or frameworks. However, typical APIs provide several thousands of methods to their client programs, and such large APIs are difficult to learn and use. An API method is generally used within client programs along with other methods of the API of interest. Despite this, co-usage relationships between API methods are often not documented. We propose a technique for mining Multi-Level API Usage Patterns (MLUP) to exhibit the co-usage relationships between methods of the API of interest across interfering usage scenarios. We detect multi-level usage patterns as distinct groups of API methods, where each group is uniformly used across variable client programs, independently of usage contexts. We evaluated our technique through the usage of four APIs having up to 22 client programs per API. For all the studied APIs, our technique was able to detect usage patterns that are, almost all, highly consistent and highly cohesive across a considerable variability of client programs.
Keywords: application program interfaces; data mining; software libraries; MLUP; application programming interface; multilevel API usage pattern mining; Clustering algorithms; Context; Documentation; Graphical user interfaces; Java; Layout; Security; API Documentation; API Usage; Software Clustering; Usage Pattern (ID#: 16-10026)
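A toy version of the co-usage mining step conveys the flavor: count which pairs of API methods appear together across client programs and keep the pairs with enough support. The client sets and threshold below are invented for illustration; the paper's MLUP technique uses clustering over many real clients rather than raw pair counts.

```python
from collections import Counter
from itertools import combinations

# Each set lists the API methods one (hypothetical) client program uses.
clients = [
    {"open", "read", "close"},
    {"open", "write", "close"},
    {"open", "read", "seek", "close"},
]

def frequent_pairs(usages, min_support=2):
    """Return method pairs co-used in at least `min_support` clients."""
    counts = Counter()
    for used in usages:
        for pair in combinations(sorted(used), 2):
            counts[pair] += 1
    return {pair for pair, n in counts.items() if n >= min_support}

print(sorted(frequent_pairs(clients)))
# → [('close', 'open'), ('close', 'read'), ('open', 'read')]
```

Grouping such pairs into cohesive method clusters, stable across varied clients, is what turns raw co-occurrence into the usage patterns the paper documents.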


M. N. Aneci, L. Gheorghe, M. Carabas, S. Soriga and R. A. Somesan, “SDN-based Security Mechanism,” 2015 14th RoEduNet International Conference - Networking in Education and Research (RoEduNet NER), Craiova, 2015, pp. 12-17. doi: 10.1109/RoEduNet.2015.7311820
Abstract: Nowadays most hardware configurations are being replaced with software configurations in order to virtualize everything is possible. At this rate, in the networking research domain, the concept of Software Defined Network (SDN) is evolving. This paper proposes an SDN-based security mechanism that provides confidentiality and integrity by using custom cryptographic algorithms between two routers. The mechanism is able to secure for both TCP and UDP traffic. It provides the possibility to choose what information to secure: the Layer 4 header and Layer 7 payload, or just the Layer 7 payload. The implementation of the proposed security mechanism relies on the Cisco model for SDN and is using OnePK API.
Keywords: computer network security; cryptography; software defined networking; transport protocols; OnePK API; SDN-based security mechanism; TCP traffic; UDP traffic; application program interface; cryptographic algorithms; software configuration; software defined network; transport control protocol; user defined protocol; Decision support systems; Cisco; Software Defined Networking; confidentiality; custom encryption; integrity; onePK (ID#: 16-10027)
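The paper's "choose what to secure" option, Layer 4 header plus Layer 7 payload, or payload only, can be sketched as selective encryption over a split packet. Everything below is an illustration written for this newsletter (the XOR keystream stands in for a real cipher, and the packet split is simplified):

```python
from hashlib import sha256

def keystream(key: bytes, n: int) -> bytes:
    """Derive n deterministic keystream bytes from the key (toy construction)."""
    out, counter = b"", 0
    while len(out) < n:
        out += sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def protect(l4_header: bytes, l7_payload: bytes, key: bytes, headers_too: bool):
    """Encrypt the payload, and optionally the L4 header as well."""
    enc = lambda data: bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
    if headers_too:
        return enc(l4_header), enc(l7_payload)
    return l4_header, enc(l7_payload)  # header left in the clear for routing

hdr, body = b"SRC:1234 DST:80", b"GET /index.html"
p_hdr, p_body = protect(hdr, body, b"k1", headers_too=False)
print(p_hdr == hdr, p_body != body)  # True True
```

Leaving the Layer 4 header in the clear trades some confidentiality for the ability of intermediate devices to route and filter; securing both fields hides even the ports, which is the trade-off the mechanism lets the operator make per flow.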


R. Beniwal, P. Zavarsky and D. Lindskog, “Study of Compliance of Apple's Location based APIs with Recommendations of the IETF Geopriv,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 214-219. doi: 10.1109/ICITST.2015.7412092
Abstract: Location Based Services (LBS) are services offered by smart phone applications which use device location data to offer the location-related services. Privacy of location information is a major concern in LBS applications. This paper compares the location APIs of iOS with the IETF Geopriv architecture to determine what mechanisms are in place to protect location privacy of an iOS user. The focus of the study is on the distribution phase of the Geopriv architecture and its applicability in enhancing location privacy on iOS mobile platforms. The presented review shows that two iOS APIs features known as Geocoder and turning off location services provide to some extent location privacy for iOS users. However, only a limited number of functionalities can be considered as compliant with Geopriv's recommendations. The paper also presents possible ways how to address limited location privacy offered by iOS mobile devices based on Geopriv recommendations.
Keywords: application program interfaces; data privacy; iOS (operating system); recommender systems; smart phones; Apple location based API; Geocoder; Geopriv recommendation; IETF Geopriv architecture; LBS; device location data; distribution phase; iOS mobile device; iOS mobile platform; iOS user; location based service; location information privacy; location privacy; location-related service; off location service; smart phone application; Global Positioning System; Internet; Mobile communication; Operating systems; Privacy; Servers; Smart phones; APIs; Geopriv; iOS; location information (ID#: 16-10028)
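One Geopriv-style privacy transformation the architecture envisions is coarsening a location before release, so an application learns only which grid cell the user is in. The grid size and snapping rule below are illustrative assumptions, not an Apple or IETF API:

```python
def coarsen(lat: float, lon: float, grid_deg: float = 0.01):
    """Snap a coordinate to the centre of a grid cell (~1 km at 0.01 deg)."""
    snap = lambda v: round((v // grid_deg) * grid_deg + grid_deg / 2, 6)
    return snap(lat), snap(lon)

# Two nearby positions map to the same released location.
print(coarsen(51.500729, -0.124625))
print(coarsen(51.5012, -0.1201))
```

Because every point in a cell releases the identical centre coordinate, the consumer of the location cannot distinguish positions within the cell, which is the privacy guarantee rounding alone (without snapping) does not give.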


A. Alotaibi and A. Mahmmod, “Enhancing OAuth Services Security by an Authentication Service with Face Recognition,” Systems, Applications and Technology Conference (LISAT), 2015 IEEE Long Island, Farmingdale, NY, 2015, pp. 1-6. doi: 10.1109/LISAT.2015.7160208
Abstract: Controlling secure access to web Application Programming Interfaces (APIs) and web services has become more vital with advancement and use of the web technologies. The security of web services APIs is encountering critical issues in managing authenticated and authorized identities of users. Open Authorization (OAuth) is a secure protocol that allows the resource owner to grant permission to a third-party application in order to access the resource owner's protected resource on their behalf, without releasing their credentials. Most web APIs are still using the traditional authentication which is vulnerable to many attacks such as man-in-the middle attack. To reduce such vulnerability, we enhance the security of OAuth through the implementation of a biometric service. We introduce a face verification system based on Local Binary Patterns as an authentication service handled by the authorization server. The entire authentication process consists of three services: Image registration service, verification service, and access token service. The developed system is most useful in securing those services where a human identification is required.
Keywords: Web services; application program interfaces; authorisation; biometrics (access control); face recognition; image registration; OAuth service security; Web application programming interfaces; Web services API; Web technologies; access token service; authentication service; authorization server; biometric service; face verification system; human identification; image registration service; local binary patterns; open authorization; resource owner protected resource; third-party application; verification service; Authentication; Authorization; Databases; Protocols; Servers; Access Token; Face Recognition; OAuth; Open Authorization; Web API; Web Services (ID#: 16-10029)
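The three-service flow the paper describes, registration, verification, then token issuance, can be sketched as follows. The "face match" here is a stand-in hash comparison, not real face recognition, and the storage and token scheme are assumptions for illustration:

```python
import hashlib
import hmac
import secrets

registered = {}     # username -> stored template digest (registration service)
issued_tokens = set()

def register(user: str, template: bytes):
    """Registration service: store a digest of the enrolled face template."""
    registered[user] = hashlib.sha256(template).digest()

def verify_face(user: str, probe: bytes) -> bool:
    """Verification service: compare a probe against the stored template."""
    return user in registered and hmac.compare_digest(
        registered[user], hashlib.sha256(probe).digest())

def issue_token(user: str, probe: bytes):
    """Access token service: issue a bearer token only after verification."""
    if not verify_face(user, probe):
        return None
    token = secrets.token_urlsafe(32)
    issued_tokens.add(token)
    return token

register("alice", b"alice-face-sample")
print(issue_token("alice", b"alice-face-sample") is not None)  # True
print(issue_token("alice", b"someone-else"))                   # None
```

The key OAuth property preserved here is that the third-party application only ever sees the token, never the biometric or any other credential.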


G. Suddul, K. Nundran, J. L. K. Cheung and M. Richomme, “Rapid Prototyping with a Local Geolocation API,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-4. doi: 10.1109/CCCS.2015.7374192
Abstract: Geolocation technology provides the ability to target content and services to users visiting specific locations. There is an expanding growth of device features and Web Application Programming Interfaces (APIs) supporting the development of applications with geolocation services on mobile platforms. However, to be effective, these applications rely on the availability of broadband networks which are not readily available in various developing countries, especially Africa. We propose a geolocation API for the Orange Emerginov Platform which keeps geolocation data in an offline environment and periodically synchronises with its online database. The API also has a set of new features like categorisation and shortest path. It has been successfully implemented and tested with geolocation data of Mauritius, consumed by mobile applications. Our result demonstrates reduction of response time around 80% for some features, when compared with other online Web APIs.
Keywords: application program interfaces; mobile computing; software prototyping; Mauritius; Orange Emerginov Platform; application programming interfaces; local geolocation API; mobile applications; rapid prototyping; Artificial neural networks; Computational modeling; Industries; Liquids; Mathematical model; Process control; Training; OSM geolocation API; Web API; geolocation; micro-services (ID#: 16-10030)


P. Sun, S. Chandrasekaran, S. Zhu and B. Chapman, “Deploying OpenMP Task Parallelism on Multicore Embedded Systems with MCA Task APIs,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 843-847. doi: 10.1109/HPCC-CSS-ICESS.2015.88
Abstract: Heterogeneous multicore embedded systems are rapidly growing with cores of varying types and capacity. Programming these devices and exploiting the hardware has been a real challenge. The programming models and its execution are typically meant for general purpose computation, they are mostly too heavy to be adopted for the resource-constrained embedded systems. Embedded programmers are still expected to use low-level and proprietary APIs, making the software built less and less portable. These challenges motivated us to explore how OpenMP, a high-level directive-based model, could be used for embedded platforms. In this paper, we translate OpenMP to Multicore Association Task Management API (MTAPI), which is a standard API for leveraging task parallelism on embedded platforms. Our results demonstrate that the performance of our OpenMP runtime library is comparable to the state-of-the-art task parallel solutions. We believe this approach will provide a portable solution since it abstracts the low-level details of the hardware and no longer depends on vendor-specific API.
Keywords: application program interfaces; embedded systems; multiprocessing systems; parallel processing; MCA; MTAPI; OpenMP runtime library; OpenMP task parallelism; heterogeneous multicore embedded system; high-level directive-based model; multicore association task management API; multicore embedded system; resource-constrained embedded system; vendor-specific API; Computational modeling; Embedded systems; Hardware; Multicore processing; Parallel processing; Programming; Heterogeneous Multicore Embedded Systems; MTAPI; OpenMP; Parallel Computing (ID#: 16-10031)


N. Kawaguchi and K. Omote, “Malware Function Classification Using APIs in Initial Behavior,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 138-144. doi: 10.1109/AsiaJCIS.2015.15
Abstract: Malware proliferation has become a serious threat to the Internet in recent years. Most of the current malware are subspecies of existing malware that have been automatically generated by illegal tools. To conduct an efficient analysis of malware, estimating their functions in advance is effective when we give priority to analyze. However, estimating malware functions has been difficult due to the increasing sophistication of malware. Although various approaches for malware detection and classification have been considered, the classification accuracy is still low. In this paper, we propose a new classification method which estimates malware's functions from APIs observed by dynamic analysis on a host. We examining whether the proposed method can correctly classify unknown malware based on function by machine learning. The results show that our new method can classify each malware's function with an average accuracy of 83.4%.
Keywords: Internet; invasive software; learning (artificial intelligence); pattern classification; API; dynamic analysis; efficient malware analysis; illegal tools; initial behavior; machine learning; malware detection; malware function classification; malware proliferation; Accuracy; Data mining; Feature extraction; Machine learning algorithms; Malware; Software; Support vector machines; machine learning; malware classification (ID#: 16-10032)
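The kind of classification the abstract describes, matching an unknown sample's dynamically observed API calls against labelled traces, can be sketched roughly as follows. This is not the authors' pipeline (they train a machine-learning classifier over initial-behavior APIs); it is a minimal nearest-neighbour illustration, and every API trace and function label below is hypothetical:

```python
import math
from collections import Counter

def vectorize(api_calls):
    """Bag-of-API-calls frequency vector for one observed trace."""
    return Counter(api_calls)

def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def classify(trace, labelled):
    """Assign the function label of the most similar labelled trace."""
    vec = vectorize(trace)
    label, _ = max(labelled, key=lambda lt: cosine(vec, vectorize(lt[1])))
    return label

# Hypothetical labelled traces: (function label, APIs observed early in a run)
known = [
    ("keylogger", ["SetWindowsHookExA", "GetAsyncKeyState", "WriteFile"]),
    ("downloader", ["InternetOpenA", "InternetOpenUrlA", "CreateFileA", "WriteFile"]),
]
print(classify(["InternetOpenA", "InternetOpenUrlA", "WriteFile"], known))  # downloader
```

A real system would replace the nearest-neighbour step with the trained model and use far richer traces, but the feature representation is the same idea.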


M. A. Saied, H. Abdeen, O. Benomar and H. Sahraoui, “Could We Infer Unordered API Usage Patterns Only Using the Library Source Code?,” 2015 IEEE 23rd International Conference on Program Comprehension, Florence, 2015, pp. 71-81. doi: 10.1109/ICPC.2015.16
Abstract: Learning to use existing or new software libraries is a difficult task for software developers, one that impedes their productivity. Much existing work provides techniques to mine API usage patterns from client programs in order to help developers understand and use existing libraries. However, relying only on client programs to identify API usage patterns is a strong constraint, as client source code is not always available, and for newly released APIs clients may not exist yet. In this paper, we propose a technique for mining Non-Client-based Usage Patterns (the NCBUP miner). We detect unordered API usage patterns as distinct groups of API methods that are structurally and semantically related and thus may contribute together to the implementation of a particular functionality for potential client programs. We evaluated our technique on four APIs. The obtained results are comparable to those of client-based approaches in terms of usage-pattern cohesion.
Keywords: application program interfaces; data mining; software libraries; source code (software); NCBUP miner; client programs; library source code; non client-based usage patterns; software libraries; unordered API usage patterns; Clustering algorithms; Context; Java; Matrix decomposition; Measurement; Security; Semantics; API Documentation; API Usage; Software Clustering; Usage Pattern (ID#: 16-10033)
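As a rough, hypothetical illustration of mining unordered usage patterns without client code: one can group API methods by the overlap of the library elements they reference. The NCBUP miner combines such structural signals with semantic ones and uses a more principled clustering; the method names, reference sets, and threshold below are invented:

```python
def jaccard(a, b):
    """Overlap of two reference sets as a structural-relatedness score."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def mine_patterns(methods, threshold=0.5):
    """Greedily group API methods whose referenced-element sets overlap.

    `methods` maps an API method name to the set of library types/fields
    it touches; each resulting group is a candidate unordered usage pattern."""
    clusters = []
    for name, refs in methods.items():
        for cluster in clusters:
            if any(jaccard(refs, methods[m]) >= threshold for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical library methods and the internal elements they reference
api = {
    "openConnection": {"Socket", "Config"},
    "closeConnection": {"Socket"},
    "parseHeader": {"Header", "Buffer"},
    "readHeader": {"Header", "Buffer", "Socket"},
}
print(mine_patterns(api))
```

With these toy inputs the connection-management methods and the header-handling methods fall into two separate groups, mirroring the paper's notion of methods that "contribute together to the implementation of a particular functionality."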


Y. E. Oktian, SangGon Lee, HoonJae Lee and JunHuy Lam, “Secure Your Northbound SDN API,” 2015 Seventh International Conference on Ubiquitous and Future Networks, Sapporo, 2015, pp. 919-920. doi: 10.1109/ICUFN.2015.7182679
Abstract: Many new features and capabilities emerge in the network from the separation of the data plane and control plane in Software-Defined Networking (SDN). One of them is the possibility to implement a Northbound API that allows third-party applications to access network resources. However, most current implementations of the Northbound API do not consider the security aspect. We therefore design a more secure scheme for it. The design consists of token-based authentication, using the OAuth 2.0 protocol, for both the application and the user who controls or uses the application and network.
Keywords: application program interfaces; computer network security; cryptographic protocols; software defined networking; Northbound SDN API; OAuth 2.0 protocol; SDN terminology; authentication; software defined network terminology; Authentication; Authorization; Proposals; Protocols; Servers; Software defined networking; Northbound API; SDN; authentication; token (ID#: 16-10034)
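A minimal sketch of the token-based authentication the paper proposes for the Northbound API, here reduced to an HMAC-signed bearer token checked by the controller. Real OAuth 2.0 involves a full authorization-server grant flow, and every identifier, secret, and scope name below is hypothetical:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"controller-shared-secret"  # hypothetical key held by the controller

def issue_token(app_id, scopes, ttl=3600):
    """Sign a short-lived bearer token binding an app to network scopes."""
    payload = json.dumps({"app": app_id, "scopes": scopes,
                          "exp": int(time.time()) + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    return base64.urlsafe_b64encode(payload + b"." + sig).decode()

def verify_token(token, required_scope):
    """Reject requests whose token is forged, expired, or out of scope."""
    payload, sig = base64.urlsafe_b64decode(token).rsplit(b".", 1)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(payload)
    return claims["exp"] > time.time() and required_scope in claims["scopes"]

token = issue_token("monitoring-app", ["read:topology"])
print(verify_token(token, "read:topology"))  # True
print(verify_token(token, "write:flows"))    # False
```

The scope check is the point the paper stresses: a third-party application should only reach the network resources its token explicitly grants.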


C. I. Fan, H. W. Hsiao, C. H. Chou and Y. F. Tseng, “Malware Detection Systems Based on API Log Data Mining,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 255-260. doi: 10.1109/COMPSAC.2015.241
Abstract: As information technology improves, the Internet is involved in every area of our daily life. As mobile devices and cloud computing technology have come to play important parts in our lives, they have become more susceptible to attacks. In recent years, phishing and malicious websites have increasingly become serious problems in the field of network security. Attackers use many approaches to implant malware into target hosts in order to steal significant data and cause substantial damage. The growth of malware has been very rapid, and its purpose has changed from destruction to penetration, making malware signatures more difficult to detect. In addition to static signatures, malware also tries to conceal dynamic signatures from anti-virus inspection. In this research, we use hooking techniques to trace the dynamic signatures that malware tries to hide. We then compare the behavioural differences between malware and benign programs using data mining techniques in order to identify the malware. The experimental results show that our detection rate reaches 95% with only 80 attributes, meaning our method achieves a high detection rate with low complexity.
Keywords: Web sites; application program interfaces; cloud computing; computer viruses; data mining; API log data mining; Internet; antivirus inspection; cloud computing technology; dynamic signature tracing; dynamic signatures; hooking techniques; information technology; malicious Web sites; malware detection systems; mobile devices; network security; phishing; static signatures; Accuracy; Bayes methods; Data mining; Feature extraction; Malware; Monitoring; Training; API; Classification; Data Mining; Malware; System Call (ID#: 16-10035)


G. G. Sundarkumar, V. Ravi, I. Nwogu and V. Govindaraju, “Malware Detection via API Calls, Topic Models and Machine Learning,” 2015 IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, 2015, pp. 1212-1217. doi: 10.1109/CoASE.2015.7294263
Abstract: Dissemination of malicious code, also known as malware, poses severe challenges to cyber security. Malware authors embed malicious code in seemingly innocuous executables, unknown to the user. The malware subsequently interacts with security-critical OS resources on the host system or network in order to destroy their information or to gather sensitive information such as passwords and credit card numbers. Malware authors typically use Application Programming Interface (API) calls to perpetrate these crimes. We present a model that uses text mining and topic modeling to detect malware, based on the types of API call sequences. We evaluated our technique on two publicly available datasets and observed that Decision Tree and Support Vector Machine classifiers yielded significant results. We performed a t-test with respect to sensitivity for the two models and found no statistically significant difference between them. We recommend the Decision Tree, as it yields 'if-then' rules that could be used in an early-warning expert system.
Keywords: application program interfaces; data mining; decision trees; expert systems; invasive software; learning (artificial intelligence); support vector machines; API calls; application programming interface calls; cyber security; decision tree; early warning expert system; if-then rules; machine learning; malicious code dissemination; malware detection; security-critical OS resources; support vector machine text mining; topic modeling; topic models; Feature extraction; Grippers; Sensitivity; Support vector machines; Text mining; Trojan horses (ID#: 16-10036)


M. Sneps-Sneppe and D. Namiot, “Metadata in SDN API for WSN,” 2015 7th International Conference on New Technologies, Mobility and Security (NTMS), Paris, 2015, pp. 1-5. doi: 10.1109/NTMS.2015.7266504
Abstract: This paper discusses system aspects of the development of application programming interfaces in Software-Defined Networking (SDN). SDN is a prospective software enabler for Wireless Sensor Networks (WSN), so an application-layer SDN API will be the main application API for WSN. Almost all existing SDN interfaces use so-called Representational State Transfer (REST) services as a basic model. This model is simple and straightforward for developers, but it often does not support the information (metadata) necessary for programming automation. In this article, we cover the issues of representing metadata in the SDN API.
Keywords: meta data; software defined networking; wireless sensor networks; REST services; SDN interfaces; WSN; metadata; programming automation; programming interfaces; representational state transfer; software-defined networking; Computer architecture; Metadata; Programming; service-oriented architecture; Wireless sensor networks; Parlay; REST; SDN; WSDL; northbound API (ID#: 16-10037)


M. Panda and A. Nag, “Plain Text Encryption Using AES, DES and SALSA20 by Java Based Bouncy Castle API on Windows and Linux,” Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 541-548. doi: 10.1109/ICACCE.2015.130
Abstract: Information security has become an important element of data communication. Various encryption algorithms have been proposed and implemented as solutions and play an important role in information security systems. On the other hand, these algorithms consume a significant amount of computing resources such as CPU time, memory, and battery power. For practical applications, performance and implementation cost are also important concerns, so it is essential to assess the performance of encryption algorithms. In this paper, the performance of three symmetric-key algorithms (AES, Blowfish, and Salsa20) has been evaluated based on execution time, memory required for implementation, and throughput across two different operating systems. Based on the simulation results, it can be concluded that AES and Salsa20 are preferred over Blowfish for plain text data encryption.
Keywords: Java; Linux; application program interfaces; cryptography; AES; Blowfish; Bouncy Castle API; DES; SALSA20; Salsa20; Windows; data communication; information security system; operating systems; performance assessment; plain text data encryption; plain text encryption; symmetric key based algorithms; Algorithm design and analysis; Ciphers; Classification algorithms; Encryption; Memory management; Performance Analysis (ID#: 16-10038)
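The paper's evaluation method, timing each cipher and deriving throughput, can be sketched with a small harness. Python's standard library ships neither AES nor Salsa20 (the paper uses the Java-based Bouncy Castle API), so a toy XOR transform stands in below purely to exercise the harness; it is not a cipher:

```python
import os
import time

def xor_placeholder(key, data):
    """Stand-in transform: NOT encryption, used only so the harness runs."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def benchmark(name, encrypt, data, runs=3):
    """Best-of-N wall-clock time and derived throughput, two of the three
    metrics (alongside memory) that the paper compares across algorithms
    and operating systems."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        encrypt(data)
        timings.append(time.perf_counter() - start)
    best = min(timings)
    return {"algorithm": name, "seconds": best, "MB/s": len(data) / best / 1e6}

key = os.urandom(16)
plaintext = os.urandom(500_000)  # 0.5 MB of sample plaintext
result = benchmark("xor-placeholder", lambda d: xor_placeholder(key, d), plaintext)
print(result["algorithm"], result["seconds"] > 0)
```

Swapping `xor_placeholder` for a real AES or Salsa20 call from a crypto library reproduces the shape of the paper's comparison; absolute numbers will of course depend on the OS and hardware, which is exactly the variation the authors measure.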


V. Casola, A. D. Benedictis, M. Rak and U. Villano, “SLA-Based Secure Cloud Application Development: The SPECS Framework,” 2015 17th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Timisoara, 2015, pp. 337-344. doi: 10.1109/SYNASC.2015.59
Abstract: The perceived lack of control over resources deployed in the cloud may be one of the critical factors in an organization's decision whether or not to move its services to the cloud. Furthermore, in spite of the idea of offering security-as-a-service, the development of secure cloud applications requires security skills that can slow down cloud adoption for non-expert users. In recent years, the concept of Security Service Level Agreements (Security SLAs) has assumed a key role in the provisioning of cloud resources. This paper presents the SPECS framework, which enables the development of secure cloud applications covered by a Security SLA. The SPECS framework offers APIs to manage the whole Security SLA life cycle and provides all the functionality needed to automate the enforcement of proper security mechanisms and to monitor user-defined security features. The development process of SPECS applications offering security-enhanced services is illustrated, presenting the provisioning of a secure web server as a real-world case study.
Keywords: application program interfaces; cloud computing; contracts; security of data; API; SLA-based secure cloud application development; SPECS framework; secure Web server; security SLA; security service level agreement; security-as-a-service; security-enhanced service; user-defined security feature; Cloud computing; Context; Monitoring; Security; Supply chains; Unified modeling language; SPECS; Secure Cloud Application Development; Secure Web Server; Security Service Level Agreement (ID#: 16-10039)


N. Paladi and C. Gehrmann, “Towards Secure Multi-Tenant Virtualized Networks,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 1180-1185. doi: 10.1109/Trustcom.2015.502
Abstract: Network virtualization enables multi-tenancy over physical network infrastructure, with a side-effect of increased network complexity. Software-defined networking (SDN) is a novel network architectural model – one where the control plane is separated from the data plane by a standardized API – which aims to reduce the network management overhead. However, as the SDN model itself is evolving, its application to multi-tenant virtualized networks raises multiple security challenges. In this paper, we present a security analysis of SDN-based multi-tenant virtualized networks: we outline the security assumptions applicable to such networks, define the relevant adversarial model, identify the main attack vectors for such network infrastructure deployments and finally synthesize a set of high-level security requirements for SDN-based multi-tenant virtualized networks. This paper sets the foundation for future design of secure SDN-based multi-tenant virtualized networks.
Keywords: application program interfaces; computer network management; computer network security; software defined networking; virtualisation; SDN; main attack vectors; multitenant virtualized network security; multitenant virtualized networks; network architectural model; network complexity; network infrastructure deployments; network management overhead reduction; network virtualization; physical network infrastructure; software-defined networking; standardized API; Cloud computing; Computer architecture; Hardware; Network operating systems; Routing; Security; Virtualization; Multi-tenant Virtualized Networks; Network Virtualization; Security; Software Defined Networks (ID#: 16-10040)


A. Bianchi, J. Corbetta, L. Invernizzi, Y. Fratantonio, C. Kruegel and G. Vigna, “What the App is That? Deception and Countermeasures in the Android User Interface,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 931-948. doi: 10.1109/SP.2015.62
Abstract: Mobile applications are part of the everyday lives of billions of people, who often trust them with sensitive information. These users identify the currently focused app solely by its visual appearance, since the GUIs of the most popular mobile OSes do not show any trusted indication of the app's origin. In this paper, we analyze in detail the many ways in which Android users can be confused into misidentifying an app and thus, for instance, deceived into giving sensitive information to a malicious app. Our analysis of the Android platform APIs, assisted by an automated state-exploration tool, led us to identify and categorize a variety of attack vectors (some previously known, others novel, such as a non-escapable full-screen overlay) that allow a malicious app to surreptitiously replace or mimic the GUI of other apps and mount phishing and click-jacking attacks. Limitations in the system GUI make these attacks significantly harder to notice than on a desktop machine, leaving users completely defenseless against them. To mitigate GUI attacks, we have developed a two-layer defense. To detect malicious apps at the market level, we developed a tool that uses static analysis to identify code that could launch GUI confusion attacks. We show how this tool detects apps that might launch GUI attacks, such as ransomware programs. Since these attacks are meant to confuse humans, we have also designed and implemented an on-device defense that addresses the underlying issue of the lack of a security indicator in the Android GUI. We add such an indicator to the system navigation bar; it securely informs users about the origin of the app with which they are interacting (e.g., the PayPal app is backed by “PayPal, Inc.”). We demonstrate the effectiveness of our attacks and the proposed on-device defense with a user study involving 308 human subjects, whose ability to detect the attacks increased significantly when using a system equipped with our defense.
Keywords: Android (operating system); graphical user interfaces; invasive software; program diagnostics; smart phones; Android platform API; Android user interface; GUI confusion attacks; app origin; attack vectors; automated state-exploration tool; click-jacking attacks; desktop machine; malicious app; mobile OS; mobile applications; on-device defense; phishing attacks; ransomware programs; security indicator; sensitive information; static analysis; system navigation bar; trusted indication; two-layer defense; visual appearance; Androids; Graphical user interfaces; Humanoid robots; Navigation; Security; Smart phones; mobile-security; static-analysis; usable-security (ID#: 16-10041)


E. Markoska, N. Ackovska, S. Ristov, M. Gusev and M. Kostoska, “Software Design Patterns to Develop an Interoperable Cloud Environment,” Telecommunications Forum Telfor (TELFOR), 2015 23rd, Belgrade, 2015, pp. 986-989. doi: 10.1109/TELFOR.2015.7377630
Abstract: Software development has provided methods and tools to facilitate the development process, resulting in scalable, efficient, testable, readable and bug-free code. This endeavor has produced a multitude of products, many of them known today as good practices, specialized environments, improved compilers, and software design patterns. Software design patterns are a tested methodology and are most often language-neutral. In this paper, we identify the problem of the heterogeneous cloud market and of the diverse APIs offered even within a single cloud. Using a set of software design patterns, we developed a pilot software component that unifies the APIs of heterogeneous clouds. It offers an interface that greatly simplifies the development of cloud-based applications. The pilot adapter is developed for two open-source clouds, Eucalyptus and OpenStack, but the use of software design patterns allows easy extension to any other cloud, open source or commercial, that exposes a management API.
Keywords: application program interfaces; cloud computing; object-oriented methods; object-oriented programming; public domain software; software engineering; API; Eucalyptus; OpenStack; application program interface; cloud environment interoperability; open source cloud; software design pattern; software development; Cloud computing; Interoperability; Java; Production facilities; Security; Software design; cloud; design patterns; interoperability (ID#: 16-10042)


S. Betgé-Brezetz, G. B. Kamga and M. Tazi, “Trust Support for SDN Controllers and Virtualized Network Applications,” Network Softwarization (NetSoft), 2015 1st IEEE Conference on, London, 2015, pp. 1-5. doi: 10.1109/NETSOFT.2015.7116153
Abstract: The SDN paradigm allows networks to be dynamically reconfigured by network applications. SDN is also of particular interest for NFV, which deals with the virtualization of network functions. The network programmability offered by SDN thus presents various advantages, but it also introduces threats of potential attacks on the network. For instance, there is a critical risk that a hacker takes over network control by exploiting this programmability (e.g., using the SDN API or tampering with a network application running on the SDN controller). This paper therefore proposes an approach to deal with this possible lack of trust in the SDN controller or its applications. The approach consists in relying not on a single controller but on several 'redundant' controllers that may also run in different execution environments. The network configuration requests coming from these controllers are compared and, if deemed sufficiently consistent and therefore trustable, are actually sent to the network. This approach has been implemented in an intermediary layer (based on a network hypervisor) inserted between the network equipment and the controllers. Experiments have been performed showing the feasibility of the approach and providing first evaluations of its impact on the network and its services.
Keywords: application program interfaces; computer network security; software defined networking; trusted computing; virtualisation; NFV; SDN API; SDN controllers; SDN network programmability; SDN paradigm; network configuration requests; network control; network equipments; network function virtualization; network hypervisor; network programmability; redundant controllers; trust support; virtualized network applications; Computer architecture; Network topology; Prototypes; Routing; Security; Virtual machine monitors; Virtualization; SDN; network applications; network virtualization; security; trust (ID#: 16-10043)


H. L. Choo, S. Oh, J. Jung and H. Kim, “The Behavior-Based Analysis Techniques for HTML5 Malicious Features,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 436-440. doi: 10.1109/IMIS.2015.67
Abstract: HTML5, announced in October 2014, contains many more functions than previous HTML versions. It includes media controls for audio, video, canvas, etc., and it is designed to access the browser file system through JavaScript APIs such as the Web Storage and FileReader APIs. In addition, it provides powerful functions that replace existing ActiveX controls. As the HTML5 standard is adopted, the conversion of web services to HTML5 is being carried out all over the world. Browser developers in particular have high expectations for HTML5, as it provides many mobile functions. However, along with these expectations, the damage from malicious attacks using HTML5 is also expected to be large. Script, which is the key to HTML5's functionality, differs from existing malware attack vectors in that a malicious attack can be triggered merely by a user visiting a page in a browser. Existing known attacks can also be reused by bypassing detection systems through the new HTML5 elements. This paper defines the unique HTML5 behavior data obtained from browser execution data and proposes malware detection by categorizing malicious HTML5 features.
Keywords: Internet; Java; hypermedia markup languages; invasive software; mobile computing; multimedia computing; online front-ends; telecommunication control; HTML versions; HTML5 behavior data; HTML5 elements; HTML5 functions; HTML5 malicious features; HTML5 standard; Java Script API; Web services; Web storage; behavior-based analysis techniques; browser developers; browser execution data; browser file system; detection systems; file reader API; malicious attacks; malware attacks; media controls; mobile functions; Browsers; Engines; Feature extraction; HTML; Malware; Standards; Behavior-Based Analysis; HTML5 Malicious Features; Script-based CyberAttack; Web Contents Security (ID#: 16-10044)


H. Graupner, K. Torkura, P. Berger, C. Meinel and M. Schnjakin, “Secure Access Control for Multi-Cloud Resources,” Local Computer Networks Conference Workshops (LCN Workshops), 2015 IEEE 40th, Clearwater Beach, FL, 2015, pp. 722-729. doi: 10.1109/LCNW.2015.7365920
Abstract: Privacy, security, and trust concerns continuously hinder the growth of cloud computing despite its attractive features. To mitigate these concerns, an emerging approach targets the use of multi-cloud architectures to achieve portability and reduce cost. Multi-cloud architectures, however, suffer several challenges, including inadequate cross-provider APIs, insufficient support from cloud service providers, and especially non-unified access control mechanisms. Consequently, the available multi-cloud proposals are unwieldy or insecure. This paper makes two contributions. First, we survey existing cloud storage provider interfaces. Then, we propose a novel technique that addresses the challenges of connecting modern authentication standards and multiple cloud authorization methods.
Keywords: authorisation; cloud computing; data privacy; storage management; trusted computing; cloud storage provider interfaces inadequate cross-provider APIs; modern authentication standards; multicloud resources; multiple cloud authorization methods; nonunified access control mechanisms; privacy; secure access control; security; trust concerns; Access control; Authentication; Cloud computing; Containers; Google; Standards; Cloud storage; access control management; data security; multi-cloud systems (ID#: 16-10045)


J. Xu and X. Yuan, “Developing a Course Module for Teaching Cryptography Programming on Android,” Frontiers in Education Conference (FIE), 2015 IEEE, El Paso, TX, 2015, pp. 1-4. doi: 10.1109/FIE.2015.7344086
Abstract: Mobile platforms have become extremely popular among users and hence an important platform for developers. Mobile devices often store tremendous amounts of personal, financial and commercial data. Several studies have shown that a large number of the mobile applications that use cryptography APIs misuse them. This could attract both targeted and mass-scale attacks, causing great loss to mobile users. It is therefore vitally important to provide education in secure mobile programming to students in computer science and other related disciplines, yet the pedagogical resources on this topic that many educators urgently need are very hard to find. This paper introduces a course module that teaches students how to develop secure Android applications by correctly using Android's cryptography APIs. The module targets two areas where programmers commonly make mistakes: password-based encryption and SSL certificate validation. Its core is a real-world sample Android program that students secure by implementing cryptographic components correctly. The module uses open-ended problem solving to let students freely explore the multiple options for securing the application, and it includes a lecture slide deck on Android's crypto library, its common misuses, and suggested good practices, along with assessment materials. The course module could be used in a mobile programming or network security class, taught as a unit in an advanced programming class, or used as a self-teaching tool for the general public.
Keywords: application program interfaces; computer aided instruction; computer science education; cryptography; educational courses; mobile computing; smart phones; teaching; Android crypto library; Android program; SSL certificate validation; assessment materials; computer science; course module development; cryptographic components; cryptography API; cryptography programming; education; lecture slide; mass-scale attacks; mobile applications; mobile devices; mobile platforms; network security class; open-ended problem solving; password based encryption; pedagogical resources; secure Android applications; secure mobile programming class; targeted attacks; teaching; Androids; Encryption; Humanoid robots; Mobile communication; Programming; Android programming; SSL; course module; cryptography; programming; security (ID#: 16-10046)
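One of the module's two focus areas, password-based encryption, hinges on deriving keys from passwords correctly: a random per-user salt and a high iteration count. The course targets Android's Java crypto APIs; the same idea expressed with Python's standard library (a sketch for illustration, not the module's actual material) looks like:

```python
import hashlib
import hmac
import os

def derive_key(password, salt=None, iterations=200_000):
    """PBKDF2-HMAC-SHA256: stretch a password into a 32-byte key.
    A fresh random salt and a high iteration count are the two details
    a password-based-encryption exercise expects students to get right."""
    salt = salt if salt is not None else os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, key

# Same password + same salt must reproduce the same key (needed to decrypt);
# the salt itself is stored alongside the ciphertext, not kept secret.
salt, key = derive_key("correct horse battery staple")
_, again = derive_key("correct horse battery staple", salt)
print(hmac.compare_digest(key, again))  # True
```

Common misuses the module warns about map directly onto this sketch: a hard-coded or empty salt, a single iteration, or using a raw hash of the password instead of a key-derivation function.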


J. Li, D. Tian and C. Hu, “Dynamic Tracking Reinforcement Based on Simplified Control Flow,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 358-362. doi: 10.1109/CIS.2015.93
Abstract: With the rapid development of computer science and Internet technology, software security issues have become one of the main threats to information systems. Execution path tracking based on control flow integrity is an effective method to improve software security, but dynamic tracking may incur considerable performance overhead. To address this problem, this paper proposes a method of dynamic control flow enforcement based on API invocations. Our method rests on a key observation: most control flow attackers invoke sensitive APIs to achieve their malicious purpose. To defeat these attacks, we first extract the normal execution paths of API calls by offline analysis. Then, we use the offline information for run-time enforcement. The experimental results show that our method is able to detect and prevent control flow attacks with malicious API invocations while improving system performance compared with existing methods.
Keywords: Internet; application program interfaces; information systems; security of data; API calls; API invocations; Internet technology; computer science; control flow attacks; control flow integrity; dynamic control flow enforcement; dynamic tracking reinforcement; information system; offline analysis; offline information; run-time enforcement; simplified control flow; software security; software security issues; Algorithm design and analysis; Heuristic algorithms; Instruments; Registers; Security; Software; Yttrium; API calls; inserted reinforcement; path tracking; simplified control flow (ID#: 16-10047)
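The two phases the abstract describes, offline extraction of normal API call paths followed by run-time enforcement, can be sketched as a transition whitelist. This is a simplified stand-in for the authors' mechanism, and the API names below are hypothetical:

```python
def learn_profile(benign_traces):
    """Offline phase: record every API-call transition seen in normal runs."""
    allowed = set()
    for trace in benign_traces:
        allowed.update(zip(trace, trace[1:]))
    return allowed

def first_violation(trace, allowed):
    """Runtime phase: return the first transition absent from the profile,
    or None if the observed call path is consistent with normal execution."""
    for step in zip(trace, trace[1:]):
        if step not in allowed:
            return step
    return None

benign = [["CreateFile", "ReadFile", "CloseHandle"],
          ["CreateFile", "WriteFile", "CloseHandle"]]
profile = learn_profile(benign)
print(first_violation(["CreateFile", "ReadFile", "CloseHandle"], profile))  # None
print(first_violation(["CreateFile", "WinExec", "CloseHandle"], profile))   # ('CreateFile', 'WinExec')
```

Restricting enforcement to transitions around sensitive APIs, rather than every branch, is what keeps the overhead low relative to full control-flow tracking.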


V. S. Sinha, D. Saha, P. Dhoolia, R. Padhye and S. Mani, “Detecting and Mitigating Secret-Key Leaks in Source Code Repositories,” 2015 IEEE/ACM 12th Working Conference on Mining Software Repositories, Florence, 2015, pp. 396-400. doi: 10.1109/MSR.2015.48
Abstract: Several news articles in the past year highlighted incidents in which malicious users stole API keys embedded in files hosted on public source code repositories such as GitHub and Bitbucket in order to drive their own workloads for free. While some service providers such as Amazon have started taking steps to actively discover such developer carelessness by scouting public repositories and suspending leaked API keys, there is little support for tackling the problem from the code sharing platforms themselves. In this paper, we discuss practical solutions for detecting, preventing, and fixing API key leaks. We first outline a handful of methods for detecting API keys embedded within source code and evaluate their effectiveness using a sample set of projects from GitHub. Second, we enumerate the mechanisms that developers could use to prevent or fix key leaks in code repositories manually. Finally, we outline a possible solution that combines these techniques to provide tool support for protecting against key leaks in version control systems.
Keywords: application program interfaces; public key cryptography; source code (software); code repositories; fix key leaks; key leaks protection; secret-key leaks detection; secret-key leaks mitigation; source code repositories; version control systems; Control systems; Facebook; History; Java; Leak detection; Pattern matching; Software; api keys; git; mining software repositories; security (ID#: 16-10048)
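One family of detection methods the paper evaluates is pattern matching over source text; a common refinement is to filter matches by character entropy so that obvious placeholders are not flagged. A rough sketch follows, where the regular expression, entropy threshold, and sample keys are illustrative rather than the paper's:

```python
import math
import re

# Assignments whose left side looks credential-like and whose value is a
# long token; the name list and value alphabet are illustrative choices.
KEY_PATTERN = re.compile(
    r"""(?:api[_-]?key|secret|token)\s*[:=]\s*['"]([A-Za-z0-9/+_\-]{16,})['"]""",
    re.IGNORECASE)

def shannon_entropy(s):
    """Bits per character; random keys score high, ordinary words low."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def find_key_leaks(source, min_entropy=3.5):
    """Flag assignments that look like high-entropy embedded credentials."""
    return [m.group(1) for m in KEY_PATTERN.finditer(source)
            if shannon_entropy(m.group(1)) >= min_entropy]

code = '''
aws_api_key = "AKIA9X7Q2LMZTR4VWB8E"
token = "placeholderplaceholder"
'''
print(find_key_leaks(code))
```

The low-entropy `placeholderplaceholder` value survives the regex but falls below the entropy threshold, illustrating how the two signals combine to cut false positives.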


J. Spring, S. Kern and A. Summers, “Global Adversarial Capability Modeling,” 2015 APWG Symposium on Electronic Crime Research (eCrime), Barcelona, 2015, pp. 1-21. doi: 10.1109/ECRIME.2015.7120797
Abstract: Intro: Computer network defense has models for attacks, and for incidents comprising multiple attacks, after the fact. However, we lack an evidence-based model of the likelihood and intensity of attacks and incidents. Purpose: We propose a model of global capability advancement, the adversarial capability chain (ACC), to fit this need. The model enables cyber risk analysis to better understand the costs for an adversary to attack a system, which directly influence the cost to defend it. Method: The model is based on four historical studies of adversarial capabilities: the capability to exploit Windows XP, to exploit the Android API, to exploit Apache, and to administer compromised industrial control systems. Result: We propose the ACC with five phases: Discovery, Validation, Escalation, Democratization, and Ubiquity. We use the four case studies as examples of how the ACC can be applied and used to predict attack likelihood and intensity.
Keywords: Analytical models; Androids; Biological system modeling; Computational modeling; Humanoid robots; Integrated circuit modeling; Software systems; CND; computer network defense; cybersecurity; Incident response; intelligence; intrusion detection; modeling; security (ID#: 16-10049)


K. Shah and D. K. Singh, “A Survey on Data Mining Approaches for Dynamic Analysis of Malwares,” Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, Noida, 2015, pp. 495-499. doi: 10.1109/ICGCIoT.2015.7380515
Abstract: The number of samples analyzed by security vendors increases daily, so generic automated malware detection tools are needed to detect zero-day threats. Using machine learning techniques, behavioral patterns obtained from samples can be exploited to classify malware (unknown samples) into families. Because Intel x86 uses variable-length instructions that can be placed at arbitrary addresses, its binaries are susceptible to obfuscation techniques: inserting padding bytes at locations that are unreachable at runtime confuses static analyzers into misinterpreting program binaries, and often the code that actually runs is not the code the static analyzer examined. Such programs use polymorphism and metamorphism techniques and are self-modifying. This paper therefore applies dynamic analysis of executables combined with mining techniques: the Application Programming Interface (API) calls invoked by samples during execution are used as the parameters of experimentation.
Keywords: application program interfaces; data mining; invasive software; learning (artificial intelligence); pattern classification; system monitoring; application programming interface; behavioral pattern exploitation; data mining approach; dynamic malware analysis; generic automated malware detection tools; machine learning techniques; malware classification; metamorphism techniques; obfuscation techniques; padding byte insertion; polymorphism; security vendors; variable length instructions; Classification algorithms; API Calls; AdaBoost; Classifiers; Dynamic Analysis (ID#: 16-10050)
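As a toy illustration of the classification setting this survey covers (the surveyed work uses classifiers such as AdaBoost, not the simple rule here), the sketch below builds bag-of-API-call feature vectors from execution traces and assigns a sample to the nearer class centroid. All API names, traces and labels are invented:

```python
from collections import Counter

# Invented API vocabulary; dynamic analysis would record real calls at runtime.
API_VOCAB = ["CreateFile", "WriteProcessMemory", "RegSetValueEx", "Sleep"]

def to_vector(trace):
    """Bag-of-API-calls feature vector over a fixed vocabulary."""
    counts = Counter(trace)
    return [counts[api] for api in API_VOCAB]

def centroid(vectors):
    """Per-dimension mean of a set of feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(trace, benign_centroid, malware_centroid):
    """Nearest-centroid rule (squared Euclidean distance)."""
    v = to_vector(trace)
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return "malware" if dist(v, malware_centroid) < dist(v, benign_centroid) else "benign"

benign = [["CreateFile", "Sleep"], ["CreateFile", "CreateFile"]]
malware = [["WriteProcessMemory", "RegSetValueEx"], ["WriteProcessMemory", "Sleep"]]
bc = centroid([to_vector(t) for t in benign])
mc = centroid([to_vector(t) for t in malware])
print(classify(["WriteProcessMemory", "RegSetValueEx", "Sleep"], bc, mc))  # → malware
```

The point of using runtime traces rather than static disassembly is exactly the one the abstract makes: padding bytes and self-modifying code never appear in the observed call sequence.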


N. S. Gawale and N. N. Patil, “Implementation of a System to Detect Malicious URLs  for Twitter Users,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-5. doi: 10.1109/PERVASIVE.2015.7087078
Abstract: Over the last few years there has been tremendous growth in the use of online social networking sites, which also gives hackers opportunities to enter networks easily and carry out unauthorized activities. Notable social networking websites such as Twitter, Facebook and Google+ are used by numerous people to stay connected with each other and share their daily happenings. Here we focus our experiment on Twitter, a popular micro-blogging service whose community interacts by publishing 140-character text posts known as tweets. Exploiting this popularity, hackers use short Uniform Resource Locators (URLs) to disseminate viruses to user accounts. Our study examines the malicious content behind such short URLs and protects users from unauthorized activities. We introduce a system that provides security to multiple Twitter users, who also receive alert mails. Our goal is to download URLs in real time from multiple accounts and obtain the entry points of correlated URLs; a crawler browser then marks suspicious URLs. The system finds malicious URLs using five features, including the initial URL, similar text, friend-follower ratio and relative URLs, and an alert mail is sent to the affected users.
Keywords: authorisation; computer crime; computer viruses; social networking (online); Facebook; Google+; Twitter; Uniform Resource Locator; alert mail; crawler browser; friend follower ratio; hackers; malicious URL detection; malicious content; microblogging; online social networking Web sites; suspicious URL; text-based post publishing; unauthorized activities; user accounts; virus dissemination; Crawlers; Databases; Real-time systems; Servers; Uniform resource locators; API keys; Conditional redirect; Suspicious URL;  classifier; crawler (ID#: 16-10051)


Y. Li, J. Fang, C. Liu, M. Liu and S. Wu, “Study on the Application of Dalvik Injection Technique for the Detection of Malicious Programs in Android,” Electronics Information and Emergency Communication (ICEIEC), 2015 5th International Conference on, Beijing, 2015, pp. 309-312. doi: 10.1109/ICEIEC.2015.7284546
Abstract: With the increasing popularity of smartphones, malicious software targeting them is emerging in an endless stream. As the phone operating system with the highest current market share, Android faces a full-scale security challenge. This article analyzes the application of the Dalvik injection technique to the detection of Android malware. Modifying the system API (Application Program Interface) through Dalvik injection makes it possible to inspect the programs on an Android phone directly; from the list of sensitive APIs called by a program, the target is ultimately judged to be malicious or not.
Keywords: Android (operating system); application program interfaces; invasive software; API; Android malware; Dalvik injection technique; application program interface; malicious program detection; Google; Java; Libraries; Security; Smart phones; Software; Dalvik injection; detection of malicious programs; sensitive API (ID#: 16-10052)


A. d. Benedictis, M. Rak, M. Turtur and U. Villano, “REST-Based SLA Management for Cloud Applications,” 2015 IEEE 24th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises, Larnaca, 2015, pp. 93-98. doi: 10.1109/WETICE.2015.36
Abstract: In cloud computing, possible risks linked to availability, performance and security can be mitigated by the adoption of Service Level Agreements (SLAs) formally agreed upon by cloud service providers and their users. This paper presents the design of services for the management of cloud-oriented SLAs that hinge on the use of a REST-based API. Such services can be easily integrated into existing cloud applications, platforms and infrastructures, in order to support SLA-based cloud services delivery. After a discussion of the SLA life-cycle, an agreement protocol state diagram is introduced. It explicitly takes into account negotiation, remediation and renegotiation issues, is compliant with all the active standards, and is compatible with the WS-Agreement standard. The requirement analysis and the design of a solution able to support the proposed SLA protocol are presented, introducing the REST API used. This API aims at being the basis for a framework to build SLA-based applications.
Keywords: application program interfaces; cloud computing; contracts; diagrams; formal specification; formal verification; protocols; systems analysis; CSP; REST-based API; REST-based SLA management; SLA-based cloud services delivery; agreement protocol state diagram;  cloud service provider; requirement analysis; service level agreement; Cloud computing; Monitoring; Protocols; Security; Standards; Uniform resource locators; XML; API; Cloud; REST; SLA; WS-Agreement (ID#: 16-10053)


K. Shekanayaki, A. Chakure and A. Jain, “A Survey of Journey of Cloud and Its Future,” Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, Pune, 2015, pp. 60-64. doi: 10.1109/ICCUBEA.2015.20
Abstract: Over the past few years, cloud computing has grown from a promising business idea into one of the fastest growing fields of the IT industry. Still, IT organizations remain concerned about critical issues (such as security and data loss) in cloud computing implementations, security in particular, and such issues arise as clients switch to cloud computing. This paper outlines the role of the cloud in the IT business enterprise, its woes and their projected solutions. It proposes the use of a “Honey-Comb Infrastructure” for flexible, secure and reliable storage supported by parallel computing.
Keywords: cloud computing; electronic commerce; parallel processing; security of data; IT business enterprise; IT industry; IT organization; honey-comb infrastructure; information technology; parallel computing; security issue; Business; Cloud computing; Computational modeling; Computer architecture; Security; Servers; Software as a service; API (Application Programming Interface); CSP (Cloud Service Provider); Cloud Computing; DC (Data-centers); PAAS (Platform as a Service); SAAS (Software as a Service); SOA (Service-Oriented Architecture); TC (Telecommunications Closet); Virtualization (ID#: 16-10054)


M. Coblenz, R. Seacord, B. Myers, J. Sunshine and J. Aldrich, “A Course-Based Usability Analysis of Cilk Plus and OpenMP,” Visual Languages and Human-Centric Computing (VL/HCC), 2015 IEEE Symposium on, Atlanta, GA, 2015, pp. 245-249. doi: 10.1109/VLHCC.2015.7357223
Abstract: Cilk Plus and OpenMP are parallel language extensions for the C and C++ programming languages. The CPLEX Study Group of the ISO/IEC C Standards Committee is developing a proposal for a parallel programming extension to C that combines ideas from Cilk Plus and OpenMP. We conducted a preliminary comparison of Cilk Plus and OpenMP in a master's level course on security to evaluate the design tradeoffs in the usability and security of these two approaches. The eventual goal is to inform decision-making within the committee. We found several usability problems worthy of further investigation based on student performance, including declaring and using reductions, multi-line compiler directives, and the understandability of task assignment to threads.
Keywords: C++ language; application program interfaces; computer aided instruction; computer science education; human factors; multi-threading; program compilers; C programming language; C++ programming language; CPLEX Study Group; Cilk Plus; ISO/IEC C Standards Committee; OpenMP; course-based usability analysis; decision-making; master level course; multiline compiler directives; parallel language extensions; student performance analysis; task assignment understandability; Programming; API usability; Cilk Plus; OpenMP; empirical studies of programmers; parallel programming (ID#: 16-10055)


P. Gohar and L. Purohit, “Discovery and Prioritization of Web Services Based on Fuzzy User Preferences for QoS,” Computer, Communication and Control (IC4), 2015 International Conference on, Indore, 2015, pp. 1-6. doi: 10.1109/IC4.2015.7375702 
Abstract: Web services are the key technologies for web applications developed using Service Oriented Architecture (SOA). There are many challenges involved in implementing web services, among them web service selection and discovery, which involve matchmaking and finding the most suitable web service from a large collection of functionally-equivalent web services. In this paper a fuzzy-based approach for web service discovery is developed that models the ranking of QoS-aware web services as a fuzzy multi-criteria decision-making problem. To describe the web services available in the registry, an ontology is created for each web service; and to represent the functional and imprecise Quality of Service (QoS) preferences of both the web service consumer and provider in linguistic terms, a fuzzy rule base is created with the help of the Java Expert System Shell (JESS) API. To make decisions on multiple and conflicting QoS requirements, an enhanced Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE) model is adopted for QoS-based web service ranking. To demonstrate the abilities of the proposed framework, a web based system, “E-Recruitment System”, is implemented.
Keywords: Java; Web services; decision making; fuzzy set theory; ontologies (artificial intelligence);operations research; quality of service; service-oriented architecture; E-Recruitment System; JESS API; Java Expert System Shell API;PROMETHEE model; Preference Ranking Organization METHod for Enrichment Evaluation; QoS requirements; QoS-aware Web service ranking; SOA; Web applications; Web based system; Web service consumer; Web service discovery; Web service prioritization; Web service selection; fuzzy multicriteria decision-making problem; fuzzy rule base; fuzzy user preference; fuzzy-based approach; linguistics term; ontology; quality of service preference; service oriented architecture; Computer architecture; Computers; Conferences; Quality of service; Security; Service-oriented architecture; Fuzzy Discovery; JESS API; PROMETHEE; QoS Parameters; Web Service (ID#: 16-10056)
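The PROMETHEE ranking step can be sketched numerically: score each pair of alternatives per criterion with a preference function, weight and sum, and rank by net outranking flow. The sketch below uses the basic “usual” preference function and invented QoS scores and weights; the paper's enhanced PROMETHEE model and fuzzy rule base are not reproduced here:

```python
def preference(a, b):
    """'Usual' criterion: full preference whenever a strictly beats b."""
    return 1.0 if a > b else 0.0

def net_flows(alternatives, weights):
    """Net outranking flow (positive minus negative flow) per alternative."""
    n = len(alternatives)
    flows = []
    for i, a in enumerate(alternatives):
        pos = neg = 0.0
        for j, b in enumerate(alternatives):
            if i == j:
                continue
            pos += sum(w * preference(x, y) for w, x, y in zip(weights, a, b))
            neg += sum(w * preference(y, x) for w, x, y in zip(weights, a, b))
        flows.append((pos - neg) / (n - 1))
    return flows

# Three hypothetical services scored on two criteria, e.g. availability
# and inverse latency, with criterion weights summing to 1.
services = [(0.99, 0.5), (0.95, 0.9), (0.90, 0.2)]
weights = (0.6, 0.4)
print(net_flows(services, weights))  # net flows ≈ [0.6, 0.4, -1.0]; highest wins
```

PROMETHEE offers several preference functions (linear, Gaussian, with indifference thresholds); swapping one in only changes `preference`.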


F. Yamaguchi, A. Maier, H. Gascon and K. Rieck, “Automatic Inference of Search Patterns for Taint-Style Vulnerabilities,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 797-812. doi: 10.1109/SP.2015.54
Abstract: Taint-style vulnerabilities are a persistent problem in software development, as the recently discovered “Heartbleed” vulnerability strikingly illustrates. In this class of vulnerabilities, attacker-controlled data is passed unsanitized from an input source to a sensitive sink. While simple instances of this vulnerability class can be detected automatically, more subtle defects involving data flow across several functions or project-specific APIs are mainly discovered by manual auditing. Different techniques have been proposed to accelerate this process by searching for typical patterns of vulnerable code. However, all of these approaches require a security expert to manually model and specify appropriate patterns in practice. In this paper, we propose a method for automatically inferring search patterns for taint-style vulnerabilities in C code. Given a security-sensitive sink, such as a memory function, our method automatically identifies corresponding source-sink systems and constructs patterns that model the data flow and sanitization in these systems. The inferred patterns are expressed as traversals in a code property graph and enable efficiently searching for unsanitized data flows — across several functions as well as with project-specific APIs. We demonstrate the efficacy of this approach in different experiments with 5 open-source projects. The inferred search patterns reduce the amount of code to inspect for finding known vulnerabilities by 94.9% and also enable us to uncover 8 previously unknown vulnerabilities.
Keywords: application program interfaces; data flow analysis; public domain software; security of data; software engineering; C code; attacker-controlled data; automatic inference; code property graph; data flow; data security; inferred search pattern; memory function; open-source project; project- specific API; search pattern; security-sensitive sink; sensitive sink; software development; source-sink system; taint-style vulnerability; Databases; Libraries; Payloads; Programming; Security; Software; Syntactics; Clustering; Graph Databases; Vulnerabilities (ID#: 16-10057) 
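To make the source/sanitizer/sink terminology concrete, here is a toy, linear version of taint tracking. The actual method infers traversal patterns over a code property graph spanning multiple functions; the function names below are invented stand-ins for a C input source, a length check, and a memory sink:

```python
# Invented role assignments for illustration only.
SOURCES = {"read_input"}       # introduces attacker-controlled data
SANITIZERS = {"validate_len"}  # neutralizes the taint
SINKS = {"memcpy"}             # dangerous if reached while tainted

def flows_unsanitized(call_chain):
    """call_chain: ordered function names a value passes through.

    Returns True if tainted data reaches a sink with no sanitizer in between.
    """
    tainted = False
    for fn in call_chain:
        if fn in SOURCES:
            tainted = True
        elif fn in SANITIZERS:
            tainted = False
        elif fn in SINKS and tainted:
            return True
    return False

print(flows_unsanitized(["read_input", "memcpy"]))                  # True
print(flows_unsanitized(["read_input", "validate_len", "memcpy"]))  # False
```

The paper's contribution is learning which functions play the sanitizer role for a given sink automatically, rather than having an expert enumerate sets like the ones hard-coded above.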


P. Jia, X. He, L. Liu, B. Gu and Y. Fang, “A Framework for Privacy Information Protection on Android,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 1127-1131. doi: 10.1109/ICCNC.2015.7069508
Abstract: The permissions-based security model of Android increasingly shows its weakness in protecting users' privacy information. Under this model, an application must hold the appropriate permissions before gaining access to various resources (including data and hardware) in the phone. The model can only restrict an application from accessing system resources without the appropriate permissions; it cannot prevent malicious access to privacy data after the application has obtained those permissions. During installation, the system prompts for the permissions the application requests, and users have no choice but to allow them all if they want to use the application. Once an application is successfully installed, the system is unable to control its behavior dynamically, and the application can obtain privacy information and send it out without the user's knowledge. The permissions-based security model therefore carries a great security risk. This paper studies different ways of accessing users' privacy information and proposes a framework named PriGuard for dynamically protecting it, based on Binder communication interception technology and a feature selection algorithm. Applications customarily call system services remotely using the Binder mechanism, then access the equipment and obtain information through those services. By redirecting the Binder interface function of the Native layer, PriGuard intercepts Binder messages, and thus the application's Remote Procedure Calls (RPC) to system services, allowing it to dynamically monitor the application's privacy-accessing behaviors. In this paper, we collect many different types of benign Application Package File (APK) samples and record the Application Programming Interface (API) calls of each sample while it runs. Afterwards we transform these API calls into feature vectors, and a feature selection algorithm is used to generate the optimal feature subset. PriGuard automatically completes the privacy policy configuration for newly installed software according to the optimal feature subset, and then controls the software's calls to system services using Binder message interception technology, achieving the purpose of protecting users' privacy information.
Keywords: Android (operating system); application program interfaces; authorisation; data protection; remote procedure calls; API; APK; Android; Binder communication interception technology; Binder interface function; Binder message interception technology; PriGuard framework; RPC; application installation; application package file; application programming interface; application remote procedure call; dynamic application behavior monitoring; dynamic user privacy information protection; feature selection algorithm; native layer; optimal feature subset generation; permission-based security model; privacy policy configuration; security risk; system resource access; system services; user privacy information access; user privacy information protection; Conferences; Monitoring; Privacy; Security; Smart phones; Software; Vectors; RPC intercept; android; binder; feature selection algorithm; privacy protection (ID#: 16-10058)


B. He et al., “Vetting SSL Usage in Applications with SSLINT,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 519-534. doi: 10.1109/SP.2015.38
Abstract: Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols have become the security backbone of the Web and Internet today. Many systems including mobile and desktop applications are protected by SSL/TLS protocols against network attacks. However, many vulnerabilities caused by incorrect use of SSL/TLS APIs have been uncovered in recent years. Such vulnerabilities, many of which are caused due to poor API design and inexperience of application developers, often lead to confidential data leakage or man-in-the-middle attacks. In this paper, to guarantee code quality and logic correctness of SSL/TLS applications, we design and implement SSLINT, a scalable, automated, static analysis system for detecting incorrect use of SSL/TLS APIs. SSLINT is capable of performing automatic logic verification with high efficiency and good accuracy. To demonstrate it, we apply SSLINT to one of the most popular Linux distributions -- Ubuntu. We find 27 previously unknown SSL/TLS vulnerabilities in Ubuntu applications, most of which are also distributed with other Linux distributions.
Keywords: Linux; application program interfaces; formal verification; program diagnostics; protocols; security of data; API design; Linux distributions; SSL usage vetting; SSL-TLS protocols; SSLINT; Ubuntu; automatic logic verification; code quality; logic correctness; network attacks; secure sockets layer; static analysis system; transport layer security; Accuracy; Libraries; Protocols; Security; Servers; Software; Testing (ID#: 16-10059)


N. Pazos, M. Müller, M. Aeberli and N. Ouerhani, “ConnectOpen – Automatic Integration of IoT Devices,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 640-644. doi: 10.1109/WF-IoT.2015.7389129
Abstract: There exists, today, a wide consensus that the Internet of Things (IoT) is creating a wide range of business opportunities for various industries and sectors like Manufacturing, Healthcare, Public infrastructure management, Telecommunications and many others. On the other hand, the technological evolution of IoT is facing serious challenges. The fragmentation in terms of communication protocols and data formats at the device level is one of these challenges. Vendor-specific application architectures, proprietary communication protocols and a lack of IoT standards are some reasons behind this fragmentation. In this paper we propose a software-enabled framework to address the fragmentation challenge. The framework is based on flexible communication agents that are deployed on a gateway and can be adapted to various devices communicating different data formats using different communication protocols. The communication agent is automatically generated based on specifications and automatically deployed on the gateway in order to connect the devices to a central platform where data are consolidated and exposed via REST APIs to third party services. Security and scalability aspects are also addressed in this work.
Keywords: Internet of Things; application program interfaces; cloud computing; computer network security; internetworking; transport protocols; ConnectOpen;  IoT fragmentation; REST API; automatic IoT device integration; central platform; communication agents; communication protocol; communication protocols; data formats; device level; scalability aspect; security aspect; software enabled framework; third party services; Business; Embedded systems; Logic gates; Protocols; Scalability; Security; Sensors; Communication Agent; End Device; Gateway; IoT; Kura; MQTT; OSGi (ID#: 16-10060)


L. Qiu, Z. Zhang, Z. Shen and G. Sun, “AppTrace: Dynamic Trace on Android Devices,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7145-7150. doi: 10.1109/ICC.2015.7249466
Abstract: Mass vulnerabilities in Android alternative applications could threaten the security of the device or users' data. To analyze alternative applications, researchers generally observe an application's runtime features first, then decompile the target application and read the complicated code to figure out what the application really does. Traditional dynamic analysis methodologies, for instance TaintDroid, use dynamic taint tracking to mark information at source APIs. However, TaintDroid is limited by the constraint that the target application must run in a custom sandbox, which might not be compatible with all Android versions. To solve this problem and help analysts gain insight into runtime behavior, this paper presents AppTrace, a novel dynamic analysis system that uses dynamic instrumentation to trace member methods of a target application and can be deployed on any version of Android above 4.0. The paper presents an evaluation of AppTrace with 8 apps from Google Play as well as 50 open source apps from F-Droid. The results show that AppTrace could trace methods of target applications successfully and notify users effectively when sensitive APIs are invoked.
Keywords: application program interfaces; smart phones; system monitoring; API; Android devices; AppTrace; Google Play; TaintDroid; dynamic instrumentation technique; dynamic taint tracking technique; dynamic trace; novel dynamic analysis system; open source apps; Androids; Humanoid robots; Instruments; Java; Runtime; Security; Smart phones (ID#: 16-10061)


S. S. Shinde and S. S. Sambare, “Enhancement on Privacy Permission Management for Android Apps,” Communication Technologies (GCCT), 2015 Global Conference on, Thuckalay, 2015, pp. 838-842. doi: 10.1109/GCCT.2015.7342779
Abstract: Nowadays everyone uses smartphone devices for personal and official data storage. Smartphone apps are usually not secure and need user permission to access protected system resources. Specifically, the existing Android permission system checks whether a calling app has the right permission to invoke sensitive system APIs. Android OS allows third-party applications, but whenever a user installs one, the user has only limited options at installation time: either agree to all terms and conditions and install the application, or reject the installation. Existing approaches fail to protect users' sensitive data from being violated. To protect users' privacy, secure permission management for Android applications is needed. In this paper, to make permission management fine-grained, we propose a system that lets smartphone users grant or revoke access to their private sensitive data as they choose. The proposed system improves on the limitations of the existing Android permission system, as shown in detail in the paper's results and feature comparison.
Keywords: Android (operating system); authorisation; data privacy; smart phones; APIs; Android OS; Android apps; Android permission system; official data storage; personal data storage; privacy permission management; smartphone apps; smartphone devices; third-party applications; Androids; Databases; Humanoid robots; Internet; Privacy; Security; Smart phones; Android OS; Fine-grained; Permission system; Smartphone user; privacy (ID#: 16-10062)


A. Slominski, V. Muthusamy and R. Khalaf, “Building a Multi-Tenant Cloud Service from Legacy Code with Docker Containers,” Cloud Engineering (IC2E), 2015 IEEE International Conference on, Tempe, AZ, 2015, pp. 394-396. doi: 10.1109/IC2E.2015.66
Abstract: In this paper we address the problem of migrating a legacy Web application to a cloud service. We develop a reusable architectural pattern to do so and validate it with a case study of the Beta release of the IBM Bluemix Workflow Service [1] (herein referred to as the Beta Workflow service). It uses Docker [2] containers and a Cloudant [3] persistence layer to deliver a multi-tenant cloud service by re-using a legacy codebase. We are not aware of any literature that addresses this problem by using containers. The Beta Workflow service provides a scalable, stateful, highly available engine to compose services with REST APIs. The composition is modeled as a graph but authored in a Javascript-based domain specific language that specifies a set of activities and control flow links among these activities. The primitive activities in the language can be used to respond to HTTP REST requests, invoke services with REST APIs, and execute Javascript code to, among other uses, extract and construct the data inputs and outputs to external services, and make calls to these services. Examples of workflows that have been built using the service include distributing surveys and coupons to customers of a retail store [1], the management of sales requests between a salesperson and their regional managers, managing the staged deployment of different versions of an application, and the coordinated transfer of jobs among case workers.
Keywords: Java; application program interfaces; cloud computing; specification languages; Beta Workflow service; Cloudant persistence layer; HTTP REST requests; IBM Bluemix Workflow Service; Javascript code; Javascript-based domain specific language; REST API; docker containers; legacy Web application; legacy codebase; multitenant cloud service; reusable architectural pattern; Browsers; Cloud computing; Containers; Engines; Memory management; Organizations; Security (ID#: 16-10063)


H. Hamadeh, S. Chaudhuri and A. Tyagi, “Area, Energy, and Time Assessment for a Distributed TPM for Distributed Trust in IoT Clusters,” 2015 IEEE International Symposium on Nanoelectronic and Information Systems, Indore, 2015, pp. 225-230. doi: 10.1109/iNIS.2015.17
Abstract: IoT clusters arise from natural human societal clusters such as a house, an airport, and a highway. IoT clusters are heterogeneous, with a need for device-to-device as well as device-to-user trust. The IoT devices are likely to be thin computing clients. Due to low cost, an individual IoT device is not built to be fault tolerant through redundancy. Hence the trust protocols cannot take the liveness of a device for granted; in fact, differentiating a failing device from a malicious one is difficult from the trust protocol perspective. We present a minimal distributed trust layer based on distributed-consensus-like operations. These distributed primitives are cast in the context of the APIs supported by a trusted platform module (TPM). A TPM, with its 1024-bit RSA, is a significant burden on a thin IoT design. We use RNS-based slicing of a TPM, wherein each slice resides within a single IoT device, so that the overall TPM functionality is distributed among several IoT devices within a cluster. The VLSI area, energy, and time savings of such a distributed TPM implementation are assessed. A sliced/distributed TPM is better suited to an IoT environment based on its resource needs. We demonstrate over 90% time reduction, over 3% area reduction, and over 90% energy reduction per IoT node in order to support TPM protocols.
Keywords: Internet of Things; VLSI; application program interfaces; cryptographic protocols; residue number systems; trusted computing;1024 bit RSA; APIs; IoT clusters; RNS based slicing; TPM protocols; VLSI area savings; VLSI energy savings; VLSI time savings; distributed TPM; distributed consensus like operations; minimal distributed trust layer; sliced TPM; trust protocol; trusted platform module; Airports; Approximation algorithms; Computers; Delays; Electronic mail; Protocols; Security; Area; IoT; Residue Number System; Time and Energy; Trusted Platform Module (ID#: 16-10064)
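The Residue Number System idea behind the slicing can be sketched as follows: a large operand is held as residues modulo pairwise-coprime bases, one residue per IoT device, and addition or multiplication proceeds slice-wise with no carries between devices; the Chinese Remainder Theorem recombines the result. The toy moduli below are assumptions; an actual slicing would use large bases sized for 1024-bit RSA operands:

```python
from functools import reduce

MODULI = [5, 7, 9, 11]  # pairwise-coprime toy bases, one per hypothetical device

def to_rns(x):
    """Split an integer into per-device residues."""
    return [x % m for m in MODULI]

def from_rns(residues):
    """Chinese Remainder Theorem reconstruction (exact for x < product of moduli)."""
    M = reduce(lambda a, b: a * b, MODULI)
    total = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # modular inverse (Python 3.8+)
    return total % M

assert from_rns(to_rns(1234)) == 1234  # round trip

# Addition works slice-wise, with no inter-device communication:
a, b = 123, 456
summed = [(ra + rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI)]
print(from_rns(summed))  # → 579
```

This carry-free property is what makes per-device slices attractive: each device touches only its own small residue, which is where the reported area and energy savings come from.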


A. Aflatoonian, A. Bouabdallah, K. Guillouard, V. Catros and J. M. Bonnin, “BYOC: Bring Your Own Control—A New Concept to Monetize SDN’s Openness,” Network Softwarization (NetSoft), 2015 1st IEEE Conference on, London, 2015, pp. 1-5. doi: 10.1109/NETSOFT.2015.7116147
Abstract: Software Defined Networking (SDN) is supposed to bring flexibility, dynamicity and automation to today's network through a logically centralized network controller. We argue, however, that reaching SDN's full capacity requires the development of standardized programming capabilities on top of it. In this paper we introduce “Bring Your Own Control” (BYOC) as a new concept providing a convenient framework structuring the openness of the SDN on its northbound side. From the lifecycle characterizing the services deployed in an SDN, we derive the parts of services whose control may be delegated by the operator to external customers through dedicated application programming interfaces (APIs) located in the northbound interface (NBI). We argue that the exploitation of such services may be noticeably refined by the operator through various business models monetizing the openness of the SDN following the new paradigm of “Earn as You Bring” (EaYB). We propose an analysis of BYOC and we illustrate our approach with several use cases.
Keywords: application program interfaces; open systems; software defined networking; API; BYOC; EaYB; NBI; SDN openness; bring your own control; dedicated application programming interfaces; earn as your bring; framework structuring; logically centralized network controller; northbound interface; software defined networking; standardized programming capabilities; Business; Computer architecture; Monitoring; Multiprotocol label switching; Real-time systems; Security; Virtual private networks; Bring Your Own Control (BYOC); Business model; Earn as You Bring (EaYB); Northbound Interface (NBI); Software Defined Networking (SDN) (ID#: 16-10065)



APIs 2015 (Part 2)




Application Programming Interfaces (APIs) define the interfaces to systems or modules. As code is reused, more and more APIs are adapted from earlier code. For the Science of Security community, the problems of compositionality and resilience are directly relevant. The research work cited here was presented in 2015.

E. Kowalczyk, A. M. Memon and M. B. Cohen, “Piecing Together App Behavior from Multiple Artifacts: A Case Study,” Software Reliability Engineering (ISSRE), 2015 IEEE 26th International Symposium on, Gaithersburg, MD, 2015, pp. 438-449. doi: 10.1109/ISSRE.2015.7381837
Abstract: Recent research in mobile software analysis has begun to combine information extracted from an app's source code and marketplace webpage to identify correlated variables and validate an app's quality properties such as its intended behavior, trust or suspiciousness. Such work typically involves analysis of one or two artifacts such as the GUI text, user ratings, app description keywords, permission requests, and sensitive API calls. However, these studies make assumptions about how the various artifacts are populated and used by developers, which may lead to a gap in the resulting analysis. In this paper, we take a step back and perform an in-depth study of 14 popular apps from the Google Play Store. We have studied a set of 16 different artifacts for each app, and conclude that the output of these must be pieced together to form a complete understanding of the app's true behavior. We show that (1) developers are inconsistent in where and how they provide descriptions; (2) each artifact alone has incomplete information; (3) different artifacts may contain contradictory pieces of information; (4) there is a need for new analyses, such as those that use image processing; and (5) without including analyses of advertisement libraries, the complete behavior of an app is not defined. In addition, we show that the number of downloads and ratings of an app does not appear to be a strong predictor of overall app quality, as these are propagated through versions and are not necessarily indicative of the current app version's behavior.
Keywords: application program interfaces; graphical user interfaces; mobile computing; source code (software); GUI text; Google Play Store; app description keywords; apps source code; marketplace webpage; mobile software analysis; permission requests; sensitive API calls; user ratings; Androids; Cameras; Data mining; Google; Humanoid robots; Security; Videos (ID#: 16-10066)


R. Cziva, S. Jouet, K. J. S. White and D. P. Pezaros, “Container-Based Network Function Virtualization for Software-Defined Networks,” 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, 2015, pp. 415-420. doi: 10.1109/ISCC.2015.7405550
Abstract: Today's enterprise networks almost ubiquitously deploy middlebox services to improve in-network security and performance. Although virtualization of middleboxes attracts significant attention, studies show that such implementations are still proprietary and deployed in a static manner at the boundaries of organisations, hindering open innovation. In this paper, we present an open framework to create, deploy and manage virtual network functions (NFs) in OpenFlow-enabled networks. We exploit container-based NFs to achieve the low performance overhead, fast deployment and high reusability missing from today's NFV deployments. Through an SDN northbound API, NFs can be instantiated, traffic can be steered through the desired policy chain and applications can raise notifications. We demonstrate the system's operation through the development of exemplar NFs from common Operating System utility binaries, and we show that container-based NFV improves function instantiation time by up to 68% over existing hypervisor-based alternatives, and scales to one hundred co-located NFs while incurring sub-millisecond latency.
Keywords: computer network performance evaluation; computer network security; software defined networking; virtualisation; NFV deployments; OpenFlow-enabled networks; SDN northbound API; container-based NF; container-based network function virtualization; enterprise networks; function instantiation time; hypervisor-based alternatives; in-network security; middlebox services; network performance; operating system utility binaries; performance overhead; policy chain; software-defined networks; systems operation; virtual network functions; Containers; Middleboxes; Noise measurement; Ports (Computers); Routing; Servers; Virtualization (ID#: 16-10067)
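The northbound workflow the abstract describes (instantiate a container NF on demand, then steer traffic through an ordered policy chain) can be sketched in miniature. The class, method, and image names below are illustrative assumptions, not the framework's actual API:

```python
# Minimal sketch of a container-NF orchestrator: NFs are registered,
# instantiated on demand, and traffic is steered through an ordered
# policy chain. All names are illustrative, not the paper's real API.

class NFOrchestrator:
    def __init__(self):
        self.registry = {}       # nf_name -> container image template
        self.instances = {}      # instance_id -> nf_name
        self.policy_chain = []   # ordered list of instance ids
        self._next_id = 0

    def register(self, nf_name, image):
        """Make an NF template available for instantiation."""
        self.registry[nf_name] = image

    def instantiate(self, nf_name):
        """Launch a container for the NF and return its instance id."""
        if nf_name not in self.registry:
            raise KeyError(f"unknown NF: {nf_name}")
        instance_id = f"{nf_name}-{self._next_id}"
        self._next_id += 1
        self.instances[instance_id] = nf_name
        return instance_id

    def steer(self, instance_ids):
        """Install the ordered policy chain traffic must traverse."""
        for iid in instance_ids:
            if iid not in self.instances:
                raise KeyError(f"no such instance: {iid}")
        self.policy_chain = list(instance_ids)

orch = NFOrchestrator()
orch.register("firewall", "fw-image:1.0")
orch.register("ids", "ids-image:2.1")
fw = orch.instantiate("firewall")
ids = orch.instantiate("ids")
orch.steer([fw, ids])   # traffic now traverses firewall, then IDS
```

A real deployment would back `instantiate` with a container runtime and `steer` with flow rules pushed to OpenFlow switches; the point here is only the shape of the northbound calls.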


F. Shahzad, “Safe Haven in the Cloud: Secure Access Controlled File Encryption (SAFE) System,” Science and Information Conference (SAI), 2015, London, 2015, pp. 1329-1334. doi: 10.1109/SAI.2015.7237315
Abstract: The evolution of cloud computing has revolutionized how computing is abstracted and utilized on remote third-party infrastructure. It is now feasible to try out novel ideas over the cloud with no or very low initial cost. There are challenges in adopting cloud computing, but with these obstacles come opportunities for research in several aspects of cloud computing. One of the main issues is the data security and privacy of information stored and processed at the cloud provider's systems. In this work, a practical system (called SAFE) is designed and implemented to securely store/retrieve users' files on third-party cloud storage systems using well-established cryptographic techniques. It utilizes client-side, multilevel, symmetric/asymmetric encryption and decryption operations to provide policy-based access control and assured deletion of remotely hosted clients' files. SAFE is a generic application which can be extended to support any cloud storage provider as long as there is an API which supports basic file upload and download operations.
Keywords: application program interfaces; authorisation; client-server systems; cloud computing; computer network security; cryptography; data privacy; outsourcing; API; SAFE system; client-side-multilevel asymmetric encryption operation; client-side-multilevel symmetric encryption operation; client-side-multilevel-asymmetric decryption operation; client-side-multilevel-symmetric decryption operation; cloud provider systems; cloud storage provider; cryptographic techniques; data security; file download operation; file upload operation; information privacy; policy-based access control; remote third-party infrastructure; remotely hosted client file deletion; secure access controlled file encryption system; third-party cloud storage systems; user file retrieval; user file storage; Access control; Cloud computing; Encryption; Java; Servers; Assured deletion; Cryptography; Data privacy; Secure storage (ID#: 16-10068)
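The policy-based access control and assured deletion mentioned in the abstract are commonly built on a layered-key design: each file is encrypted under a data key, the data key is wrapped by a policy key, and destroying the policy key renders every file under that policy unrecoverable. A toy sketch of that idea follows; the XOR keystream is a stand-in for real AES/RSA and is not secure, and all names are ours, not SAFE's:

```python
import hashlib
import os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR-keystream 'cipher' -- a stand-in for AES, NOT secure."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

class PolicyKeyStore:
    """Holds policy keys; deleting one assures deletion of its files."""
    def __init__(self):
        self._keys = {}

    def create_policy(self, policy_id):
        self._keys[policy_id] = os.urandom(32)

    def wrap(self, policy_id, data_key):
        return keystream_xor(self._keys[policy_id], data_key)

    def unwrap(self, policy_id, wrapped_key):
        if policy_id not in self._keys:
            raise PermissionError("policy revoked: file is unrecoverable")
        return keystream_xor(self._keys[policy_id], wrapped_key)

    def revoke(self, policy_id):
        del self._keys[policy_id]

store = PolicyKeyStore()
store.create_policy("medical-records")
data_key = os.urandom(16)
ciphertext = keystream_xor(data_key, b"patient file contents")
wrapped = store.wrap("medical-records", data_key)

# Normal access: unwrap the data key under the policy, then decrypt.
plaintext = keystream_xor(store.unwrap("medical-records", wrapped), ciphertext)

# Assured deletion: revoking the policy key orphans the ciphertext.
store.revoke("medical-records")
```

Only the small policy keys need trusted deletion; the bulk ciphertext can stay on the untrusted cloud store.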


S. Chunwijitra et al., “The Strategy to Sustainable Sharing Resources Repository for Massive Open Online Courses in Thailand,” Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015 12th International Conference on, Hua Hin, 2015, pp. 1-5. doi: 10.1109/ECTICon.2015.7206980
Abstract: This paper investigates the educational knowledge and resources needed to support lifelong Massive Open Online Courses (MOOCs), especially in Thailand. We propose a strategy for providing a resource center repository shared among e-Learning systems under the Creative Commons license, with the aim of developing a sustainable educational resource repository for massive e-Learning systems. We integrate the Open Educational Resources (OER) system and the MOOC system using a newly implemented FedX API for exchanging resources between them; the FedX API applies the REST API and the XBlock SDK to establish resource access. The fifteen elements of Dublin Core metadata are agreed upon for interchanging resources among OER systems that support the Open Archives Initiative (OAI) standard. The proposed system is designed to run on a cloud computing system, taking advantage of its data storage, processing, bandwidth, and security.
Keywords: application program interfaces; cloud computing; computer aided instruction; open systems; security of data; storage management; FedX API; MOOC; OER system; REST API; Thailand; XBlock SDK; cloud computing system; data bandwidth; data processing; data security; data storage; e-learning system; massive open online course; open educational resources system; sustainable sharing resource repository; Electronic learning; Licenses; Standards; Massive Open Online Courses; Open Archives Initiative; Open Educational Resources; e-Learning (ID#: 16-10069)
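The fifteen Dublin Core elements agreed upon for resource interchange can be illustrated with a small record builder; the function name and the JSON payload shape are our assumptions, not the FedX API's actual format:

```python
import json

# The fifteen elements of simple Dublin Core (DCMES 1.1).
DUBLIN_CORE_ELEMENTS = [
    "title", "creator", "subject", "description", "publisher",
    "contributor", "date", "type", "format", "identifier",
    "source", "language", "relation", "coverage", "rights",
]

def make_dc_record(**fields):
    """Build a Dublin Core record, rejecting unknown element names."""
    unknown = set(fields) - set(DUBLIN_CORE_ELEMENTS)
    if unknown:
        raise ValueError(f"not Dublin Core elements: {sorted(unknown)}")
    # Unset elements are carried as empty strings so every record
    # exchanged between repositories has the same shape.
    return {el: fields.get(el, "") for el in DUBLIN_CORE_ELEMENTS}

record = make_dc_record(
    title="Introductory Thai MOOC Lecture 1",
    creator="NECTEC",
    language="th",
    rights="CC BY-SA 4.0",
)
payload = json.dumps(record)   # what a FedX-style exchange API might ship
```

Fixing the record shape up front is what lets heterogeneous OER and MOOC systems harvest each other's resources without per-pair mapping code.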


P. Dewan and P. Kumaraguru, “Towards Automatic Real Time Identification of Malicious Posts on Facebook,” Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 85-92. doi: 10.1109/PST.2015.7232958
Abstract: Online Social Networks (OSNs) witness a rise in user activity whenever a news-making event takes place. Cyber criminals exploit this surge in user-engagement levels to spread malicious content that compromises system reputation, causes financial losses and degrades user experience. In this paper, we characterized a dataset of 4.4 million public posts generated on Facebook during 17 news-making events (natural calamities, terror attacks, etc.) and identified 11,217 malicious posts containing URLs. We found that most of the malicious content which is currently evading Facebook's detection techniques originated from third party and web applications, while more than half of all legitimate content originated from mobile applications. We also observed greater participation of Facebook pages in generating malicious content as compared to legitimate content. We proposed an extensive feature set based on entity profile, textual content, metadata, and URL features to automatically identify malicious content on Facebook in real time. This feature set was used to train multiple machine learning models and achieved an accuracy of 86.9%. We performed experiments to show that past techniques for spam campaign detection identified less than half the number of malicious posts as compared to our model. This model was used to create a REST API and a browser plug-in to identify malicious Facebook posts in real time.
Keywords: learning (artificial intelligence); meta data; security of data; social networking (online); Facebook detection technique; Facebook page; OSN; REST API; URL feature; automatic real time identification; browser plug-in; cyber criminal; financial loss; malicious content; malicious post; metadata; multiple machine learning model; online social network; spam campaign detection; system reputation; user activity; user-engagement level; Facebook; Malware; Real-time systems; Twitter; Uniform resource locators (ID#: 16-10070)
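The four feature families the abstract proposes (entity profile, textual content, metadata, and URL features) can be illustrated with a toy extractor; the specific features and field names here are invented examples, not the paper's actual feature set:

```python
import re

URL_RE = re.compile(r"https?://([^/\s]+)")        # captures the domain
SHORTENERS = {"bit.ly", "goo.gl", "tinyurl.com", "ow.ly"}

def extract_features(post):
    """Map a post dict to a numeric feature dict (illustrative features
    only, one or two per family from the paper's four families)."""
    text = post.get("message", "")
    domains = URL_RE.findall(text)
    return {
        # entity-profile feature
        "from_page": 1 if post.get("author_type") == "page" else 0,
        # textual-content features
        "text_length": len(text),
        "exclamations": text.count("!"),
        "has_free": 1 if re.search(r"\bfree\b", text, re.I) else 0,
        # metadata feature (which app generated the post)
        "app_is_mobile": 1 if post.get("source") == "mobile" else 0,
        # URL features
        "url_count": len(domains),
        "uses_shortener": 1 if any(d.lower() in SHORTENERS for d in domains) else 0,
    }

post = {
    "message": "FREE iPhones!! Claim now http://bit.ly/xyz",
    "author_type": "page",
    "source": "web_app",
}
features = extract_features(post)
```

Vectors like this, one per post, are what would then be fed to the trained classifier behind the paper's REST API and browser plug-in.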


H. Kondylakis et al., “Digital Patient: Personalized and Translational Data Management through the Myhealthavatar EU Project,” 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, 2015, pp. 1397-1400. doi: 10.1109/EMBC.2015.7318630
Abstract: The advancements in healthcare practice have brought to the fore the need for flexible access to health-related information and created an ever-growing demand for the design and the development of data management infrastructures for translational and personalized medicine. In this paper, we present the data management solution implemented for the MyHealthAvatar EU research project, a project that attempts to create a digital representation of a patient's health status. The platform is capable of aggregating several knowledge sources relevant for the provision of individualized personal services. To this end, state of the art technologies are exploited, such as ontologies to model all available information, semantic integration to enable data and query translation and a variety of linking services to allow connecting to external sources. All original information is stored in a NoSQL database for reasons of efficiency and fault tolerance. Then it is semantically uplifted through a semantic warehouse which enables efficient access to it. All different technologies are combined to create a novel web-based platform allowing seamless user interaction through APIs that support personalized, granular and secure access to the relevant information.
Keywords: SQL; health care; medical information systems; ontologies (artificial intelligence); query processing; security of data; semantic Web; MyHealthAvatar EU research project; NoSQL database; Web-based platform; health-related information; healthcare practice; ontologies; personalized data management; personalized medicine; query translation; semantic warehouse; translational data management; translational medicine; Data models; Data warehouses; Europe; Joining processes; Medical services; Ontologies; Semantics (ID#: 16-10071)


A. Arins, “Firewall as a Service in SDN OpenFlow Network,” Information, Electronic and Electrical Engineering (AIEEE), 2015 IEEE 3rd Workshop on Advances in, Riga, 2015, pp. 1-5. doi: 10.1109/AIEEE.2015.7367309
Abstract: Protecting publicly available servers on the internet today is a serious challenge, especially when encountering Distributed Denial-of-Service (DDoS) attacks. In the traditional internet, there is a narrow scope of choices one can take when ingress traffic overloads physical connection limits. This paper proposes Firewall as a Service in internet service provider (ISP) networks, allowing end users to request and install match-action rules in the ISP's edge routers. In the proposed scenario, the ISP runs a Software Defined Networking environment where the control plane is separated from the data plane, utilizing the OpenFlow protocol and the ONOS controller. For interaction between end users and the SDN Controller, the author defines an Application Programming Interface (API) over a secure SSL/TLS connection. The Controller is responsible for translating high-level logic into low-level rules in OpenFlow switches. This study runs experiments in an OpenFlow test-bed, researching a mechanism for end users to discard packets on ISP edge routers, thus minimizing their uplink saturation and staying online.
Keywords: Internet; application program interfaces; computer network security; firewalls; routing protocols; software defined networking; Firewall; ISP edge routers; ISP networks; Internet service provider networks; ONOS controller; OpenFlow protocol; OpenFlow switches; OpenFlow test-bed; SDN Controller; SDN OpenFlow network; SSL/TLS connection; application programming interface; data plane; distributed denial-of-service attacks; high-level logics; low-level rules; match-action rules; publicly available server protection; software defined networking environment; uplink saturation; Computer crime; Control systems; Firewalls (computing); IP networks; Servers; BGP; BGP experimentation; latency (ID#: 16-10072)
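The translation step the abstract describes, turning an end user's high-level block request into low-level rules at the edge, can be sketched as follows. The match fields mimic OpenFlow naming conventions, but the rule format and the router model are simplified illustrations, not ONOS's actual API:

```python
def block_request_to_flow_rule(customer_prefix, attacker_ip, priority=100):
    """Translate a customer's high-level 'block this source' request
    into an OpenFlow-style match-action rule (empty action list = drop)."""
    return {
        "priority": priority,
        "match": {
            "eth_type": 0x0800,          # IPv4
            "ipv4_src": attacker_ip,
            "ipv4_dst": customer_prefix,
        },
        "actions": [],                   # no output action -> packet dropped
    }

class EdgeRouter:
    """Stand-in for an OpenFlow switch's flow table at the ISP edge."""
    def __init__(self):
        self.flow_table = []

    def install(self, rule):
        self.flow_table.append(rule)
        self.flow_table.sort(key=lambda r: -r["priority"])

    def forward(self, pkt):
        """True if the packet is forwarded, False if dropped.
        (Only the IPv4 match fields are checked in this sketch.)"""
        for rule in self.flow_table:
            m = rule["match"]
            if pkt["ipv4_src"] == m["ipv4_src"] and pkt["ipv4_dst"] == m["ipv4_dst"]:
                return bool(rule["actions"])
        return True                      # table-miss: default forward

edge = EdgeRouter()
edge.install(block_request_to_flow_rule("203.0.113.10", "198.51.100.7"))
dropped = edge.forward({"ipv4_src": "198.51.100.7", "ipv4_dst": "203.0.113.10"})
passed = edge.forward({"ipv4_src": "192.0.2.1", "ipv4_dst": "203.0.113.10"})
```

Discarding attack traffic at the ISP edge, as here, spares the customer's uplink entirely, which is the point of moving the firewall upstream.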


A. J. Poulter, S. J. Johnston and S. J. Cox, “Using the MEAN Stack to Implement a RESTful Service for an Internet of Things Application,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 280-285. doi: 10.1109/WF-IoT.2015.7389066
Abstract: This paper examines the components of the MEAN development stack (MongoDb, Express.js, Angular.js, & Node.js), and demonstrate their benefits and appropriateness to be used in implementing RESTful web-service APIs for Internet of Things (IoT) appliances. In particular, we show an end-to-end example of this stack and discuss in detail the various components required. The paper also describes an approach to establishing a secure mechanism for communicating with IoT devices, using pull-communications.
Keywords: Internet of Things; Web services; application program interfaces; security of data; software tools; Angular.js; Express. js; Internet of Things application; IoT devices; MEAN development stack; MongoDb; Node.js; RESTful Web-service API; pull-communications; secure mechanism; Databases; Hardware; Internet of things; Libraries; Logic gates; Servers; Software; Angular. js; Express.js; IoT; MEAN; REST; web programming (ID#: 16-10073)
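The pull-communication mechanism the paper describes (the device polls the service for pending commands, so no inbound connection to the device is ever needed) can be sketched independently of the MEAN stack; the class name and the queue semantics below are our own illustration:

```python
from collections import defaultdict, deque

class CommandService:
    """Server-side per-device command queue that IoT devices poll
    (pull model): the server never opens a connection to a device,
    which keeps devices reachable behind NAT and firewalls."""
    def __init__(self):
        self._queues = defaultdict(deque)

    def enqueue(self, device_id, command):
        """What a POST to /devices/<id>/commands would do."""
        self._queues[device_id].append(command)

    def poll(self, device_id):
        """What a GET from /devices/<id>/commands would return:
        the oldest pending command, or None if the queue is empty."""
        queue = self._queues[device_id]
        return queue.popleft() if queue else None

svc = CommandService()
svc.enqueue("thermostat-42", {"op": "set_target", "value": 21.5})

# Device side: periodically pull and act on any pending command.
cmd = svc.poll("thermostat-42")
nothing = svc.poll("thermostat-42")   # queue drained
```

In the paper's architecture the queue would live behind an Express.js route with MongoDB persistence; the pull pattern itself is stack-agnostic.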


S. Colbran and M. Schulz, “An Update to the Software Architecture of the iLab Service Broker,” Remote Engineering and Virtual Instrumentation (REV), 2015 12th International Conference on, Bangkok, 2015, pp. 90-93. doi: 10.1109/REV.2015.7087269
Abstract: The MIT iLab architecture (consisting of Lab Servers and Service Brokers) was designed in the 1990s, and while the Lab Server was designed as a software service, the same architectural approach was not adopted for the Service Broker. This paper reports on a redesign of the Service Broker as a software service, which is itself a collection of software services. In the process of this redesign it was decided to examine the API on the Lab Server and to support not only the existing Lab Server API (to maintain support for all existing iLab Lab Servers) but to concurrently support an alternative lightweight API based upon a RESTful architecture, using JSON to encode the data. As these changes required a complete rewrite of the Service Broker code base, it was decided to experiment with an implementation of the services using Node.js, a popular approach to the implementation of servers in Javascript. The intention was to open up the code base to developers normally associated with web development rather than with the development of remote laboratories. A new software service named an “agent” was developed that wraps around the Service Broker to allow programmable modification of requests. The agent also has the ability to serve up an interface to user clients. The use of agents has advantages over existing implementations because it allows customised authentication schemes (such as OAuth) as well as providing different user groups with unique Lab Clients to the same Lab Servers. Lab Clients are no longer served up through the Service Broker, but can reside anywhere on the Internet and access the Service Broker via a suitable agent. One outcome of these architectural changes has been the introduction of a simple integration of a remote laboratory in the Blackboard Learning Management System (LMS) using a Learning Tool Interoperability (LTI) module for user authentication.
Keywords: Internet; Java; application program interfaces; learning management systems; open systems; security of data; software architecture; user interfaces; API; Blackboard learning management system; JSON; Javascript; LMS; LTI module; Lab Server; MIT iLab architecture; Node.js; RESTful architecture; customised authentication schemes; iLab service broker; learning tool interoperability; remote laboratory; software service; user authentication; Authentication; Computer architecture; Protocols; Remote laboratories; Servers; Software; ISA; ISABM; MIT iLab; Web Services; iLab Service Broker (ID#: 16-10074)


B. Caillat, B. Gilbert, R. Kemmerer, C. Kruegel and G. Vigna, “Prison: Tracking Process Interactions to Contain Malware,” High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), 2015 IEEE 17th International Conference on, New York, NY, 2015, pp. 1282-1291. doi: 10.1109/HPCC-CSS-ICESS.2015.297
Abstract: Modern operating systems provide a number of different mechanisms that allow processes to interact. These interactions can generally be divided into two classes: inter-process communication techniques, which a process supports to provide services to its clients, and injection methods, which allow a process to inject code or data directly into another process' address space. Operating systems support these mechanisms to enable better performance and to provide simple and elegant software development APIs that promote cooperation between processes. Unfortunately, process interaction channels introduce problems at the end-host that are related to malware containment and the attribution of malicious actions. In particular, host-based security systems rely on process isolation to detect and contain malware. However, interaction mechanisms allow malware to manipulate a trusted process to carry out malicious actions on its behalf. In this case, existing security products will typically either ignore the actions or mistakenly attribute them to the trusted process. For example, a host-based security tool might be configured to deny untrusted processes from accessing the network, but malware could circumvent this policy by abusing a (trusted) web browser to get access to the Internet. In short, an effective host-based security solution must monitor and take into account interactions between processes. In this paper, we present Prison, a system that tracks process interactions and prevents malware from leveraging benign programs to fulfill its malicious intent. To this end, an operating system kernel extension monitors the various system services that enable processes to interact, and the system analyzes the calls to determine whether or not the interaction should be allowed. Prison can be deployed as an online system for tracking and containing malicious process interactions to effectively mitigate the threat of malware. 
The system can also be used as a dynamic analysis tool to aid an analyst in understanding a malware sample's effect on its environment.
Keywords: Internet; application program interfaces; invasive software; online front-ends; operating system kernels; software engineering; system monitoring; Prison; Web browser; code injection; dynamic analysis tool; host-based security solution; host-based security systems; injection method; interprocess communication technique; malicious action attribution; malware containment; operating system kernel extension; process address space; process interaction tracking; process isolation; software development API; trusted process; Browsers; Kernel; Malware; Monitoring; inter-process communication; prison; windows (ID#: 16-10075)
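The attribution problem the abstract identifies, crediting an action to the process that initiated an interaction rather than to the trusted intermediary that carries it out, can be sketched as a policy check; the process names and the policy table are invented for illustration:

```python
# Sketch: track who initiated an inter-process interaction and enforce
# the *initiator's* policy, not the intermediary's. Illustrative only.

POLICY = {"trusted": {"network"}, "untrusted": set()}   # allowed actions

class InteractionMonitor:
    def __init__(self):
        self.initiator = {}   # target process -> process that injected into it

    def record_injection(self, source, target):
        """e.g. source wrote code into target's address space."""
        self.initiator[target] = source

    def check(self, process, action, trust_level):
        """Walk the interaction chain back to the original initiator;
        if the acting process was tampered with, apply the untrusted
        policy instead of its own."""
        origin = process
        while origin in self.initiator:
            origin = self.initiator[origin]
        effective_trust = "untrusted" if origin != process else trust_level
        return action in POLICY[effective_trust]

mon = InteractionMonitor()
# Malware injects into the browser, then the browser opens the network:
# the action is attributed to the malware and denied.
mon.record_injection("malware.exe", "browser.exe")
blocked = not mon.check("browser.exe", "network", "trusted")
# An untouched trusted process keeps its own policy.
clean_ok = mon.check("editor.exe", "network", "trusted")
```

The real system does this at the kernel level across many interaction channels; the sketch only captures the attribution-by-chain idea.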


W. He and D. Jap, “Dual-Rail Active Protection System Against Side-Channel Analysis in FPGAs,” 2015 IEEE 26th International Conference on Application-specific Systems, Architectures and Processors (ASAP), Toronto, ON, 2015, pp. 64-65. doi: 10.1109/ASAP.2015.7245707
Abstract: The security of cryptographic modules implemented in hardware has seen severe vulnerabilities to Side-Channel Attacks (SCA), which are capable of retrieving hidden information by observing the pattern or quantity of unintentional information leakage. Dual-rail Precharge Logic (DPL) theoretically thwarts side-channel analyses by its low-level compensation manner, while the security reliability of DPLs can only be achieved at high resource expense and degraded performance. In this paper, we present a dynamic protection system for selectively configuring the security-sensitive crypto modules to an SCA-resistant dual-rail style in the scenario that a real-time threat is detected. The threat-response mechanism helps to dynamically balance security and cost. The system is driven by a set of automated dual-rail conversion APIs for partially transforming the cryptographic module into its dual-rail format, particularly to a highly secure symmetric and interleaved placement. The elevated security grade from the safe to threat mode is validated by EM-based mutual information analysis using a fine-grained surface scan of a decapsulated Virtex-5 FPGA on a SASEBO GII board.
Keywords: cryptography; field programmable gate arrays; reliability; DPL; EM based mutual information analysis; SASEBO GII board; SCA-resistant dual-rail style; Virtex-5 FPGA; automated dual-rail conversion API; cryptographic module; dual-rail active protection system; dual-rail format; dual-rail precharge logic; dynamic protection system; fine-grained surface scan; information leakage; security reliability; security-sensitive cryptomodules; side-channel analysis; side-channel attack; threat-response mechanism; Ciphers; Field programmable gate arrays; Hardware; Mutual information; Rails (ID#: 16-10076)
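The compensation manner of dual-rail precharge logic can be illustrated behaviorally: each logical bit rides on two complementary rails, and a precharge (spacer) phase precedes each evaluation so that every cycle toggles exactly one rail per bit, independent of the data. A sketch follows (the encoding is the standard one; the helper names are ours):

```python
PRECHARGE = (0, 0)   # spacer state: both rails low

def dr_encode(bit):
    """Logical bit -> (true_rail, false_rail)."""
    return (1, 0) if bit else (0, 1)

def dr_decode(rails):
    if rails == (1, 0):
        return 1
    if rails == (0, 1):
        return 0
    raise ValueError("invalid dual-rail state (precharge or fault)")

def evaluate_word(bits):
    """One precharge/evaluate cycle for a word. Exactly one rail per
    bit rises out of precharge, so the switching activity is
    data-independent -- the property that hampers power/EM analysis."""
    precharge_phase = [PRECHARGE] * len(bits)
    evaluate_phase = [dr_encode(b) for b in bits]
    transitions = sum(
        sum(p != e for p, e in zip(pre, ev))
        for pre, ev in zip(precharge_phase, evaluate_phase)
    )
    return evaluate_phase, transitions

word_a, toggles_a = evaluate_word([1, 1, 1, 1])   # all ones
word_b, toggles_b = evaluate_word([0, 0, 0, 0])   # all zeros
```

Both words cause the same number of rail transitions, which is exactly what an attacker measuring power or EM emissions would like to distinguish; the paper's contribution is switching this (expensive) style on only when a threat is detected.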


M. K. Debnath, S. Samet and K. Vidyasankar, “A Secure Revocable Personal Health Record System with Policy-Based Fine-Grained Access Control,” Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 109-116. doi: 10.1109/PST.2015.7232961
Abstract: Collaborative sharing of information has become an increasingly necessary technique for achieving complex goals in today's fast-paced, technology-dominated world. In this context, the Personal Health Record (PHR) system has become a popular research area for quickly sharing patient information among health professionals. PHR systems store and process sensitive information and should therefore have proper security mechanisms to protect data. Thus, access control mechanisms of the PHR should be well-defined, and PHRs should be stored in encrypted form. Cryptographic schemes offering a more suitable solution for enforcing access policies based on user attributes are therefore needed, and attribute-based encryption can resolve these problems. We propose a framework with a fine-grained access control mechanism that protects PHRs against service providers and malicious users. We use the Ciphertext-Policy Attribute-Based Encryption system as an efficient cryptographic technique, enhancing the security and privacy of the system, as well as enabling access revocation in a hierarchical scheme. The Web Services and APIs for the proposed framework have been developed and implemented, along with an Android mobile application for the system.
Keywords: authorisation; cryptography; data protection; electronic health records; API; Android mobile application; PHR system; Web services; access policies; access revocation; ciphertext policy attribute based encryption system; collaborative information sharing; cryptographic schemes; cryptographic technique; health professionals; malicious users; patients information sharing; policy-based fine-grained access control; secure revocable personal health record system; security mechanisms; service providers; system privacy; system security; tech-dominant world; user attributes; Access control; Data privacy; Encryption; Medical services; Servers; Attribute Revocation; Attribute-Based Encryption; Fine-Grained Access Control; Patient-centric Data Privacy; Personal Health Records (ID#: 16-10077)
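At the heart of ciphertext-policy ABE, decryption succeeds only when the user's attribute set satisfies the access policy embedded in the ciphertext. The pairing-based cryptography cannot be sketched in a few lines, but the policy-satisfaction check can; the AND/OR policy encoding below is our own, not the paper's:

```python
def satisfies(policy, attributes):
    """Evaluate an AND/OR access tree against a user's attribute set.
    A policy is either an attribute string (leaf) or a tuple
    ("AND" | "OR", [subpolicies])."""
    if isinstance(policy, str):
        return policy in attributes
    op, children = policy
    results = (satisfies(child, attributes) for child in children)
    return all(results) if op == "AND" else any(results)

# Policy: (doctor AND cardiology) OR the record's owner.
policy = ("OR", [("AND", ["role:doctor", "dept:cardiology"]),
                 "owner:self"])

doctor_ok = satisfies(policy, {"role:doctor", "dept:cardiology"})
nurse_denied = satisfies(policy, {"role:nurse", "dept:cardiology"})
owner_ok = satisfies(policy, {"owner:self"})
```

In a real CP-ABE scheme this check is enforced cryptographically: a user's secret key embeds their attributes, and the decryption algebra only works out when the tree is satisfied, so no server needs to be trusted to run the check.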


S. Hou, L. Chen, E. Tas, I. Demihovskiy and Y. Ye, “Cluster-Oriented Ensemble Classifiers for Intelligent Malware Detection,” Semantic Computing (ICSC), 2015 IEEE International Conference on, Anaheim, CA, 2015, pp. 189-196. doi: 10.1109/ICOSC.2015.7050805
Abstract: With the explosive growth of malware and its damage to computer security, malware detection is one of the cybersecurity topics of great interest. Many research efforts have been conducted on developing intelligent malware detection systems applying data mining techniques. Such techniques have had success in clustering or classifying particular sets of malware samples, but they have limitations that leave large room for improvement. Specifically, based on the analysis of the file contents extracted from the file samples, existing research applies only specific clustering or classification methods, but does not integrate them together. Indeed, learning the class boundaries for malware detection between overlapping class patterns is a difficult problem. In this paper, resting on the analysis of Windows Application Programming Interface (API) calls extracted from the file samples, we develop an intelligent malware detection system using cluster-oriented ensemble classifiers. To the best of our knowledge, this is the first work applying such a method for malware detection. A comprehensive experimental study on a real and large data collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other alternative data mining based detection techniques.
Keywords: application program interfaces; data mining; invasive software; pattern classification; pattern clustering; Comodo Cloud Security Center; Windows API; Windows application programming interface; cluster-oriented ensemble classifiers; computer security; cybersecurity; data mining techniques; intelligent malware detection; Training (ID#: 16-10078)
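The cluster-oriented ensemble idea, partitioning samples by their API-call profile and then training a separate classifier per cluster, can be sketched with nearest-centroid clusters and per-cluster majority votes; the feature layout and centroids are invented, and a real system would use stronger per-cluster learners:

```python
# Sketch of cluster-oriented ensemble classification over API-call
# profiles: cluster by nearest centroid, classify by the majority
# label of the sample's cluster. Illustrative data throughout.

def nearest_centroid(vec, centroids):
    """Index of the closest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(centroids)), key=lambda i: dist(vec, centroids[i]))

def train(samples, centroids):
    """Per-cluster majority label from (feature_vector, label) pairs."""
    votes = [[] for _ in centroids]
    for vec, label in samples:
        votes[nearest_centroid(vec, centroids)].append(label)
    return [max(set(v), key=v.count) if v else "unknown" for v in votes]

def classify(vec, centroids, cluster_labels):
    return cluster_labels[nearest_centroid(vec, centroids)]

# Features: counts of (file-op, network-op, process-injection) API calls.
centroids = [(9, 1, 0), (1, 8, 7)]
samples = [((8, 0, 1), "benign"), ((9, 2, 0), "benign"),
           ((0, 9, 8), "malware"), ((2, 7, 6), "malware")]
labels = train(samples, centroids)
verdict = classify((1, 9, 9), centroids, labels)
```

Splitting the training data by cluster lets each classifier fit a simpler, more local class boundary, which is the motivation the abstract gives for integrating clustering with classification.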


L. Chen, T. Li, M. Abdulhayoglu and Y. Ye, “Intelligent Malware Detection Based on File Relation Graphs,” Semantic Computing (ICSC), 2015 IEEE International Conference on, Anaheim, CA, 2015, pp. 85-92. doi: 10.1109/ICOSC.2015.7050784
Abstract: Due to its damage to Internet security, malware and its detection have caught the attention of both the anti-malware industry and researchers for decades. Many research efforts have been conducted on developing intelligent malware detection systems. In these systems, resting on the analysis of file contents extracted from the file samples, like Application Programming Interface (API) calls, instruction sequences, and binary strings, data mining methods such as Naive Bayes and Support Vector Machines have been used for malware detection. However, driven by economic benefits, both the diversity and sophistication of malware have significantly increased in recent years. Therefore, the anti-malware industry calls for much more novel methods which are capable of protecting users against new threats and are more difficult to evade. In this paper, rather than relying on file contents extracted from the file samples, we study how file relation graphs can be used for malware detection and propose a novel Belief Propagation algorithm based on the constructed graphs to detect newly unknown malware. A comprehensive experimental study on a real and large data collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our proposed method outperform other alternative data mining based detection techniques.
Keywords: belief maintenance; cloud computing; data mining; invasive software; support vector machines; API call; Comodo cloud security center; Internet security; anti-malware industry; application programming interface; belief propagation algorithm; binary strings; data mining method; file relation graph; instruction sequences; intelligent malware detection system; malware diversity; malware sophistication; naive Bayes method; Facebook; Welding (ID#: 16-10079)
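The graph-based idea, letting unknown files inherit suspicion from known-malicious files they are related to, can be sketched with a simplified propagation pass. Full belief propagation exchanges messages with edge potentials, so the damped averaging below is only a stand-in, and the file names, relations, and priors are invented:

```python
def propagate_beliefs(edges, priors, rounds=10, damping=0.5):
    """Simplified label propagation over a file-relation graph: each
    unlabeled node's malware belief drifts toward its neighbors'
    average, while known labels stay fixed. (A stand-in for full
    belief propagation with edge potentials.)"""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, []).append(b)
        neighbors.setdefault(b, []).append(a)
    beliefs = {n: priors.get(n, 0.5) for n in neighbors}   # 0.5 = unknown
    for _ in range(rounds):
        updated = {}
        for node in beliefs:
            if node in priors:                  # known labels stay fixed
                updated[node] = priors[node]
                continue
            avg = sum(beliefs[m] for m in neighbors[node]) / len(neighbors[node])
            updated[node] = damping * beliefs[node] + (1 - damping) * avg
        beliefs = updated
    return beliefs

# File-to-file relations (e.g. files that co-occur on client machines).
edges = [("dropper.exe", "payload.dll"), ("payload.dll", "unknown.exe"),
         ("report.doc", "reader.exe")]
priors = {"dropper.exe": 0.99, "report.doc": 0.01}   # known labels
beliefs = propagate_beliefs(edges, priors)
```

After a few rounds, `unknown.exe` inherits high suspicion through its chain to the known dropper, while `reader.exe` stays near the benign prior of its only neighbor, illustrating how the graph catches files whose contents alone reveal nothing.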


A. Javed and M. Akhlaq, “Patterns in Malware Designed for Data Espionage and Backdoor Creation,” 2015 12th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, 2015, pp. 338-342. doi: 10.1109/IBCAST.2015.7058526
Abstract: In the recent past, malware has become a serious cyber security threat which has not only targeted individuals and organizations but has also threatened the cyber space of countries around the world. Amongst malware variants, trojans designed for data espionage and backdoor creation dominate the threat landscape. This necessitates an in-depth study of this malware aimed at extracting the static features, such as APIs, strings, IP addresses, URLs, and email addresses, by and large found in such malicious code. Hence, in this research paper, an endeavor has been made to establish a set of patterns, tagged as APIs and malicious strings, persistently present in this malware, by articulating an analysis framework.
Keywords: application program interfaces; feature extraction; invasive software; APIs; backdoor creation; cyber security threat; data espionage; malicious codes; malicious strings; malware; static feature extraction; trojans; Accuracy; Feature extraction; Lead; Malware; Sensitivity (ID#: 16-10080)


W. Li, J. Ge and G. Dai, “Detecting Malware for Android Platform: An SVM-Based Approach,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 464-469. doi: 10.1109/CSCloud.2015.50
Abstract: In recent years, Android has become one of the most popular mobile operating systems because of the numerous mobile applications (apps) it provides. However, malicious Android applications (malware) downloaded from third-party markets have significantly threatened users' security and privacy, and most of them remain undetected due to the lack of efficient and accurate malware detection techniques. In this paper, we study a malware detection scheme for the Android platform using an SVM-based approach, which integrates both risky permission combinations and vulnerable API calls and uses them as features in the SVM algorithm. To validate the performance of the proposed approach, extensive experiments have been conducted, which show that the proposed malware detection scheme is able to identify malicious Android applications effectively and efficiently.
Keywords: invasive software; mobile computing; support vector machines; API calls; Android platform; SVM-based approach; application program interface; malware detection; mobile applications; mobile operating systems; user privacy; user security; Androids; Feature extraction; Humanoid robots; Malware; Mobile communication; Smart phones; Android; Support Vector Machine (SVM); TF-IDF; malware (ID#: 16-10081)
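The feature encoding this entry describes, a binary vector over risky permissions and vulnerable API calls fed to a linear classifier, can be sketched as follows. A perceptron stands in for the SVM purely to keep the example dependency-free; all feature names and sample apps are invented:

```python
# Encode each app as a 0/1 vector over permission/API features, then
# learn a linear separator between malware and benign samples.
FEATURES = ["SEND_SMS", "READ_CONTACTS", "INTERNET",
            "getDeviceId", "sendTextMessage"]

def encode(app):
    return [1 if f in app else 0 for f in FEATURES]

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0] * len(FEATURES), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):        # y is +1 (malware) or -1
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:                    # misclassified: update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

apps = [{"SEND_SMS", "getDeviceId", "sendTextMessage"},   # SMS trojan
        {"SEND_SMS", "READ_CONTACTS", "INTERNET"},        # spyware
        {"INTERNET"},                                     # benign
        {"READ_CONTACTS"}]                                # benign
labels = [1, 1, -1, -1]
w, b = train_perceptron([encode(a) for a in apps], labels)
predict = lambda app: sum(wi * xi for wi, xi in zip(w, encode(app))) + b > 0
```

An actual SVM adds a maximum-margin objective on top of the same feature representation; the encoding step is the part the paper's scheme and this sketch have in common.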


H. Chen, L. J. Zhang, B. Hu, S. Z. Long and L. H. Luo, “On Developing and Deploying Large-File Upload Services of Personal Cloud Storage,” Services Computing (SCC), 2015 IEEE International Conference on, New York, NY, 2015, pp. 371-378. doi: 10.1109/SCC.2015.58
Abstract: Personal cloud storage is rapidly gaining popularity. A number of Internet service providers, such as Google and Baidu, have entered this emerging market and developed a variety of cloud storage services. These ubiquitous services allow people to access personal files anywhere in the world at any time. With the prevalence of the mobile Internet and rich media on the web, more and more people use cloud storage for working documents, music, private photos, and movies. Nevertheless, the size of media files is often beyond the upper limit that a normal form-based file upload allows, so dedicated large-file upload services must be developed. Although various cloud vendors offer versatile cloud storage services, very little is known about the detailed development and deployment of large-file upload services. This paper proposes a complete large-file upload solution with several contributions. Firstly, we do not limit the maximum size of an uploaded file, which is extremely practical for storing huge database files from ERP tools. Secondly, we developed large-file upload service APIs with very strict correctness verification to reduce the risk of data inconsistency. Thirdly, we extend a recently developed team-collaboration service with the capability of handling large files. Fourthly, this paper is arguably the first to formalize the testing and deployment procedures of large-file upload services with the help of Docker. In general, most large-file upload services are exposed to the public and face security and performance issues, which raises much concern. With the proposed Docker-based deployment strategy, we can replicate the large-file upload service agilely and locally, to satisfy massive private or local deployments of KDrive. Finally, we evaluate and analyze the proposed strategies and technologies in light of the experimental results.
Keywords: Internet; application program interfaces; cloud computing; mobile computing; storage management; Docker-based deployment strategy; ERP tools; Internet service providers; cloud storage services; database files; large-file upload service APIs; local KDrive deployment; media files; mobile Internet; normal form-based file upload; personal cloud storage; risk reduction; ubiquitous services; Cloud computing; Context; Databases; Google; Media; Servers; Testing; Docker; Large-file Upload; Personal Cloud Storage; Team Collaboration (ID#: 16-10082)
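A chunked upload with per-chunk integrity checks is the standard way to lift the form-upload size limit while keeping the strict correctness verification this entry emphasizes. The sketch below is a generic, dependency-free illustration; the KDrive APIs themselves are not public, so the class and method names here are invented:

```python
# Split a payload into fixed-size chunks, ship each with its SHA-256
# digest, and let the receiving side verify and reassemble.
import hashlib

CHUNK_SIZE = 4  # bytes here for brevity; real services use megabytes

def split_into_chunks(data, size=CHUNK_SIZE):
    """Yield (index, chunk, digest); chunking removes any overall size cap."""
    for i in range(0, len(data), size):
        chunk = data[i:i + size]
        yield i // size, chunk, hashlib.sha256(chunk).hexdigest()

class UploadSession:
    """Server side: accept chunks in any order, verify digests, reassemble."""
    def __init__(self):
        self.chunks = {}
    def put(self, index, chunk, digest):
        if hashlib.sha256(chunk).hexdigest() != digest:
            raise ValueError(f"chunk {index} corrupted in transit")
        self.chunks[index] = chunk
    def assemble(self):
        return b"".join(self.chunks[i] for i in sorted(self.chunks))

session = UploadSession()
payload = b"large media file contents"
for idx, chunk, digest in split_into_chunks(payload):
    session.put(idx, chunk, digest)
```

Rejecting a chunk whose digest does not match is what reduces the data-inconsistency risk the abstract mentions: a corrupted transfer fails loudly instead of silently producing a broken file.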


N. Thamsirarak, T. Seethongchuen and P. Ratanaworabhan, “A Case for Malware that Make Antivirus Irrelevant,” Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2015 12th International Conference on, Hua Hin, 2015, pp. 1-6. doi: 10.1109/ECTICon.2015.7206972
Abstract: Most security researchers realize that the effectiveness of antivirus software (AV) is questionable at best. However, the general public still uses it daily, perhaps for lack of better alternatives. It is well known that the signature-based detection technique used in almost all commercial and non-commercial AV cannot be completely effective against zero-day malware. Many evaluations conducted by renowned security firms confirm this. These evaluations often employ sophisticated malware, involve elaborate schemes, and require more resources than are available to an average person to replicate. This paper investigates the creation of simple zero-day malware that can comprehensively exploit hosts and protractedly evade installed AV products. What we discovered is alarming, but illuminating. Our malware, written in a high-level language using well-documented APIs, are able to bypass AV detection and launch full-fledged exploits similar to sophisticated malware. In addition, they are able to stay undetected for much longer than other previously reported zero-day malware. We attribute this success to the unreadiness of AV products against malware in intermediate-language form. On a positive note, a firewall-like AV product that, to a certain extent, incorporates behavioral-based detection is able to warn against our malware.
Keywords: application program interfaces; computer viruses; digital signatures; firewalls; APIs; AV detection; antivirus software; firewall-like AV product; signature-based detection technique; zero-day malware; Floods; Malware; Software; Testing; Uniform resource locators; Viruses (medical); Antivirus software evaluation; signature-based detection; zero-day exploits (ID#: 16-10083)


J. Xue et al., “Task-D: A Task Based Programming Framework for Distributed System,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1663-1668. doi: 10.1109/HPCC-CSS-ICESS.2015.299
Abstract: We present Task-D, a task-based distributed programming framework. Traditionally, programming for distributed systems requires either low-level MPI or high-level pattern-based models such as Hadoop/Spark. Task-based models are frequently and successfully used for multicore and heterogeneous environments, but rarely for distributed ones. Our Task-D tries to bridge this gap by creating a higher-level abstraction than MPI while providing more flexibility than Hadoop/Spark for task-based distributed programming. The Task-D framework relieves programmers of the complexities involved in distributed programming. We provide a set of APIs that can be directly embedded into user code to enable the program to run in a distributed fashion across heterogeneous computing nodes. We also explore the design space and the features the runtime should support, including data communication among tasks, data sharing among programs, resource management, memory transfers, job scheduling, automatic workload balancing, fault tolerance, etc. A prototype system is realized as one implementation of Task-D. A distributed ALS algorithm implemented with the Task-D APIs achieved significant performance gains over a Spark-based implementation. We conclude that task-based models can be well suited to distributed programming. Task-D not only improves programmability for distributed environments but also leverages performance through effective runtime support.
Keywords: application program interfaces; message passing; parallel programming; automatic workload balancing; data communication; distributed ALS algorithm; distributed programming; distributed system; heterogeneous computing node; high-level pattern based; job scheduling; low-level MPI; resource management; task-D API; task-based programming framework; Algorithm design and analysis; Data communication; Fault tolerance; Fault tolerant systems; Programming; Resource management; Synchronization (ID#: 16-10084)


P. J. Chen and Y. W. Chen, “Implementation of SDN Based Network Intrusion Detection and Prevention System,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 141-146. doi: 10.1109/CCST.2015.7389672
Abstract: In recent years, the rise of software-defined networks (SDN) has made network control more flexible, easier to set up and manage, and better able to adapt to the changing demands of application development and network conditions. The network becomes easier to maintain and also achieves improved security as a result of SDN. The SDN architecture separates the Control Plane from the Forwarding Plane and uses open APIs to realize programmable control. SDN allows third-party applications to be imported to improve network service, or even to provide new network services. In this paper, we present a defense mechanism that finds attack packets previously identified through the Sniffer function; once an abnormal flow is found, the protection mechanism of the Firewall function is activated. For packet capture, available libraries are used to determine the properties and contents of malicious packets and to anticipate possible attacks. Through the prediction of latent malicious behaviors, our defense algorithm can prevent potential losses such as system failures or crashes and reduce the risk of being attacked.
Keywords: application program interfaces; firewalls; software defined networking; SDN based network intrusion detection and prevention system; control plane separation; defense mechanism; firewall; forwarding plane separation; malicious packet; open APIs; packet sniffer function; software-defined networks; third-party applications; Control systems; Firewalls (computing); Operating systems; Ports (Computers); Routing; Controller; Defense Mechanism; Firewall; OpenFlow; Packet Sniffer; SDN; Software Defined Networks (ID#: 16-10085)
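The sniff-then-block loop this entry describes can be sketched in a few lines: count packets per source, declare a flow abnormal past a threshold, and install a blocking rule. All fields and the threshold are invented, and no real SDN controller API (e.g. OpenFlow bindings) is used here:

```python
# Rule-driven detection: flag a source as abnormal once it exceeds a
# packet-count threshold, then drop its subsequent traffic.
FLOOD_THRESHOLD = 3   # packets from one source before the flow is abnormal

def detect_and_block(packets):
    counts, blocked = {}, set()
    for pkt in packets:
        src = pkt["src"]
        if src in blocked:
            continue                        # firewall rule already active
        counts[src] = counts.get(src, 0) + 1
        if counts[src] > FLOOD_THRESHOLD:   # sniffer finds the abnormal flow
            blocked.add(src)                # activate protection mechanism
    return blocked

packets = [{"src": "10.0.0.9"}] * 5 + [{"src": "10.0.0.2"}]
blocked = detect_and_block(packets)
```

In an actual SDN deployment the `blocked` set would be translated into flow-table drop rules pushed to switches by the controller, rather than checked inline as here.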


C. H. Lin, P. Y. Sun and F. Yu, “Space Connection: A New 3D Tele-immersion Platform for Web-Based Gesture-Collaborative Games and Services,” Games and Software Engineering (GAS), 2015 IEEE/ACM 4th International Workshop on, Florence, 2015, pp. 22-28. doi: 10.1109/GAS.2015.12
Abstract: The 3D tele-immersion technique has brought a revolutionary change to human interaction: physically separated users can interact naturally with each other through body gestures in a shared 3D virtual environment. The scheme of cloud- or Web-based applications, on the other hand, facilitates global connections among players without the need for additional devices. To realize Web-based 3D immersion techniques, we propose Space Connection, which integrates techniques for virtual collaboration and motion sensing with the aim of pushing motion sensing a step forward to seamless collaboration among multiple users. Space Connection provides not only human-computer interaction but also instant human-to-human collaboration with body gestures beyond physical space boundaries. Technically, developing gesture-interactive applications requires parsing the signals of motion sensing devices, passing network data transformations, and synchronizing states among multiple users. The challenge in developing web-based applications stems from the fact that browser applications have no native library for accessing the interfaces of motion sensing devices, due to the security sandbox policy. We further develop a new socket transmission protocol that provides transparent APIs for browsers and external devices. We develop an interactive ping pong game and a rehabilitation system as two example applications of the presented technique.
Keywords: Web services; application program interfaces; computer games; gesture recognition; groupware; human computer interaction; virtual reality; 3D teleimmersion technique; Web-based gesture collaborative game; Web-based gesture collaborative services; gesture-interactive applications; human interaction; human-computer interaction; instant human-to-human collaboration; motion sensing device; motion sensing technique; network data transformation; physical space boundary; security sandbox policy; shared 3D virtual environment; socket transmission protocol; space connection; state synchronisation; transparent API; virtual collaboration; Browsers; Collaboration; Games; Sensors; Servers; Sockets; Three-dimensional displays; Kinect applications; Motion sensing; Space Connection (ID#: 16-10086)


J. Horalek, R. Cimler and V. Sobeslav, “Virtualization Solutions for Higher Education Purposes,” Radioelektronika (RADIOELEKTRONIKA), 2015 25th International Conference, Pardubice, 2015, pp. 383-388. doi: 10.1109/RADIOELEK.2015.7128970
Abstract: Utilization of virtualization and cloud computing technologies is very topical. A large number of different technologies, tools, and software solutions exist. Successful adoption of this kind of solution can effectively support the teaching of specialized topics such as programming, operating systems, computer networks, security, and many others. Such solutions offer remote access, automated deployment of infrastructure, APIs, virtualized study environments, etc. The goal of this paper is to survey the broad range of virtualization technologies and to propose a solution covering the analysis, design, and practical implementation of a virtual laboratory that can serve as an educational tool.
Keywords: application program interfaces; cloud computing; computer aided instruction; further education; virtualisation; API; automated infrastructure deployment; cloud computing technologies; computer networks; educational tool; operating system; remote access; security; software solutions; virtual laboratory; virtualization technologies; virtualized study environment; Computers; Education; Hardware; Protocols; Servers; Software; Virtualization (ID#: 16-10087)


V. Mehra, V. Jain and D. Uppal, “DaCoMM: Detection and Classification of Metamorphic Malware,” Communication Systems and Network Technologies (CSNT), 2015 Fifth International Conference on, Gwalior, 2015, pp. 668-673. doi: 10.1109/CSNT.2015.62
Abstract: With the fast and vast growth of the IT sector in the 21st century, questions of system security arise as well. While the IT field grows, malware attacks rise alongside it, making zero-day malware attacks a great challenge. Authors of metamorphic and polymorphic malware gain an extra advantage through mutation engines and virus generation toolkits, as they can produce as many malware variants as they want. Our approach focuses on the detection and classification of metamorphic malware (MM), which is the hardest for antivirus scanners to detect because its variants differ structurally. We gathered a total of 600 malware samples, including some that bypass AV scanners, and 150 benign files. These files are disassembled and preprocessed, and control flow graphs and API call graphs are generated. We propose the Gourmand Feature Selection algorithm for selecting the desired features from the call graphs. Classification is done with the WEKA tool, in which J-48 gave the highest accuracy, 99.10%. Once metamorphic malware are detected, they are classified into their families using histograms and the Chi-square distance formula.
Keywords: application program interfaces; computer viruses; feature selection; pattern classification; API call graphs; DaCoMM; IT sector; WEKA tool; antivirus scanners; control flow graphs; gourmand feature selection algorithm; metamorphic malware classification; metamorphic malware detection; mutation engine; polymorphic malware; system security; virus generation toolkits; zero day malware attack; Classification algorithms; Engines; Flow graphs; Generators; Histograms; Malware; Software; code obfuscation; histograms; metamorphic malware (ID#: 16-10088)
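The family-assignment step this entry ends with, comparing a sample's histogram to per-family histograms via the Chi-square distance, is compact enough to sketch directly. The family names and opcode-frequency profiles below are invented for illustration:

```python
# Symmetric Chi-square distance between equal-length histograms, plus a
# nearest-family lookup over reference profiles.
def chi_square_distance(h1, h2, eps=1e-10):
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

def nearest_family(sample, family_histograms):
    return min(family_histograms,
               key=lambda fam: chi_square_distance(sample, family_histograms[fam]))

families = {
    "NGVCK": [0.50, 0.30, 0.20],   # hypothetical opcode-frequency profiles
    "G2":    [0.10, 0.60, 0.30],
}
sample = [0.45, 0.35, 0.20]
family = nearest_family(sample, families)
```

The `eps` term guards against division by zero when both histograms have an empty bin; the 0.5 factor is one common normalization convention for this distance.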


Y. Liu, “Teaching Programming on Cloud: A Perspective Beyond Programming,” 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), Vancouver, BC, 2015, pp. 594-599. doi: 10.1109/CloudCom.2015.101
Abstract: This paper presents the design and implementation of a programming-on-cloud course. Teaching programming on the cloud embraces the topics of the cloud service model, architectural patterns, REST APIs, data models, schema-free databases, the MapReduce paradigm, and qualities of service such as scalability, availability, and security. The design of this programming course focuses on the breadth of the essential topics and their intrinsic connections as a roadmap. This enables students with programming skills but no cloud computing background to gain an overview of the structure of a cloud-based service. It further guides students in deciding what technologies to adopt, and how, through the practical development of a service application on the cloud.
Keywords: cloud computing; computer aided instruction; computer science education; educational courses; programming; MapReduce; REST API; architectural pattern; cloud course; cloud service model; data model; programming course; quality of service; schema free database; teaching; Cloud computing; Computer architecture; Data models; Databases; Programming; Servers; big data; course design (ID#: 16-10089)


A. Moawad, T. Hartmann, F. Fouquet, G. Nain, J. Klein and Y. Le Traon, “Beyond Discrete Modeling: A Continuous and Efficient Model for IoT,” Model Driven Engineering Languages and Systems (MODELS), 2015 ACM/IEEE 18th International Conference on, Ottawa, ON, 2015, pp. 90-99. doi: 10.1109/MODELS.2015.7338239
Abstract: Internet of Things applications analyze our past habits through sensor measures to anticipate future trends. To yield accurate predictions, intelligent systems not only rely on single numerical values, but also on structured models aggregated from different sensors. Computation theory, based on the discretization of observable data into timed events, can easily lead to millions of values. Time series and similar database structures can efficiently index the mere data, but quickly reach computation and storage limits when it comes to structuring and processing IoT data. We propose a concept of continuous models that can handle high-volatile IoT data by defining a new type of meta attribute, which represents the continuous nature of IoT data. On top of traditional discrete object-oriented modeling APIs, we enable models to represent very large sequences of sensor values by using mathematical polynomials. We show on various IoT datasets that this significantly improves storage and reasoning efficiency.
Keywords: Big Data; Internet of Things; application program interfaces; computation theory; data structures; object-oriented methods; API; Big data; Internet-of-Things; IoT data processing; computation theory; database structure; discrete object-oriented modeling; high-volatile IoT data structuring; mathematical polynomial; time series; Computational modeling; Context; Data models; Mathematical model; Object oriented modeling; Polynomials; Time series analysis; Continuous modeling; Discrete modeling; Extrapolation; IoT; Polynomial (ID#: 16-10090)
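The continuous-model idea in this entry, representing a long run of sensor samples by a polynomial so that any timestamp can be queried without storing every value, can be sketched with a least-squares line fit (degree 1). The segmentation and error-bound machinery of the actual framework is omitted, and the sensor data is invented:

```python
# Fit v = a*t + b to a sample sequence, then query the continuous model.
def fit_line(ts, vs):
    """Closed-form least-squares line through (ts, vs)."""
    n = len(ts)
    mt, mv = sum(ts) / n, sum(vs) / n
    a = sum((t - mt) * (v - mv) for t, v in zip(ts, vs)) / \
        sum((t - mt) ** 2 for t in ts)
    return a, mv - a * mt

# A thousand temperature readings compressed to two coefficients:
ts = list(range(1000))
vs = [20.0 + 0.01 * t for t in ts]      # steadily warming sensor
a, b = fit_line(ts, vs)
reading_at = lambda t: a * t + b        # continuous model: query any time
```

Storage drops from 1000 values to 2 coefficients, and `reading_at` also interpolates between (and extrapolates beyond) the original sample instants, which is exactly what a discrete, event-per-sample model cannot do.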


R. Ko, H. M. Lee, A. B. Jeng and T. E. Wei, “Vulnerability Detection of Multiple Layer Colluding Application Through Intent Privilege Checking,” IT Convergence and Security (ICITCS), 2015 5th International Conference on, Kuala Lumpur, 2015, pp. 1-7. doi: 10.1109/ICITCS.2015.7293036
Abstract: In recent years, privilege escalation attacks have been performed based on collusion attacks. A novel privilege escalation attack is the Multiple Layer Collusion Attack, which divides the colluding applications into three parts: Spyware, Deputy, and Delivery. Spyware steals private data and transmits it to Deputy. Deputy does not need to declare any permissions and simply passes the data on to Delivery; the colluding attack thereby escapes malware detection through Deputy. In this paper, we propose a mechanism capable of detecting both capability and deputy leaks. First, we decode the APK file into resources and disassembled code. To extract function calls, our system constructs a correlation map from source data to Intents through API calls, in which URIs are potential permissions whether or not the Intent has vulnerabilities. Hence, we need to trace potential function calls and handle inter-component communication. The experimental results prove that deputy applications exist in the official Android market, Google Play.
Keywords: Android (operating system); application program interfaces; data privacy; API calls; APK file; Android; Google Play; intent privilege checking; multiple layer colluding application; multiple layer collusion attack; private data; privilege escalation attacks; spyware; vulnerability detection; Androids; Computer science; Correlation; Decision trees; Humanoid robots; Spyware (ID#: 16-10091)


W. B. Gardner, A. Gumtie and J. D. Carter, “Supporting Selective Formalism in CSP++ with Process-Specific Storage,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1057-1065. doi: 10.1109/HPCC-CSS-ICESS.2015.265
Abstract: Communicating Sequential Processes (CSP) is a formal language whose primary purpose is to model and verify concurrent systems. The CSP++ toolset was created to realize the concept of selective formalism by making machine-readable CSPm specifications both executable (through automatic C++ code generation) and extensible (by allowing integration of C++ user-coded functions, UCFs). However, UCFs were limited by their inability to share data with each other, thus their application was constrained to solving simple problems in isolation. We extend CSP++ by providing UCFs in the same CSP process with safe access to a shared storage area, similar in concept and API to Pthreads' thread-local storage, enabling cooperation between them and granting them the ability to undertake more complex tasks without breaking the formalism of the underlying specification. Process-specific storage is demonstrated with a line-following robot case study, applying CSP++ in a soft real-time system. Also described is the Eclipse plug-in that supports the CSPm design flow.
Keywords: C++ language; application program interfaces; communicating sequential processes; concurrency (computers); control engineering computing; formal languages; formal specification; formal verification; program compilers; real-time systems; robots; storage management; API; C++ user-coded function; CSP++; CSPm design flow; Eclipse plug-in; Pthread thread-local storage; UCF; automatic C++ code generation; concurrent system modelling; concurrent system verification; formal language; line-following robot case study; machine-readable CSPm specification; process-specific storage; selective formalism; soft real-time system; Libraries; Real-time systems; Robot sensing systems; Switches; System recovery; Writing; C++; CSPm; Eclipse; Timed CSP; code generation; embedded systems; formal methods; model-based design; selective formalism; soft real-time; software synthesis (ID#: 16-10092)
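This entry models its process-specific storage on Pthreads' thread-local storage: each CSP process's user-coded functions share an area invisible to other processes, so no locking is needed. Python's `threading.local` provides the same isolation guarantee and makes for a compact analogue (the CSP++ API itself differs; this is only an illustration of the storage semantics):

```python
# Each thread (standing in for a CSP process) gets its own view of
# `store`, so concurrent increments never interfere.
import threading

store = threading.local()          # per-thread attribute namespace
results = {}

def worker(name):
    store.counter = 0              # private to this thread's "process"
    for _ in range(1000):
        store.counter += 1         # no lock needed: storage is not shared
    results[name] = store.counter

threads = [threading.Thread(target=worker, args=(f"P{i}",)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every worker ends with exactly 1000, which would not be guaranteed if `counter` were a single shared variable; that determinism is what lets cooperating user-coded functions share state without breaking the formal model.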


X. Chen, G. Sime, C. Lutteroth and G. Weber, “OAuthHub — A Service for Consolidating Authentication Services,” Enterprise Distributed Object Computing Conference (EDOC), 2015 IEEE 19th International, Adelaide, SA, 2015, pp. 201-210. doi: 10.1109/EDOC.2015.36
Abstract: OAuth has become a widespread authorization protocol to allow inter-enterprise sharing of user preferences and data: a Consumer that wants access to a user's protected resources held by a Service Provider can use OAuth to ask for the user's authorization for access to these resources. However, it can be tedious for a Consumer to use OAuth as a way to organize user identities, since doing so requires supporting all Service Providers that the Consumer would recognize as users' “identity providers”. Each Service Provider added requires extra work, at the very least, registration at that Service Provider. Different Service Providers may differ slightly in the API they offer, their authentication/authorization process or even their supported version of OAuth. The use of different OAuth Service Providers also creates privacy, security and integration problems. Therefore OAuth is an ideal candidate for Software as a Service, while posing interesting challenges at the same time. We use conceptual modelling to derive new high-level models and provide an analysis of the solution space. We address the aforementioned problems by introducing a trusted intermediary — OAuth Hub — into this relationship and contrast it with a variant, OAuth Proxy. Instead of having to support and control different OAuth providers, Consumers can use OAuth Hub as a single trusted intermediary to take care of managing and controlling how authentication is done and what data is shared. OAuth Hub eases development and integration issues by providing a consolidated API for a range of services. We describe how a trusted intermediary such as OAuth Hub can fit into the overall OAuth architecture and discuss how it can satisfy demands on security, reliability and usability.
Keywords: cloud computing; cryptographic protocols; API; OAuth service providers; OAuthHub; authentication services; authorization protocol; software as a service; Analytical models; Authentication; Authorization; Privacy; Protocols; Servers (ID#: 16-10093)


A. Lashgar and A. Baniasadi, “Rethinking Prefetching in GPGPUs: Exploiting Unique Opportunities,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 72-77. doi: 10.1109/HPCC-CSS-ICESS.2015.145
Abstract: In this paper we investigate static memory access predictability in GPGPU workloads at the thread-block granularity. We first show that a significant share of accessed memory addresses can be predicted using thread-block identifiers. We build on this observation and introduce a hardware-software prefetching scheme to reduce average memory access time. Our proposed scheme issues the memory requests of a thread block before it starts execution. The scheme relies on a static analyzer to parse the kernel and find predictable memory accesses. Runtime API calls pass this information to the hardware, which dynamically prefetches the data of each thread block accordingly. In our scheme, prefetch accuracy is controlled by software (the static analyzer and API calls) while hardware controls prefetch timeliness. We introduce a few machine models to explore the design space and the performance potential behind the scheme. Our evaluation shows that the scheme can achieve a performance improvement of 59% over a baseline without prefetching.
Keywords: application program interfaces; graphics processing units; multi-threading; program diagnostics; storage management; API calls; GPGPU workloads; accessed memory address; average memory access time reduction; design space; dynamic data prefetching; hardware control; hardware-software prefetching scheme; kernel parsing; memory requests; performance improvement; predictable memory accesses; prefetching timeliness; runtime API calls; software control; static analyzer; static memory access predictability; thread block; thread block granularity; thread block identifiers; Arrays; Graphics processing units; Hardware; Indexes; Kernel; Prefetching; CUDA; GPGPU; prefetch cache (ID#: 16-10094)
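The predictability claim at the heart of this entry is that many GPU accesses are affine in the thread-block identifier (e.g. `base + (blockId * blockDim + threadId) * stride`), so a block's addresses can be generated, and prefetched, before the block runs. A small Python sketch of that check on a made-up access trace (the paper's static analyzer and hardware mechanism are not reproduced):

```python
# Generate the addresses an affine pattern predicts for one thread block,
# then test whether a recorded trace is fully covered by those predictions.
def predicted_addresses(base, block_id, block_dim, stride):
    """Thread t of block b touches base + (b*block_dim + t)*stride."""
    first = block_id * block_dim
    return [base + (first + t) * stride for t in range(block_dim)]

def is_affine_predictable(trace, base, block_dim, stride):
    """Would a per-block prefetch built from identifiers cover the trace?"""
    return all(addr in set(predicted_addresses(base, b, block_dim, stride))
               for b, addr in trace)

# trace entries: (block_id, address) as a profiler might record them
trace = [(0, 0x1000), (0, 0x1004), (1, 0x1010), (1, 0x1014)]
ok = is_affine_predictable(trace, base=0x1000, block_dim=4, stride=4)
```

When the check succeeds, everything needed to issue the block's memory requests early is known from the kernel launch parameters alone, which is why the runtime can prefetch before execution starts.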


K. Patel, I. Dube, L. Tao and N. Jiang, “Extending OWL to Support Custom Relations,” Cyber Security and Cloud Computing (CSCloud), 2015 IEEE 2nd International Conference on, New York, NY, 2015, pp. 494-499. doi: 10.1109/CSCloud.2015.74
Abstract: Web Ontology Language (OWL) is used by domain experts to encode knowledge. OWL primarily supports only the subClassOf (is-a, or inheritance) relation. Various other relations, such as partOf, are essential for representing information in many fields, including all engineering disciplines. The current syntax of OWL does not support the declaration and use of new custom relations. Workarounds to emulate custom relations exist, but they add syntactic burden for knowledge modelers and do not support accurate semantics for inference engines. This paper proposes a minimal syntax extension to OWL for declaring custom relations with special attributes and applying them in knowledge representation. Domain experts can apply custom relations as intuitively and concisely as they do the familiar built-in subClassOf relation. We present our additions to the OWL API for the declaration, application, and visualization of custom relations. We outline our revisions and additions to the ontology editor Protégé so that its users can visually declare, apply, and remove custom relations according to our enriched OWL syntax. Our modification of the OWLViz plugin for custom relation visualization is also discussed.
Keywords: application program interfaces; data visualisation; inference mechanisms; knowledge representation languages; ontologies (artificial intelligence); programming language semantics; OWL API; OWL syntax; OWLViz plugin modification; Web ontology language; custom relation visualization; custom relations; engineering disciplines; inference engines; information representation; knowledge encoding; knowledge modelers; knowledge representation; ontology editor Protégé; subClassOf relation; syntax extension; Engines; OWL; Ontologies; Syntactics; Visualization; Custom relation; Knowledge Representation; OWLAPI; Protégé (ID#: 16-10095)


L. Herscheid, D. Richter and A. Polze, “Hovac: A Configurable Fault Injection Framework for Benchmarking the Dependability of C/C++ Applications,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 1-10. doi: 10.1109/QRS.2015.12
Abstract: The increasing usage of third-party software and complexity of modern software systems makes dependability, in particular robustness against faulty code, an ever more important concern. To compare and quantitatively assess the dependability of different software systems, dependability benchmarks are needed. We present a configurable tool for dependability benchmarking, Hovac, which uses DLL API hooking to inject faults into third party library calls. Our fault classes are implemented based on the Common Weakness Enumeration (CWE) database, a community maintained source of real life software faults and errors. Using two example applications, we discuss a detailed and systematic approach to benchmarking the dependability of C/C++ applications using our tool.
Keywords: C++ language; software fault tolerance; software libraries; software tools; C/C++ applications; CWE database; DLL API hooking; Hovac; common weakness enumeration database; configurable fault injection framework; configurable tool; dependability benchmarking; fault classes; faulty code; software errors; software faults; software systems complexity; software systems dependability; third party library calls; third-party software usage; Benchmark testing; Databases; Libraries; Operating systems; Robustness; benchmarking; dependability; fault injection; open source; software reliability; third-party library (ID#: 16-10096)


H. S. Sheshadri, S. R. B. Shree and M. Krishna, “Diagnosis of Alzheimer's Disease Employing Neuropsychological and Classification Techniques,” IT Convergence and Security (ICITCS), 2015 5th International Conference on, Kuala Lumpur, 2015, pp. 1-6. doi: 10.1109/ICITCS.2015.7292973
Abstract: All over the world, a large number of people suffer from brain-related diseases, and diagnosing these diseases is an urgent need. Dementia is one such disease; it causes the loss of cognitive functions such as reasoning, memory, and other mental abilities, whether due to trauma or normal ageing. Alzheimer's disease is one type of dementia and accounts for 60-80% of mental disorders [1]. Many tests are conducted to diagnose such diseases. In this paper, the authors collected data from 466 subjects by conducting neuropsychological tests. The subjects are classified as demented or not using machine learning techniques. The authors preprocessed the data; the data set is classified using Naive Bayes, JRip, and Random Forest, and evaluated using the explorer, knowledge flow, and API. The WEKA tool is used for the analysis. Results show that JRip and Random Forest perform better than Naive Bayes.
Keywords: Bayes methods; brain; cognition; data analysis; data mining; diseases; learning (artificial intelligence); medical computing; neurophysiology; patient diagnosis; pattern classification; trees (mathematics); API; Alzheimer's disease diagnosis; Jrip; WEKA tool; brain related disease; classification technique; cognitive function;   data set classification; dementia; knowledge flow; machine learning; memory; mental ability; mental disorder; naive Bayes; neuropsychological technique; neuropsychological test; normal ageing; random forest; reasoning; trauma; Cancer; Classification algorithms; Data mining; Data visualization; Dementia (ID#: 16-10097)


S. Das, A. Singh, S. P. Singh and A. Kumar, “A Low Overhead Dynamic Memory Management System for Constrained Memory Embedded Systems,” Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, New Delhi, 2015, pp. 809-815. doi: (not provided)
Abstract: Embedded systems programming often involves choosing worst-case static memory allocation for most applications over a dynamic allocation approach. Such a design decision is rightly justified in terms of the reliability, security, and real-time performance requirements of such low-end systems. However, with the introduction of public key cryptography and dynamic reconfiguration in IP-enabled sensing devices for use in several “Internet of Things” applications, dynamic memory allocation in embedded devices is becoming more important than ever before. While several embedded operating systems like MantisOS, SOS and Contiki provide dynamic memory allocation support, they usually lack flexibility or have relatively large memory overhead. In this paper we introduce two novel dynamic memory allocation schemes, ST_MEMMGR (without memory compaction) and ST_COMPACT_MEMMGR (with memory compaction), in close compliance with the libc memory allocation API. Both designs take into account the very limited RAM (1KB - 64KB) in most microcontrollers. Experimental results show that ST_MEMMGR has 256 - 5376 bytes less memory overhead than similar non-compaction-based open source allocators like heapLib and memmgr. Similarly, ST_COMPACT_MEMMGR is observed to have a 33% smaller memory descriptor than Contiki's managed memory allocator, with similar performance in terms of execution speed.
Keywords: application program interfaces; embedded systems; operating systems (computers); storage management; Internet of Things; ST_COMPACT_MEMMGR; ST_MEMMGR; constrained memory embedded systems; dynamic memory allocation support; dynamic reconfiguration; embedded operating systems; libc memory allocation API; low overhead dynamic memory management system; public key cryptography; worst case static memory allocation; Compaction; Dynamic scheduling; Embedded systems; Memory management; Protocols; Random access memory; Resource management; Dynamic Memory Management; Embedded Systems; Memory Compaction; Memory Fragmentation; Microcontrollers; WSN (ID#: 16-10098)
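The abstract above does not detail ST_MEMMGR's internal data structures, but non-compacting allocators of this kind are typically built on free-list bookkeeping. As a hedged illustration (all names here are our own, not from the paper), a toy first-fit allocator with coalescing might look like:

```python
class FreeListAllocator:
    """Toy first-fit allocator over a fixed arena, tracking (offset, size)
    free blocks -- the kind of bookkeeping a malloc-style manager keeps."""

    def __init__(self, size):
        self.free = [(0, size)]   # sorted list of free (offset, size) blocks
        self.used = {}            # offset -> size of live allocations

    def alloc(self, size):
        for i, (off, blk) in enumerate(self.free):
            if blk >= size:
                # carve the request out of the first fitting block
                rest = blk - size
                if rest:
                    self.free[i] = (off + size, rest)
                else:
                    del self.free[i]
                self.used[off] = size
                return off
        return None  # out of memory

    def free_block(self, off):
        size = self.used.pop(off)
        self.free.append((off, size))
        self.free.sort()
        # coalesce adjacent free blocks to fight fragmentation
        merged = [self.free[0]]
        for o, s in self.free[1:]:
            mo, ms = merged[-1]
            if mo + ms == o:
                merged[-1] = (mo, ms + s)
            else:
                merged.append((o, s))
        self.free = merged

a = FreeListAllocator(1024)
p = a.alloc(100); q = a.alloc(200)
a.free_block(p)
a.free_block(q)
print(a.free)  # [(0, 1024)] after coalescing
```

A compacting variant such as ST_COMPACT_MEMMGR would additionally move live blocks together, at the cost of indirection through memory descriptors.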


A. Banchs et al., “A Novel Radio Multiservice Adaptive Network Architecture for 5G Networks,” 2015 IEEE 81st Vehicular Technology Conference (VTC Spring), Glasgow, 2015, pp. 1-5. doi: 10.1109/VTCSpring.2015.7145636
Abstract: This paper proposes a conceptually novel, adaptive and future-proof 5G mobile network architecture. The proposed architecture enables unprecedented levels of network customisability, ensuring stringent performance, security, cost and energy requirements to be met; as well as providing an API-driven architectural openness, fuelling economic growth through over-the-top innovation. Not following the 'one system fits all services' paradigm of current architectures, the architecture allows for adapting the mechanisms executed for a given service to the specific service requirements, resulting in a novel service- and context-dependent adaptation of network functions paradigm. The technical approach is based on the innovative concept of adaptive (de)composition and allocation of mobile network functions, which flexibly decomposes the mobile network functions and places the resulting functions in the most appropriate location. By doing so, access and core functions no longer (necessarily) reside in different locations, which is exploited to jointly optimize their operation when possible. The adaptability of the architecture is further strengthened by the innovative software-defined mobile network control and mobile multi-tenancy concepts.
Keywords: 5G mobile communication; application program interfaces; 5G mobile network architecture; API-driven architectural openness; context-dependent adaptation; economic growth; innovative software-defined mobile network control; mobile multitenancy concepts; mobile network functions; network customisability; novel radio multiservice adaptive network architecture; stringent performance; Adaptive systems; Computer architecture; Mobile communication; Mobile computing; Quality of service; Radio access networks; Resource management (ID#: 16-10099)


Ó M. Pereira and R. L. Aguiar, “Multi-Purpose Adaptable Business Tier Components Based on Call Level Interfaces,” Computer and Information Science (ICIS), 2015 IEEE/ACIS 14th International Conference on, Las Vegas, NV, 2015, pp. 215-221. doi: 10.1109/ICIS.2015.7166596
Abstract: Call Level Interfaces (CLI) play a key role in the business tiers of relational and some NoSQL database applications whenever fine-tuned control between application tiers and the host databases is a key requirement. Unfortunately, in spite of this significant advantage, CLI are low-level APIs and therefore do not address high-level architectural requirements. Among the examples we emphasize two situations: a) the need to decouple (or not) the development process of business tiers from the development process of application tiers, and b) the need to automatically adapt business tiers to new business and/or security needs at runtime. To tackle these CLI drawbacks, and simultaneously keep their advantages, this paper proposes an architecture relying on CLI from which multi-purpose business tier components are built, herein referred to as Adaptable Business Tier Components (ABTC). Beyond the reference architecture, this paper presents a proof of concept based on Java and Java Database Connectivity (an example of a CLI).
Keywords: Java; SQL; application program interfaces; business data processing; database management systems; ABTC; CLI drawbacks; Java database connectivity; NoSQL database applications; adaptable business tier components; application tiers; call level interfaces; high level architectural requirements; low level API; multipurpose adaptable business tier components; multipurpose business tier components; Access control; Buildings; Business; Databases; Java; Runtime; component; middleware; reuse; software architecture (ID#: 16-10100)


T. Nguyen, “Using Unrestricted Mobile Sensors to Infer Tapped and Traced User Inputs,” Information Technology - New Generations (ITNG), 2015 12th International Conference on, Las Vegas, NV, 2015, pp. 151-156. doi: 10.1109/ITNG.2015.29
Abstract: As of January 2014, 58 percent of Americans over the age of 18 owned a smart phone. Among smart phones, Android devices provide some security by requiring that third-party application developers declare to users which components and features their applications will access. However, the real-time environmental sensors on devices that are supported by the Android API are exempt from this requirement. We evaluate the possibility of exploiting the freedom to discreetly use these sensors and expand on previous work by developing an application that can use the gyroscope and accelerometer to interpret what the user has written, even if trace input is used. Trace input is a feature available on Samsung's default keyboard as well as in many popular third-party keyboard applications. The inclusion of trace input in a key logger application increases the amount of personal information that can be captured, since users may choose the time-saving trace-based input over traditional tap-based input. In this work, we demonstrate that it is indeed possible to recover both tap- and trace-inputted text using only motion sensor data.
Keywords: accelerometers; application program interfaces; gyroscopes; invasive software; smart phones; Android API; Android device; accelerometer; key logger application; keyboard application; mobile security; motion sensor data; personal information; real-time environmental sensor; smart phone; tapped user input; traced user input; unrestricted mobile sensor; Accelerometers; Accuracy; Feature extraction; Gyroscopes; Keyboards; Sensors; Support vector machines; key logger; mobile malware; motion sensors; spyware (ID#: 16-10101)


C. Banse and S. Rangarajan, “A Secure Northbound Interface for SDN Applications,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 834-839. doi: 10.1109/Trustcom.2015.454
Abstract: Software-Defined Networking (SDN) promises to introduce flexibility and programmability into networks by offering a northbound interface (NBI) for developers to create SDN applications. However, current designs and implementations have several drawbacks, including the lack of extended security features. In this paper, we present a secure northbound interface, through which an SDN controller can offer network resources, such as statistics, flow information or topology data, via a REST-like API to registered SDN applications. A trust manager ensures that only authenticated and trusted applications can utilize the interface. Furthermore, a permission system allows for fine-grained authorization and access control to the aforementioned resources. We present a prototype implementation of our interface and example applications developed against it, including an SDN management dashboard.
Keywords: application program interfaces; computer network security; network interfaces; software defined networking; API; NBI; SDN controller; SDN management dashboard; access control; fine-grained authorization; secure northbound interface; software-defined networking; trusted application; Access control; Network topology; Protocols; Switches; Topology; SDN; Software-Defined Networking; network security; northbound interface; trust (ID#: 16-10102)


L. Wu, X. Du and H. Zhang, “An Effective Access Control Scheme for Preventing Permission Leak in Android,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015, pp. 57-61. doi: 10.1109/ICCNC.2015.7069315
Abstract: In the Android system, each application runs in its own sandbox, and the permission mechanism is used to enforce access control to the system APIs and applications. However, permission leak could happen when an application without certain permission illegally gain access to protected resources through other privileged applications. We propose SPAC, a component-level system permission based access control scheme that can help developers better secure the public components of their applications. In the SPAC scheme, obscure custom permissions are replaced by explicit system permissions. We extend current permission checking mechanism so that multiple permissions are supported on component level. SPAC has been implemented on a Nexus 4 smartphone, and our evaluation demonstrates its effectiveness in mitigating permission leak vulnerabilities.
Keywords: Android (operating system); application program interfaces; authorisation; Android system; Nexus 4 smartphone; SPAC scheme; component-level system permission based access control scheme; permission checking mechanism; permission leak prevention; permission leak vulnerabilities; permission mechanism; public components; system API; Access control; Androids; Google; Humanoid robots; Information security; Receivers; Permission leak; access control; smartphone security (ID#: 16-10103)


M. Jemel and A. Serhrouchni, “Toward User's Devices Collaboration to Distribute Securely the Client Side Storage,” 2015 International Conference on Protocol Engineering (ICPE) and International Conference on New Technologies of Distributed Systems (NTDS), Paris, 2015, pp. 1-6. doi: 10.1109/NOTERE.2015.7293479
Abstract: Web applications and browsers are adopting client-side storage intensively. This strategy ensures a high quality of experience for users, offline application usage, and reduced server load. In this paper, we focus on all devices equipped with a browser in order to distribute the data stored locally through HTML5 APIs. A decentralized browser-to-browser data distribution is thereby ensured across a user's different devices within the same Personal Area.
Keywords: Internet; application program interfaces; hypermedia markup languages; quality of experience; security of data; storage allocation; HTML5 API; WebRTC; chromium code; client side storage; decentralized browser-to-browser data distribution; device collaboration; local storage API; quality of experience; secure remote data; server load reduction; Browsers; Databases; Encryption; Protocols; HTML5; Local Storage API; Secure remote data management (ID#: 16-10104)



Cryptography with Photons 2015





Quantum cryptography transfers photons through a filter that indicates the orientation of each photon sent; any eavesdropping on the communication measurably disturbs it. This property is of interest to the Science of Security community for building secure cyber-physical systems, and for resiliency and compositionality. The work cited here was presented in 2015.

B. Archana and S. Krithika, “Implementation of BB84 Quantum Key Distribution Using OptSim,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, Coimbatore, 2015, pp. 457-460. doi: 10.1109/ECS.2015.7124946
Abstract: This paper proposes a cryptographic method know as quantum cryptography. Quantum cryptography uses quantum channel to exchange key securely and keeps unwanted parties or eavesdroppers from learning sensitive information. A technique called Quantum Key Distribution (QKD) is used to share random secret key by encoding the information in quantum states. Photons are the quantum material used for encoding. QKD provides an unique way of sharing random sequence of bits between users with a level of security not attainable with any other classical cryptographic methods. In this paper, BB84 protocol is used to implement QKD, that deals with the photon polarization states used to transmit the telecommunication information with high level of security using optical fiber. In this paper we have implemented BB84 protocol using photonic simulator OptSim 5.2.
Keywords: cryptographic protocols; quantum cryptography; BB84 protocol; BB84 quantum key distribution; QKD; cryptographic method; eavesdroppers; learning sensitive information; optical fiber; photon polarization states; photonic simulator OptSim 5.2; quantum channel; quantum material; quantum states; random secret key; telecommunication information; Cryptography; Photonics; Polarization; Protocols; Quantum entanglement; BB84 protocol; OptSim 5.2; Quantum Mechanism (QM); Quantum Key Distribution (QKD); Quantum cryptography (QC); photon polarization (ID#: 16-11339)
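The core of BB84 is the sifting step: key bits survive only where sender and receiver happened to choose the same measurement basis. A minimal textbook sketch of that logic (our own simulation, unrelated to the paper's OptSim 5.2 model; all names are illustrative):

```python
import random

def bb84_sift(n_bits, seed=0):
    """Simulate the sifting stage of BB84 without an eavesdropper.

    Alice sends each bit in a random basis (0 = rectilinear, 1 = diagonal);
    Bob measures in a random basis. When the bases match, Bob's result
    equals Alice's bit; mismatched positions are discarded during the
    public basis comparison.
    """
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

    # keep only the positions where the bases agree
    sifted = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
              if ab == bb]
    return alice_bits, sifted

bits, key = bb84_sift(1000)
print(len(key))  # roughly 500: on average half the bases match
```

On average half the transmitted bits survive sifting; in a real system the kept bits then undergo error estimation and privacy amplification.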


B. G. Norton, M. Ghadimi, V. Blums and D. Kielpinski, “Monolithic Optical Integration for Scalable Trapped-Ion Quantum Information Processing,” Lasers and Electro-Optics Pacific Rim (CLEO-PR), 2015 11th Conference on, Busan, 2015, pp. 1-2. doi: 10.1109/CLEOPR.2015.7376434
Abstract: Quantum information processing (QIP) promises to radically change the outlook for secure communications, both by breaking existing cryptographic protocols and offering new quantum protocols in their place. A promising technology for QIP uses arrays of atomic ions that are trapped in ultrahigh vacuum and manipulated by lasers. Over the last several years, work in my research group has led to the demonstration of a monolithically integrated, scalable optical interconnect for trapped-ion QIP. Our interconnect collects single photons from trapped ions using a diffractive mirror array, which is fabricated directly on a chip-type ion trap using a CMOS-compatible process. Based on this interconnect, we have proposed an architecture that couples trapped ion arrays with photonic integrated circuits to achieve compatibility with current telecom networks. Such tightly integrated, highly parallel systems open the prospect of long-distance quantum cryptography.
Keywords: CMOS integrated circuits; cryptographic protocols; integrated optics; mirrors; optical arrays; optical communication; optical fabrication; optical interconnections; quantum cryptography; quantum optics; security of data; CMOS-compatible process; QIP; chip-type ion trap; cryptographic protocols; diffractive mirror array; long-distance quantum cryptography; monolithic optical integration; photonic integrated circuits; quantum protocols; scalable optical interconnect; scalable trapped-ion quantum information processing; secure communications; ultrahigh vacuum; Charge carrier processes; Computer architecture; Information processing; Ions; Mirrors; Optical diffraction; Optical waveguides (ID#: 16-11340)


T. Graham, C. Zeitler, J. Chapman, P. Kwiat, H. Javadi and H. Bernstein, “Superdense Teleportation and Quantum Key Distribution for Space Applications,” 2015 IEEE International Conference on Space Optical Systems and Applications (ICSOS), New Orleans, LA, 2015, pp. 1-7. doi: 10.1109/ICSOS.2015.7425090
Abstract: The transfer of quantum information over long distances has long been a goal of quantum information science and is required for many important quantum communication and computing protocols. When these channels are lossy and noisy, it is often impossible to directly transmit quantum states between two distant parties. We use a new technique called superdense teleportation to communicate quantum information deterministically with greatly reduced resources, simplified measurements, and decreased classical communication cost. These advantages make this technique ideal for communicating quantum information for space applications. We are currently implementing a superdense teleportation lab demonstration, using photons hyperentangled in polarization and temporal mode to communicate a special set of two-qubit, single-photon states between two remote parties. A slight modification of the system readily allows it to be used to implement quantum cryptography as well. We investigate the possibility of implementation from an Earth's orbit to ground. We will discuss our current experimental progress and the design challenges facing a practical demonstration of satellite-to-Earth SDT.
Keywords: optical communication; quantum computing; quantum cryptography; quantum entanglement; satellite communication; teleportation; hyperentangled photons; lossy channels; noisy channels; quantum communication; quantum information; quantum key distribution; quantum states; satellite-to-Earth SDT; space applications; superdense teleportation; two-qubit single-photon states; Extraterrestrial measurements; Photonics; Protocols; Quantum entanglement; Satellites; Teleportation; Superdense teleportation;  (ID#: 16-11341)


K. W. C. Chan, M. E. Rifai, P. Verma, S. Kak and Y. Chen, “Multi-Photon Quantum Key Distribution Based on Double-Lock Encryption,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_QELS.2015.FF1A.3
Abstract: We present a quantum key distribution protocol based on double-lock cryptography. It exploits the asymmetry in detection strategies between the legitimate users and the eavesdropper. With coherent states, the mean photon number can be as large as 10.
Keywords: light coherence; multiphoton processes; photodetectors; quantum cryptography; quantum optics; coherent states; double-lock cryptography; double-lock encryption; mean photon number; multiphoton quantum key distribution; photodetection strategies; Authentication; Computers; Error probability; Photonics; Protocols; Quantum cryptography (ID#: 16-11342)
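The "double-lock" idea has a well-known classical analogue, Shamir's three-pass protocol, in which commuting locks let two parties exchange a secret without pre-sharing a key. A sketch under that classical analogy only (the paper's quantum scheme is not reproducible this way; the prime and message below are illustrative choices of ours):

```python
import random

# Shamir's three-pass protocol: exponentiation modulo a prime commutes,
# so each party can apply its own "lock" and remove it later, in either order.
P = 2**127 - 1  # a Mersenne prime (illustrative, not from the paper)

def make_lock(p):
    """Pick a locking exponent e coprime to p-1, plus its inverse d."""
    while True:
        e = random.randrange(3, p - 1)
        try:
            return e, pow(e, -1, p - 1)  # d = e^-1 mod (p-1)
        except ValueError:
            continue  # e not invertible modulo p-1; try again

m = 123456789                 # the secret message, 0 < m < P
ea, da = make_lock(P)         # Alice's lock/unlock exponents
eb, db = make_lock(P)         # Bob's lock/unlock exponents

c1 = pow(m, ea, P)            # pass 1: Alice locks the message
c2 = pow(c1, eb, P)           # pass 2: Bob adds his lock on top
c3 = pow(c2, da, P)           # pass 3: Alice removes her lock
out = pow(c3, db, P)          # Bob removes his lock, recovering m
print(out == m)               # True
```

The classical version is insecure against a passive eavesdropper with unbounded computation; the quantum double-lock scheme's security rests instead on the detection asymmetry the abstract describes.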


M. Koashi, “Quantum Key Distribution with Coherent Laser Pulse Train: Security Without Monitoring Disturbance,” Photonics North, 2015, Ottawa, ON, 2015, pp. 1-1. doi: 10.1109/PN.2015.7292456
Abstract: Conventional quantum key distribution (QKD) schemes determine the amount of leaked information through estimation of signal disturbance. Here we present a QKD protocol based on an entirely different principle, which works without monitoring the disturbance. The protocol is implemented with a laser, an interferometer with a variable delay, and photon detectors. It is capable of producing a secret key when the bit error rate is high and the communication time is short.
Keywords: high-speed optical techniques; light coherence; quantum cryptography; quantum optics; QKD; bit error rate; coherent laser pulse train; photon detectors; quantum key distribution; secret key; variable delay; Delays; Estimation; Monitoring; Photonics; Privacy; Protocols; Security; differential phase shift keying; information-disturbance trade off; variable delay (ID#: 16-11343)


C. J. Chunnilall, “Metrology for Quantum Communications,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_AT.2015.AF1J.6
Abstract: Industrial technologies based on the production, manipulation, and detection of single and entangled photons are emerging, and quantum key distribution via optical fibre is one of the most commercially-advanced. The National Physical Laboratory is developing traceable performance metrology for the quantum devices used in these technologies. This is part of a broader effort to develop metrological techniques and standards to accelerate the development and commercial uptake of new industrial quantum communication technologies based on single photons. This presentation will give an overview of the work carried out at NPL and within the wider European community, and highlight plans for the future.
Keywords: fibre optic sensors; photon counting; quantum cryptography; quantum entanglement; National Physical Laboratory; entangled photons; metrology; optical fibre; quantum communications; quantum devices; quantum key distribution; single photons; Communication systems; Detectors; Metrology; Optical fibers; Optical transmitters; Photonics; Security (ID#: 16-11344)


D. Bunandar, Z. Zhang, J. H. Shapiro and D. R. Englund, “Practical High-Dimensional Quantum Key Distribution with Decoy States,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1103/PhysRevA.91.022336
Abstract: We propose a high-dimensional quantum key distribution protocol secure against photon-number splitting attack by employing only one or two decoy states. Decoy states dramatically increase the protocol's secure distance.
Keywords: cryptographic protocols; quantum cryptography; quantum optics; security of data; decoy states; high-dimensional quantum key distribution protocol; photon-number splitting attack; protocol secure distance; Correlation; Dispersion; Photonics; Protocols; Security; System-on-chip (ID#: 16-11345)
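Decoy states matter because a phase-randomized coherent pulse is Poisson-distributed in photon number, and the multi-photon pulses are exactly what a photon-number-splitting attacker exploits. A quick computation of that multi-photon probability (the intensity values below are illustrative, not taken from the paper):

```python
import math

def multi_photon_prob(mu):
    """Probability that a phase-randomized coherent pulse of mean photon
    number mu contains two or more photons (Poisson statistics):
    P(n >= 2) = 1 - e^(-mu) * (1 + mu)."""
    return 1 - math.exp(-mu) * (1 + mu)

for mu in (0.1, 0.5):  # example signal/decoy intensities (illustrative)
    print(mu, multi_photon_prob(mu))
```

Comparing detection yields at different intensities is what lets decoy-state protocols bound the eavesdropper's influence on the multi-photon fraction.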


N. Li, S. M. Cui, Y. M. Ji, K. s. Feng and L. Shi, “Analysis for Device Independent Quantum Key Distribution Based on the Law of Large Number,” 2015 IEEE Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, 2015, pp. 1073-1076. doi: 10.1109/IAEAC.2015.7428723
Abstract: A measurement-device-independent quantum key distribution scheme can remove all detector side-channel flaws and, combined with a decoy-state program, achieve absolute security of quantum key distribution. In this paper, the statistical fluctuations, governed by the law of large numbers, of a finite-key-length measurement-device-independent quantum key distribution scheme are analyzed in terms of single-photon counting rate and BER (Bit Error Rate), and the key generation rate is simulated for key lengths N = 10^6 to 10^12. Simulation results show that, in optical fiber transmission, the secure transmission distance decreases with the key length: from 300 km it falls to 260 km (N = 10^10) and 75 km (N = 10^6). When N = 10^12, the secure transmission distance reaches 295 km, close to the theoretical limit.
Keywords: error statistics; quantum cryptography; BER; bit error rate; decoy state program; key generation rate simulation; key length measuring device-independent quantum key distribution scheme; law of large number; optical fiber transmission; secure transmission distance; security; side-channel flaw; single-photon counting rate; statistical law; Decision support systems; Force measurement; Frequency modulation; Navigation; Q measurement; Measuring device-independent; QKD; law of large numbers; three-intensity decoy-state program (ID#: 16-11346)
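The abstract does not state which concentration bound underlies its finite-key analysis; Hoeffding's inequality is one standard tool for such analyses, and it illustrates why the secure distance approaches the asymptotic limit as the key length N grows:

```python
import math

def hoeffding_deviation(n, eps=1e-10):
    """Upper bound on the deviation of an observed rate (e.g. an error
    rate estimated from n samples) from its true mean, failing with
    probability at most eps, via Hoeffding's inequality."""
    return math.sqrt(math.log(2 / eps) / (2 * n))

for n in (10**6, 10**10, 10**12):
    print(n, hoeffding_deviation(n))
```

The deviation term shrinks as 1/sqrt(n), consistent with the abstract's trend from N = 10^6 up to 10^12, where the simulated distance nears the asymptotic 300 km limit.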


F. Piacentini et al., “Metrology for Quantum Communication,” 2015 IEEE Globecom Workshops (GC Wkshps), San Diego, CA, 2015, pp. 1-5. doi: 10.1109/GLOCOMW.2015.7413960
Abstract: INRIM is making efforts to develop metrology for quantum communication purposes, ranging from the establishment of measurement procedures for specific quantities related to QKD components, namely pseudo single-photon sources and detectors, to the implementation of a novel QKD protocol based on a paradigm other than non-commuting observables, to the development of quantum tomographic techniques, to the realization and characterization of a quasi-noiseless single-photon source. In particular, in this paper we summarize this last activity, together with a description of preliminary results for a four-wave mixing source that our group realized in order to obtain narrowband, low-noise single-photon emission, a demanding feature for applications to quantum repeaters and memories.
Keywords: multiwave mixing; quantum cryptography; INRIM; QKD components; QKD protocol; four-wave mixing source; measurement procedures; narrow band low noise single photon emission; pseudo single-photon sources; quantum communication metrology; quantum tomographic techniques; quasi-noiseless single-photon source; Cesium; Communication systems; Four-wave mixing; Laser beams; Laser excitation; Metrology; Photonics (ID#: 16-11347)


U. S. Chahar and K. Chatterjee, “A Novel Differential Phase Shift Quantum Key Distribution Scheme for Secure Communication,” Computing and Communications Technologies (ICCCT), 2015 International Conference on, Chennai, 2015, pp. 156-159. doi: 10.1109/ICCCT2.2015.7292737
Abstract: Quantum key distribution is used for secure communication between two parties to generate a secret key. Differential Phase Shift Quantum Key Distribution is a new and unique QKD protocol that differs from traditional ones in its simplicity and practicality. This paper presents a Delay Selected DPS-QKD scheme that uses a weak coherent pulse train and features a simple configuration and efficient use of the time domain. All detected photons contribute to the secure key bits, resulting in higher key creation efficiency.
Keywords: cryptographic protocols; differential phase shift keying; quantum cryptography; telecommunication security; time-domain analysis; QKD protocol; coherent pulse train; delay selected DPS-QKD scheme; differential phase shift quantum key distribution scheme; secret key generation; secure communication; secure key bits; time domain analysis; Delays; Detectors; Differential phase shift keying; Photonics; Protocols; Security; Differential Phase Shift; Differential phase shift keying protocol; Quantum Key Distribution (ID#: 16-11348)


G. S. Kanter, “Fortifying Single Photon Detectors to Quantum Hacking Attacks by Using Wavelength Upconversion,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_AT.2015.JW2A.7
Abstract: Upconversion detection can isolate the temporal and wavelength window over which light can be efficiently received. With appropriate designs, the ability of an eavesdropper to damage, measure, or control QKD receiver components is significantly constricted.
Keywords: optical control; optical design techniques; optical receivers; optical testing; optical wavelength conversion; optical windows; photodetectors; photon counting; quantum cryptography; QKD receiver component control; QKD receiver component measurement; optical designs; quantum hacking attacks; single-photon detectors; temporal window; wavelength upconversion detection; wavelength window; Band-pass filters; Computer crime; Detectors; Insertion loss; Monitoring; Photonics; Receivers (ID#: 16-11349)


T. Horikiri, “Quantum Key Distribution with Mode-Locked Two-Photon States,” Lasers and Electro-Optics Pacific Rim (CLEO-PR), 2015 11th Conference on, Busan, 2015, pp. 1-2. doi: 10.1109/CLEOPR.2015.7376514
Abstract: Quantum key distribution (QKD) with mode-locked two-photon states is discussed. The photon source with a comb-like second-order correlation function is shown to be useful for implementing long distance time-energy entanglement QKD.
Keywords: laser mode locking; optical correlation; quantum cryptography; quantum entanglement; quantum optics; two-photon processes; comblike second-order correlation function; long distance time-energy entanglement QKD; mode-locked two-photon states; quantum key distribution; Cavity resonators; Correlation; Detectors; Photonics; Signal resolution; Timing; Yttrium (ID#: 16-11350)


X. Tan, S. Cheng, J. Li and Z. Feng, “Quantum Key Distribution Protocol Using Quantum Fourier Transform,” Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, Gwangju, 2015, pp. 96-101. doi: 10.1109/WAINA.2015.8
Abstract: A quantum key distribution protocol is proposed based on the discrete quantum Fourier transform. In our protocol, we perform the Fourier transform on each particle of the sequence to encode the qubits and insert sufficient decoy photons into the sequence to prevent eavesdropping. Furthermore, we prove the security of this protocol through its immunity to the intercept-measurement attack, the intercept-resend attack and the entanglement-measurement attack. We then analyse the efficiency of the protocol: at about 25%, it is higher than that of many other protocols. The proposed protocol has the further advantage of being completely compatible with quantum computation and easier to realize in distributed quantum secure computation.
Keywords: cryptographic protocols; discrete Fourier transforms; quantum cryptography; discrete quantum Fourier transform; distributed quantum secure computation; eavesdropping; immunization; intercept-measurement attack; intercept-resend attack; quantum key distribution protocol; Atmospheric measurements; Fourier transforms; Particle measurements; Photonics; Protocols; Quantum computing; Security; Intercept-resend attack; Quantum Fourier transform; Quantum key distribution; Unitary operation (ID#: 16-11351)
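The discrete quantum Fourier transform at the heart of the protocol is an ordinary unitary matrix and can be sketched classically. The following is an illustrative sketch, not the authors' implementation; the state dimension and helper names are chosen only for the example:

```python
# Illustrative sketch (not the paper's implementation): the discrete quantum
# Fourier transform on an N-level state, built as a unitary matrix.
import cmath

def qft_matrix(n):
    """Return the N x N QFT matrix F with F[j][k] = omega^(j*k) / sqrt(N)."""
    omega = cmath.exp(2j * cmath.pi / n)
    norm = 1 / (n ** 0.5)
    return [[norm * omega ** (j * k) for k in range(n)] for j in range(n)]

def apply(matrix, state):
    """Apply a matrix to a state vector."""
    return [sum(row[k] * state[k] for k in range(len(state))) for row in matrix]

def is_unitary(m, tol=1e-9):
    """Check that m times its conjugate transpose is the identity."""
    n = len(m)
    for i in range(n):
        for j in range(n):
            acc = sum(m[i][k] * m[j][k].conjugate() for k in range(n))
            expected = 1.0 if i == j else 0.0
            if abs(acc - expected) > tol:
                return False
    return True

F = qft_matrix(4)                  # QFT on a 2-qubit (4-level) system
encoded = apply(F, [1, 0, 0, 0])   # encode the basis state |0>
```

Applying F to |0> yields the uniform superposition (all amplitudes 1/2), which is why Fourier-encoded qubits reveal nothing to an eavesdropper who measures in the wrong basis.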


D. Aktas, B. Fedrici, F. Kaiser, L. Labonté and S. Tanzilli, “Distributing Energy-Time Entangled Photon Pairs in Demultiplexed Channels Over 110 Km,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_QELS.2015.FTu2A.6
Abstract: We propose a novel approach to quantum cryptography using the latest demultiplexing technology to distribute photonic entanglement over a fully fibred network. We achieve unprecedented bit-rates, beyond the state of the art for similar approaches.
Keywords: demultiplexing; optical fibre networks; quantum cryptography; quantum entanglement; quantum optics; demultiplexed channels; demultiplexing technology; distance 110 km; energy-time entangled photon pairs; fully fibred network; photonic entanglement; quantum cryptography; Bit rate; Optical filters; Optimized production technology; Photonics; Quantum cryptography; Quantum entanglement; Standards (ID#: 16-11352)


M. Koashi, “Round-Robin Differential-Phase-Shift QKD Protocol,” Lasers and Electro-Optics Pacific Rim (CLEO-PR), 2015 11th Conference on, Busan, 2015, pp. 1-2. doi: 10.1109/CLEOPR.2015.7376020
Abstract: Conventional quantum key distribution (QKD) schemes determine the amount of leaked information through estimation of signal disturbance. Here we present a QKD protocol based on an entirely different principle, which works without monitoring the disturbance.
Keywords: cryptographic protocols; differential phase shift keying; optical communication; quantum cryptography; quantum optics; leaked information; quantum key distribution schemes; round-robin differential-phase-shift QKD protocol; signal disturbance; Delays; Detectors; Optical interferometry; Photonics; Privacy; Protocols; Receivers (ID#: 16-11353)


J. M. Vilardy O., M. S. Millán and E. Pérez-Cabré, “Secure Image Encryption and Authentication Using the Photon Counting Technique in the Gyrator Domain,” Signal Processing, Images and Computer Vision (STSIVA), 2015 20th Symposium on, Bogota, 2015, pp. 1-6. doi: 10.1109/STSIVA.2015.7330460
Abstract: In this work, we present the integration of the photon counting technique (PhCT) with an encryption system in the Gyrator domain (GD) for secure image authentication. The encryption system uses two random phase masks (RPMs), one defined in the spatial domain and the other in the GD, in order to encode the image to be encrypted (the original image) into random noise. The rotation angle of the Gyrator transform adds a new key that increases the security of the encryption system. The decryption system is the inverse of the encryption system. The PhCT limits the information content of an image in a nonlinear, random and controlled way; the photon-limited image has only a few pixels of information, and this type of image is usually known as a sparse image. We apply the PhCT to the encrypted image. The resulting image in the decryption system is not a copy of the original image; this decrypted image is a random code that should contain sufficient information for authenticating the original image using a nonlinear correlation technique. Finally, we evaluate the peak-to-correlation energy metric for different values of the parameters involved in the encryption and authentication systems, in order to test the verification capability of the authentication system.
Keywords: cryptography; image processing; photon counting; random noise; gyrator domain; inverse system; nonlinear correlation technique; peak-to-correlation energy metric; photon counting technique; random noise; random phase masks; secure image authentication; secure image encryption; sparse image; Authentication; Correlation; Encryption; Gyrators; Photonics; Transforms (ID#: 16-11354)
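The photon-limiting step can be approximated classically by distributing a fixed photon budget over the normalized image intensity. A hedged sketch, with a hypothetical flattened image and photon budget (not the authors' code):

```python
# Illustrative sketch (assumed simplification of the photon counting
# technique): limit an "image" to a fixed photon budget so that only a few
# pixels survive, producing a sparse image.
import random

def photon_count(image, n_photons, seed=0):
    """Distribute n_photons over the image according to its normalized
    intensity; return a photon-limited (sparse) image of counts."""
    rng = random.Random(seed)
    total = sum(image)
    weights = [p / total for p in image]
    counts = [0] * len(image)
    for _ in range(n_photons):
        r, acc, idx = rng.random(), 0.0, len(weights) - 1
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:       # inverse-CDF sampling over pixel intensities
                idx = i
                break
        counts[idx] += 1
    return counts

image = [5, 0, 1, 9, 0, 3, 7, 0, 2, 4] * 10   # flattened 100-pixel "image"
sparse = photon_count(image, n_photons=15)
```

With a budget of 15 photons over 100 pixels, at most 15 pixels are nonzero, and zero-intensity pixels receive no photons, mirroring the "few pixels of information" property of the sparse image.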


E. Y. Zhu, C. Corbari, A. V. Gladyshev, P. G. Kazansky, H. K. Lo and L. Qian, “Multi-Party Agile QKD Network with a Fiber-Based Entangled Source,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_AT.2015.JW2A.10
Abstract: A multi-party quantum key distribution scheme is experimentally demonstrated by utilizing a poled fiber-based broadband polarization-entangled source and dense wavelength-division multiplexing. Entangled photon pairs are delivered over 40 km of fiber, with secure key rates of more than 20 bits/s observed.
Keywords: optical fibre networks; optical fibre polarisation; quantum cryptography; quantum entanglement; quantum optics; wavelength division multiplexing; bit rate 20 bit/s; dense wavelength-division multiplexing; entangled photon pairs; fiber-based entangled source; multiparty Agile QKD network; multiparty quantum key distribution scheme; poled fiber-based broadband polarization-entangled source; secure key rates; size 40 km; Adaptive optics; Broadband communication; Optical polarization; Optical pumping; Photonics; Wavelength division multiplexing (ID#: 16-11355)


S. Kleis, R. Herschel and C. G. Schaeffer, “Simple and Efficient Detection Scheme for Continuous Variable Quantum Key Distribution with M-ary Phase-Shift-Keying,” 2015 Conference on Lasers and Electro-Optics (CLEO), San Jose, CA, 2015, pp. 1-2. doi: 10.1364/CLEO_SI.2015.SW3M.7
Abstract: A detection scheme for discriminating coherent states in quantum key distribution systems employing PSK is proposed. It is simple and uses only standard components. Its applicability at power levels as low as 0.045 photons per symbol is experimentally verified.
Keywords: light coherence; optical modulation; phase shift keying; photodetectors; quantum cryptography; quantum optics; PSK; coherent states discrimination; continuous variable quantum key distribution; detection scheme; m-ary phase-shift-keying; Modulation; Optical mixing; Optical receivers; Optical transmitters; Photonics; Signal to noise ratio (ID#: 16-11356)




Data Race Vulnerabilities 2015



Data Race Vulnerabilities



A race condition is a flaw that occurs when the timing or ordering of events affects a program’s correctness. A data race happens when there are two memory accesses in a program where both target the same location and are performed concurrently by two threads. For the Science of Security, data races may impact compositionality. The research work cited here was presented in 2015.
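The definition above can be made concrete with a minimal Eraser-style lockset check over a recorded access trace. This is an illustrative sketch of the general idea only, not any of the detectors cited below, and the trace format is invented for the example:

```python
# Minimal Eraser-style lockset sketch (illustrative, not a production
# detector): a location is flagged when two threads access it, at least one
# access is a write, and no lock is held in common across all accesses.

def find_races(trace):
    """trace: list of (thread, op, addr, locks_held) tuples, op in {'r','w'}.
    Returns the set of addresses with a potential data race."""
    lockset = {}    # addr -> intersection of lock sets seen so far
    accesses = {}   # addr -> set of (thread, op) pairs observed
    races = set()
    for thread, op, addr, locks in trace:
        locks = frozenset(locks)
        if addr not in lockset:
            lockset[addr] = locks
            accesses[addr] = {(thread, op)}
        else:
            lockset[addr] &= locks        # refine the candidate lockset
            accesses[addr].add((thread, op))
        threads = {t for t, _ in accesses[addr]}
        has_write = any(o == "w" for _, o in accesses[addr])
        if len(threads) > 1 and has_write and not lockset[addr]:
            races.add(addr)               # no common lock protects addr
    return races

trace = [
    ("T1", "w", "x", {"L"}),   # x is always guarded by lock L: no race
    ("T2", "r", "x", {"L"}),
    ("T1", "w", "y", {"L"}),   # y written under L but later read unlocked
    ("T2", "r", "y", set()),
]
```

Running `find_races(trace)` flags only `y`: both threads touch it, one access is a write, and the intersection of held locks is empty.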

D. Last, “Using Historical Software Vulnerability Data to Forecast Future Vulnerabilities,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-7. doi: 10.1109/RWEEK.2015.7287429
Abstract: The field of network and computer security is a never-ending race with attackers, trying to identify and patch software vulnerabilities before they can be exploited. In this ongoing conflict, it would be quite useful to be able to predict when and where the next software vulnerability would appear. The research presented in this paper is the first step towards a capability for forecasting vulnerability discovery rates for individual software packages. This first step involves creating forecast models for vulnerability rates at the global level, as well as the category (web browser, operating system, and video player) level. These models will later be used as a factor in the predictive models for individual software packages. A number of regression models are fit to historical vulnerability data from the National Vulnerability Database (NVD) to identify historical trends in vulnerability discovery. Then, k-NN classification is used in conjunction with several time series distance measurements to select the appropriate regression models for a forecast. 68% and 95% confidence bounds are generated around the actual forecast to provide a margin of error. Experimentation using this method on the NVD data demonstrates the accuracy of these forecasts, as well as the accuracy of the confidence bounds forecasts. Analysis of these results indicates which time series distance measures produce the best vulnerability discovery forecasts.
Keywords: pattern classification; regression analysis; security of data; software packages; time series; computer security; k-NN classification; regression model; software package; software vulnerability data; time series distance measure; vulnerability forecasting; Accuracy; Market research; Predictive models; Software packages; Time series analysis; Training; cybersecurity; vulnerability discovery model; vulnerability prediction (ID#: 16-11192)
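A much-simplified version of such a forecast can be sketched by fitting a linear trend to hypothetical yearly counts and deriving rough 95% bounds from the residual spread; the paper's actual regression models, k-NN selection, and NVD data are not reproduced here:

```python
# Simplified illustration (not the paper's models): fit a linear trend to
# historical vulnerability counts and forecast the next period with a
# rough 95% confidence band from the residual standard deviation.
import math

def linear_fit(ys):
    """Least-squares fit ys[i] ~ a + b*i; return (a, b)."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in range(n))
    sxy = sum((x - mx) * (y - my) for x, y in enumerate(ys))
    b = sxy / sxx
    return my - b * mx, b

def forecast(ys, z=1.96):
    """Forecast the next value with +/- z * residual-sigma bounds."""
    a, b = linear_fit(ys)
    resid = [y - (a + b * i) for i, y in enumerate(ys)]
    sigma = math.sqrt(sum(r * r for r in resid) / max(len(ys) - 2, 1))
    point = a + b * len(ys)
    return point - z * sigma, point, point + z * sigma

counts = [40, 44, 47, 52, 55, 59]   # hypothetical yearly vulnerability counts
low, point, high = forecast(counts)
```

For the hypothetical series above the trend is about +3.8 per year, so the next-period point forecast lands near 62.8 with a narrow band, since the residuals are small.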


F. Schuster, T. Tendyck, C. Liebchen, L. Davi, A.-R. Sadeghi and T. Holz, “Counterfeit Object-Oriented Programming: On the Difficulty of Preventing Code Reuse Attacks in C++ Applications,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 745-762. doi: 10.1109/SP.2015.51
Abstract: Code reuse attacks such as return-oriented programming (ROP) have become prevalent techniques to exploit memory corruption vulnerabilities in software programs. A variety of corresponding defenses have been proposed, of which some have already been successfully bypassed -- and the arms race continues. In this paper, we perform a systematic assessment of recently proposed CFI solutions and other defenses against code reuse attacks in the context of C++. We demonstrate that many of these defenses that do not consider object-oriented C++ semantics precisely can be generically bypassed in practice. Our novel attack technique, denoted as counterfeit object-oriented programming (COOP), induces malicious program behavior by only invoking chains of existing C++ virtual functions in a program through corresponding existing call sites. COOP is Turing complete in realistic attack scenarios and we show its viability by developing sophisticated, real-world exploits for Internet Explorer 10 on Windows and Firefox 36 on Linux. Moreover, we show that even recently proposed defenses (CPS, T-VIP, vfGuard, and VTint) that specifically target C++ are vulnerable to COOP. We observe that constructing defenses resilient to COOP that do not require access to source code seems to be challenging. We believe that our investigation and results are helpful contributions to the design and implementation of future defenses against control flow hijacking attacks.
Keywords: C++ language; Turing machines; object-oriented programming; security of data; C++ applications; C++ virtual functions; CFI solutions; COOP; CPS; Firefox 36; Internet Explorer 10; Linux; ROP; T-VIP; Turing complete; VTint; Windows; code reuse attack prevention; code reuse attacks; control flow hijacking attacks; counterfeit object-oriented programming; malicious program behavior; memory corruption vulnerabilities; return-oriented programming; software programs; source code; vfGuard; Aerospace electronics; Arrays; Layout; Object oriented programming; Runtime; Semantics; C++; CFI; ROP; code reuse attacks (ID#: 16-11193)


Z. Wu, K. Lu and X. Wang, “Efficiently Trigger Data Races Through Speculative Execution,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS),  New York, NY, 2015, pp. 90-95. doi: 10.1109/HPCC-CSS-ICESS.2015.57
Abstract: Harmful data races hidden in concurrent programs are hard to detect due to non-determinism, and many race detectors report a large number of benign data races. To detect harmful data races automatically, previous tools dynamically execute the program and actively insert delays to create a real race condition, checking whether a failure occurs due to the race; if so, a harmful race is detected. However, the inserted delays can degrade performance. We use speculative execution to alleviate this problem. Unlike previous tools that suspend one thread's memory access to wait for another thread's, we continue to execute the thread's memory accesses and do not suspend the thread until it is about to execute a memory access that may affect the race. A real race condition is therefore created with less delay, or even none. To our knowledge, this is the first technique that can trigger data races through speculative execution, and the speculative execution does not affect the detection of harmful races. We have implemented a prototype tool and experimented on real-world programs. Results show that our tool detects harmful races effectively, and that speculative execution improves performance significantly.
Keywords: concurrency control; parallel programming; program compilers; security of data; concurrent programs; data race detection; dynamic program execution; nondeterminism; race detectors; speculative execution; thread memory access; Concurrent computing; Delays; Instruction sets; Instruments; Message systems; Programming; Relays; concurrent program; dynamic analysis; harmful data race (ID#: 16-11194)


J. Adebayo and L. Kagal, “A Privacy Protection Procedure for Large Scale Individual Level Data,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 120-125. doi: 10.1109/ISI.2015.7165950
Abstract: We present a transformation procedure for large scale individual level data that produces output data in which no linear combinations of the resulting attributes can yield the original sensitive attributes from the transformed data. In doing this, our procedure eliminates all linear information regarding a sensitive attribute from the input data. The algorithm combines principal components analysis of the data set with orthogonal projection onto the subspace containing the sensitive attribute(s). The algorithm presented is motivated by applications where there is a need to drastically 'sanitize' a data set of all information relating to sensitive attribute(s) before analysis of the data using a data mining algorithm. Sensitive attribute removal (sanitization) is often needed to prevent disparate impact and discrimination on the basis of race, gender, and sexual orientation in high stakes contexts such as determination of access to loans, credit, employment, and insurance. We show through experiments that our proposed algorithm outperforms other privacy preserving techniques by more than 20 percent in lowering the ability to reconstruct sensitive attributes from large scale data.
Keywords: data analysis; data mining; data privacy; principal component analysis; data mining algorithm; large scale individual level data; orthogonal projection; principal component analysis; privacy protection procedure; sanitization; sensitive attribute removal; Data privacy; Loans and mortgages; Noise; Prediction algorithms; Principal component analysis; Privacy; PCA; privacy preserving (ID#: 16-11195)
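A one-attribute simplification of the idea can be sketched by projecting each feature onto the orthogonal complement of the sensitive attribute, so that its covariance with the attribute drops to zero. The data below are invented, and this is not the paper's full PCA-based algorithm:

```python
# Simplified one-attribute version of the idea (illustrative, not the
# paper's algorithm): remove from every feature its linear component along
# the centered sensitive attribute, so no linear combination of the
# sanitized features recovers the attribute.

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def sanitize(features, sensitive):
    """Residualize each feature column against the sensitive attribute."""
    s = center(sensitive)
    ss = sum(x * x for x in s)
    out = []
    for col in features:
        c = center(col)
        beta = sum(a * b for a, b in zip(c, s)) / ss   # projection coefficient
        out.append([a - beta * b for a, b in zip(c, s)])
    return out

def cov(u, v):
    cu, cv = center(u), center(v)
    return sum(a * b for a, b in zip(cu, cv)) / len(u)

sensitive = [0, 1, 0, 1, 1, 0, 1, 0]            # e.g. a binary group label
income = [30, 52, 28, 60, 55, 33, 58, 31]       # strongly correlated with it
clean = sanitize([income], sensitive)[0]
```

Before sanitization the feature covaries strongly with the label; after residualization the covariance is (numerically) zero, which is the single-attribute analogue of eliminating all linear information about the sensitive attribute.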


J. García, “Broadband Connected Aircraft Security,” 2015 Integrated Communication, Navigation and Surveillance Conference (ICNS), Herndon, VA, USA, 2015, pp. 1-23. doi: 10.1109/ICNSURV.2015.7121291
Abstract: There is an inter-company race among service providers to offer the highest-speed connections and services to the passenger. With some providers offering up to 50 Mbps per aircraft and global coverage, traditional data links between aircraft and ground are becoming obsolete.
Keywords: (not provided) (ID#: 16-11197)


D. H. Summerville, K. M. Zach and Y. Chen, “Ultra-Lightweight Deep Packet Anomaly Detection for Internet of Things Devices,” 2015 IEEE 34th International Performance Computing and Communications Conference (IPCCC), Nanjing, 2015, pp. 1-8. doi: 10.1109/PCCC.2015.7410342
Abstract: As we race toward the Internet of Things (IoT), small embedded devices are increasingly becoming network-enabled. Often, these devices can't meet the computational requirements of current intrusion prevention mechanisms or designers prioritize additional features and services over security; as a result, many IoT devices are vulnerable to attack. We have developed an ultra-lightweight deep packet anomaly detection approach that is feasible to run on resource constrained IoT devices yet provides good discrimination between normal and abnormal payloads. Feature selection uses efficient bit-pattern matching, requiring only a bitwise AND operation followed by a conditional counter increment. The discrimination function is implemented as a lookup-table, allowing both fast evaluation and flexible feature space representation. Due to its simplicity, the approach can be efficiently implemented in either hardware or software and can be deployed in network appliances, interfaces, or in the protocol stack of a device. We demonstrate near perfect payload discrimination for data captured from off the shelf IoT devices.
Keywords: Internet of Things; feature selection; security of data; table lookup; Internet of Things devices; IoT devices; bit-pattern matching; bitwise AND operation; conditional counter increment;  lookup-table; ultra-lightweight deep packet anomaly detection approach; Computational complexity; Detectors; Feature extraction; Hardware; Hidden Markov models; Payloads; Performance evaluation; network anomaly detection (ID#: 16-11198)
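The described feature extraction (a bitwise AND followed by a conditional counter increment) and lookup-table discrimination can be sketched as follows; the mask, pattern, and thresholds are hypothetical, not the authors' trained values:

```python
# Illustrative sketch of the described mechanism (mask, pattern, and the
# "normal" counter range are hypothetical): count payload bytes matching a
# fixed bit pattern via bitwise AND, then classify with a lookup table.

MASK, PATTERN = 0b11100000, 0b01100000   # match bytes whose top bits are 011

def count_matches(payload):
    """Feature: number of bytes whose masked bits equal PATTERN."""
    counter = 0
    for byte in payload:
        if byte & MASK == PATTERN:       # bitwise AND + conditional increment
            counter += 1
    return counter

# Lookup table mapping the counter to normal (True) / anomalous (False);
# fast to evaluate and flexible to reshape for other feature spaces.
LUT = [False] * 256
for i in range(10, 40):                  # "normal" payloads match 10..39 times
    LUT[i] = True

def is_normal(payload):
    return LUT[min(count_matches(payload), 255)]

normal_payload = b"GET /index.html HTTP/1.1\r\nHost: example.org\r\n"
anomalous = bytes(64)                    # e.g. a zero-filled binary payload
```

The mask here selects bytes in the 0x60-0x7F range (roughly lowercase ASCII), so text-like payloads fall in the table's "normal" band while the zero-filled payload does not; a real deployment would learn the table from observed traffic.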


B. M. Bhatti and N. Sami, “Building Adaptive Defense Against Cybercrimes Using Real-Time Data Mining,” Anti-Cybercrime (ICACC), 2015 First International Conference on, Riyadh, 2015, pp. 1-5. doi: 10.1109/Anti-Cybercrime.2015.7351949
Abstract: In today's fast-changing world, cybercrimes are growing at a perturbing pace. By their very definition, cybercrimes are engendered by capitalizing on threats and the exploitation of vulnerabilities. However, recent history reveals that such crimes often come with surprises and seldom follow the trends. This puts defense systems behind in the race, because of their inability to identify new patterns of cybercrime and to ameliorate to the required levels of security. This paper visualizes the empowerment of security systems through real-time data mining, by virtue of which these systems will be able to dynamically identify patterns of cybercrimes. This will help those security systems step up their defense capabilities while adapting to the levels required by newly germinating patterns. To remain within the scope of this paper, the application of this approach is discussed in the context of selected cybercrime scenarios.
Keywords: computer crime; data mining; perturbation techniques; adaptive cybercrime defense system; real-time data mining; security systems; vulnerability exploitation; Computer crime; Data mining; Engines; Internet; Intrusion detection; Real-time systems; Cybercrime; Cybercrime Pattern Recognition (CPR); Information Security; Real-time Data Mining Engine (RTDME); Real-time Security Protocol (RTSP); Realtime Data Mining; TPAC (Threat Prevention & Adaptation Code); Threat Prevention and Response Algorithm Generator (TPRAG) (ID#: 16-11199)


K. V. Muhongya and M. S. Maharaj, “Visualising and Analysing Online Social Networks,” Computing, Communication and Security (ICCCS), 2015 International Conference on, Pamplemousses, 2015, pp. 1-6.  doi: 10.1109/CCCS.2015.7374121
Abstract: The immense popularity of online social networks generates sufficient data that, when carefully analysed, can reveal unexpected realities. People are using these networks to establish relationships in the form of friendships. Based on the data collected, students' networks were extracted, visualized and analysed to reflect the connections among South African communities using Gephi. The analysis revealed slow progress in connections among communities from different ethnic groups in South Africa. This was facilitated through analysis of data collected with Netvizz, and through the use of Gephi to visualize social media network structures.
Keywords: data visualisation; social networking (online); Gephi; South African communities; analysing online social networks; student network; visualising online social networks; visualize social media network structures; Business; Data visualization; Facebook; Image color analysis; Joining processes; Media; Gephi; Online social network; betweeness centrality; closeness centrality; graph; race; visualization (ID#: 16-11200)


W. A. R. d. Souza and A. Tomlinson, “SMM Revolutions,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1466-1472. doi: 10.1109/HPCC-CSS-ICESS.2015.278
Abstract: The System Management Mode (SMM) is a highly privileged processor operating mode in x86 platforms. Its goal is to perform system management functions, such as hardware control and power management, and to this end it commands powerful resources. Moreover, its executive software executes unnoticed by any other component in the system, including operating systems and hypervisors. For these reasons, SMM has been exploited in the past to facilitate attacks and misuse, or alternatively to build security tools that capitalise on its resources. In this paper, we discuss how the use of the SMM has been contributing to the arms race between system attackers and defenders. We analyse the main published work on attacks, misuse and the implementation of security tools in the SMM, and how the SMM has been modified in response to those issues. Finally, we discuss how Intel Software Guard Extensions (SGX) technology, a sort of “hypervisor in processor”, presents a possible answer to the issue of using SMM for security purposes.
Keywords: operating systems (computers); security of data; virtualisation; Intel Software Guard Extensions technology; SGX technology; SMM; hardware control; hypervisor; hypervisors; operating systems; power management; processor operating mode; system attackers; system defenders; system management mode; Hardware; Operating systems; Process control; Registers; Security; Virtual machine monitors; PC architecture; SGX; SMM; security; virtualization (ID#: 16-11201)


S. Pietrowicz, B. Falchuk, A. Kolarov and A. Naidu, “Web-Based Smart Grid Network Analytics Framework,” Information Reuse and Integration (IRI), 2015 IEEE International Conference on, San Francisco, CA, 2015, pp. 496-501. doi: 10.1109/IRI.2015.82
Abstract: As utilities across the globe continue to deploy Smart Grid technology, there is an immediate and growing need for analytics, diagnostics and forensics tools akin to those commonly employed in enterprise IP networks to provide visibility and situational awareness into the operation, security and performance of Smart Energy Networks. Large-scale Smart Grid deployments have raced ahead of mature management tools, leaving gaps and challenges for operators and asset owners. Proprietary Smart Grid solutions have added to the challenge. This paper reports on the research and development of a new vendor-neutral, packet-based, network analytics tool called MeshView that abstracts information about system operation from low-level packet detail and visualizes endpoint and network behavior of wireless Advanced Metering Infrastructure, Distribution Automation, and SCADA field networks. Using real utility use cases, we report on the challenges and resulting solutions in the software design, development and Web usability of the framework, which is currently in use by several utilities.
Keywords: Internet; power engineering computing; smart power grids; software engineering; Internet protocols; MeshView tool; SCADA field network; Web usability; Web-based smart grid network analytics framework; distribution automation; enterprise IP networks; smart energy networks; smart grid technology; software design; software development; wireless advanced metering infrastructure; Conferences; Advanced Meter Infrastructure; Big data visualization; Cybersecurity; Field Area Networks; Network Analytics; Smart Energy; Smart Grid; System scalability; Web management (ID#: 16-11202)


M. Phillips, B. M. Knoppers and Y. Joly, “Seeking a ‘Race to the Top’ in Genomic Cloud Privacy?,” Security and Privacy Workshops (SPW), 2015 IEEE, San Jose, CA, 2015, pp. 65-69. doi: 10.1109/SPW.2015.26
Abstract: The relationship between data-privacy lawmakers and genomics researchers may have gotten off on the wrong foot. Critics of protectionism in the current laws advocate that we abandon the existing paradigm, which was formulated in an entirely different medical research context. Genomic research no longer requires physically risky interventions that directly affect participants' integrity. But to simply strip away these protections for the benefit of research projects neglects not only new concerns about data privacy, but also broader interests that research participants have in the research process. Protectionism and privacy should not be treated as unwelcome anachronisms. We should instead seek to develop an updated, positive framework for data privacy and participant participation and collective autonomy. It is beginning to become possible to imagine this new framework, by reflecting on new developments in genomics and bioinformatics, such as secure remote processing, data commons, and health data co-operatives.
Keywords: bioinformatics; cloud computing; data privacy; genomics; security of data; collective autonomy; data commons; genomic cloud privacy; genomics research; health data cooperatives; medical research; participant participation; protectionism; secure remote processing; Bioinformatics; Cloud computing; Context; Data privacy; Genomics; Law; Privacy; data protection; health data co-operatives; privacy (ID#: 16-11203)


N. Koutsopoulos, M. Northover, T. Felden and M. Wittiger, “Advancing Data Race Investigation and Classification Through Visualization,” Software Visualization (VISSOFT), 2015 IEEE 3rd Working Conference on, Bremen, 2015, pp. 200-204. doi: 10.1109/VISSOFT.2015.7332437
Abstract: Data races in multi-threaded programs are a common source of serious software failures. Their undefined behavior may lead to intermittent failures with unforeseeable, and in embedded systems, even life-threatening consequences. To mitigate these risks, various detection tools have been created to help identify potential data races. However, these tools produce thousands of data race warnings, often in text-based format, which makes the manual assessment process slow and error-prone. Through visualization, we aim to speed up the data race assessment process by reducing the amount of information to be investigated, and to provide a versatile interface that quality assurance engineers can use to investigate data race warnings. The ultimate goal of our integrated software suite, called RaceView, is to improve the usability of the data race information to such an extent that the elimination of data races can be incorporated into the regular software development process.
Keywords: data visualisation; multi-threading; pattern classification; program diagnostics; software quality; RaceView; data race assessment process; data race classification; data race elimination; data race information usability; data race warnings; integrated software suite; interface; multithreaded programs; quality assurance engineers; software development process; visualization; Data visualization; Instruction sets; Manuals; Merging; Navigation; Radiation detectors; data race detection; graph navigation; graph visualization; static analysis; user interface (ID#: 16-11204)


J. Schimmel, K. Molitorisz, A. Jannesari and W. F. Tichy, “Combining Unit Tests for Data Race Detection,” Automation of Software Test (AST), 2015 IEEE/ACM 10th International Workshop on, Florence, 2015, pp. 43-47. doi: 10.1109/AST.2015.16
Abstract: Multithreaded programs are subject to data races. Data race detectors find such defects by static or dynamic inspection of the program. Current race detectors suffer from high numbers of false positives, slowdown, and false negatives. Because of these disadvantages, recent approaches reduce the false positive rate and the runtime overhead by applying race detection only to a subset of the whole program. To achieve this, they make use of parallel test cases, but this has other disadvantages: parallel test cases have to be engineered manually, cover code regions that are affected by data races, and execute with input data that provoke the data races. This paper introduces an approach that does not need additional parallel test cases to be engineered. Instead, we take conventional unit tests as input and automatically generate parallel test cases, execution contexts and input data. As can be observed, most real-world software projects nowadays have high test coverage, so a large information base for our approach is already available. We analyze and reuse input data, initialization code, and mock objects that conventional unit tests already contain. With this information, no further oracles are necessary for generating parallel test cases; instead, we reuse the knowledge that is already implicitly available in conventional unit tests. We implemented our parallel test case generation strategy in a tool called TestMerge. To evaluate the generated test cases, we used them as input for the dynamic race detector CHESS, which explores all possible thread interleavings for a given program. We evaluated TestMerge using six sample programs and one industrial application with a test coverage of over 94%. For this benchmark, TestMerge identified all previously known data races and even revealed previously unknown ones.
Keywords: multi-threading; program testing; CHESS; TestMerge; data race detectors; dynamic race detector; multithreaded programs; parallel test case generation; thread interleavings; unit tests; Computer bugs; Context; Customer relationship management; Detectors; Schedules; Software; Testing; Data Races; Multicore Software Engineering; Unit Testing (ID#: 16-11205)


S. W. Park, O. K. Ha and Y. K. Jun, “A Loop Filtering Technique for Reducing Time Overhead of Dynamic Data Race Detection,” 2015 8th International Conference on Database Theory and Application (DTA), Jeju, 2015, pp. 29-32. doi: 10.1109/DTA.2015.18
Abstract: Data races are the hardest defects to handle in multithreaded programs due to the nondeterministic interleaving of concurrent threads. The main drawback of dynamic data race detection is the additional overhead of monitoring program execution and analyzing every conflicting memory operation, so it is important to reduce these overheads when debugging data races. This paper presents a loop filtering technique that excludes repeatedly executed loop regions from the monitoring targets in multithreaded programs. Empirical results using multithreaded programs show that the filtering technique reduces the average runtime overhead to 60% of that of dynamic data race detection.
Keywords: concurrency (computers); monitoring; multi-threading; program debugging; concurrent threads; data races debugging; dynamic data race detection; dynamic techniques; loop filtering technique; monitoring program execution; multithread programs; nondeterministic interleaving; Databases; Filtering; Monitoring; Performance analysis; Runtime; Multithread programs; data race detection; dynamic analysis; filtering; runtime overheads (ID#: 16-11207)
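The filtering idea, dropping monitoring events from repeated loop iterations before race analysis, can be sketched as a pre-pass over an access trace. The event tuple layout below is an assumption for illustration, not the paper's instrumentation format:

```python
def filter_loop_events(events, keep_iterations=1):
    """Keep only accesses from the first iteration(s) of each loop.
    Later iterations repeat the same access pattern, so dropping them
    cuts the monitoring overhead without losing the pattern itself."""
    kept = []
    for thread_id, address, loop_id, iteration in events:
        # loop_id is None for accesses outside any loop: always keep those
        if loop_id is None or iteration < keep_iterations:
            kept.append((thread_id, address, loop_id, iteration))
    return kept

trace = [(0, 0x10, "L1", 0), (0, 0x14, "L1", 1), (0, 0x18, "L1", 2),
         (1, 0x10, None, 0)]
filtered = filter_loop_events(trace)
```

Only the first iteration of loop `L1` and the loop-free access survive, so the race analyzer inspects two events instead of four.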


C. Jia, C. Yang and W. K. Chan, “Architecturing Dynamic Data Race Detection as a Cloud-Based Service,” Web Services (ICWS), 2015 IEEE International Conference on, New York, NY, 2015, pp. 345-352. doi: 10.1109/ICWS.2015.54
Abstract: A web-based service consists of layers of programs (components) in the technology stack. Analyzing the program executions of these components separately allows service vendors to acquire insights into specific program behaviors or problems in these components, thereby pinpointing areas of improvement in the services they offer. Many existing testing-as-a-service approaches take an orchestration approach that splits the components under test and the analysis services into a set of distributed modules communicating through message passing. In this paper, we present the first work providing dynamic analysis as a service using a virtual machine (VM)-based approach to dynamic data race detection. Such detection needs to track a huge number of events performed by each thread of a program execution of a service component, making message passing unsuitable for transmitting these events individually. In our model, we instruct VMs to perform holistic dynamic race detection on service components and transfer only the detection results to our service selection component. Guided by these results, the service selection component selects VM instances to fulfill subsequent analysis requests. The experimental results show that our model is feasible.
Keywords: Web services; cloud computing; program diagnostics; virtual machines; VM-based approach; Web-based service; cloud-based service; dynamic analysis-as-a-service; dynamic data race detection; message-based approach; orchestration approach; program behavior; program execution analysis; program execution thread; virtual machine; Analytical models; Clocks; Detectors; Instruction sets; Optimization; Performance analysis; cloud-based usage model; data race detection; dynamic analysis; service engineering; service selection strategy (ID#: 16-11208)


K. Shankari and N. G. B. Amma, “Clasp: Detecting Potential Deadlocks and Its Removal by Iterative Method,” 2015 Online International Conference on Green Engineering and Technologies (IC-GET), Coimbatore, 2015, pp. 1-5. doi: 10.1109/GET.2015.7453824
Abstract: A multithreaded program may deadlock at runtime, preventing it from producing the required output, so deadlocks must be eliminated for the computation to succeed. The proposed system, Clasp, actively eliminates removable lock dependencies in order to localize potential deadlocks. It works iteratively: it finds the lock dependencies, divides them into partitions, validates the thread-specific partitions, and then searches the remaining dependencies again to eliminate them, confirming each suspected deadlock before reporting it. Bugs in the multithreaded program can thus be traced; when a data race is identified, it is isolated and then removed with the help of a scheduler. Although this process can increase the execution time of the code, iterating it frees the code from bugs and deadlocks. The approach can be applied to real-world problems to detect the conditions that cause a deadlock.
Keywords: concurrency control; iterative methods; multi-threading; program debugging; system recovery; Clasp; bugs; code execution time; data race; deadlock removal; iterative method; multithreaded program; potential deadlock detection; thread specific partitions; Algorithm design and analysis; Clocks; Computer bugs; Heuristic algorithms; Instruction sets; Synchronization; System recovery; data races; deadlock; lock dependencies; multi threaded code (ID#: 16-11209)


J. R. Wilcox, P. Finch, C. Flanagan and S. N. Freund, “Array Shadow State Compression for Precise Dynamic Race Detection (T),” Automated Software Engineering (ASE), 2015 30th IEEE/ACM International Conference on, Lincoln, NE, 2015, pp. 155-165. doi: 10.1109/ASE.2015.19
Abstract: Precise dynamic race detectors incur significant time and space overheads, particularly for array-intensive programs, due to the need to store and manipulate analysis (or shadow) state for every element of every array. This paper presents SlimState, a precise dynamic race detector that uses an adaptive, online algorithm to optimize array shadow state representations. SlimState is based on the insight that common array access patterns lead to analogous patterns in array shadow state, enabling optimized, space efficient representations of array shadow state with no loss in precision. We have implemented SlimState for Java. Experiments on a variety of benchmarks show that array shadow compression reduces the space and time overhead of race detection by 27% and 9%, respectively. It is particularly effective for array-intensive programs, reducing space and time overheads by 35% and 17%, respectively, on these programs.
Keywords: Java; program testing; system monitoring; Java; SLIMSTATE; adaptive online algorithm; analogous patterns; array access patterns; array shadow state compression; array shadow state representations; array-intensive programs; precise dynamic race detection; space efficient representations; space overhead; time overhead; Arrays; Clocks; Detectors; Heuristic algorithms; Instruction sets; Java; Synchronization; concurrency; data race detection; dynamic analysis (ID#: 16-11210)
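The paper's insight, that uniformly accessed arrays need only one shared shadow cell, can be sketched as an adaptive representation that expands lazily. This is a deliberate simplification of SlimState's actual online algorithm:

```python
class ArrayShadow:
    """Compressed shadow state for one array: a single shared cell
    while every element has identical analysis state, expanded to a
    per-element list only when some element's state diverges."""
    def __init__(self, length, initial_state):
        self.length = length
        self.uniform = initial_state  # shared state while compressed
        self.cells = None             # per-element states after expansion

    def get(self, index):
        return self.uniform if self.cells is None else self.cells[index]

    def set(self, index, state):
        if self.cells is None:
            if state == self.uniform:
                return  # still uniform: O(1) space, no precision loss
            # First divergent update: materialize per-element cells
            self.cells = [self.uniform] * self.length
        self.cells[index] = state

shadow = ArrayShadow(4, "epoch@t0")
shadow.set(0, "epoch@t0")      # uniform write: stays compressed
shadow.set(2, "epoch@t1")      # divergent write: expands
```

Common patterns (e.g., one thread initializing a whole array) keep the shadow in its one-cell form, which is where the reported space savings come from.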


A. S. Rajam, L. E. Campostrini, J. M. M. Caamaño and P. Clauss, “Speculative Runtime Parallelization of Loop Nests: Towards Greater Scope and Efficiency,” Parallel and Distributed Processing Symposium Workshop (IPDPSW), 2015 IEEE International, Hyderabad, 2015, pp. 245-254. doi: 10.1109/IPDPSW.2015.10
Abstract: Runtime loop optimization and speculative execution are becoming more and more prominent to leverage performance in the current multi-core and many-core era. However, a wider and more efficient use of such techniques is mainly hampered by the prohibitive time overhead induced by centralized data race detection, dynamic code behavior modeling and code generation. Most of the existing Thread Level Speculation (TLS) systems rely on slicing the target loops into chunks, and trying to execute the chunks in parallel with the help of a centralized performance-penalizing verification module that takes care of data races. Due to the lack of a data dependence model, these speculative systems are not capable of doing advanced transformations and, more importantly, the chances of rollback are high. The polytope model is a well-known mathematical model to analyze and optimize loop nests. Current state-of-the-art tools limit the application of the polytope model to static control codes. Thus, none of these tools can handle codes with while loops, indirect memory accesses, or pointers. Apollo (Automatic Polyhedral Loop Optimizer) is a framework that goes one step beyond and applies the polytope model dynamically by using TLS. Apollo can predict, at runtime, whether the codes are behaving linearly or not, and applies polyhedral transformations on-the-fly. This paper presents a novel system, which extends the capability of Apollo to handle codes whose memory accesses are not necessarily linear. More generally, this approach expands the applicability of the polytope model at runtime to a wider class of codes.
Keywords: multiprocessing systems; optimisation; parallel programming; program compilers; program verification; Apollo; TLS; automatic polyhedral loop optimizer; centralized data race detection; centralized performance-penalizing verification module; code generation; data dependence model; dynamic code behavior modeling; loop nests; many-core era; memory accesses; multicore era; polyhedral transformations; polytope model; prohibitive time overhead; runtime loop optimization; speculative execution; speculative runtime parallelization; static control codes; thread level speculation systems; Adaptation models; Analytical models; Mathematical model; Optimization; Predictive models; Runtime; Skeleton; Automatic parallelization; Polyhedral model; Thread level speculation; loop optimization; non affine accesses (ID#: 16-11211)
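The runtime prediction step described in the abstract, deciding whether observed memory accesses behave linearly so the polytope model can be applied speculatively, reduces in its simplest form to an affine fit over profiled addresses. This is a hypothetical, heavily simplified check, not Apollo's implementation:

```python
def accesses_are_linear(addresses):
    """Do the observed addresses of one loop fit addr = base + stride*i?
    If yes, a TLS system like Apollo can speculatively apply polyhedral
    transformations; if the prediction later fails, it must roll back."""
    if len(addresses) < 2:
        return True
    base = addresses[0]
    stride = addresses[1] - addresses[0]
    return all(addresses[i] == base + stride * i
               for i in range(len(addresses)))

linear = accesses_are_linear([0x100, 0x104, 0x108, 0x10C])    # stride 4
pointer_chase = accesses_are_linear([0x100, 0x104, 0x1F0, 0x88])
```

The paper's contribution is precisely the harder case: continuing to reason about accesses for which a check like this returns false.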


C. Segulja and T. S. Abdelrahman, “Clean: A Race Detector with Cleaner Semantics,” 2015 ACM/IEEE 42nd Annual International Symposium on Computer Architecture (ISCA), Portland, OR, 2015, pp. 401-413. doi: 10.1145/2749469.2750395
Abstract: Data races make parallel programs hard to understand. Precise race detection that stops an execution on first occurrence of a race addresses this problem, but it comes with significant overhead. In this work, we exploit the insight that precisely detecting only write-after-write (WAW) and read-after-write (RAW) races suffices to provide cleaner semantics for racy programs. We demonstrate that stopping an execution only when these races occur ensures that synchronization-free-regions appear to be executed in isolation and that their writes appear atomic. Additionally, the undetected racy executions can be given certain deterministic guarantees with efficient mechanisms. We present CLEAN, a system that precisely detects WAW and RAW races and deterministically orders synchronization. We demonstrate that the combination of these two relatively inexpensive mechanisms provides cleaner semantics for racy programs. We evaluate both software-only and hardware-supported CLEAN. The software-only CLEAN runs all Pthread benchmarks from the SPLASH-2 and PARSEC suites with an average 7.8x slowdown. The overhead of precise WAW and RAW detection (5.8x) constitutes the majority of this slowdown. Simple hardware extensions reduce the slowdown of CLEAN's race detection to on average 10.4% and never more than 46.7%.
Keywords: parallel programming; programming language semantics; synchronisation; CLEAN system; PARSEC; Pthread benchmarks; RAW races; SPLASH-2; WAW races; cleaner semantics; data races; deterministic guarantees; hardware-supported CLEAN; parallel programs; race detection; race detector; racy executions; racy programs; read-after-write races; software-only CLEAN; synchronization-free-regions; write-after-write races; Instruction sets; Switches; Synchronization (ID#: 16-11212)
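The restriction to write-after-write and read-after-write races can be illustrated with a small vector-clock checker: a read or write races only if it is not ordered after the last write to the same address, and write-after-read conflicts are deliberately ignored. This is an illustrative sketch of the WAW/RAW idea, not CLEAN's implementation:

```python
class WawRawDetector:
    """Flags only WAW and RAW races: an access that is not ordered
    (by synchronization) after the last write to the same address."""
    def __init__(self, nthreads):
        # One vector clock per thread; each thread starts its own epoch.
        self.vc = [[0] * nthreads for _ in range(nthreads)]
        for t in range(nthreads):
            self.vc[t][t] = 1
        self.last_write = {}  # address -> (writer thread, clock snapshot)

    def _ordered_before(self, snapshot, t):
        """True if the snapshotted write happens-before thread t's now."""
        return all(snapshot[i] <= self.vc[t][i] for i in range(len(snapshot)))

    def _races_with_last_write(self, t, addr):
        if addr not in self.last_write:
            return False
        writer, snapshot = self.last_write[addr]
        return writer != t and not self._ordered_before(snapshot, t)

    def read(self, t, addr):   # RAW check
        return self._races_with_last_write(t, addr)

    def write(self, t, addr):  # WAW check, then record the new last writer
        race = self._races_with_last_write(t, addr)
        self.last_write[addr] = (t, list(self.vc[t]))
        return race

    def sync(self, src, dst):
        """dst acquires src's clock, e.g. via an unlock/lock pair."""
        self.vc[dst] = [max(a, b) for a, b in zip(self.vc[dst], self.vc[src])]
        self.vc[dst][dst] += 1

det = WawRawDetector(2)
det.write(0, "x")
raw_race = det.read(1, "x")  # unsynchronized read of thread 0's write
```

Because write-after-read is never checked, this checker is cheaper than a fully precise detector while still guaranteeing the atomicity-style semantics the paper argues for.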


P. Wang, D. J. Dean and X. Gu, “Understanding Real World Data Corruptions in Cloud Systems,” Cloud Engineering (IC2E), 2015 IEEE International Conference on, Tempe, AZ, 2015, pp. 116-125. doi: 10.1109/IC2E.2015.41
Abstract: Big data processing is one of the killer applications for cloud systems. MapReduce systems such as Hadoop are the most popular big data processing platforms used in the cloud system. Data corruption is one of the most critical problems in cloud data processing, which not only has a serious impact on the integrity of individual application results but also affects the performance and availability of the whole data processing system. In this paper, we present a comprehensive study on 138 real world data corruption incidents reported in Hadoop bug repositories. We characterize those data corruption problems in four aspects: (1) what impact can data corruption have on the application and system? (2) how is data corruption detected? (3) what are the causes of the data corruption? and (4) what problems can occur while attempting to handle data corruption? Our study has made the following findings: (1) the impact of data corruption is not limited to data integrity, (2) existing data corruption detection schemes are quite insufficient: only 25% of data corruption problems are correctly reported, 42% are silent data corruption without any error message, and 21% receive imprecise error reports; we also found that the detection schemes raised false alarms in 12% of cases, (3) there are various causes of data corruption such as improper runtime checking, race conditions, inconsistent block states, improper network failure handling, and improper node crash handling, and (4) existing data corruption handling mechanisms (i.e., data replication, replica deletion, simple re-execution) make frequent mistakes including replicating corrupted data blocks, deleting uncorrupted data blocks, or causing undesirable resource hogging.
Keywords: cloud computing; data handling; Hadoop; MapReduce systems; big data processing; cloud data processing; cloud systems; data corruption; data corruption problems; data integrity; improper network failure handling; improper node crash handling; inconsistent block states; race conditions; real world data corruptions; runtime checking; Availability; Computer bugs; Data processing; Radiation detectors; Software; Yarn (ID#: 16-11213)


J. Zarei, M. M. Arefi and H. Hassani, “Bearing Fault Detection Based on Interval Type-2 Fuzzy Logic Systems for Support Vector Machines,” Modeling, Simulation, and Applied Optimization (ICMSAO), 2015 6th International Conference on, Istanbul, 2015, pp. 1-6. doi: 10.1109/ICMSAO.2015.7152214
Abstract: A method based on Interval Type-2 Fuzzy Logic Systems (IT2FLSs) for combining different Support Vector Machines (SVMs) for bearing fault detection is the main subject of this paper. For this purpose, an experimental setup was built to collect data samples of stator current phase a of an induction motor with healthy and defective bearings. The defective bearing has a 1-mm-diameter hole in its inner race, created by a spark. An Interval Type-2 Fuzzy Fusion Model (IT2FFM) that consists of two phases is presented and used to classify the testing data samples. A comparison among T1FFM, IT2FFM, SVMs, and Adaptive Neuro-Fuzzy Inference Systems (ANFIS) in classifying the testing data samples shows the effectiveness of the proposed IT2FFM.
Keywords: electrical engineering computing; fault diagnosis; fuzzy logic; fuzzy neural nets; fuzzy reasoning; fuzzy set theory; induction motors; machine bearings; mechanical engineering computing; pattern classification; stators; support vector machines; ANFIS; IT2FFM; Interval Type-2 Fuzzy Fusion Model; SVM; T1FFM; adaptive neuro fuzzy inference systems; bearing fault detection; defective bearing; healthy bearing; induction motor; inner race hole; interval type-2 fuzzy logic systems; size 1 mm; stator current phase; support vector machines; testing data sample classification; Accuracy; Fault detection; Fuzzy logic; Fuzzy sets; Kernel; Support vector machines; Testing; Bearing; Fault Detection; Support Vector Machines; Type-2 fuzzy logic system (ID#: 16-11214)


R. Z. Haddad, C. A. Lopez, J. Pons-Llinares, J. Antonino-Daviu and E. G. Strangas, “Outer Race Bearing Fault Detection in Induction Machines Using Stator Current Signals,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 801-808. doi: 10.1109/INDIN.2015.7281839
Abstract: This paper discusses the effect of the operating load as well as the suitability of combined startup and steady-state analysis for the detection of bearing faults in induction machines. Motor Current Signature Analysis and Linear Discriminant Analysis are used to detect and estimate the severity of an outer race bearing fault. The algorithm is based on the machine stator current signals instead of the conventional vibration signals, which has the advantages of simplicity and low cost of the necessary equipment. The machine stator current signals are analyzed during steady state and startup using the Fast Fourier Transform and the Short Time Fourier Transform. During steady-state operation, two main changes appear in the spectrum compared to the healthy case: firstly, new harmonics related to bearing faults are generated, and secondly, the amplitude of the grid harmonics changes with the degree of the fault. For startup signals, the energy of the current signal within a specific frequency band related to the bearing fault increases with the fault severity. Linear Discriminant Analysis classification is used to detect a bearing fault and estimate its severity for different loads, using the amplitudes of the grid harmonics as features for the classifier. Experimental data were collected from a 1.1 kW, 400 V, 50 Hz induction machine in healthy condition and with two severities of outer race bearing fault, at three different load levels: no load, 50% load, and 100% load.
Keywords: asynchronous machines; fast Fourier transforms; fault diagnosis; machine bearings; stators; bearing faults detection; fast Fourier transform; fault severity; grid harmonics amplitude; induction machines; linear discriminant analysis; machine stator current signals; motor current signature analysis; outer race bearing fault; short time Fourier transform; steady-state analysis; Fault detection; Harmonic analysis; Induction machines; Stators; Steady-state; Torque; Vibrations; Ball bearing; Fast Fourier Transform; Induction machine; Linear Discriminant Analysis; Outer race bearing fault; Short Time Fourier Transform (ID#: 16-11215)
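The spectral quantity the classifier feeds on, the amplitude of an individual grid harmonic in the stator current, can be computed with a single-bin DFT. This is a generic signal-processing sketch, not the authors' code:

```python
import math

def harmonic_amplitude(signal, fs, freq):
    """Amplitude of one spectral line of a sampled signal via a
    single-frequency DFT (Goertzel-style), e.g. a grid harmonic
    of the stator current sampled at fs Hz."""
    n = len(signal)
    re = sum(signal[k] * math.cos(2 * math.pi * freq * k / fs)
             for k in range(n))
    im = sum(signal[k] * math.sin(2 * math.pi * freq * k / fs)
             for k in range(n))
    return 2.0 * math.hypot(re, im) / n

# Sanity check: a 50 Hz unit-amplitude tone sampled at 1 kHz for 1 s.
fs = 1000
tone = [math.sin(2 * math.pi * 50 * k / fs) for k in range(fs)]
amp = harmonic_amplitude(tone, fs, 50)
```

Tracking such amplitudes across load levels yields exactly the kind of per-harmonic features the Linear Discriminant Analysis classifier in the paper is trained on.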


S. Saidi and Y. Falcone, “Dynamic Detection and Mitigation of DMA Races in MPSoCs,” Digital System Design (DSD), 2015 Euromicro Conference on, Funchal, 2015, pp. 267-270. doi: 10.1109/DSD.2015.77
Abstract: Explicitly managed memories have emerged as a good alternative in multicore processor design to reduce energy and performance costs. Memory transfers then rely on Direct Memory Access (DMA) engines, which provide hardware support for accelerating data transfers. However, programming explicit data transfers is very challenging for developers, who must manually orchestrate data movements through the memory hierarchy. This is in practice very error-prone and can easily lead to memory inconsistency. In this paper, we propose a runtime approach for monitoring DMA races. The monitor acts as a safeguard for programmers and is able to enforce at runtime a correct behavior w.r.t. the semantics of the program execution. We validate the approach using traces extracted from industrial benchmarks and executed on the multiprocessor system-on-chip platform STHORM. Our experiments demonstrate that the monitoring algorithm has a low overhead (less than 1.5 KB) of on-chip memory consumption and an overhead of less than 2% of additional execution time.
Keywords: multiprocessing systems; storage management; system-on-chip; DMA engines; DMA races monitoring; MPSoC; STHORM; accelerating data; data movements; data transfers; direct memory access engines; dynamic detection and mitigation; energy reduction; hardware support; memories management; memory hierarchy; memory inconsistency; memory transfers; monitoring algorithm; multicore processors design; multiprocessor system-on-chip platform; on-chip memory consumption; performance costs; program execution semantics; runtime approach; Benchmark testing; Memory management; Monitoring; Program processors; Runtime; System-on-chip (ID#: 16-11216)
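The core check such a monitor performs can be sketched in a few lines: two in-flight DMA transfers conflict when their address ranges overlap and at least one of them writes. The transfer tuple format below is an illustrative event encoding, not the STHORM API:

```python
def dma_races(transfers):
    """Report pairs of in-flight DMA transfers that conflict: their
    address ranges overlap and at least one of the pair is a write.
    Each transfer is (tag, start_address, length, is_write)."""
    races = []
    for i in range(len(transfers)):
        for j in range(i + 1, len(transfers)):
            tag1, s1, n1, w1 = transfers[i]
            tag2, s2, n2, w2 = transfers[j]
            overlap = s1 < s2 + n2 and s2 < s1 + n1  # half-open ranges
            if overlap and (w1 or w2):
                races.append((tag1, tag2))
    return races

conflicts = dma_races([("get_A", 0x100, 64, False),
                       ("put_B", 0x120, 64, True),
                       ("get_C", 0x400, 64, False)])
```

A runtime monitor would evaluate this check incrementally as transfers are issued and completed, rather than over a batch, which is how the paper keeps the overhead below 2%.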


P. Chatarasi and V. Sarkar, “Extending Polyhedral Model for Analysis and Transformation of OpenMP Programs,” 2015 International Conference on Parallel Architecture and Compilation (PACT), San Francisco, CA, 2015, pp. 490-491. doi: 10.1109/PACT.2015.57
Abstract: The polyhedral model is a powerful algebraic framework that has enabled significant advances in analysis and transformation of sequential affine (sub)programs, relative to traditional AST-based approaches. However, given the rapid growth of parallel software, there is a need for increased attention to using polyhedral compilation techniques to analyze and transform explicitly parallel programs. In our PACT'15 paper titled “Polyhedral Optimizations of Explicitly Parallel Programs” [1, 2], we addressed the problem of analyzing and transforming programs with explicit parallelism that satisfy the serial-elision property, i.e., the property that removal of all parallel constructs results in a sequential program that is a valid (albeit inefficient) implementation of the parallel program semantics. In this poster, we address the problem of analyzing and transforming more general OpenMP programs that do not satisfy the serial-elision property. Our contributions include the following: (1) An extension of the polyhedral model to represent input OpenMP programs, (2) Formalization of May Happen in Parallel (MHP) and Happens before (HB) relations in the extended model, (3) An approach for static detection of data races in OpenMP programs by generating race constraints that can be solved by an SMT solver such as Z3, and (4) An approach for transforming OpenMP programs.
Keywords: algebra; parallel programming; program compilers; AST-based approach; OpenMP programs; SMT solver; algebraic framework; parallel programs; parallel software; polyhedral compilation techniques; polyhedral model; sequential affine (sub)programs; serial-elision property; Analytical models; Instruction sets; Parallel architectures; Parallel processing; Schedules; Semantics (ID#: 16-11217)
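Contribution (3), encoding potential races as constraints to be discharged by a solver, can be illustrated for a single pair of affine accesses. Here an exhaustive search over a small iteration space stands in for the Z3 SMT solver used in the paper:

```python
def may_race(write_access, read_access, bounds):
    """Static race check for two affine array accesses a*i + b made by
    distinct iterations of a parallel loop: a race exists if some pair
    of iterations i != j touches the same array index. Brute force over
    the bounded iteration space stands in for an SMT query."""
    a1, b1 = write_access
    a2, b2 = read_access
    lo, hi = bounds
    for i in range(lo, hi):
        for j in range(lo, hi):
            if i != j and a1 * i + b1 == a2 * j + b2:
                return True  # a satisfying assignment = a witness race
    return False

# A[i] written and A[i] read by the same iteration only: no cross-
# iteration conflict. A[2*i] written vs A[j] read: iterations collide.
disjoint = may_race((1, 0), (1, 0), (0, 10))
colliding = may_race((2, 0), (1, 0), (0, 10))
```

An SMT solver answers the same satisfiability question symbolically, so it also handles parametric loop bounds that brute force cannot enumerate.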


J. Huang, Q. Luo and G. Rosu, “GPredict: Generic Predictive Concurrency Analysis,” 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, Florence, 2015, pp. 847-857. doi: 10.1109/ICSE.2015.96
Abstract: Predictive trace analysis (PTA) is an effective approach for detecting subtle bugs in concurrent programs. Existing PTA techniques, however, are typically based on ad hoc algorithms tailored to low-level errors such as data races or atomicity violations, and are not applicable to high-level properties such as “a resource must be authenticated before use” and “a collection cannot be modified when being iterated over”. In addition, most techniques assume as input a globally ordered trace of events, which is expensive to collect in practice as it requires synchronizing all threads. In this paper, we present GPredict: a new technique that realizes PTA for generic concurrency properties. Moreover, GPredict does not require a global trace but only the local traces of each thread, which incurs much less runtime overhead than existing techniques. Our key idea is to uniformly model violations of concurrency properties and the thread causality as constraints over events. With an existing SMT solver, GPredict is able to precisely predict property violations allowed by the causal model. Through our evaluation using both benchmarks and real world applications, we show that GPredict is effective in expressing and predicting generic property violations. Moreover, it reduces the runtime overhead of existing techniques by 54% on DaCapo benchmarks on average.
Keywords: concurrency control; program debugging; program diagnostics; DaCapo benchmarks; GPredict; PTA; SMT solver; concurrent programs; generic predictive concurrency analysis; local traces; predictive trace analysis; subtle bug detection; Concurrent computing; Java; Prediction algorithms; Predictive models; Runtime; Schedules; Syntactics (ID#: 16-11219)
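For a single observed trace, a high-level property of the kind GPredict targets, such as “a resource must be authenticated before use”, reduces to a simple scan; the paper's contribution is predicting violations in *other* feasible interleavings, which the scan below does not attempt. The event format is illustrative:

```python
def check_auth_before_use(trace):
    """Scan one event trace for violations of the property
    'a resource must be authenticated before use'.
    Each event is (operation, resource); returns violating indices."""
    authenticated = set()
    violations = []
    for index, (op, resource) in enumerate(trace):
        if op == "auth":
            authenticated.add(resource)
        elif op == "use" and resource not in authenticated:
            violations.append(index)
    return violations

violations = check_auth_before_use(
    [("use", "db"), ("auth", "db"), ("use", "db")])
```

GPredict generalizes this by expressing both the property and the per-thread causal ordering as constraints, so an SMT solver can find a reordering of events in which the `use` precedes the `auth`.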


R. Pang, A. Baretto, H. Kautz and J. Luo, “Monitoring Adolescent Alcohol Use via Multimodal Analysis in Social Multimedia,” Big Data (Big Data), 2015 IEEE International Conference on, Santa Clara, CA, 2015, pp. 1509-1518. doi: 10.1109/BigData.2015.7363914
Abstract: Underage drinking or adolescent alcohol use is a major public health problem that causes more than 4,300 annual deaths. Traditional methods for monitoring adolescent alcohol consumption are based on surveys, which have many limitations and are difficult to scale. The main limitations include 1) respondents may not provide accurate, honest answers, 2) surveys with closed-ended questions may have a lower validity rate than other question types, 3) respondents who choose to respond may be different from those who chose not to respond, thus creating bias, 4) cost, 5) small sample size, and 6) lack of temporal sensitivity. We propose a novel approach to monitoring underage alcohol use by analyzing Instagram users' contents in order to overcome many of the limitations of surveys. First, Instagram users' demographics (such as age, gender and race) are determined by analyzing their selfie photos with automatic face detection and face analysis techniques supplied by a state-of-the-art face processing toolkit called Face++. Next, the tags associated with the pictures uploaded by users are used to identify the posts related to alcohol consumption and discover the existence of drinking patterns in terms of time, frequency and location. To that end, we have built an extensive dictionary of drinking activities based on internet slang and major alcohol brands. Finally, we measure the penetration of alcohol brands among underage users within Instagram by analyzing the followers of such brands in order to evaluate to what extent they might influence their followers' drinking behaviors. Experimental results using a large number of Instagram users have revealed several findings that are consistent with those of the conventional surveys, thus partially validating the proposed approach. Moreover, new insights are obtained that may help develop effective intervention. We believe that this approach can be effectively applied to other domains of public health.
Keywords: face recognition; medical computing; multimedia computing; social networking (online); Face++; Instagram user content analysis; Instagram user demographics; Internet slang; adolescent alcohol consumption monitoring; adolescent alcohol monitoring; automatic face detection; drinking behaviors; face analysis techniques; face processing toolkit; major alcohol brands; multimodal analysis; public health problem; selfie photo analysis; social multimedia; temporal sensitivity; underage alcohol usage monitoring; underage drinking; Big data; Conferences; Decision support systems; Dictionaries; Handheld computers; Media; Multimedia communication; data mining; social media; social multimedia; underage drinking public health (ID#: 16-11220)


G. A. Skrimpas et al., “Detection of Generator Bearing Inner Race Creep by Means of Vibration and Temperature Analysis,” Diagnostics for Electrical Machines, Power Electronics and Drives (SDEMPED), 2015 IEEE 10th International Symposium on, Guarda, 2015, pp. 303-309. doi: 10.1109/DEMPED.2015.7303706
Abstract: Vibration and temperature analysis are the two dominating condition monitoring techniques applied to fault detection of bearing failures in wind turbine generators. Relative movement between the bearing inner ring and the generator axle is one of the most severe failure modes in terms of secondary damage and fault development. Bearing creep can be detected reliably by continuously trending the amplitudes of the vibration running-speed harmonics and the absolute temperature values. To decrease the number of condition indicators that need to be assessed, it is proposed to use a weighted average descriptor calculated from the 3rd through 6th harmonic orders. Two cases of different bearing creep severity are presented, showing the consistency of the combined use of vibration and temperature data. In general, vibration monitoring reveals early signs of abnormality several months prior to any permanent temperature increase, depending on the fault development.
Keywords: condition monitoring; creep; electric generators; failure analysis; fault diagnosis; harmonic analysis; machine bearings; thermal analysis; vibrations; bearing failures; bearing inner ring; condition monitoring techniques; fault detection; generator axle; generator bearing inner race creep; temperature absolute values; temperature analysis; vibration analysis; vibration running speed harmonic; weighted average descriptor; wind turbine generators; Creep; Generators; Harmonic analysis; Market research; Shafts; Vibrations; Wind turbines; Condition monitoring; angular resampling; bearing creep; rotational looseness; vibration analysis (ID#: 16-11221)
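The proposed condition indicator, a weighted average over the 3rd through 6th running-speed harmonic amplitudes, is straightforward to compute once the spectrum is available. The weights below are illustrative placeholders, not the paper's values:

```python
def creep_descriptor(harmonic_amps, weights=(1.0, 1.0, 1.0, 1.0)):
    """Collapse the 3rd-6th running-speed harmonic amplitudes into one
    trending indicator via a weighted average, reducing the number of
    condition indicators an analyst has to watch."""
    amps = [harmonic_amps[order] for order in (3, 4, 5, 6)]
    return sum(w * a for w, a in zip(weights, amps)) / sum(weights)

# Hypothetical harmonic amplitudes (order -> amplitude) for one sample:
indicator = creep_descriptor({3: 0.2, 4: 0.4, 5: 0.1, 6: 0.3})
```

Trending this single value over months, alongside absolute temperature, is what the paper relies on to flag creep before permanent temperature rises appear.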


M. Charles, T. N. Miano, X. Zhang, L. E. Barnes and J. M. Lobo, “Monitoring Quality Indicators for Screening Colonoscopies,” Systems and Information Engineering Design Symposium (SIEDS), 2015, Charlottesville, VA, 2015, pp. 171-175. doi: 10.1109/SIEDS.2015.7116968
Abstract: The detection rate of adenomas in screening colonoscopies is an important quality indicator for endoscopists. Successful detection of adenomas is linked to reduced cancer incidence and mortality. This study focuses on evaluating the performance of endoscopists on adenoma detection rate (ADR), polyp detection rate (PDR), and scope withdrawal time. The substitution of PDR for ADR has been suggested due to the reliance of the ADR calculation on pathology reports. We compare these metrics to established clinical guidelines and to the performance of other individual endoscopists. Our analysis (n = 2730 screening colonoscopies) found variation in ADR for 14 endoscopists, ranging from 0.20 to 0.41. PDR ranged from 0.38 to 0.62. Controlling for age, sex, race, withdrawal time, and the presence of a trainee endoscopist accounted for 34% of the variation in PDR but failed to account for any variation in ADR. The Pearson correlation between PDR and ADR is 0.82. These results suggest that PDR has significant value as a quality indicator. The reported variation in detection rates after controlling for case mix signals the need for greater scrutiny of individual endoscopist skill. Understanding the root cause of this variation could potentially lead to better patient outcomes.
Keywords: cancer; endoscopes; medical image processing; object detection; ADR; PDR; Pearson correlation; adenomas detection rate; cancer incidence; cancer mortality; clinical guidelines; endoscopists; pathology reports; polyp detection rate; quality indicator monitoring; screening colonoscopies; Cancer; Colonoscopy; Endoscopes; Guidelines; Logistics; Measurement; Predictive models; Electronic Medical Records; Health Data; Machine Learning; Physician Performance (ID#: 16-11222)


B. Bayart, A. Vartanian, P. Haefner and J. Ovtcharova, “TechViz XL Helps KITs Formula Student Car ‘Become Alive,’” 2015 IEEE Virtual Reality (VR), Arles, 2015, pp. 395-396. doi: 10.1109/VR.2015.7223462
Abstract: TechViz has been a supporter of Formula Student at KIT for several years, reflecting the company's long-term commitment to enhance engineering and education by providing students with powerful VR system software to connect curriculum to real-world applications. Incorporating an immersive visualisation and interaction environment into Formula Student vehicle design has proven to deliver race-day success by helping to detect faults and optimise the product life cycle. The TechViz LESC system helps to improve the car design despite tight time limits, thanks to direct visualisation of the CAD mockup in the VR system and its ease of use for non-VR experts.
Keywords: automobiles; computer aided instruction; data visualisation; graphical user interfaces; human computer interaction; virtual reality; CAD mockup; KITs formula student car; TechViz LESC system; TechViz XL; VR system software; fault detection; formula student vehicle design; immersive visualisation; interaction environment; product life cycle optimization; virtual reality system; Companies; Hardware; Solid modeling; Three-dimensional displays; Vehicles; Virtual reality; Visualization (ID#: 16-11223)


T. Sim and L. Zhang, “Controllable Face Privacy,” Automatic Face and Gesture Recognition (FG), 2015 11th IEEE International Conference and Workshops on, Ljubljana, 2015, pp. 1-8. doi: 10.1109/FG.2015.7285018
Abstract: We present the novel concept of Controllable Face Privacy. Existing methods that alter face images to conceal identity inadvertently also destroy other facial attributes such as gender, race or age. This all-or-nothing approach is too harsh. Instead, we propose a flexible method that can independently control the amount of identity alteration while keeping other facial attributes unchanged. To achieve this flexibility, we apply a subspace decomposition onto our face encoding scheme, effectively decoupling facial attributes such as gender, race, age, and identity into mutually orthogonal subspaces, which in turn enables independent control of these attributes. Our method is thus useful for nuanced face de-identification, in which only facial identity is altered, but others, such as gender, race, and age, are retained. These altered face images protect identity privacy, and yet allow other computer vision analyses, such as gender detection, to proceed unimpeded. Controllable Face Privacy is therefore useful for reaping the benefits of surveillance cameras while preventing privacy abuse. Our proposal also permits privacy to be applied not just to identity, but to other facial attributes as well. Furthermore, privacy-protection mechanisms, such as k-anonymity, L-diversity, and t-closeness, may be readily incorporated into our method. Extensive experiments with commercial facial analysis software show that our alteration method is indeed effective.
Keywords: computer vision; data privacy; face recognition; image coding; L-diversity mechanism; computer vision analysis; controllable face privacy concept; face de-identification; face encoding scheme; face images; facial attributes; identity alteration control; k-anonymity mechanism; mutually orthogonal subspaces; privacy-protection mechanisms; subspace decomposition; t-closeness mechanism; Cameras; Detectors; Face; Privacy; Shape; Training; Visual analytics (ID#: 16-11224)



Dynamical Systems 2015





Research into dynamical systems cited here focuses on non-linear and chaotic dynamical systems and on proving abstractions of dynamical systems through numerical simulations. Many of the applications studied are cyber-physical systems and are relevant to the Science of Security hard problems of resiliency, predictive metrics, and composability. These works were presented in 2015.

R. K. Yedavalli, “Security and Vulnerability in the Stabilization of Networks of Controlled Dynamical Systems via Robustness Framework,” 2015 American Control Conference (ACC), Chicago, IL, 2015, pp. 5396-5401. doi: 10.1109/ACC.2015.7172183
Abstract: This paper addresses the issues of security and vulnerability of links in the stabilization of networked control systems from a robustness viewpoint. As is done in recent research, we view network security as a robustness issue. However, in this paper we shed considerable new light on this topic and offer a new and differing perspective. We argue that the ‘robustness’ aspect is a common theme related to both vulnerability and security. This paper puts forth the viewpoint that vulnerability of a networked system is a manifestation of the combination of two types of robustness, namely ‘qualitative robustness’ and ‘quantitative robustness’. In other words, the entire robustness concept is treated as a combination of qualitative robustness and quantitative robustness, wherein qualitative robustness is linked to the system’s nature of interactions and interconnections, i.e. the system’s structure, while quantitative robustness is linked to the system dynamics. Put another way, qualitative robustness is independent of magnitudes and depends only on the signs of the system dynamics matrix, whereas quantitative robustness is purely a function of the quantitative information (both magnitudes and signs) of the entries of the system dynamics matrix. In that sense, these two concepts are inter-related, each influencing and complementing the other. Applying these notions to networked control systems represented by ‘dynamical structure functions’, it is shown that any specific dynamical structure function originating from a state space representation is a function of both qualitative and quantitative robustness. In other words, vulnerability of links in that network is determined by both the signs and magnitudes of the state space matrix of that dynamical structure function.
Thus the notion in the recent literature that ‘vulnerability depends on the system structure, not the dynamics, and the robustness depends on the dynamics, and not on the system structure’ is disputed, and clear justification for our viewpoint is provided by the newly introduced notions of ‘qualitative robustness’ and ‘quantitative robustness’. This paper then presents a few specific dynamical structure functions that possess a large number of non-vulnerable links, which is desirable for a secure network. The proposed concepts are illustrated with many useful examples.
Keywords: matrix algebra; networked control systems; robust control; security; controlled dynamical systems; dynamical structure functions; link security; link vulnerability; network security; networked control system stabilization; qualitative robustness; quantitative robustness; robustness framework; state space matrix; state space representation; system dynamics; Indexes; Jacobian matrices; Robustness; Security; Stability criteria; Transfer functions (ID#: 16-10196)


Yingjun Yuan, Zhitao Huang, Fenghua Wang and Xiang Wang, “Radio Specific Emitter Identification Based on Nonlinear Characteristics of Signal,” Communications and Networking (BlackSeaCom), 2015 IEEE International Black Sea Conference on, Constanta, 2015, pp. 77-81. doi: 10.1109/BlackSeaCom.2015.7185090
Abstract: Radio Specific Emitter Identification (SEI) is the technique which identifies an individual radio emitter based on the received signals’ specific properties, called the signals’ Radio Frequency Fingerprint (RFF). SEI is very significant for improving the security of wireless networks. A novel SEI method which treats the emitter as a nonlinear dynamical system is proposed in this paper. The method works based on the signal’s nonlinear characteristics, which result from unintentional and unavoidable physical-layer imperfections. The reconstructed phase space (RPS) is used as the tool for analyzing the nonlinearity. Global characteristics of the RPS and the state-change characteristics of points in the RPS are extracted to form the RFF. To evaluate the availability of the RFF, signals from four wireless network cards are collected by a signal acquisition system. The proposed RFF’s discrimination capabilities are visually analyzed using the boxplot. The results of visual analysis and classification demonstrate that this method is effective.
Keywords: nonlinear dynamical systems; radio networks; signal reconstruction; telecommunication security; nonlinear dynamical system; nonlinear signal characteristics; phase space reconstruction; radio specific emitter identification; signal nonlinear characteristics; wireless network security; Conferences; Feature extraction; Fingerprint recognition; Nonlinear dynamical systems; Transient analysis; Visualization; Wireless networks; Specific emitter identification; nonlinearity; phase space; radio frequency fingerprint (ID#: 16-10197) 
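A reconstructed phase space of the kind this paper analyzes is conventionally built by delay-coordinate embedding of a scalar signal. A generic sketch only; the embedding dimension m and delay tau below are illustrative choices, not the paper's settings:

```python
# Delay-coordinate phase-space reconstruction: point i is the vector
# (s[i], s[i + tau], ..., s[i + (m - 1) * tau]).
def reconstruct_phase_space(signal, m=3, tau=2):
    n = len(signal) - (m - 1) * tau
    return [[signal[i + j * tau] for j in range(m)] for i in range(n)]

# Example on a toy ramp signal; a real RFF would embed received RF samples.
points = reconstruct_phase_space(list(range(10)), m=3, tau=2)
```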


J. Wang, Y. Wang, X. Wen, T. Yang and Q. Ding, “The Simulation Research and NIST Test of Binary Quantification Methods for the Typical Chaos,” 2015 Third International Conference on Robot, Vision and Signal Processing (RVSP), Kaohsiung, 2015, pp. 180-184. doi: 10.1109/RVSP.2015.50
Abstract: In this paper, we study binary quantification methods for three typical chaotic systems: Logistic, Tent, and Lorenz, applying direction quantization, threshold quantization, and interval quantization to each chaotic signal. Building on the NIST test standards and the STS test suite, we run extensive NIST tests and analyses on the three chaotic sequences to find the best quantification method for each, study the impact of the system parameters, the initial value, and the sequence length on the digital chaotic sequence, and obtain chaotic sequences with better randomness, which provides some theoretical guidance for digital secure communication.
Keywords: chaotic communication; quantisation (signal); NIST test; binary quantification methods; chaos; chaotic sequence; digital chaotic sequence; direction quantization; quantification method; threshold quantization; Chaotic communication; Logistics; NIST; Nonlinear dynamical systems; Quantization (signal); Security; binary quantification; digital secure communication (ID#: 16-10198)
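Of the three quantization styles compared, threshold quantization is the simplest to illustrate: iterate the logistic map and emit one bit per sample. A sketch only; the seed x0, parameter r, and the 0.5 threshold are assumed values, not the paper's tested configuration:

```python
# Threshold quantization of the logistic map x_{n+1} = r * x_n * (1 - x_n):
# emit bit 1 when the iterate exceeds the threshold, else 0.
def logistic_bits(x0=0.3, r=3.99, n=64, threshold=0.5):
    bits = []
    x = x0
    for _ in range(n):
        x = r * x * (1.0 - x)        # one logistic-map iteration
        bits.append(1 if x > threshold else 0)
    return bits

seq = logistic_bits()
```

Sequences produced this way are what the NIST/STS randomness tests described in the abstract would then evaluate.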


C. Luo and D. Zeng, “Multivariate Embedding Based Causality Detection with Short Time Series,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 138-140. doi: 10.1109/ISI.2015.7165954
Abstract: Existing causal inference methods for social media usually rely on limited explicit causal context, presuppose a certain user interaction model, or neglect the nonlinear nature of social interaction, which can lead to biased estimates of causality. Besides, they often require sufficiently long time series to achieve reasonable results. Here we propose to take advantage of multivariate embedding to perform causality detection in social media. Experimental results show the efficacy of the proposed approach in causality detection and user behavior prediction in social media.
Keywords: causality; inference mechanisms; social networking (online); time series; bias estimations; causal inference methods; multivariate embedding based causality detection; social interaction; social media; user behavior prediction; user interaction model; Manifolds; Media; Neural networks; Nonlinear dynamical systems; Social network services; Time series analysis; Training; causality detection; multivariate embedding; nonlinear dynamic system; user influence (ID#: 16-10199)


C. Lee, H. Shim and Y. Eun, “Secure and Robust State Estimation Under Sensor Attacks, Measurement Noises, and Process Disturbances: Observer-Based Combinatorial Approach,” Control Conference (ECC), 2015 European, Linz, 2015, pp. 1872-1877. doi: 10.1109/ECC.2015.7330811
Abstract: This paper presents a secure and robust state estimation scheme for continuous-time linear dynamical systems. The method is secure in that it correctly estimates the states under sensor attacks by exploiting sensing redundancy, and it is robust in that it guarantees a bounded estimation error despite measurement noises and process disturbances. In this method, an individual Luenberger observer (of possibly smaller size) is designed from each sensor. Then, the state estimates from each of the observers are combined through a scheme motivated by error correction techniques, which results in estimation resiliency against sensor attacks under a mild condition on the system observability. Moreover, in the state estimates combining stage, our method reduces the search space of a minimization problem to a finite set, which substantially reduces the required computational effort.
Keywords: continuous time systems; error correction; linear systems; observers; redundancy; robust control; security; Luenberger observer; bounded estimation error; continuous-time linear dynamical system; error correction technique; observer-based combinatorial approach; robust state estimation; search space; secure state estimation; sensor attack; Indexes; Minimization; Noise measurement; Observers; Redundancy; Robustness (ID#: 16-10200)
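The per-sensor building block of this scheme is a Luenberger observer, which corrects a model prediction with the innovation y − Cx̂. A scalar, discrete-time, noise-free sketch of that idea only; the paper's combinatorial combining stage and continuous-time setting are not reproduced:

```python
# Scalar Luenberger observer: xhat_{k+1} = A*xhat_k + L*(y_k - C*xhat_k).
# With |A - L*C| < 1 the estimation error e_k = x_k - xhat_k contracts
# geometrically: e_{k+1} = (A - L*C) * e_k.
def observer_run(A, C, L, x0, xhat0, steps):
    x, xhat = x0, xhat0
    for _ in range(steps):
        y = C * x                          # sensor measurement (no noise here)
        xhat = A * xhat + L * (y - C * xhat)
        x = A * x                          # true plant dynamics
    return x, xhat

# With A=0.9, C=1, L=0.5, the error decays by 0.4 each step.
x_true, x_est = observer_run(A=0.9, C=1.0, L=0.5, x0=1.0, xhat0=0.0, steps=20)
```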


D. Palma, P. L. Montessoro, G. Giordano and F. Blanchini, “A Dynamic Algorithm for Palmprint Recognition,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 659-662. doi: 10.1109/CNS.2015.7346883
Abstract: Most of the existing techniques for palmprint recognition are based on metrics that evaluate the distance between a pair of features. These metrics are typically based on static functions. In this paper we propose a new technique for palmprint recognition based on a dynamical system approach, focusing on preliminary experimental results. The essential idea is that the procedure iteratively eliminates points in both images to be compared which do not have enough close neighboring points in the image itself and in the comparison image. As a result of the iteration, in each image the surviving points are those having enough neighboring points in the comparison image. Our preliminary experimental results show that the proposed dynamic algorithm is competitive and slightly outperforms some state-of-the-art methods by achieving a higher genuine acceptance rate.
Keywords: palmprint recognition; biometric systems; dynamic algorithm; dynamical system approach; iteration; Biometrics (access control); Databases; Feature extraction; Heuristic algorithms; Security; Yttrium (ID#: 16-10201)


S. W. Neville, M. Elgamal and Z. Nikdel, “Robust Adversarial Learning and Invariant Measures,” Communications, Computers and Signal Processing (PACRIM), 2015 IEEE Pacific Rim Conference on, Victoria, BC, 2015, pp. 529-535. doi: 10.1109/PACRIM.2015.7334893
Abstract: A number of open cyber-security challenges are arising due to the rapidly evolving scale, complexity, and heterogeneity of modern IT systems and networks. The ease with which copious volumes of operational data can be collected from such systems has produced a strong interest in the use of machine learning (ML) for cyber-security, provided that ML can itself be made sufficiently immune to attack. Adversarial learning (AL) is the domain focusing on such issues, and an emerging AL theme is the need to ensure that ML solutions make use of robust input measurement features (i.e., the data sets used for ML training must themselves be robust against adversarial influences). This observation leads to further open questions, including: “What formally denotes sufficient robustness?”, “Must robust features necessarily exist for all IT systems?”, “Do robust features necessarily provide complete coverage of the attack space?”, etc. This work shows that these (and other) open AL questions can be usefully re-cast in terms of the classical dynamical systems problem of needing to focus analyses on a system’s invariant measures. This re-casting is useful because a large body of mature dynamical systems theory exists concerning invariant measures, which can then be applied to cyber-security. To our knowledge this is the first work to identify and highlight this potentially useful cross-domain linkage.
Keywords: learning (artificial intelligence); security of data; ML training; adversarial learning; cross-domain linkage; cyber-security; machine learning; Complexity theory; Computer security; Extraterrestrial measurements; Focusing; Robustness; Sensors (ID#: 16-10202)


H. A. Kingravi, H. Maske and G. Chowdhary, “Kernel Controllers: A Systems-Theoretic Approach for Data-Driven Modeling and Control of Spatiotemporally Evolving Processes,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 7365-7370. doi: 10.1109/CDC.2015.7403382
Abstract: We consider the problem of modeling, estimating, and controlling the latent state of a spatiotemporally evolving continuous function using very few sensor measurements and actuator locations. Our solution to the problem consists of two parts: a predictive model of functional evolution, and feedback based estimator and controllers that can robustly recover the state of the model and drive it to a desired function. We show that layering a dynamical systems prior over temporal evolution of weights of a kernel model is a valid approach to spatiotemporal modeling that leads to systems theoretic, control-usable, predictive models. We provide sufficient conditions on the number of sensors and actuators required to guarantee observability and controllability. The approach is validated on a large real dataset, and in simulation for the control of spatiotemporally evolving function.
Keywords: estimation theory; feedback; predictive control; simulation; system theory; actuator locations; data-driven modeling; feedback based estimator; kernel controllers; predictive model; spatiotemporally evolving continuous function; systems-theoretic approach; Dictionaries; High definition video; Hilbert space; Kernel; Mathematical model; Predictive models; Spatiotemporal phenomena (ID#: 16-10203)


J. Zhang, “An Image Encryption Scheme Based on Cat Map and Hyperchaotic Lorenz System,” Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, Ghaziabad, 2015, pp. 78-82. doi: 10.1109/CICT.2015.134
Abstract: In recent years, chaos-based image ciphers have been widely studied and a growing number of schemes based on permutation-diffusion architecture have been proposed. However, recent studies have indicated that those approaches based on low-dimensional chaotic maps/systems have the drawbacks of small key space and weak security. In this paper, a security-improved image cipher which utilizes the cat map and the hyperchaotic Lorenz system is reported. Compared with ordinary chaotic systems, hyperchaotic systems have more complex dynamical behaviors and a greater number of system variables, which demonstrates greater potential for constructing a secure cryptosystem. In the diffusion stage, a plaintext-related key stream generation strategy is introduced, which further improves the security against known/chosen-plaintext attacks. Extensive security analysis has been performed on the proposed scheme, including the most important ones like key space analysis, key sensitivity analysis and various statistical analyses, which has demonstrated the satisfactory security of the proposed scheme.
Keywords: cryptography; image processing; statistical analysis; cat map; chaos-based image cipher; complex dynamical behaviors; cryptosystem; hyperchaotic Lorenz system; image encryption scheme; key sensitivity analysis; key space analysis; key stream generation strategy; low-dimensional chaotic maps; permutation-diffusion architecture; security analysis; Chaotic communication; Ciphers; Correlation; Encryption; image cipher; permutation-diffusion (ID#: 16-10204)
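In such permutation-diffusion ciphers, the cat map supplies the permutation stage. Below is a sketch of one round of the standard, unkeyed Arnold cat map on an N×N image; the paper's keyed variant and its hyperchaotic Lorenz diffusion stage are not reproduced here:

```python
# One Arnold cat map round on an N x N image:
# pixel (x, y) moves to ((x + y) mod N, (x + 2y) mod N).
# The map matrix [[1, 1], [1, 2]] has determinant 1, so the
# transformation is a bijection (a pure permutation of pixels).
def cat_map(img):
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            out[(x + y) % n][(x + 2 * y) % n] = img[x][y]
    return out
```

Iterating the round scrambles pixel positions; a diffusion stage must follow, since permutation alone leaves the pixel-value histogram unchanged.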


W. Qi et al., “A Dynamic Reactive Power Reserve Optimization Method to Enhance Transient Voltage Stability,” Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2015 IEEE International Conference on, Shenyang, 2015, pp. 1523-1528. doi: 10.1109/CYBER.2015.7288171
Abstract: The dynamic reactive power reserve of a power system is vital to improving transient voltage security. A novel definition of reactive power reserve considering transient voltage security during the transient process is proposed. A participation factor to evaluate a reactive power reserve’s contribution to transient voltage stability is computed through the trajectory sensitivity method. Then an optimization model to enhance transient voltage stability is built and a solving algorithm is proposed. Based on an analysis of the transient voltage stability characteristics of the East China Power Grid, the effectiveness of the proposed dynamic reactive power reserve optimization approach for improving transient voltage stability of large-scale AC-DC hybrid power systems is verified.
Keywords: AC-DC power convertors; optimisation; power grids; power system control; power system transient stability; reactive power control; voltage regulators; East China power grid; dynamic reactive power reserve optimization method; large-scale AC-DC hybrid power systems; participation factor; trajectory sensitivity method; transient process; transient voltage security improvement; transient voltage stability enhancement; Optimization; Power system dynamics; Power system stability; Reactive power; Stability analysis; Transient analysis; AC-DC hybrid power system; Dynamic reactive power reserve; optimization method; transient voltage stability (ID#: 16-10205)


N. Ye, R. Geng, X. Song, Q. Wang and Z. Ning, “Hierarchic Topology Management by Decision Model and Smart Agents in Space Information Networks,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1817-1825. doi: 10.1109/HPCC-CSS-ICESS.2015.122
Abstract: The space information network, which is envisioned as a new type of self-organizing network constituted by information systems of land, sea, air, and space, has attracted tremendous interest recently. In this paper, to improve the data delivery performance and the network scalability of space information networks, a new hierarchic topology management scheme based on a decision model and smart agents is proposed. Different from the schemes studied in mobile ad hoc networks and wireless sensor networks, the proposed algorithm introduces a decision model based on the analytic hierarchy process (AHP) to first select cluster heads, and then forms non-overlapping k-hop clusters. The proposed dynamic self-maintenance mechanisms take not only node mobility but also cluster equalization into consideration. Smart mobile agents are used to migrate and duplicate cluster-head functions through a recruiting mechanism, in addition to handling cluster merger/partition, reaffiliation management, and adaptive adjustment of the information update period. Simulation experiments are performed to evaluate the performance of the proposed algorithm in terms of network scalability, clustering overhead, and reaffiliation frequency. It is shown from the analytical and simulation results that the proposed hierarchic topology management algorithm significantly improves the performance and the scalability of space information networks.
Keywords: analytic hierarchy process; computer network performance evaluation; information networks; merging; pattern clustering; telecommunication network topology; AHP; adaptive adjustment; cluster equalization; cluster head function duplication; cluster head function migration; cluster head selection; cluster merger disposal; cluster partition disposal; clustering overhead; data delivery performance; decision model; dynamical self-maintenance mechanisms; hierarchic topology management; information systems; information update period; network scalability improvement; node mobility; nonoverlapping k-hop clusters; performance evaluation; reaffiliation frequency; reaffiliation management; self-organizing networks; smart mobile agents; space information networks; Clustering algorithms; Network topology; Satellites; Scalability; Space vehicles; Topology; Wireless sensor networks; smart agent; topology management (ID#: 16-10206)


S. Gulhane and S. Bodkhe, “DDAS Using Kerberos with Adaptive Huffman Coding to Enhance Data Retrieval Speed and Security,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7086987
Abstract: Applications are increasingly deployed over the web, storing and retrieving their databases on remote servers. As data is stored in a distributed manner, scalability, flexibility, reliability, and security are important aspects to consider when establishing a data management system. There are several systems for database management. A review of the Distributed Data Aggregation Service (DDAS) system, which relies on BlobSeer, found that it provides high performance in aspects such as data storage as Blobs (binary large objects) and data aggregation. For complicated analysis and intuitive mining of scientific data, BlobSeer serves as a repository backend. WS-Aggregation is another framework; it is presented as a web service but actually carries out aggregation of data, providing clients with a single-site interface for executing multi-site queries. Simple Storage Service (S3) is another type of storage utility, providing an always-available, low-cost service. Kerberos is a method which provides secure authentication, as only authorized clients are able to access the distributed database. Kerberos consists of four steps: authentication key exchange, ticket-granting-service key exchange, client/server service exchange, and building secure communication. Adaptive Huffman coding (also referred to as dynamic Huffman coding) is an adaptive coding technique based on Huffman coding. It permits compression and decompression of data and builds the code as symbols are being transmitted, with no initial knowledge of the source distribution, enabling one-pass coding and adaptation to changing conditions in the data.
Keywords: Huffman codes; Web services; cryptography; data mining; distributed databases; query processing; Blob; Blobseer; DDAS; Kerberos; WS-Aggregation; adaptive Huffman coding; authentication key exchange; binary large objects; client-server service exchange; data aggregation; data management system; data retrieval security; data retrieval speed; data storage; distributed data aggregation service system; distributed database; dynamic Huffman method; instinctive scientific data mining; multisite queries; one-pass cryptography; secure communication; Authentication; Catalogs; Distributed databases; Memory; Servers; XML; adaptive huffman method; blobseer; kerberos; simple storage service; ws aggregation (ID#: 16-10207)


M. De Paula and G. G. Acosta, “Trajectory Tracking Algorithm for Autonomous Vehicles Using Adaptive Reinforcement Learning,” OCEANS 2015 - MTS/IEEE Washington, Washington, DC, 2015, pp. 1-8. doi: (not provided)
Abstract: The off-shore industry requires periodic monitoring of underwater infrastructure for preventive maintenance and security reasons. Tasks in hostile environments can be carried out automatically by autonomous robots such as UUVs, AUVs, and ASVs. When robot controllers are based on prespecified conditions, they may not function properly in these hostile, changing environments. It is beneficial to have adaptive control strategies that are capable of readapting the control policies when deviations from the desired behavior are detected. In this paper, we present an online selective reinforcement learning approach for learning reference tracking control policies given no prior knowledge of the dynamical system. The proposed approach enables real-time adaptation of the control policies, executed by the adaptive controller, based on ongoing interactions with non-stationary environments. Applications on surface vehicles under non-stationary and perturbed environments are simulated. The presented simulation results demonstrate the performance of this novel algorithm in finding optimal control policies that solve the trajectory tracking control task in unmanned vehicles.
Keywords: adaptive control; intelligent robots; learning (artificial intelligence); mobile robots; optimal control; preventive maintenance; remotely operated vehicles; trajectory control; adaptive reinforcement learning; autonomous robots; autonomous vehicles; hostile environments; learning reference tracking control policies; nonstationary environments; off-shore industry; online selective reinforcement learning approach; optimal control policies; periodic monitoring; perturbed environments; robot controllers; security reasons; surface vehicles; trajectory tracking control task; underwater infrastructure; unmanned vehicles; Gaussian distribution; Monitoring; Robots; Security; Vehicles; Autonomous vehicles; cognitive control; reinforcement learning; trajectory tracking (ID#: 16-10208)


Bo Yang, Bo Li, Mao Yang, Zhongjiang Yan and Xiaoya Zuo, “Mi-MMAC: MIMO-Based Multi-Channel MAC Protocol for WLAN,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 223-226. doi: (not provided)
Abstract: In order to meet the proliferating demands in wireless local area networks (WLANs), multi-channel media access control (MMAC) technology has attracted considerable attention as a way to exploit increasingly scarce spectrum resources more efficiently. This paper proposes a novel multi-channel MAC, named Mi-MMAC, to resolve congestion on the control channel by multiplexing the control-radio and the data-radio as a multiple-input multiple-output (MIMO) array, working on both the control channel and the data channels alternately. Furthermore, we model Mi-MMAC as an M/M/k queueing system and obtain a closed-form approximate formula for the saturation throughput. Simulation results validate our model and analysis, and we demonstrate that the saturation throughput gain of the proposed protocol is close to 3.3 times that of the dynamical channel assignment (DCA) protocol [1] under low-collision conditions.
Keywords: MIMO communication; access protocols; approximation theory; queueing theory; telecommunication congestion control; wireless LAN; wireless channels; DCA protocol; M/M/k queueing system; MIMO; Mi-MMAC; WLAN; closed form approximate formula; control channel; control radio; data channels; data radio; dynamical channel assignment; media access control; multichannel MAC protocol; multiple-input multiple-output array; scarce spectrum resources; DH-HEMTs; Mobile communication; Multiplexing; Protocols; Queueing analysis; Switches; Transceivers; Media access control; Multi-channel; Multiple-input multiple-output; Wireless LAN (ID#: 16-10209)
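The paper's own closed-form throughput formula is not reproduced in the abstract; as background on the M/M/k model it builds on, the standard Erlang C probability that an arriving request must queue can be sketched as:

```python
# Standard M/M/k queue: Erlang C probability of waiting, for arrival
# rate lam, per-server service rate mu, and k servers. Background on
# the queueing model only, not the paper's throughput formula.
from math import factorial

def erlang_c(lam, mu, k):
    a = lam / mu                      # offered load in Erlangs
    rho = a / k                       # per-server utilization (must be < 1)
    num = (a ** k / factorial(k)) * (1.0 / (1.0 - rho))
    den = sum(a ** i / factorial(i) for i in range(k)) + num
    return num / den
```

For k = 1 this reduces to the familiar M/M/1 result P(wait) = ρ.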


Y. Nakahira and Y. Mo, “Dynamic State Estimation in the Presence of Compromised Sensory Data,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 5808-5813. doi: 10.1109/CDC.2015.7403132
Abstract: In this article, we consider the state estimation problem of a linear time invariant system in adversarial environment. We assume that the process noise and measurement noise of the system are l∞ functions. The adversary compromises at most γ sensors, the set of which is unknown to the estimation algorithm, and can change their measurements arbitrarily. We first prove that if after removing a set of 2γ sensors, the system is undetectable, then there exists a destabilizing noise process and attacker’s input to render the estimation error unbounded. For the case that the system remains detectable after removing an arbitrary set of 2γ sensors, we construct a resilient estimator and provide an upper bound on the l∞ norm of the estimation error. Finally, a numerical example is provided to illustrate the effectiveness of the proposed estimator design.
Keywords: invariance; linear systems; measurement errors; measurement uncertainty; state estimation; compromised sensory data; dynamic state estimation; estimation error; estimator design; l∞ functions; linear time invariant system; measurement noise; measurements arbitrarily; process noise; Estimation error; Robustness; Security; Sensor systems; State estimation (ID#: 16-10210)


K. G. Vamvoudakis et al., “Autonomy and Machine Intelligence in Complex Systems: A Tutorial,” 2015 American Control Conference (ACC), Chicago, IL, 2015, pp. 5062-5079. doi: 10.1109/ACC.2015.7172127
Abstract: This tutorial paper will discuss the development of novel state-of-the-art control approaches and theory for complex systems based on machine intelligence in order to enable full autonomy. Given the presence of modeling uncertainties, the unavailability of the model, the possibility of cooperative/non-cooperative goals, and malicious attacks compromising the security of teams of complex systems, there is a need for approaches that respond to situations not programmed or anticipated in design. Unfortunately, existing schemes for complex systems do not take into account recent advances in machine intelligence. We shall discuss how to draw inspiration from the human brain and combine interdisciplinary ideas from different fields, namely computational intelligence, game theory, control theory, and information theory, to develop new self-configuring algorithms for decision and control given the unavailability of the model, the presence of enemy components, and the possibility of network attacks. Due to the adaptive nature of the algorithms, the complex systems will be capable of breaking or splitting into parts that are themselves autonomous and resilient. The algorithms discussed will be characterized by strong abilities of learning and adaptivity. As a result, the complex systems will be fully autonomous and tolerant to communication failures.
Keywords: artificial intelligence; game theory; information theory; large-scale systems; learning systems; adaptive systems; complex systems; computational intelligence; control theory; learning; machine intelligence; network attacks; self-configuring algorithms; Complex systems; Computational modeling; Control systems; Machine intelligence; Mathematical model; Uncertainty; Vehicles; Autonomy; cyber-physical systems; networks (ID#: 16-10211)


K. G. Vamvoudakis and J. P. Hespanha, “Model-Free Plug-n-Play Optimization Techniques to Design Autonomous and Resilient Complex Systems,” 2015 American Control Conference (ACC), Chicago, IL, 2015, pp. 5081-5081. doi: 10.1109/ACC.2015.7172129
Abstract: Summary form only given: This talk will focus on model-free, distributed-optimization-based algorithms for complex systems with formal optimality and robustness guarantees. Given the presence of modeling uncertainties, the unavailability of the model, the possibility of cooperative/non-cooperative goals, and malicious attacks compromising the security of networked teams, there is a need for completely model-free plug-n-play approaches that respond to situations not programmed or anticipated in design, in order to guarantee mission completion. Unfortunately, existing schemes for complex systems do not take into account recent advances in computational intelligence. This talk will combine interdisciplinary ideas from different fields, namely computational intelligence, game theory, control theory, and information theory, to develop new self-configuring algorithms for decision and control given the unavailability of the model, the presence of enemy components, and the possibility of measurement and jamming network attacks. Due to the adaptive nature of the algorithms, the complex systems will be capable of breaking or splitting into parts that are themselves autonomous and resilient. The proposed algorithms will come with guaranteed optimality and robustness and will enable complete on-board autonomy, multiplied engagement capability, and coordination of distributed, heterogeneous teams of manned/unmanned vehicles and humans.
Keywords: large-scale systems; mobile robots; optimisation; vehicles; complex systems; computational intelligence; control theory; cooperative goals; enemy components; engagement capability; formal optimality; game theory; heterogeneous teams; information theory; interdisciplinary ideas; jamming network attacks; manned vehicles; measurement attacks; model-free plug-n-play optimization techniques; noncooperative goals; on-board autonomy; robustness guarantees; self-configuring algorithms; unmanned vehicles; Algorithm design and analysis; Complex systems; Computational intelligence; Computational modeling; Control systems; Optimization; Robustness (ID#: 16-10212)


L. Pan, H. Voos, Y. Li, M. Darouach and S. Hu, “Uncertainty Quantification of Exponential Synchronization for a Novel Class of Complex Dynamical Networks with Hybrid TVD Using PIPC,” The 27th Chinese Control and Decision Conference (2015 CCDC), Qingdao, 2015, pp. 125-130. doi: 10.1109/CCDC.2015.7161678
Abstract: This paper investigates the Uncertainty Quantification (UQ) of Exponential Synchronization (ES) problems for a new class of Complex Dynamical Networks (CDNs) with hybrid Time-Varying Delay (TVD) and Non-Time-Varying Delay (NTVD) nodes by using coupling Periodically Intermittent Pinning Control (PIPC), which has three switched intervals in every period. Based on Kronecker product rules, Lyapunov Stability Theory (LST), the Cumulative Distribution Function (CDF), and the PIPC method, the robustness of the control algorithm with respect to the value of the final time is studied. Moreover, we assume a normal distribution for the time and use the Stochastic Collocation (SC) method [1] with different numbers of nodes and collocation points to quantify the sensitivity. For different numbers of nodes, the results show that the ES errors converge to zero with high probability. Finally, to verify the effectiveness of our theoretical results, a Nearest-Neighbor Network (NNN) and a Barabási-Albert Network (BAN) consisting of coupled non-delayed and delayed Chen oscillators are studied to demonstrate that the accuracies of the ES and PIPC are robust to variations of time.
Keywords: Lyapunov methods; complex networks; convergence; delays; large-scale systems; normal distribution; periodic control; robust control; stochastic processes; switching systems (control); synchronisation; BAN; Barabási-Albert Network; CDF; CDN; Kronecker product rule; LST; Lyapunov stability theory; NNN; NTVD node; PIPC method; collocation points; complex dynamical network; control algorithm robustness; cumulative distribution function; delay Chen oscillator; error convergence; exponential synchronization problem; hybrid TVD; hybrid time-varying delay; nearest-neighbor network; nondelayed Chen oscillator; nontime-varying delay; normal distribution; periodically intermittent pinning control; probability; sensitivity quantification; stochastic collocation method; switched interval; time variation; uncertainty quantification; Artificial neural networks; Chaos; Couplings; Delays; Switches; Synchronization; Complex Dynamical Networks (CDNs); Exponential Synchronization (ES); Periodically Intermittent Pinning Control (PIPC);Time-varying Delay (TVD) (ID#: 16-10213)



Effectiveness and Work Factor Metrics 2015 – 2016 (Part 1)





Measurement to determine the effectiveness of security systems is an essential element of the Science of Security. The work cited here was presented in 2015 and 2016.

I. Kotenko and E. Doynikova, “Countermeasure Selection in SIEM Systems Based on the Integrated Complex of Security Metrics,” 2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, Turku, 2015, pp. 567-574. doi: 10.1109/PDP.2015.34
Abstract: The paper considers a technique for countermeasure selection in security information and event management (SIEM) systems. The developed technique is based on the suggested complex of security metrics. For countermeasure selection, the set of security metrics is extended with an additional level needed for security decision support. This level is based on countermeasure effectiveness metrics. Key features of the suggested technique are the application of attack and service dependency graphs, the introduced countermeasure model, and the suggested metrics of countermeasure effectiveness, cost, and collateral damage. Another important feature of the technique is that it can provide a countermeasure implementation decision at any time, on the basis of the current security state and security events.
Keywords: decision support systems; graph theory; security of data; software metrics; SIEM systems; attack dependencies graphs; countermeasure selection; integrated complex; security decision support; security events; security information and event management; security metrics; security state; service dependencies graphs; Authentication; Measurement; Risk management; Taxonomy; attack graphs; countermeasures; cyber security; risk assessment  (ID#: 16-10214)


M. Ge and D. S. Kim, “A Framework for Modeling and Assessing Security of the Internet of Things,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 776-781. doi: 10.1109/ICPADS.2015.102
Abstract: The Internet of Things (IoT) is enabling innovative applications in various domains. Due to its heterogeneous and wide-scale structure, it introduces many new security issues. To address the security problem, we propose a framework for security modeling and assessment of the IoT. The framework helps to construct graphical security models for the IoT. Generally, the framework involves five steps to find attack scenarios, analyze the security of the IoT through well-defined security metrics, and assess the effectiveness of defense strategies. The benefits of the framework are presented via a study of two example IoT networks. Through the analysis results, we show the capabilities of the proposed framework in mitigating the impacts of potential attacks and in evaluating the security of large-scale networks.
Keywords: Internet of Things; security of data; IoT networks; defense strategies effectiveness; graphical security models; large-scale networks; security assessment; security metrics; security modeling; Analytical models; Body area networks; Computational modeling; Measurement; Network topology; Security; Wireless communication; Attack Graphs; Hierarchical Attack Representation Model; Security Analysis (ID#: 16-10215)
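Attack-graph-style security metrics such as those referenced above often reduce to simple probability compositions over attack paths. The sketch below, with invented step probabilities and an independence assumption that real attack graphs need not satisfy, shows the basic calculation behind this kind of assessment.

```python
def path_success(probabilities):
    """Probability an attacker succeeds along one path (every step must succeed)."""
    p = 1.0
    for step in probabilities:
        p *= step
    return p

def system_compromise(paths):
    """Probability that at least one attack path succeeds, assuming
    the paths are statistically independent (a simplifying assumption)."""
    safe = 1.0
    for path in paths:
        safe *= 1.0 - path_success(path)
    return 1.0 - safe
```

A defense strategy can then be scored by how much it lowers `system_compromise` after removing or weakening the steps it mitigates.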


P. Pandey and E. A. Snekkenes, “A Performance Assessment Metric for Information Security Financial Instruments,” Information Society (i-Society), 2015 International Conference on, London, 2015, pp. 138-145. doi: 10.1109/i-Society.2015.7366876
Abstract: Business interruptions caused by cyber-attacks pose a serious threat to the revenue and share price of an organisation. Furthermore, recent cyber-attacks on various organisations show that technical controls, security policies, and regulatory compliance are not sufficient to mitigate cyber risks. In such a scenario, the residual cyber risk can be mitigated with cyber-insurance policies and with information security derivatives (financial instruments). Information security derivatives are a new class of financial instruments designed to provide an alternate risk mitigation mechanism to reduce the potential adverse impact of an information security event. However, there is a lack of research on metrics to measure the performance of information security derivatives in mitigating the underlying risk. This article examines the basic requirements for assessing the performance of information security derivatives. Furthermore, the article presents three metrics, namely hedge ratio, hedge effectiveness, and hedge efficiency, to formulate and evaluate a cyber risk mitigation strategy devised with information security derivatives. The application of these metrics is demonstrated in a hypothetical scenario. An accurate measure of the performance of information security derivatives is of practical importance for an effective risk management strategy.
Keywords: business data processing; risk management; security of data; business interruptions; cyber risk mitigation strategy; cyber-attacks; cyber-insurance policies; hedge effectiveness; hedge efficiency; hedge ratio; information security derivatives; information security financial instruments; performance assessment metric; regulatory compliance; residual cyber risk; risk management strategy; risk mitigation mechanism; Correlation; Information security; Instruments; Measurement; Portfolios; Risk management; Financial Instrument; Hedge Effectiveness; Hedge Efficiency; Information Security; Risk Management (ID#: 16-10216)
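The hedging metrics named in this abstract have standard textbook forms. The sketch below computes the minimum-variance hedge ratio and hedge effectiveness from paired change series; the formulas and data are illustrative assumptions drawn from general hedging theory, not taken from the article.

```python
from statistics import mean

def hedge_ratio(exposure_changes, hedge_changes):
    """Minimum-variance hedge ratio: Cov(dS, dF) / Var(dF)."""
    ms, mf = mean(exposure_changes), mean(hedge_changes)
    n = len(exposure_changes)
    cov = sum((s - ms) * (f - mf)
              for s, f in zip(exposure_changes, hedge_changes)) / (n - 1)
    var_f = sum((f - mf) ** 2 for f in hedge_changes) / (n - 1)
    return cov / var_f

def hedge_effectiveness(exposure_changes, hedge_changes):
    """Fraction of exposure variance removed by the optimal hedge (rho squared)."""
    h = hedge_ratio(exposure_changes, hedge_changes)
    ms, mf = mean(exposure_changes), mean(hedge_changes)
    n = len(exposure_changes)
    var_s = sum((s - ms) ** 2 for s in exposure_changes) / (n - 1)
    var_f = sum((f - mf) ** 2 for f in hedge_changes) / (n - 1)
    return h * h * var_f / var_s
```

A perfectly correlated hedging instrument yields effectiveness 1.0 (all variance removed); a weakly correlated one yields a value near 0.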


F. Dai, K. Zheng, S. Luo and B. Wu, “Towards a Multiobjective Framework for Evaluating Network Security Under Exploit Attacks,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7186-7191. doi: 10.1109/ICC.2015.7249473
Abstract: Exploit attacks have been one of the major threats to computer network systems; their damage has been extensively studied and numerous countermeasures have been proposed to defend against them. In this work, we propose a multiobjective optimization framework to facilitate evaluation of network security under exploit attacks. Our approach explores a promising avenue of integrating attack graph methodology to evaluate network security. In particular, we utilize attack-graph-based security metrics to model exploit attacks and dynamically measure security risk under these attacks. A multiobjective problem is then formulated to maximize network exploitability and security impact under feasible exploit compositions. Furthermore, an artificial immune algorithm is employed to solve the formulated problem. We conduct a series of simulation experiments on hypothetical network models to verify the performance of the proposed mechanism. Simulation results show that our approach can feasibly and effectively solve the security evaluation problem under multiple decision variables.
Keywords: artificial immune systems; computer network security; graph theory; artificial immune algorithm; attack graph based security metrics; attack graph methodology; computer network security evaluation; exploit attacks; multiobjective optimization framework; security risk; Analytical models; Communication networks; Measurement; Optimization; Security; Sociology; Statistics; attack graph; exploit attack; network security evaluation (ID#: 16-10217)


B. Duncan and M. Whittington, “The Importance of Proper Measurement for a Cloud Security Assurance Model,” 2015 IEEE 7th International Conference on Cloud Computing Technology and Science (CloudCom), Vancouver, BC, 2015, pp. 517-522. doi: 10.1109/CloudCom.2015.91
Abstract: Defining proper measures for evaluating the effectiveness of an assurance model, which we have developed to ensure cloud security, is vital to ensure the successful implementation and continued running of the model. We need to understand that with security being such an essential component of business processes, responsibility must lie with the board. The board must be responsible for defining their security posture on all aspects of the model, and therefore must also be responsible for defining what the necessary measures should be. Without measurement, there can be no control. However, it will also be necessary to properly engage with cloud service providers to achieve a more meaningful degree of security for the cloud user.
Keywords: business data processing; cloud computing; security of data; business process; cloud security assurance model; cloud service provider; security posture; Cloud computing; Companies; Complexity theory; Privacy; Security; Standards; assurance; audit; cloud service providers; compliance; measurement; privacy; security; service level agreements; standards (ID#: 16-10218)


N. Aminudin, T. K. A. Rahman, N. M. M. Razali, M. Marsadek, N. M. Ramli and M. I. Yassin, “Voltage Collapse Risk Index Prediction for Real Time System’s Security Monitoring,” Environment and Electrical Engineering (EEEIC), 2015 IEEE 15th International Conference on, Rome, 2015, pp. 415-420. doi: 10.1109/EEEIC.2015.7165198
Abstract: Risk based security assessment (RBSA) for power system security deals with the impact and probability of uncertain events occurring in the power system. In this study, the risk of voltage collapse is measured by considering the L-index as the impact of voltage collapse, while a Poisson probability density function is used to model the probability of transmission line outage. The prediction of the voltage collapse risk index in real time requires precise, reliable, and short processing time. To facilitate this analysis, an artificial-intelligence technique using the Generalized Regression Neural Network (GRNN) is proposed, where the spread value is determined using the Cuckoo Search (CS) optimization method. To validate the effectiveness of the proposed method, the performance of GRNN with the optimized spread value obtained using CS is compared with a heuristic approach.
Keywords: Poisson distribution; neural nets; optimisation; power system dynamic stability; power system measurement; power system security; power transmission reliability; probability; real-time systems; risk management; GRNN; L-index; Poisson probability density function; artificial intelligent; cuckoo search; generalize regression neural network; optimization method; real time system; risk based security assessment; security monitoring; transmission line outage; voltage collapse risk index prediction; Indexes; Optimization; Power system stability; Power transmission lines; Security; Transmission line measurements; Voltage measurement; Risk based security assessment; cuckoo search optimization; voltage collapse (ID#: 16-10219)
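The abstract's risk formulation, impact times probability with a Poisson outage model, can be sketched directly. The rate, horizon, and the interpretation of the L-index as a 0-to-1 impact score (1 meaning collapse) are assumptions for illustration, not values from the paper.

```python
from math import exp

def outage_probability(rate, horizon):
    """P(at least one line outage in the horizon), with outage
    counts modeled as Poisson(rate * horizon)."""
    return 1.0 - exp(-rate * horizon)

def voltage_collapse_risk(l_index, rate, horizon):
    """Risk index = impact (L-index in [0, 1]) x outage probability."""
    if not 0.0 <= l_index <= 1.0:
        raise ValueError("L-index is expected in [0, 1]")
    return l_index * outage_probability(rate, horizon)
```

The role of the GRNN in the paper is to predict this kind of index fast enough for real-time monitoring, rather than recomputing it from full power-flow studies.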


H. Jiang, Y. Zhang, J. J. Zhang and E. Muljadi, “PMU-Aided Voltage Security Assessment for a Wind Power Plant,” 2015 IEEE Power & Energy Society General Meeting, Denver, CO, 2015, pp. 1-5. doi: 10.1109/PESGM.2015.7286274
Abstract: Because wind power penetration levels in electric power systems are continuously increasing, voltage stability is a critical issue for maintaining power system security and operation. The traditional methods to analyze voltage stability can be classified into two categories: dynamic and steady-state. Dynamic analysis relies on time-domain simulations of faults at different locations; however, this method needs to exhaust faults at all locations to find the security region for voltage at a single bus. With the widely located phasor measurement units (PMUs), the Thevenin equivalent matrix can be calculated by the voltage and current information collected by the PMUs. This paper proposes a method based on a Thevenin equivalent matrix to identify system locations that will have the greatest impact on the voltage at the wind power plant's point of interconnection. The number of dynamic voltage stability analysis runs is greatly reduced by using the proposed method. The numerical results demonstrate the feasibility, effectiveness, and robustness of the proposed approach for voltage security assessment for a wind power plant.
Keywords: phasor measurement; power system security; power system stability; wind power plants; PMU-aided voltage security assessment; Thevenin equivalent matrix; dynamic voltage stability analysis; electric power systems; phasor measurement units; Phasor measurement units; Power system; fault disturbance recorder; phasor measurement unit; voltage security; wind power plant
(ID#: 16-10220)


D. Adrianto and F. J. Lin, “Analysis of Security Protocols and Corresponding Cipher Suites in ETSI M2M Standards,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 777-782. doi: 10.1109/WF-IoT.2015.7389152
Abstract: ETSI, as a standards body in the telecommunication industry, has defined a comprehensive set of common security mechanisms to protect IoT/M2M systems: Service Bootstrapping, Service Connection, and mId Security. For each mechanism, there are several protocols to choose from. However, the standards do not describe under what conditions a particular protocol is the best choice. In this paper we analyze which protocol is the most suitable for the use case where an IoT/M2M application generates a large amount of data in a short period of time. The criteria used include the efficiency, cost, and effectiveness of the protocol. Our analysis is based on actual measurements of an ETSI standard-compliant prototype.
Keywords: Internet of Things; cryptographic protocols; telecommunication industry; telecommunication security; ETSI M2M standards; ETSI standard-compliant prototype; Internet-of-things; IoT system; cipher suites; common security mechanisms; machine-to-machine communication; security protocol analysis; service bootstrapping; service connection; Authentication; Cryptography; Logic gates; Probes; Protocols; Servers; Machine-to-Machine Communication; The Internet of Things; security protocols (ID#: 16-10221)


Y. Jitsui and A. Kajiwara, “Home Security Monitoring Based Stepped-FM UWB,” 2016 International Workshop on Antenna Technology (iWAT), Cocoa Beach, FL, 2016, pp. 189-191. doi: 10.1109/IWAT.2016.7434839
Abstract: This paper presents the effectiveness of a stepped-FM UWB home security sensor. UWB sensors have attracted considerable attention because they can be expected to detect a human body anywhere in a home, not just in a single room. A few schemes have been suggested to detect an intruder in a home or room. However, it is important to detect an intruder before the break-in occurs. This paper suggests a UWB sensor that can detect not only an intruder but also a stranger about to intrude into a house. It can also estimate the intrusion port (window). The measurements were conducted under five scenarios using our fabricated sensor system installed inside a typical four-room apartment house.
Keywords: security; sensors; ultra wideband technology; UWB sensor; fabricated sensor system; four-room apartment house; home security monitoring; intrusion port; stepped-FM UWB; stepped FM UWB home security sensor; Antenna measurements; Monitoring; Routing; Security; Sensor systems; Trajectory; Transmitting antennas; Ultra-wideband; monitoring; sensor; stepped-fm (ID#: 16-10222)


R. Kastner, W. Hu and A. Althoff, “Quantifying Hardware Security Using Joint Information Flow Analysis,” 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2016, pp. 1523-1528. doi: (not provided)
Abstract: Existing hardware design methodologies provide limited methods to detect security flaws or derive a measure of how well a mitigation technique protects the system. Information flow analysis provides a powerful method to test and verify a design against security properties that are typically expressed using the notion of noninterference. While this is useful in many scenarios, it does have drawbacks primarily related to its strict enforcement of limiting all information flows, even those that could only occur in rare circumstances. Quantitative metrics based upon information-theoretic measures provide an approach to loosen such restrictions. Furthermore, they are useful in understanding the effectiveness of security mitigation techniques. In this work, we discuss information flow analysis using noninterference and quantitative metrics. We describe how to use them in a synergistic manner to perform joint information flow analysis, and we use this novel technique to analyze security properties across several different hardware cryptographic cores.
Keywords: Control systems; Design methodology; Hardware; Logic gates; Measurement; Mutual information; Security (ID#: 16-10223)


M. S. Salah, A. Maizate and M. Ouzzif, “Security Approaches Based on Elliptic Curve Cryptography in Wireless Sensor Networks,” 2015 27th International Conference on Microelectronics (ICM), Casablanca, 2015, pp. 35-38. doi: 10.1109/ICM.2015.7437981
Abstract: Wireless sensor networks are ubiquitous in monitoring applications, medical control, environmental control, and military activities. In fact, a wireless sensor network consists of a set of communicating nodes distributed over an area in order to measure a given magnitude, or to receive and transmit data independently to a base station that is connected to the user via the Internet or a satellite, for example. Each node in a sensor network is an electronic device with calculation, storage, communication, and power capacities. However, attacks on wireless sensor networks can have negative impacts on critical network applications, undermining the security of these networks, so it is important to secure them in order to maintain their effectiveness. In this paper, we first study cryptographic approaches based on elliptic curves, then compare the performance of each method relative to the others.
Keywords: public key cryptography; telecommunication security; wireless sensor networks; Internet; base station; communicating nodes; critical network applications; electronic device calculation capacity; electronic device communication; electronic device power; electronic device storage; elliptic curve cryptography; environmental control; magnitude measurement; medical control; military activities; monitoring applications; security approaches; security minimization; ubiquitous network; wireless sensor network; Elliptic curve cryptography; Energy consumption; Irrigation; Jamming; Monitoring; Terrorism; AVL; CECKM; ECC; RECC-C; RECC-D; Security; WSN (ID#: 16-10224)
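As background for the curve-based approaches compared above, the following is a toy implementation of elliptic-curve point addition and scalar multiplication over a small prime field. The tiny curve parameters are purely illustrative; a real WSN deployment would use a standardized curve and constant-time arithmetic.

```python
# Toy elliptic-curve arithmetic on y^2 = x^3 + A*x + B (mod P).
# P, A, B below define a tiny illustrative curve, not a production one.
P, A, B = 97, 2, 3

def ec_add(p1, p2):
    """Add two curve points; None represents the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # inverse points
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P) % P         # chord slope
    x3 = (s * s - x1 - x2) % P
    return (x3, (s * (x1 - x3) - y1) % P)

def ec_mul(k, point):
    """Double-and-add scalar multiplication, the core ECC operation."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result
```

For example, (3, 6) lies on this curve, and repeated addition of it generates the cyclic subgroup that a key-exchange scheme would work in.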


A. Kargarian, Yong Fu and Zuyi Li, “Distributed Security-Constrained Unit Commitment for Large-Scale Power Systems,” 2015 IEEE Power & Energy Society General Meeting, Denver, CO, 2015, pp. 1-1. doi: 10.1109/PESGM.2015.7286540
Abstract: Summary form only given. Independent system operators (ISOs) of electricity markets solve the security-constrained unit commitment (SCUC) problem to plan a secure and economic generation schedule. However, as the size of power systems increases, the current centralized SCUC algorithm could face critical challenges ranging from modeling accuracy to calculation complexity. This paper presents a distributed SCUC (D-SCUC) algorithm to accelerate the generation scheduling of large-scale power systems. In this algorithm, a power system is decomposed into several scalable zones which are interconnected through tie lines. Each zone solves its own SCUC problem and a parallel calculation method is proposed to coordinate individual D-SCUC problems. Several power systems are studied to show the effectiveness of the proposed algorithm.
Keywords: distributed algorithms; power generation economics; power markets; power system security; scheduling; D-SCUC problems; distributed SCUC algorithm; distributed security-constrained unit commitment; economic generation schedule; independent system operators; large-scale power systems; parallel calculation method; security-constrained unit commitment problem; Computers; Distance measurement; Economics; Electricity supply industry; Face; Power systems; Schedules (ID#: 16-10225)


M. Cayford, “Measures of Success: Developing a Method for Evaluating the Effectiveness of Surveillance Technology,” Intelligence and Security Informatics Conference (EISIC), 2015 European, Manchester, 2015, pp. 187-187. doi: 10.1109/EISIC.2015.33
Abstract: This paper presents a method for evaluating the effectiveness of surveillance technology in intelligence work. The method contains measures against which surveillance technology would be assessed to determine its effectiveness. Further research, based on interviews of experts, will inform the final version of this method, including a weighting system.
Keywords: surveillance; terrorism; Sproles method; counterterrorism; surveillance technology; Current measurement; Interviews; Privacy; Security; Standards; Surveillance; Weight measurement; effectiveness; intelligence; measures; method; technology
(ID#: 16-10226)


S. Schinagl, K. Schoon and R. Paans, “A Framework for Designing a Security Operations Centre (SOC),” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 2253-2262. doi: 10.1109/HICSS.2015.270
Abstract: Owning a SOC is an important status symbol for many organizations. Although the concept of a 'SOC' can be considered hype, only a few of them are actually effective in counteracting cybercrime and IT abuse. A literature review reveals that there is no standard framework available and no clear scope or vision for SOCs. Most papers describe specific implementations, often with a commercial purpose. Our research focused on identifying and defining the generic building blocks of a SOC in order to draft a design framework. In addition, a measurement method has been developed to assess the effectiveness of the protection provided by a SOC.
Keywords: computer crime; IT abuse; SOC; Security Operations Centre design; cybercrime; measurement method; Conferences; Monitoring; Organizations; Security; Standards organizations; System-on-chip; IT Abuse; Intelligence; Value; baseline security; continuous monitoring; damage control; forensic; framework; model; monitoring; pentest; secure service development; sharing knowledge (ID#: 16-10227)


Y. Wu, T. Wang and J. Li, “Effectiveness Analysis of Encrypted and Unencrypted Bit Sequence Identification Based on Randomness Test,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1588-1591. doi: 10.1109/IMCCC.2015.337
Abstract: Identification of encrypted and unencrypted bit sequences has great significance for network management. Compared with unencrypted bit sequences, encrypted bit sequences are more random. Randomness tests are used to evaluate the security of cipher algorithms; whether they can also be used to distinguish encrypted from unencrypted bit sequences requires further research. We first introduce the principle of randomness tests. According to the input size limit of each test in the SP800-22 rev1a standard, we select the frequency test, the frequency test within a block, the runs test, the longest-run-of-ones-in-a-block test, and the cumulative sums test to identify encrypted and unencrypted bit sequences. At the same time, we analyze the preconditions under which the selected tests can successfully identify encrypted and unencrypted bit sequences, and present the relevant conclusions. Finally, the effectiveness of the selected tests is verified by experiments.
Keywords: cryptography; SP800-22 rev1a standard; block test; cipher algorithms; cumulative sums test; effectiveness analysis; frequency test; network management; randomness test; security evaluation; unencrypted bit sequence identification; Ciphers; Encryption; Probability; Protocols; Standards; bit sequences; cipher algorithm; cumulative sums; encryption (ID#: 16-10228)
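The first test the authors select, the SP800-22 frequency (monobit) test, is simple enough to sketch. The statistic below follows the standard's published formula; the 0.01 significance level matches the SP800-22 default, while the pass/fail tuple is our own convenience convention.

```python
from math import erfc, sqrt

def monobit_frequency_test(bits, alpha=0.01):
    """SP800-22 frequency (monobit) test.

    Maps each bit to +1/-1, sums, and converts the normalized sum to a
    p-value via the complementary error function. Returns (p_value,
    looks_random), where looks_random is True when p_value >= alpha.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    p_value = erfc(abs(s) / sqrt(2 * n))
    return p_value, p_value >= alpha
```

A well-encrypted stream should pass (balanced ones and zeros), while strongly biased plaintext encodings tend to fail, which is the discrimination effect the paper investigates.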


X. Yuan, P. Tu and Y. Qi, “Sensor Bias Jump Detection and Estimation with Range Only Measurements,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 1658-1663. doi: 10.1109/ICInfA.2015.7279552
Abstract: A target can be positioned by wireless communication sensors. In practical systems, range-based sensors may produce biased measurements. The biases are mostly constant, but they may jump abruptly in some special scenarios. An online bias change detection and estimation algorithm is presented in this paper. The algorithm detects the jump in bias using a Chi-Square test and then estimates it with a Modified Augmented Extended Kalman Filter. The feasibility and effectiveness of the proposed algorithms are illustrated by simulations, in comparison with the Augmented Extended Kalman Filter.
Keywords: Kalman filters; estimation theory; nonlinear filters; sensors; Chi-Square test; modified augmented extended Kalman filter; online bias change detection algorithm; online bias change estimation algorithm; range only measurement; sensor bias jump detection; sensor bias jump estimation; wireless communication sensors; Change detection algorithms; Estimation; Noise; Position measurement; Wireless communication; Bias estimation; Jump of bias; Wireless positioning systems; range-only measurements (ID#: 16-10229)
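The paper's detector applies a Chi-Square test to filter innovations. A minimal scalar sketch of that detection step (the function name and threshold are illustrative, not taken from the paper):

```python
def detect_bias_jumps(residuals, sigma, threshold=6.63):
    """Flag time steps whose normalized innovation squared exceeds a
    chi-square threshold (6.63 is the 1% critical value for 1 DOF)."""
    return [t for t, r in enumerate(residuals) if (r / sigma) ** 2 > threshold]

# Residuals near zero are consistent with a constant bias already tracked by
# the filter; a sudden run of large residuals signals a bias jump.
print(detect_bias_jumps([0.1, -0.3, 0.2, 5.0, 4.8], sigma=1.0))  # [3, 4]
```

In the paper, the flagged jump is then re-estimated by augmenting the Kalman filter state; only the detection step is shown here.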


M. M. Hasan and H. T. Mouftah, “Encryption as a Service for Smart Grid Advanced Metering Infrastructure,” 2015 IEEE Symposium on Computers and Communication (ISCC), Larnaca, 2015, pp. 216-221. doi: 10.1109/ISCC.2015.7405519
Abstract: Smart grid advanced metering infrastructure (AMI) bridges consumers, utilities, and the market. Its operation relies on large-scale communication networks. At the lowest level, information is acquired by smart meters and sensors. At the highest level, information is stored and processed by smart grid control centers for various purposes. The AMI conveys a large amount of sensitive information, and preventing unauthorized access to this information is a major concern for smart grid operators. Encryption is the primary security measure for preventing unauthorized access, but it incurs various overheads and deployment costs. In recent times, the security as a service (SECaaS) model has introduced a number of cloud-based security solutions, such as encryption as a service (EaaS), promising the speed and cost-effectiveness of cloud computing. In this paper, we propose a framework named encryption service for smart grid AMI (ES4AM). The ES4AM framework focuses on lightweight encryption of in-flight AMI data. We also study the feasibility of the framework using relevant simulation results.
Keywords: cloud computing; cryptography; power engineering computing; power markets; power system control; power system measurement; sensors; smart power grids; telecommunication security; ES4AM; EaaS; SECaaS model; communication networks; encryption as a service; number cloud-based security solutions; primary security measure; security as a service; smart grid AMI; smart grid advanced metering infrastructure; smart grid control centers; smart grid operators; unauthorized access; Cloud computing; Encryption; Public key; Servers; Smart grids; encryption; managed security; smart grid (ID#: 16-10230)


Z. Hu, Y. Wang, X. Tian, X. Yang, D. Meng and R. Fan, “False Data Injection Attacks Identification for Smart Grids,” Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), 2015 Third International Conference on, Beirut, 2015, pp. 139-143. doi: 10.1109/TAEECE.2015.7113615
Abstract: False Data Injection Attacks (FDIA) in the smart grid are considered the most threatening cyber-physical attack. Exploiting the variety of measurement categories in a power system, a new method for false data detection and identification is presented. The main emphasis of our research is the use of an equivalent measurement transformation, instead of traditional weighted least squares state estimation, in the state estimation process, with false data identified by residual search. In this paper, an FDIA attack case on the IEEE 14-bus system is designed in MATLAB to test the effectiveness of the algorithm. Using this method, the false data can be dealt with effectively.
Keywords: IEEE standards; power system security; security of data; smart power grids; FDIA; IEEE 14 bus system; SE; cyberphysical attack threatening; equivalent measurement transformation; false data injection attack identification; power system; residual researching method; smart grid; Current measurement; Pollution measurement; Power measurement; Power systems; State estimation; Transmission line measurements; Weight measurement; false data detection and identification; false data injection attacks; smart grid (ID#: 16-10231)


A. K. Al-Khamis and A. A. Khalafallah, “Secure Internet on Google Chrome: Client Side Anti-Tabnabbing Extension,” Anti-Cybercrime (ICACC), 2015 First International Conference on, Riyadh, 2015, pp. 1-4. doi: 10.1109/Anti-Cybercrime.2015.7351942
Abstract: Electronic transactions rank at the top of our daily transactions. The Internet has become invaluable for government, business, and personal use. This has occurred alongside a great increase in online attacks, particularly the development of new forms of known attacks, such as tabnabbing. Thus, users' confidentiality and personal information must be protected using information security. Tabnabbing is a new form of phishing: to steal credentials, the attacker needs nothing but the user's preoccupation with other work and the exploitation of human memory weakness. The impact of this malicious attempt begins with identity theft and ends with financial loss. This has encouraged some security specialists and researchers to tackle the tabnabbing attack, but their studies are still in their infancy and not sufficient. The work done here focuses on developing an effective anti-tabnabbing extension for the Google Chrome browser to protect Internet users from becoming victims as well as to raise their awareness. The system developed is significant due to its effectiveness in detecting a tabnabbing attack and its combination of two well-known approaches used to combat online attacks. The success of the system was examined by performance measurements such as the confusion matrix and ROC; the system produces promising results.
Keywords: Internet; security of data; Google Chrome browser; Internet users; ROC; client side anti-tabnabbing extension; confusion matrix; electronic transactions; financial loss; human memory weakness; information security; online attacks; personal information; phishing; secure Internet; security specialists; synchronization; tabnabbing attack; Browsers; Business; HTML; Matrix converters; Security; Uniform resource locators; Browser security; Detection; Google Extension; Phishing; Social engineering; Tabnabbing attack; Usable security (ID#: 16-10232)


N. Soule et al., “Quantifying & Minimizing Attack Surfaces Containing Moving Target Defenses,” Resilience Week (RWS), 2015, Philadelphia, PA, 2015, pp. 1-6. doi: 10.1109/RWEEK.2015.7287449
Abstract: The cyber security exposure of resilient systems is frequently described as an attack surface. A larger surface area indicates increased exposure to threats and a higher risk of compromise. Ad-hoc addition of dynamic proactive defenses to distributed systems may inadvertently increase the attack surface. This can lead to cyber friendly fire, a condition in which adding superfluous or incorrectly configured cyber defenses unintentionally reduces security and harms mission effectiveness. Examples of cyber friendly fire include defenses which themselves expose vulnerabilities (e.g., through an unsecured admin tool), unknown interaction effects between existing and new defenses causing brittleness or unavailability, and new defenses which may provide security benefits, but cause a significant performance impact leading to mission failure through timeliness violations. This paper describes a prototype service capability for creating semantic models of attack surfaces and using those models to (1) automatically quantify and compare cost and security metrics across multiple surfaces, covering both system and defense aspects, and (2) automatically identify opportunities for minimizing attack surfaces, e.g., by removing interactions that are not required for successful mission execution.
Keywords: security of data; attack surface minimization; cyber friendly fire; cyber security exposure; dynamic proactive defenses; moving target defenses; resilient systems; timeliness violations; Analytical models; Computational modeling; IP networks; Measurement; Minimization; Security; Surface treatment; cyber security analysis; modeling; threat assessment (ID#: 16-10233)


K. A. Torkura, F. Cheng and C. Meinel, “A Proposed Framework for Proactive Vulnerability Assessments in Cloud Deployments,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 51-57. doi: 10.1109/ICITST.2015.7412055
Abstract: Vulnerability scanners are deployed in computer networks and software to identify security flaws and misconfigurations in a timely manner. However, cloud computing has introduced new attack vectors that require a commensurate change in vulnerability assessment strategies. To investigate the effectiveness of these scanners in cloud environments, we first conduct a quantitative security assessment of OpenStack's vulnerability lifecycle and discover severe risk levels resulting from prolonged patch release duration. More specifically, there are long time lags between OpenStack patch releases and patch inclusion in vulnerability scanning engines. This scenario leaves sufficient time for malicious actions and the creation of exploits such as zero-days. Mitigating these concerns requires systems with current knowledge of events within the vulnerability lifecycle. However, current vulnerability scanners are designed to depend on information about publicly announced vulnerabilities, which mostly includes only vulnerability disclosure dates. Accordingly, we propose a framework that mitigates these risks by gathering and correlating information from several security information sources, including exploit databases, malware signature repositories, and bug tracking systems. The information is then used to automatically generate plugins armed with current information about zero-day exploits and unknown vulnerabilities. We have characterized two new security metrics to describe the discovered risks.
Keywords: cloud computing; invasive software; OpenStack vulnerability lifecycle; attack vector; bug tracking system; cloud deployment; exploit databases; malware signature repositories; proactive vulnerability assessment; security flaws; vulnerability scanner; Cloud computing; Databases; Engines; Measurement; Security; Cloud security; cloud vulnerabilities; security metrics; vulnerability lifecycle; vulnerability signature; zero-days (ID#: 16-10234)


S. K. Rao, D. Krishnankutty, R. Robucci, N. Banerjee and C. Patel, “Post-Layout Estimation of Side-Channel Power Supply Signatures,” Hardware Oriented Security and Trust (HOST), 2015 IEEE International Symposium on, Washington, DC, 2015, pp. 92-95. doi: 10.1109/HST.2015.7140244
Abstract: Two major security challenges for integrated circuits (IC) that involve encryption cores are side-channel based attacks and malicious hardware insertions (trojans). Side-channel attacks predominantly use power supply measurements to exploit the correlation of power consumption with the underlying logic operations on an IC. Practical attacks have been demonstrated using power supply traces and either plaintext or cipher-text collected during encryption operations. Also, several techniques that detect trojans rely on detecting anomalies in the power supply in combination with other circuit parameters. Counter-measures against these side-channel attacks as well as detection schemes for hardware trojans are required and rely on accurate pre-fabrication power consumption predictions. However, available state-of-the-art techniques would require prohibitive full-chip SPICE simulations. In this work, we present an optimized technique to accurately estimate the power supply signatures that require significantly less computational resources, thus enabling integration of Design-for-Security (DfS) based paradigms. To demonstrate the effectiveness of our technique, we present data for a DES crypto-system that proves that our framework can identify vulnerabilities to Differential Power Analysis (DPA) attacks. Our framework can be generically applied to other crypto-systems and can handle larger IC designs without loss of accuracy.
Keywords: cryptography; estimation theory; integrated circuit layout; logic testing; power consumption; power supply circuits; security; DES cryptosystem; DPA; DfS; IC; SPICE simulation; anomaly detection; cipher-text; design-for-security; differential power analysis; encryption core; hardware trojan; integrated circuit; logic operation; malicious hardware insertion; plaintext; post-layout estimation; power consumption correlation; power supply measurement; power supply tracing; practical attack; prefabrication power consumption prediction; side-channel based attack; side-channel power supply signature estimation; Correlation; Hardware; Integrated circuits; Power supplies; SPICE; Security; Transient analysis; Hardware Security; Power Supply analysis; Side-channel attacks; Trojan Detection (ID#: 16-10235)


S. Abraham and S. Nair, “Exploitability Analysis Using Predictive Cybersecurity Framework,” Cybernetics (CYBCONF), 2015 IEEE 2nd International Conference on, Gdynia, 2015, pp. 317-323. doi: 10.1109/CYBConf.2015.7175953
Abstract: Managing security is a complex process, and existing research in the field of cybersecurity metrics provides limited insight into the impact attacks have on the overall security goals of an enterprise. We need a new generation of metrics that enable enterprises to react even faster in order to properly protect mission-critical systems in the midst of both undiscovered and disclosed vulnerabilities. In this paper, we propose a practical and predictive security model for exploitability analysis in a networking environment using stochastic modeling. Our model is built upon the trusted CVSS Exploitability framework, and we analyze how the atomic attributes that make up the exploitability score, namely Access Complexity, Access Vector, and Authentication, evolve over a specific time period. We formally define a nonhomogeneous Markov model which incorporates time-dependent covariates, namely the vulnerability age and the vulnerability discovery rate. The daily transition-probability matrices in our study are estimated using a combination of Frei's model and Alhazmi and Malaiya's logistic model. An exploitability analysis is conducted to show the feasibility and effectiveness of our proposed approach. Our approach enables enterprises to apply analytics using a predictive cyber security model to improve decision making and reduce risk.
Keywords: Markov processes; authorisation; decision making; risk management; access complexity; access vector; authentication; daily transition-probability matrices; exploitability analysis; nonhomogeneous Markov model; predictive cybersecurity framework; risk reduction; trusted CVSS exploitability framework; vulnerability age; vulnerability discovery rate; Analytical models; Computer security; Measurement; Predictive models; Attack Graph; CVSS; Markov Model; Security Metrics; Vulnerability Discovery Model; Vulnerability Lifecycle Model (ID#: 16-10236)
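The nonhomogeneous model propagates a state distribution through time-dependent daily transition matrices. The sketch below illustrates only the mechanics, using a hypothetical two-state (not-exploited / exploited) chain with made-up probabilities, not the paper's estimated matrices:

```python
def propagate(dist, daily_matrices):
    """Nonhomogeneous Markov chain: dist_{t+1} = dist_t * P_t, where the
    transition matrix P_t changes each day (e.g. with vulnerability age)."""
    for P in daily_matrices:
        dist = [sum(dist[i] * P[i][j] for i in range(len(dist)))
                for j in range(len(P[0]))]
    return dist

# Day 1: 10% chance of exploitation; day 2: 20%. "Exploited" is absorbing.
P1 = [[0.9, 0.1], [0.0, 1.0]]
P2 = [[0.8, 0.2], [0.0, 1.0]]
print(propagate([1.0, 0.0], [P1, P2]))  # approximately [0.72, 0.28]
```

In the paper's setting, the per-day matrices would come from the estimated vulnerability age and discovery rate rather than fixed constants.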


C. Callegari, S. Giordano and M. Pagano, “Histogram Cloning and CuSum: An Experimental Comparison Between Different Approaches to Anomaly Detection,” Performance Evaluation of Computer and Telecommunication Systems (SPECTS), 2015 International Symposium on, Chicago, IL, 2015, pp. 1-7. doi: 10.1109/SPECTS.2015.7285294
Abstract: Due to the proliferation of new threats from spammers, attackers, and criminal enterprises, anomaly-based Intrusion Detection Systems have emerged as a key element in network security, and different statistical approaches have been considered in the literature. To cope with scalability issues, random aggregation through the use of sketches appears to be a powerful prefiltering stage that can be applied to backbone data traffic. In this paper we compare two different statistical methods for detecting the presence of anomalies in such aggregated data. In more detail, histogram cloning (with different distance measurements) and the CuSum algorithm (at the bucket level) are tested over a well-known publicly available data set. The performance analysis presented in this paper demonstrates the effectiveness of the CuSum when a proper definition of the algorithm, which takes into account the standard deviation of the underlying variables, is chosen.
Keywords: computer network security; data analysis; statistical analysis; CuSum algorithm; aggregated data anomalies; anomaly based intrusion detection systems; backbone data traffic; bucket level; cumulative sum control chart statistics; histogram cloning; network security; scalability issues; statistical methods; Aggregates; Algorithm design and analysis; Cloning; Histograms; Mathematical model; Monitoring; Standards; Anomaly Detection; CUSUM; Histogram Cloning; Network Security; Statistical Traffic Analysis (ID#: 16-10237)
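The one-sided CuSum the authors favor can be sketched in a few lines. The slack k and threshold h below are conventional illustrative values, not the paper's exact parameterization:

```python
def cusum(values, mean, sigma, k=0.5, h=5.0):
    """One-sided CuSum over standardized deviations: accumulate
    (x - mean)/sigma - k, clip at zero, alarm when the sum exceeds h."""
    s, alarms = 0.0, []
    for t, x in enumerate(values):
        s = max(0.0, s + (x - mean) / sigma - k)
        if s > h:
            alarms.append(t)
            s = 0.0  # restart after an alarm
    return alarms

# Five baseline samples, then a sustained shift: the alarm fires on the burst.
print(cusum([10] * 5 + [14] * 3, mean=10.0, sigma=1.0))  # [6]
```

Dividing by sigma is exactly the point the paper stresses: a CuSum defined without the standard deviation of the underlying variables performs noticeably worse.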


K. Xiong and X. Chen, “Ensuring Cloud Service Guarantees via Service Level Agreement (SLA)-Based Resource Allocation,” 2015 IEEE 35th International Conference on Distributed Computing Systems Workshops, Columbus, OH, 2015, pp. 35-41. doi: 10.1109/ICDCSW.2015.18
Abstract: This paper studies the problem of resource management and placement for high-performance clouds. It is concerned with the three most important performance metrics, namely response time, throughput, and utilization, as Quality of Service (QoS) metrics defined in a Service Level Agreement (SLA). We propose SLA-based approaches for resource management in clouds. Specifically, we first quantify the metrics of trustworthiness, a percentile of response time, and availability. This paper then formulates cloud resource management as a nonlinear optimization problem subject to SLA requirements. Finally, we give a solution of this nonlinear optimization problem and demonstrate the effectiveness of the proposed solutions through illustrative examples.
Keywords: cloud computing; contracts; nonlinear programming; resource allocation; SLA-based approaches; SLA-based resource allocation; cloud service guarantees; nonlinear optimization problem; quality of service metrics; resource management; service level agreement; Cloud computing; Measurement; Quality of service; Resource management; Security; Servers; Time factors; Performance; Resource Allocation; Service Level Agreement (ID#: 16-10238)
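The paper's formulation is a general nonlinear program. As a toy illustration of what one SLA percentile constraint looks like, the check below assumes an M/M/1 queueing model (my assumption for the sketch, not the paper's model):

```python
import math

def meets_response_sla(lam, mu, target_s, percentile=0.95):
    """M/M/1 sketch: response time is exponential with rate (mu - lam), so
    P(R <= target) = 1 - exp(-(mu - lam) * target). Requires lam < mu."""
    return 1.0 - math.exp(-(mu - lam) * target_s) >= percentile

# With service rate 10 req/s, a load of 7 req/s meets "95% within 1 s",
# but a load of 8 req/s does not (the probability drops to about 0.86).
print(meets_response_sla(7.0, 10.0, 1.0))  # True
print(meets_response_sla(8.0, 10.0, 1.0))  # False
```

An SLA-based allocator in this spirit would search over placements for the cheapest one where every such constraint holds.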


G. Sabaliauskaite, G. S. Ng, J. Ruths and A. P. Mathur, “Empirical Assessment of Methods to Detect Cyber Attacks on a Robot,” 2016 IEEE 17th International Symposium on High Assurance Systems Engineering (HASE), Orlando, FL, 2016, pp. 248-251. doi: 10.1109/HASE.2016.19
Abstract: An experiment was conducted using a robot to investigate the effectiveness of four methods for detecting cyber attacks and analyzing robot failures. Cyber attacks were implemented on three robots of the same make and model through their wireless control mechanisms. Analysis of the experimental data indicates differences in attack detection effectiveness across the detection methods: a method that compares sensor values at each time step to average historical values was the most effective. Further, attack detection effectiveness was the same or lower in actual robots as compared to simulation. Factors such as attack size and timing influenced attack detection effectiveness.
Keywords: security of data; telerobotics; cyber attack detection; robot failure analysis; wireless control mechanisms; Computer crashes; Data models; Robot sensing systems; Time measurement; cyber-attacks; cyber-physical systems; robots; safety; security (ID#: 16-10239)


Y. Mo and R. M. Murray, “Multi-Dimensional State Estimation in Adversarial Environment,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 4761-4766. doi: 10.1109/ChiCC.2015.7260376
Abstract: We consider the estimation of a vector state based on m measurements that can potentially be manipulated by an adversary. The attacker is assumed to have limited resources and can manipulate only up to l of the m measurements; however, it can compromise those measurements arbitrarily. The problem is formulated as a minimax optimization, where one seeks to construct an optimal estimator that minimizes the “worst-case” error against all possible manipulations by the attacker and all possible sensor noises. We show that if the system is not observable after removing 2l sensors, then the worst-case error is infinite, regardless of the estimation strategy. If the system remains observable after removing an arbitrary set of 2l sensors, we prove that the optimal state estimate can be computed by solving a semidefinite programming problem. A numerical example illustrates the effectiveness of the proposed state estimator.
Keywords: mathematical programming; minimax techniques; state estimation; adversarial environment; minimax optimization; multidimensional state estimation; semidefinite programming problem; vector state estimation; worst-case error; Indexes; Noise; Optimization; Robustness; Security; State estimation; Estimation; Security (ID#: 16-10240)
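The paper computes the optimal estimator via semidefinite programming. For a scalar state, the underlying intuition can be sketched with a median estimator, which tolerates up to l arbitrarily corrupted measurements whenever m >= 2l + 1 (an illustrative special case, not the paper's construction):

```python
import statistics

def resilient_estimate(measurements):
    """Median of m scalar measurements: an adversary rewriting up to l of
    them cannot move the estimate outside the honest values if m >= 2l + 1."""
    return statistics.median(measurements)

honest = [2.9, 3.1, 3.0, 2.8, 3.2]        # five honest sensors, true state 3.0
attacked = honest[:3] + [1000.0, -500.0]  # adversary rewrites l = 2 readings
print(resilient_estimate(attacked))  # 3.0
```

With m = 5 and l = 2, the median still lands on an honest reading no matter which two values the adversary rewrites, mirroring the paper's 2l-sensor observability condition.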


N. Antunes and M. Vieira, “On the Metrics for Benchmarking Vulnerability Detection Tools,” 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, Rio de Janeiro, 2015, pp. 505-516. doi: 10.1109/DSN.2015.30
Abstract: Research and practice show that the effectiveness of vulnerability detection tools depends on the concrete use scenario. Benchmarking can be used for selecting the most appropriate tool, helping to assess and compare alternative solutions, but its effectiveness largely depends on the adequacy of the metrics. This paper studies the problem of selecting the metrics to be used in a benchmark for software vulnerability detection tools. First, a large set of metrics is gathered and analyzed according to the characteristics of a good metric for the vulnerability detection domain. Afterwards, the metrics are analyzed in the context of specific vulnerability detection scenarios to understand their effectiveness and to select the most adequate one for each scenario. Finally, an MCDA algorithm together with experts' judgment is applied to validate the conclusions. Results show that although some traditionally used metrics, like precision and recall, are adequate in some scenarios, other scenarios require alternative metrics that are seldom used in the benchmarking area.
Keywords: invasive software; software metrics; MCDA algorithm; alternative metrics; benchmarking vulnerability detection tool; software vulnerability detection tool; Benchmark testing; Concrete; Context; Measurement; Security; Standards; Automated Tools; Benchmarking; Security Metrics; Software Vulnerabilities; Vulnerability Detection (ID#: 16-10241)


D. Evangelista, F. Mezghani, M. Nogueira and A. Santos, “Evaluation of Sybil Attack Detection Approaches in the Internet of Things Content Dissemination,” 2016 Wireless Days (WD), Toulouse, France, 2016, pp. 1-6. doi: 10.1109/WD.2016.7461513
Abstract: The Internet of Things (IoT) comprises a diversity of heterogeneous objects that collect data in order to disseminate information to applications. The IoT data dissemination service can be tampered with by several types of attackers. Among these, the Sybil attack has emerged as the most critical, since it compromises data confidentiality. Although there are approaches against the Sybil attack in several services, they disregard the presence of heterogeneous devices and rely on complex solutions. This paper presents a study highlighting the strengths and weaknesses of Sybil attack detection approaches when applied to IoT content dissemination. An evaluation of the LSD solution was made to assess its effectiveness and efficiency in an IoT network.
Keywords: Authentication; Cryptography; Feature extraction; Internet of things; Measurement; Security and privacy in the Internet of Things; Security in networks; Sybil Detection Techniques (ID#: 16-10242)


X. Yang, D. Lo, X. Xia, Y. Zhang and J. Sun, “Deep Learning for Just-in-Time Defect Prediction,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 17-26. doi: 10.1109/QRS.2015.14
Abstract: Defect prediction is a very meaningful topic, particularly at the change level. Change-level defect prediction, also referred to as just-in-time defect prediction, can not only ensure software quality in the development process but also help developers check and fix defects in time. Deep learning is currently a hot topic in the machine learning literature, but whether it can improve the performance of just-in-time defect prediction has not yet been investigated. In this paper, to bridge this research gap, we propose an approach, Deeper, which leverages deep learning techniques to predict defect-prone changes. We first build a set of expressive features from a set of initial change features by leveraging a deep belief network algorithm. Next, a machine learning classifier is built on the selected features. To evaluate the performance of our approach, we use datasets from six large open source projects, i.e., Bugzilla, Columba, JDT, Platform, Mozilla, and PostgreSQL, containing a total of 137,417 changes. We compare our approach with the approach proposed by Kamei et al. The experimental results show that, on average across the 6 projects, Deeper discovers 32.22% more bugs than Kamei et al.'s approach (51.04% versus 18.82% on average). In addition, Deeper achieves F1-scores of 0.22-0.63, which are statistically significantly higher than those of Kamei et al.'s approach on 4 out of the 6 projects.
Keywords: just-in-time; learning (artificial intelligence); pattern classification; software quality; change-level defect prediction; deep learning; just-in-time defect prediction; machine learning classifier; machine learning literature; Computer bugs; Feature extraction; Logistics; Machine learning; Measurement; Software quality; Training; Cost Effectiveness; Deep Belief Network; Deep Learning; Just-In-Time Defect Prediction (ID#: 16-10243)


H. Hemmati, “How Effective Are Code Coverage Criteria?,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 151-156. doi: 10.1109/QRS.2015.30
Abstract: Code coverage is one of the main metrics for measuring the adequacy of a test case or suite. It has been studied extensively in academia and used even more in industry. However, a test case may cover a piece of code (no matter what coverage metric is being used) yet miss its faults. In this paper, we studied several existing, standard control-flow and data-flow coverage criteria on a set of developer-written, fault-revealing test cases from several releases of five open source projects. We found that a) basic criteria such as statement coverage are very weak (detecting only 10% of the faults), b) combining several control-flow coverage criteria is better than the strongest criterion alone (28% vs. 19%), c) a basic data-flow criterion can detect many otherwise undetected faults (79% of the faults undetected by control-flow coverage can be detected by a basic def/use pair coverage), and d) on average 15% of the faults may not be detected by any of the standard control- and data-flow coverage criteria. Classification of the undetected faults showed that they mostly relate to specification (missing logic).
Keywords: data flow analysis; program testing; public domain software; software quality; code coverage criteria; control-flow coverage; data flow coverage criteria; developer-written fault-revealing test cases; missing logic; open source projects; statement coverage; Arrays; Data mining; Fault diagnosis; Instruments; Java; Measurement; Testing; Code Coverage; Control Flow; Data Flow; Effectiveness; Experiment; Fault Categorization; Software Testing (ID#: 16-10244)
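A toy example (not from the paper) of the core observation that a fully statement-covered fault can still go undetected:

```python
def max_of_three(a, b, c):
    m = a
    if b > m:
        m = b
    if c > a:  # bug: should compare c against m, not a
        m = c
    return m

# max_of_three(1, 2, 3) executes every statement and still returns the
# correct answer, so a statement-adequate suite passes. The fault only
# surfaces when b > c > a: max_of_three(1, 5, 2) returns 2 instead of 5.
print(max_of_three(1, 2, 3))  # 3 (full statement coverage, fault hidden)
print(max_of_three(1, 5, 2))  # 2 (wrong; the expected answer is 5)
```

This is the kind of fault the study finds basic control-flow criteria frequently miss, while stronger data-flow criteria catch many of them.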


S. Sanders and J. Kaur, “Can Web Pages Be Classified Using Anonymized TCP/IP Headers?,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 2272-2280. doi: 10.1109/INFOCOM.2015.7218614
Abstract: Web page classification is useful in many domains, including ad targeting, traffic modeling, and intrusion detection. In this paper, we investigate whether learning-based techniques can be used to classify web pages based only on anonymized TCP/IP headers of the traffic generated when a web page is visited. We do this in three steps. First, we select informative TCP/IP features for a given downloaded web page, and study which of these remain stable over time and are also consistent across client browser platforms. Second, we use the selected features to evaluate four different labeling schemes and learning-based classification methods for web page classification. Lastly, we empirically study the effectiveness of the classification methods for real-world applications.
Keywords: Web sites; online front-ends; security of data; telecommunication traffic; transport protocols; TCP/IP header; Web page classification; ad targeting; client browser platforms; intrusion detection; labeling schemes; learning-based classification methods; learning-based techniques; traffic modeling; Browsers; Feature extraction; IP networks; Labeling; Navigation; Streaming media; Web pages; Traffic Classification; Web Page Measurement (ID#: 16-10245)


X. Luo, J. Li, Z. Jiang and X. Guan, “Complete Observation Against Attack Vulnerability for Cyber-Physical Systems with Application to Power Grids,” 2015 5th International Conference on Electric Utility Deregulation and Restructuring and Power Technologies (DRPT), Changsha, 2015, pp. 962-967. doi: 10.1109/DRPT.2015.7432368
Abstract: This paper presents a novel framework based on system observability to address the structural vulnerability of cyber-physical systems (CPSs) under attack, with application to power grids. A power-measurement-point addition method is applied to detect the angle and voltage of a bus by adding detection points between two bus lines in the power grid. Generator dynamic equations are then built to analyze the rotor angles and angular velocities of generators, and the system is simplified using Phasor Measurement Units (PMUs) to observe the status of the generators. According to the impact of a series of attacks on the grid, we use grid measurement detection and state estimation to observe the status of the grid, enabling monitoring of the entire grid. It is then shown that the structural vulnerability against attacks can be addressed by combining the above observations. Finally, simulations demonstrate the effectiveness of the proposed method, showing that some attacks can be effectively monitored to improve CPS security.
Keywords: electric generators; phasor measurement; power grids; power system protection; rotors; CPS security; PMU; attack vulnerability; cyber-physical systems; generator dynamic equations; grid measurements detection; power management units; power measurement point; state estimation; structural vulnerability; system observability; Angular velocity; Generators; Monitoring; Phasor measurement units; Power grids; Power measurement; Rotors; CPS; observation; structural vulnerability; undetectable attack (ID#: 16-10246)


L. M. Putranto, R. Hara, H. Kita and E. Tanaka, “Risk-Based Voltage Stability Monitoring and Preventive Control Using Wide Area Monitoring System,” PowerTech, 2015 IEEE Eindhoven, Eindhoven, 2015, pp. 1-6. doi: 10.1109/PTC.2015.7232547
Abstract: Nowadays, power systems tend to be operated under heavily stressed load, which can cause voltage stability problems. Moreover, the occurrence probability of contingencies is increasing due to the growth of power system size and complexity. This paper proposes a new preventive control scheme based on voltage stability and security monitoring by means of wide area monitoring systems (WAMS). The proposed control scheme ensures voltage stability under major N-1 line contingencies, which are selected from all possible N-1 contingencies by considering their occurrence probability and/or the load curtailment they cause. Several cases based on the IEEE 57-bus test system are used to demonstrate the effectiveness of the proposed method. The demonstration results show that the proposed method can make an important contribution to improving voltage stability and security performance.
Keywords: IEEE standards; load regulation; power system control; power system economics; power system measurement; power system security; power system stability; probability; IEEE 57-bus test system; N-1 line contingency probability; WAMS; load curtailment; power system complexity; power system operation; preventive control scheme; risk-based voltage stability monitoring; security monitoring; wide area monitoring system; Fuels; Generators; Indexes; Power system stability; Security; Stability criteria; Voltage control; Economics Load Dispatch; Load Shedding; Multistage Preventive Control; Optimum Power Flow; Voltage Stability Improvement (ID#: 16-10247)


P. Zhonghua, H. Fangyuan, Z. Yuguo and S. Dehui, “False Data Injection Attacks for Output Tracking Control Systems,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 6747-6752. doi: 10.1109/ChiCC.2015.7260704
Abstract: Cyber-physical systems (CPSs) have been gaining popularity, with high potential in widespread applications, and the security of CPSs has become a pressing problem. In this paper, an output tracking control (OTC) method is designed for discrete-time linear time-invariant Gaussian systems. The output tracking error is regarded as an additional state; a Kalman-filter-based incremental state observer and an LQG-based augmented state feedback control strategy are designed, and a Euclidean-distance-based detector is used for detecting false data injection attacks. Stealthy false data attacks, which can completely disrupt the normal operation of the OTC system without being detected, are injected into the sensor measurements and control commands, respectively. Three kinds of numerical examples are employed to illustrate the effectiveness of the designed false data injection attacks.
Keywords: Gaussian processes; Kalman filters; discrete time systems; linear systems; observers; security of data; sensors; state feedback; CPS security; Euclidean-based detector; Kalman filter-based incremental state observer; LQG-based augmented state feedback control strategy; OTC method; OTC systems; cyber-physical systems; discrete-time linear time-invariant Gaussian systems; false data injection attacks; output track control method; output tracking control systems; output tracking error; sensor measurements; Detectors; Robot sensing systems; Security; State estimation; State feedback; Cyber-physical systems; Kalman filter; false data injection attacks; output tracking control (ID#: 16-10248)
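The residual-based evasion idea the abstract describes can be sketched in a few lines. The scalar, noise-free model below is hypothetical (the paper uses an LQG/Kalman-filter OTC system); it only shows how a slowly ramping injection keeps the innovation below a fixed detection threshold while steadily biasing the estimate.

```python
def run(steps=100, K=0.5, delta=0.05, threshold=0.2):
    """Simulate a stealthy ramp injection against a scalar estimator.

    The detector flags an attack when |innovation| > threshold; the
    attacker adds delta*k to each measurement, small enough per step
    that the innovation settles near delta/K, safely below threshold.
    """
    truth = 0.0              # constant true state, noise-free for clarity
    x_hat = 0.0              # estimator state
    max_residual = 0.0
    for k in range(1, steps + 1):
        y = truth + delta * k          # attacker injects a slow ramp
        residual = y - x_hat           # innovation checked by the detector
        max_residual = max(max_residual, abs(residual))
        x_hat = x_hat + K * residual   # measurement update
    return max_residual, abs(x_hat - truth)

max_res, drift = run()
# Attack stays under the 0.2 detection threshold yet drags the estimate ~5 away.
print(max_res < 0.2, drift > 4.0)
```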


A. Basak, F. Zhang and S. Bhunia, “PiRA: IC Authentication Utilizing Intrinsic Variations in Pin Resistance,” Test Conference (ITC), 2015 IEEE International, Anaheim, CA, 2015, pp. 1-8. doi: 10.1109/TEST.2015.7342388
Abstract: The rapidly rising incidence of counterfeit Integrated Circuits (ICs), including cloning attacks, poses a significant threat to the semiconductor industry. Conventional functional/structural testing is mostly ineffective at identifying different forms of cloned ICs. On the other hand, existing design for security (DfS) measures are often not attractive due to additional design effort, hardware overhead, and test cost. In this paper, we propose a novel robust IC authentication approach, referred to as PiRA, to validate the integrity of ICs in the presence of cloning attacks. It exploits intrinsic random variations in pin resistances across ICs to create unique chip-specific signatures for authentication. Pin resistance is defined as the resistance looking into or out of the pin under set parameters and biasing conditions, measured by standard tests for IC defect/performance analysis such as input leakage, protection diode, and output load current tests. A major advantage of PiRA over existing methodologies is that it incurs virtually zero design effort and overhead. Furthermore, unlike most authentication approaches, it works for all chip types, including analog/mixed-signal ICs, and can be applied to legacy designs. Theoretical analysis as well as experimental measurements with common digital and analog ICs verify the effectiveness of PiRA.
Keywords: authorisation; integrated circuit testing; mixed analogue-digital integrated circuits; semiconductor diodes; IC defect-performance analysis; PiRA; analog-mixed-signal IC; biasing conditions; chip-specific signatures; cloning attacks; counterfeit integrated circuits; design for security; functional-structural testing; input leakage intrinsic random variations; intrinsic variations; output load current tests; pin resistance; protection diode; robust IC authentication approach; semiconductor industry; set parameters; Authentication; Cloning; Current measurement; Electrical resistance measurement; Integrated circuits; Resistance; Semiconductor device measurement (ID#: 16-10249)
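A hedged sketch of the signature-matching step such a scheme implies (the resistance values, tolerance, and helper names below are invented for illustration; the paper's measurements come from standard pin tests such as input leakage and protection-diode tests):

```python
import math

def distance(sig_a, sig_b):
    """Euclidean distance between two pin-resistance vectors (ohms)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

def authenticate(measured, enrolled, tolerance=50.0):
    """Return the chip id whose enrolled signature uniquely matches, else None."""
    matches = [cid for cid, sig in enrolled.items()
               if distance(measured, sig) <= tolerance]
    return matches[0] if len(matches) == 1 else None

# Enrollment database: one resistance vector per genuine chip.
enrolled = {
    "chip-A": [1210.0, 980.5, 1495.2, 1102.8],
    "chip-B": [1189.3, 1011.7, 1452.0, 1130.4],
}

print(authenticate([1212.1, 979.9, 1493.0, 1101.5], enrolled))  # re-measured chip-A
print(authenticate([900.0, 900.0, 900.0, 900.0], enrolled))     # likely a clone
```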


M. Shiozaki, T. Kubota, T. Nakai, A. Takeuchi, T. Nishimura and T. Fujino, “Tamper-Resistant Authentication System with Side-Channel Attack Resistant AES and PUF Using MDR-ROM,” 2015 IEEE International Symposium on Circuits and Systems (ISCAS), Lisbon, 2015, pp. 1462-1465. doi: 10.1109/ISCAS.2015.7168920
Abstract: As threats to security devices, side-channel attacks (SCAs) and invasive attacks have been identified over the last decade. An SCA reveals a secret key on a cryptographic circuit by measuring power consumption or electromagnetic radiation during cryptographic operations. We have proposed the MDR-ROM scheme as a low-power, small-area countermeasure against SCAs. Meanwhile, secret data in nonvolatile memory can be analyzed by invasive attacks, and the cryptographic device counterfeited and cloned by an adversary. We proposed combining the MDR-ROM scheme with the Physical Unclonable Function (PUF) technique, which is expected to serve as a countermeasure against counterfeiting, and a prototype chip was fabricated in a 180nm CMOS technology. In addition, a keyless-entry demonstration system was produced in order to present the effectiveness of the SCA resistance and the PUF technique. Our experiments confirmed that this demonstration system achieved sufficient tamper resistance.
Keywords: CMOS integrated circuits; cryptography; random-access storage; read-only storage; 180nm CMOS technology; AES; MDR-ROM scheme; PUF; SCA; cryptographic circuit; cryptographic operations; electromagnetic radiation measurement; invasive attacks; low-power counter-measure; nonvolatile memory; physical unclonable function technique; power consumption measurement; secret key; security devices; side-channel attack resistant; small-area counter-measure; tamper-resistant authentication system; Authentication; Correlation; Cryptography; Large scale integration; Power measurement; Read only memory; Resistance; IO-masked dual-rail ROM (MDR-ROM); Side channel attacks (SCA); physical unclonable function (PUF); tamper-resistant authentication system (ID#: 16-10250)


S. Alsemairi and M. Younis, “Clustering-Based Mitigation of Anonymity Attacks in Wireless Sensor Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417501
Abstract: The use of wireless sensor networks (WSNs) can be advantageous in applications that operate in hostile environments, such as security surveillance and the military battlefield. The operation of a WSN typically involves collection of sensor measurements at an in-situ Base-Station (BS) that further processes the data and either takes action or reports findings to a remote command center. Thus the BS plays a vital role and is usually guarded by concealing its identity and location. However, the BS can be susceptible to traffic analysis attack. Given the limited communication range of the individual sensors and the objective of conserving their energy supply, the sensor readings are forwarded to the BS over multi-hop paths. Such a routing topology allows an adversary to correlate intercepted transmissions, even without being able to decode them, and apply attack models such as Evidence Theory (ET) in order to determine the position of the BS. This paper proposes a technique to counter such an attack by reshaping the routing topology. Basically, the nodes in a WSN are grouped into unevenly-sized clusters, and each cluster has a designated aggregation node (cluster head). Inter-cluster-head routes are then formed so that the BS experiences low traffic volume and does not become distinguishable among the WSN nodes. The simulation results confirm the effectiveness of the proposed technique in boosting the anonymity of the BS.
Keywords: military communication; telecommunication network routing; telecommunication traffic; wireless sensor networks; WSN nodes; anonymity attacks; clustering-based mitigation; evidence theory; in-situ base-station; military battlefield; security surveillance; Measurement; Optimized production technology; Receivers; Routing; Security; Topology; Wireless sensor networks (ID#: 16-10251)


Wei Li, Weiyi Qian and Mingqiang Yin, “Portfolio Selection Models in Uncertain Environment,” Fuzzy Systems and Knowledge Discovery (FSKD), 2015 12th International Conference on, Zhangjiajie, 2015, pp. 471-475. doi: 10.1109/FSKD.2015.7381988
Abstract: For portfolio selection (PS) problems, it is difficult for security returns to be reflected by previous data. To overcome this, we treat security returns as uncertain variables. In this paper, two portfolio selection models are presented in an uncertain environment. In order to express divergence, the cross-entropy of uncertain variables is introduced into these mathematical models. In both models, we use the expected value to express the investment return, while the variance or semivariance, respectively, expresses the risk. The mathematical models are solved by the gravitational search algorithm proposed by E. Rashedi. We apply the proposed models to two examples to exhibit their effectiveness and correctness.
Keywords: entropy; investment; search problems; gravitation search algorithm; investment return; mathematical models; portfolio selection models; uncertain environment; uncertain variables cross-entropy; Force; Investment; Mathematical model; Measurement uncertainty; Portfolios; Security; Uncertainty; cross-entropy; gravitation search algorithm; portfolio selection problem; uncertain measure (ID#: 16-10252)
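The expected-value/variance trade-off the two models share can be shown with plain sample statistics (the paper instead works with uncertain variables and solves the models with Rashedi's gravitational search algorithm; the returns, covariances, and variance cap below are invented):

```python
def portfolio_stats(weights, returns, cov):
    """Expected return and variance of a weighted portfolio."""
    mean = sum(w * r for w, r in zip(weights, returns))
    var = sum(wi * wj * cov[i][j]
              for i, wi in enumerate(weights)
              for j, wj in enumerate(weights))
    return mean, var

returns = [0.08, 0.12]                 # hypothetical expected security returns
cov = [[0.01, 0.002], [0.002, 0.04]]   # hypothetical covariance matrix

# Among candidate allocations, keep those under a risk cap and
# maximize expected return -- the core objective the models formalize.
candidates = [(0.5, 0.5), (0.2, 0.8), (0.8, 0.2)]
best = max((w for w in candidates
            if portfolio_stats(w, returns, cov)[1] <= 0.015),
           key=lambda w: portfolio_stats(w, returns, cov)[0])
print(best)
```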


J. R. Ward and M. Younis, “A Cross-Layer Defense Scheme for Countering Traffic Analysis Attacks in Wireless Sensor Networks,” Military Communications Conference, MILCOM 2015 - 2015 IEEE, Tampa, FL, 2015, pp. 972-977. doi: 10.1109/MILCOM.2015.7357571
Abstract: In most Wireless Sensor Network (WSN) applications the sensors forward their readings to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary's attack. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to locate the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Published anonymity-boosting techniques mainly focus on a single layer of the communication protocol stack and assume that changes in the protocol operation will not be detectable. In fact, existing single-layer techniques may not be able to protect the network if the adversary could guess what anonymity measure is being applied by identifying which layer is being exploited. In this paper we propose combining physical-layer and network-layer techniques to boost the network resilience to anonymity attacks. Our cross-layer approach avoids the shortcomings of the individual single-layer schemes and allows a WSN to effectively mask its behavior and simultaneously misdirect the adversary's attention away from the BS's location. We confirm the effectiveness of our cross-layer anti-traffic analysis measure using simulation.
Keywords: cryptographic protocols; telecommunication security; telecommunication traffic; wireless sensor networks; WSN; anonymity-boosting techniques; authentication; base station; central sink; communication protocol; cross-layer defense scheme; encryption; network-layer techniques; physical-layer techniques; single-layer techniques; traffic analysis attacks; traffic analysis techniques; Array signal processing; Computer security; Measurement; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy
(ID#: 16-10253)


C. Moreno, S. Kauffman and S. Fischmeister, “Efficient Program Tracing and Monitoring Through Power Consumption — with a Little Help from the Compiler,” 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2016, pp. 1556-1561. doi: (not included).
Abstract: Ensuring correctness and enforcing security are growing concerns given the complexity of modern connected devices and safety-critical systems. A promising approach is non-intrusive runtime monitoring through reconstruction of program execution traces from power consumption measurements. This can be used for verification, validation, debugging, and security purposes. In this paper, we propose a framework for increasing the effectiveness of power-based program tracing techniques. These systems determine the most likely block of source code that produced an observed power trace (CPU power consumption as a function of time). Our framework maximizes distinguishability between power traces for different code blocks. To this end, we provide a special compiler optimization stage that reorders intermediate representation (IR) and determines the reorderings that lead to power traces with highest distances between each other, thus reducing the probability of misclassification. Our work includes an experimental evaluation, using LLVM for an ARM architecture. Experimental results confirm the effectiveness of our technique.
Keywords: optimisation; power consumption; probability; program compilers; program diagnostics; safety-critical software; IR; compiler optimization stage; distinguishability maximization; intermediate representation; misclassification probability; power consumption measurement; program compiler; program execution trace reconstruction; program monitoring; program tracing; safety-critical system; Electronic mail; Monitoring; Optimization; Power demand; Power measurement; Security; Training (ID#: 16-10254)
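The classification step that the compiler pass optimizes for can be sketched as a nearest-template match (the reference traces below are made-up samples; the paper's contribution is reordering the IR so that such reference traces end up far apart):

```python
def nearest_block(observed, references):
    """Return the name of the code block whose reference power trace
    is nearest to the observed trace in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(references, key=lambda name: dist(observed, references[name]))

# Hypothetical per-block reference traces (CPU power samples over time).
references = {
    "block_A": [1.0, 1.2, 0.9, 1.1],
    "block_B": [0.4, 0.5, 0.6, 0.5],
}
observed = [0.95, 1.15, 1.0, 1.05]   # noisy measurement of one execution
print(nearest_block(observed, references))
```

The larger the gap between reference traces, the more measurement noise this matching step tolerates, which is why the framework maximizes inter-trace distance.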


L. Zhang, D. Chen, Y. Cao and X. Zhao, “A Practical Method to Determine Achievable Rates for Secure Steganography,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1274-1281. doi: 10.1109/HPCC-CSS-ICESS.2015.62
Abstract: With a chosen steganographic method and a cover image, the steganographer always hesitates about how many bits should be embedded. Although there has been work on theoretical capacity analysis, it remains difficult to apply in practice. In this paper, we propose a practical method to determine the appropriate hiding rate for a cover image with the purpose of evading possible statistical detection. The core of this method is a non-linear regression, which is used to learn the mapping between the detection rate and the estimated rate with respect to a specific steganographic method. In order to deal with images with different visual contents, multiple regression functions are trained based on image groups with different texture complexity levels. To demonstrate the effectiveness of the proposed method, estimators are constructed for selected steganographic algorithms in both the spatial and JPEG transform domains.
Keywords: image watermarking; regression analysis; steganography; transforms; JPEG transform domain; multiple regression function; nonlinear regression method; secure steganography; specific steganographic method; statistical detection; texture complexity level; theoretical capacity analysis; Complexity theory; Entropy; Measurement; Payloads; Security; Transform coding; Yttrium; capacity analysis; estimated rate; non-linear regression (ID#: 16-10255)
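The rate-selection idea amounts to inverting a learned detection-vs-rate curve. The sketch below substitutes a table lookup for the paper's non-linear regression, with invented calibration numbers for one steganographic method and texture class:

```python
def achievable_rate(calibration, max_detection=0.55):
    """Largest embedding rate whose calibrated detection rate stays at or
    below max_detection. calibration: (rate_bpp, detection_rate) pairs."""
    best = 0.0
    for rate, detection in calibration:
        if detection <= max_detection:
            best = max(best, rate)
    return best

# Hypothetical calibration: as the embedding rate grows, a steganalysis
# detector moves away from random guessing (0.5) toward reliable detection.
calibration = [(0.05, 0.51), (0.10, 0.53), (0.20, 0.60), (0.40, 0.80)]
print(achievable_rate(calibration))  # largest rate still near random guessing
```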


N. J. Ahuja and I. Singh, “Innovative Road Map for Leveraging ICT Enabled Tools for Energy Efficiency — from Awareness to Adoption,” Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 702-707. doi: 10.1109/ICACCE.2015.45
Abstract: Educating users about energy efficiency measures at the grass-roots level, from awareness to adoption, is the need of the hour and a very significant step towards energy security. The present work proposes a project-oriented roadmap for the same. The approach begins with a pre-survey of energy users to understand their awareness level and current energy consumption patterns, and to ascertain their proposed adoption level of innovative energy efficiency measures. It also assesses their interest in different IT tools and mechanisms, including their interface design preferences. Material custom-tailored to the needs of the users is proposed to be delivered through the identified IT methods. A post-survey conducted after an active IT intervention period is intended to bring out the variation from the pre-survey. Finally, the use of analytical tools in the concluding phase adjudges the interventions' effectiveness in terms of awareness generation, technology adoption level, change in energy consumption patterns, and energy savings.
Keywords: energy conservation; energy consumption; power aware computing; power engineering computing; user interfaces; ICT enabled tool; energy consumption pattern; energy efficiency; energy security; innovative road map; interface design preference; project-oriented approach; Current measurement; Energy consumption; Energy efficiency; Energy measurement; Mobile applications; Portals; Training; Computer Based Training; Energy Efficiency; ICT adoption; Mobile applications; Web-based Applications (ID#: 16-10256)


M. J. F. Alenazi and J. P. G. Sterbenz, “Comprehensive Comparison and Accuracy of Graph Metrics in Predicting Network Resilience,” Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, Kansas City, MO, 2015, pp. 157-164. doi: 10.1109/DRCN.2015.7149007
Abstract: Graph robustness metrics have largely been used to study the behavior of communication networks in the presence of targeted attacks and random failures. Several researchers have proposed new graph metrics to better predict network resilience and survivability against such attacks. Most of these metrics have been compared to a few established graph metrics for evaluating the effectiveness of measuring network resilience. In this paper, we perform a comprehensive comparison of the most commonly used graph robustness metrics. First, we show how each metric is determined and calculate its values for baseline graphs. Using several types of random graphs, we study the accuracy of each robustness metric in predicting network resilience against centrality-based attacks. The results show three conclusions. First, our path diversity metric has the highest accuracy in predicting network resilience for structured baseline graphs. Second, the variance of node-betweenness centrality mostly has the best accuracy in predicting network resilience for Waxman random graphs. Third, path diversity, network criticality, and effective graph resistance have high accuracy in measuring network resilience for Gabriel graphs.
Keywords: graph theory; telecommunication network reliability; telecommunication security; Gabriel graphs; Waxman random graphs; baseline graphs; centrality-based attacks; communication network behavior; comprehensive comparison; effective graph resistance; graph robustness metrics accuracy; network criticality; network resilience measurement; network resilience prediction; node-betweenness centrality variance; path diversity metric; random failures; survivability prediction; targeted attacks; Accuracy; Communication networks; Joining processes; Measurement; Resilience; Robustness; Connectivity evaluation; Fault tolerance; Graph robustness; Graph spectra; Network design; Network resilience; Network science; Reliability; Survivability (ID#: 16-10257)
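One of the compared metrics, effective graph resistance, is straightforward to reproduce from the Laplacian spectrum: R = n · Σ 1/λᵢ over the nonzero eigenvalues, with smaller values indicating a better-connected, more attack-resilient graph. A toy check on two 4-node graphs (not the paper's Waxman or Gabriel random graphs):

```python
import numpy as np

def effective_graph_resistance(adj):
    """R = n * sum(1/lambda_i) over nonzero Laplacian eigenvalues."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian
    eig = np.linalg.eigvalsh(L)
    nonzero = eig[eig > 1e-9]               # drop the zero eigenvalue(s)
    return A.shape[0] * np.sum(1.0 / nonzero)

path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]       # path P4
complete = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]   # complete K4

# The sparse path graph has much higher resistance (worse robustness)
# than the complete graph on the same nodes.
print(effective_graph_resistance(path) > effective_graph_resistance(complete))
```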


J. Wang, M. Zhao, Q. Zeng, D. Wu and P. Liu, “Risk Assessment of Buffer ‘Heartbleed’ Over-Read Vulnerabilities,” 2015 45th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, Rio de Janeiro, 2015, pp. 555-562. doi: 10.1109/DSN.2015.59
Abstract: Buffer over-read vulnerabilities (e.g., Heartbleed) can lead to serious information leakage and monetary loss. Most previous approaches focus on buffer overflow (i.e., over-write) and are either infeasible (e.g., canary) or impractical (e.g., bounds checking) in dealing with over-read vulnerabilities. As an emerging type of vulnerability, buffer over-read calls for in-depth understanding: the vulnerability, the security risk, and the defense methods. This paper presents a systematic methodology to evaluate the potential risks of unknown buffer over-read vulnerabilities. Specifically, we model buffer over-read vulnerabilities and focus on quantifying how much information can potentially be leaked. We perform risk assessment using the RUBiS benchmark, which is an auction site prototype modeled after. We evaluate the effectiveness and performance of a few mitigation techniques and conduct a quantitative risk measurement study. We find that even simple techniques can achieve a significant reduction in information leakage against over-read with reasonable performance penalty. We summarize the experience learned from the study, hoping to facilitate further studies on the over-read vulnerability.
Keywords: Internet; risk management; security of data; Heartbleed; buffer over-read vulnerabilities; defense method; information leakage; monetary lost; risk assessment; security risk; vulnerability method; Benchmark testing; Entropy; Heart rate variability; Measurement; Memory management; Payloads; Risk management (ID#: 16-10258)
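The quantification idea — measuring how much information an over-read can expose — can be approximated by the Shannon entropy of the bytes beyond the buffer boundary. The buffers and the zero-fill-style mitigation below are invented for illustration, not the paper's methodology:

```python
import math
from collections import Counter

def entropy_bits_per_byte(data):
    """Shannon entropy of a byte string, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Adjacent memory an over-read might expose, versus the same region
# after a zero-fill style mitigation wipes it.
secret_region = b"s3cr3t-api-key-0192"
mitigated_region = b"\x00" * len(secret_region)

print(entropy_bits_per_byte(secret_region) > 2.0)    # high-entropy leak
print(entropy_bits_per_byte(mitigated_region) == 0.0)  # nothing left to leak
```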


K. Z. Ye, E. M. Portnov, L. G. Gagarina and K. Z. Lin, “Method for Increasing Reliability for Transmission State of Power Equipment Energy,” 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Orlando, FL, 2015, pp. 433-437. doi: 10.1109/GlobalSIP.2015.7418232
Abstract: In this paper, the problems of transmitting trustworthy monitoring and control signals through the communication channels of sophisticated telemechanics systems using IEC 60870-5-101 (104) are discussed. A mathematically justified discrepancy between the concepts of “information veracity” and “information protection from noise in a communication channel” is shown. Principles of combined encoding that ensure a high level of veracity for energy-supply systems are proposed. The paper also presents a methodology for estimating the veracity of information signals in telemechanics systems and the results of experimental studies of the proposed encoding principles' effectiveness.
Keywords: IEC standards; encoding; power apparatus; power system measurement; power transmission control; power transmission reliability; protocols; security of data; IEC 60870-5-101 (104); combined encoding; communication channels; control signal transmission; energy supply; information protection; information signal veracity; monitoring signal transmission; power equipment energy transmission state reliability; telemechanics systems; Communication channels; Distortion; Distortion measurement; Encoding; IEC Standards; Information processing; Probability; biimpulse conditionally correlational code; communication channel; information veracity; protocol IEC 608705-101 (104); reliability; telemechanics system (ID#: 16-10259)


I. Kiss, B. Genge, P. Haller and G. Sebestyén, “A Framework for Testing Stealthy Attacks in Energy Grids,” Intelligent Computer Communication and Processing (ICCP), 2015 IEEE International Conference on, Cluj-Napoca, 2015, pp. 553-560. doi: 10.1109/ICCP.2015.7312718
Abstract: The progressive integration of traditional Information and Communication Technologies (ICT) hardware and software into the supervisory control of modern Power Grids (PG) has given birth to a unique technological ecosystem. Modern ICT handles a wide variety of advantageous services in PG, but in turn exposes PG to significant cyber threats. To ensure security, PG use various anomaly detection modules to detect the malicious effects of cyber attacks. In many reported cases, newly emerging targeted cyber-physical attacks can remain stealthy even in the presence of anomaly detection systems. In this paper we present a framework for elaborating stealthy attacks against the critical infrastructure of power grids. Using the proposed framework, experts can verify the effectiveness of the applied anomaly detection systems (ADS) in either real or simulated environments. The novelty of the technique lies in the fact that the developed “smart” power grid cyber attack (SPGCA) first reveals the devices that can be compromised while causing only a limited effect observed by ADS and PG operators. Compromising low-impact devices first drives the PG into a more sensitive and near-unstable state, which leads to high damage when the attacker finally compromises high-impact devices, e.g., breaking high-demand power lines to cause a blackout. The presented technique should be used to strengthen the deployment of ADS and to define various security zones to defend PG against such intelligent cyber attacks. Experimental results based on the IEEE 14-bus electricity grid model demonstrate the effectiveness of the framework.
Keywords: computer network security; power engineering computing; power system control; power system reliability; power system simulation; smart power grids; ADS; ICT hardware; IEEE 14-bus electricity grid model; PG operators; SPGCA; anomaly detection modules; anomaly detection systems; cyber threats; cyber-physical attacks; energy grids; information and communication technologies; intelligent cyber attacks; power grids; power lines; smart power grid cyber attack; stealthy attacks; supervisory control; Actuators; Phasor measurement units; Power grids; Process control; Sensors; Voltage measurement; Yttrium; Anomaly Detection; Control Variable; Cyber Attack; Impact Assessment; Observed Variable; Power Grid (ID#: 16-10260)


X. Lu, S. Wang, W. Li, P. Jiang and C. Zhang, “Development of a WSN Based Real Time Energy Monitoring Platform for Industrial Applications,” Computer Supported Cooperative Work in Design (CSCWD), 2015 IEEE 19th International Conference on, Calabria, 2015, pp. 337-342. doi: 10.1109/CSCWD.2015.7230982
Abstract: In recent years, significantly increasing pressures from both energy prices and the scarcity of energy resources have dramatically raised sustainability awareness in the industrial sector, where effective energy-efficient process planning and scheduling are urgently demanded. In response to this trend, the development of a low-cost, high-accuracy, flexible, and distributed real-time energy monitoring platform is imperative. This paper presents the design, implementation, and testing of a remote energy monitoring system to support energy-efficient sustainable manufacturing in an industrial workshop, based on a hierarchical network architecture that integrates WSNs and Internet communication into a knowledge and information services platform. In order to verify the feasibility and effectiveness of the proposed system, it has been implemented on a real shop floor and evaluated with various production processes. The assessment results showed that the proposed system has practical significance in discovering energy relationships between various manufacturing processes, which can be used to support machining scheme selection, energy-saving discovery, and energy quota allocation on a shop floor.
Keywords: Internet; energy conservation; information services; machining; manufacturing processes; power engineering computing; power system measurement; pricing; sustainable development; wireless sensor networks; Internet communication; WSN based real time energy monitoring platform; energy efficient process planning; energy efficient process scheduling; energy price; energy quota allocation; energy resource scarcity; energy saving discovery; industrial applications; information services platform; machining scheme selection; manufacturing process; sustainability awareness; wireless sensor network; Communication system security; Electric variables measurement; Manufacturing; Monitoring; Planning; Wireless communication; Wireless sensor networks; Cloud service; Wireless sensor network; energy monitoring; sustainable manufacturing (ID#: 16-10261)


P. H. Yang and S. M. Yen, “Memory Attestation of Wireless Sensor Nodes by Trusted Local Agents,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 82-89. doi: 10.1109/Trustcom.2015.360
Abstract: Wireless Sensor Networks (WSNs) have been deployed for a wide variety of commercial, scientific, and military applications for the purposes of surveillance and critical data collection. Malicious code injection is a serious threat to sensor nodes, enabling fake data delivery or private data disclosure. Memory attestation, used to verify the integrity of a device's firmware, is a potential solution to this threat, and low-cost software-based schemes are particularly suitable for protecting resource-constrained sensor nodes. Unfortunately, software-based attestation usually requires additional mechanisms to provide reliable protection when the sensor nodes communicate with the verifier over multiple hops. Alternative hardware-based attestation (e.g., TPM) guarantees a reliable integrity measurement but is impractical for WSN applications, primarily due to the high computational overhead and hardware cost. This paper proposes a lightweight hardware-based memory attestation scheme employing a simple tamper-resistant trusted local agent that is free from any cryptographic computation and is particularly suitable for sensor nodes. The experimental results show the effectiveness of the proposed scheme.
Keywords: cryptography; firmware; telecommunication network reliability; telecommunication security; wireless sensor networks; WSN; computational overhead; cryptographic computation; device firmware; fake data delivery; hardware cost; hardware-based attestation; lightweight hardware-based memory attestation; low cost software-based schemes; malicious code injection; private data disclosure; reliable integrity measurement; reliable protection; resource-constraint sensor nodes; simple tamper-resistant trusted local agent; software-based attestation; trusted local agents; wireless sensor nodes; Base stations; Clocks; Hardware; Protocols; Security; Wireless sensor networks; Attestation; malicious code; trusted platform (ID#: 16-10262)
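A simplified challenge-response flow in the spirit of a trusted-local-agent attestation scheme (the key handling, nonce size, and firmware bytes below are illustrative assumptions, not the paper's protocol, which notably avoids cryptographic computation on the agent):

```python
import hashlib
import hmac
import os

KEY = b"shared-attestation-key"   # hypothetical verifier/agent shared secret

def attest(firmware, nonce):
    """Agent side: report HMAC(key, nonce || firmware image)."""
    return hmac.new(KEY, nonce + firmware, hashlib.sha256).digest()

def verify(report, expected_firmware, nonce):
    """Verifier side: recompute over the expected image and compare."""
    return hmac.compare_digest(report, attest(expected_firmware, nonce))

genuine = b"\x90\x90\x12\x34 sensor firmware v1"
infected = genuine + b"\xde\xad malicious payload"

nonce = os.urandom(16)   # fresh per challenge, so old reports can't be replayed
print(verify(attest(genuine, nonce), genuine, nonce))    # True
print(verify(attest(infected, nonce), genuine, nonce))   # False
```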



Effectiveness and Work Factor Metrics 2015 – 2016 (Part 2)


SoS Logo

Effectiveness and Work Factor Metrics

2015 – 2016 (Part 2)


Measurement to determine the effectiveness of security systems is an essential element of the Science of Security. The work cited here was presented in 2015 and 2016.

J. R. Ward and M. Younis, “A Cross-Layer Distributed Beamforming Approach to Increase Base Station Anonymity in Wireless Sensor Networks,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-7. doi: 10.1109/GLOCOM.2015.7417430
Abstract: In most applications of wireless sensor networks (WSNs), nodes act as data sources and forward measurements to a central base station (BS) that may also perform network management tasks. The critical role of the BS makes it a target for an adversary's attack. Even if a WSN employs conventional security primitives such as encryption and authentication, an adversary can apply traffic analysis techniques to find the BS. Therefore, the BS should be kept anonymous to protect its identity, role, and location. Previous work has demonstrated distributed beamforming to be an effective technique for boosting BS anonymity in WSNs; however, implementing distributed beamforming requires significant coordination messaging that increases transmission activity and alerts the adversary to the possibility of deceptive activities. In this paper we present a novel cross-layer design that integrates the control traffic of distributed beamforming with the MAC protocol in order to boost BS anonymity while keeping node transmission at its normal rate. The advantages of our proposed approach include minimizing the overhead of anonymity measures and lowering the transmission power throughout the network, which leads to increased spectrum efficiency and reduced energy consumption. The simulation results confirm the effectiveness of our cross-layer design.
Keywords: access protocols; array signal processing; wireless sensor networks; MAC protocol; WSN; base station anonymity; central base station; cross-layer distributed beamforming approach; Array signal processing; Media Access Protocol; Schedules; Security; Synchronization; Wireless sensor networks (ID#: 16-10263)


A. Chahar, S. Yadav, I. Nigam, R. Singh and M. Vatsa, “A Leap Password Based Verification System,” Biometrics Theory, Applications and Systems (BTAS), 2015 IEEE 7th International Conference on, Arlington, VA, 2015, pp. 1-6. doi: 10.1109/BTAS.2015.7358745
Abstract: Recent developments in three-dimensional sensing devices have led to the proposal of a number of biometric modalities for non-critical scenarios. The Leap Motion device has received attention from the Vision and Biometrics communities due to its high-precision tracking. In this research, we propose Leap Password, a novel approach for biometric authentication. The Leap Password consists of a string of successive gestures performed by the user, during which physiological as well as behavioral information is captured. The Conditional Mutual Information Maximization algorithm selects the optimal feature set from the extracted information. Match-score fusion is performed to reconcile information from multiple classifiers. Experiments are performed on the Leap Password Dataset, which consists of over 1700 samples obtained from 150 subjects. An accuracy of over 81% is achieved, which shows the effectiveness of the proposed approach.
Keywords: biometrics (access control); feature selection; gesture recognition; image fusion; optimisation; security of data; 3D sensing devices; Leap Motion device; Leap Password based verification system; biometric authentication; conditional mutual information maximization algorithm; gestures; high precision tracking; match-score fusion; optimal feature set selection; Feature extraction; Performance evaluation; Physiology; Sensors; Spatial resolution; Three-dimensional displays; Time measurement (ID#: 16-10264)


J. Pang and Y. Zhang, “Event Prediction with Community Leaders,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 238-243. doi: 10.1109/ARES.2015.24
Abstract: With the emergence of online social network services, quantitative studies of social influence have become achievable. Leadership is one of the most intuitive and common forms of social influence, and understanding it could enable appealing applications such as targeted advertising and viral marketing. In this work, we focus on investigating leaders' influence on event prediction in social networks. We propose an algorithm, based on the events that users conduct, to discover leaders in social communities. Analysis of the leaders we found on a real-life social network dataset leads to several interesting observations, for example that leaders do not have a significantly higher number of friends but are more active than other community members. We demonstrate the effectiveness of leaders' influence on users' behaviors through learning tasks: given that a leader has conducted an event, whether and when a user will perform that event. Experimental results show that with only a few leaders in a community, the event predictions are consistently effective.
Keywords: social networking (online); community leaders; event prediction; leadership; online social network services; real-life social network dataset; social influence; Entropy; Measurement; Prediction algorithms; Reliability; Social network services; Testing; Training (ID#: 16-10265)


H. Pazhoumand-Dar, M. Masek and C. P. Lam, “Unsupervised Monitoring of Electrical Devices for Detecting Deviations in Daily Routines,” 2015 10th International Conference on Information, Communications and Signal Processing (ICICS), Singapore, 2015, pp. 1-6. doi: 10.1109/ICICS.2015.7459849
Abstract: This paper presents a novel approach for the automatic detection of abnormal behaviours in the daily routine of people living alone in their homes, without any manual labelling of the training dataset. The regularity and frequency of activities are monitored by estimating the status of specific electrical appliances via their power signatures, identified from the composite power signal of the house. A novel unsupervised clustering technique is presented to automatically profile the power signatures of electrical devices. A test statistic is then proposed to distinguish power signatures resulting from occupant interactions from those of self-regulated appliances such as refrigerators. Experiments on real-world data showed the effectiveness of the proposed approach in detecting the occupant's interactions with appliances as well as identifying the days on which the occupant's behaviour was outside the normal pattern.
Keywords: Monitoring; Power demand; Power measurement; Reactive power; Refrigerators; Training; abnormality detection; behaviour monitoring; power sensor; statistical measures (ID#: 16-10266)


N. Sae-Bae and N. Memon, “Quality of Online Signature Templates,” Identity, Security and Behavior Analysis (ISBA), 2015 IEEE International Conference on, Hong Kong, 2015, pp. 1-8. doi: 10.1109/ISBA.2015.7126354
Abstract: This paper proposes a metric to measure the quality of an online signature template derived from a set of enrolled signature samples in terms of its distinctiveness against random signatures. Particularly, the proposed quality score is computed based on statistical analysis of histogram features that are used as part of an online signature representation. Experiments performed on three datasets consistently confirm the effectiveness of the proposed metric as an indication of false acceptance rate against random forgeries when the system is operated at a particular decision threshold. Finally, the use of the proposed quality metric to enforce a minimum signature strength policy in order to enhance security and reliability of the system against random forgeries is demonstrated.
Keywords: counterfeit goods; digital signatures; feature extraction; random processes; statistical analysis; decision threshold; false acceptance rate; histogram features; online signature representation; online signature template quality; quality metric; quality score; random forgeries; random signatures; signature strength policy; Biometrics (access control); Forgery; Histograms; Measurement; Sociology; Standards (ID#: 16-10267)


M. Rezvani, A. Ignjatovic, E. Bertino and S. Jha, “A Collaborative Reputation System Based on Credibility Propagation in WSNs,” Parallel and Distributed Systems (ICPADS), 2015 IEEE 21st International Conference on, Melbourne, VIC, 2015, pp. 1-8. doi: 10.1109/ICPADS.2015.9
Abstract: Trust and reputation systems are widely employed in WSNs to support decision-making processes by assessing the trustworthiness of sensor nodes in a data aggregation process. However, in unattended and hostile environments, sophisticated malicious attacks, such as collusion attacks, can distort the computed trust scores, lead to low-quality or deceptive service, and undermine the aggregation results. In this paper we propose a novel, local, collaboration-based trust framework for WSNs that is based on the concept of credibility propagation, which we introduce. In our approach, the trustworthiness of a sensor node depends on the amount of credibility that the node receives from other nodes. In the process we also obtain estimates of the sensors' variances, which allow us to estimate the true value of the signal using Maximum Likelihood Estimation. Extensive experiments using both real-world and synthetic datasets demonstrate the efficiency and effectiveness of our approach.
Keywords: decision making; maximum likelihood estimation; telecommunication security; wireless sensor networks; WSN; collaborative reputation system; collaborative-based trust framework; credibility propagation; data aggregation process; reputation systems; sensor nodes; trust systems; Aggregates; Collaboration; Computer science; Maximum likelihood estimation; Robustness; Temperature measurement; Wireless sensor networks; collusion attacks; data aggregation; iterative filtering; reputation system (ID#: 16-10268)
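For independent Gaussian sensor noise, the maximum-likelihood estimate of a common signal mentioned in the abstract reduces to the classic inverse-variance weighted mean. A minimal sketch of that fusion step (the function name and toy readings are illustrative, not the paper's):

```python
def mle_fuse(readings, variances):
    """Inverse-variance weighted mean: the maximum-likelihood estimate of a
    common signal observed by sensors with independent Gaussian noise."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, readings)) / sum(weights)

# A high-variance (low-credibility) sensor is automatically down-weighted:
# the outlying reading of 25.0 barely moves the fused estimate.
print(mle_fuse([20.1, 19.9, 25.0], [0.1, 0.1, 10.0]))
```

This is also why credibility (here, a low estimated variance) matters: the weight of each node scales with how much the other nodes corroborate it.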


X. Qu, S. Kim, D. Atnafu and H. J. Kim, “Weighted Sparse Representation Using a Learned Distance Metric for Face Recognition,” Image Processing (ICIP), 2015 IEEE International Conference on, Quebec City, QC, 2015, pp. 4594-4598. doi: 10.1109/ICIP.2015.7351677
Abstract: This paper presents a novel weighted sparse representation classification for face recognition with a learned distance metric (WSRC-LDM) which learns a Mahalanobis distance to calculate the weight and code the testing face. The Mahalanobis distance is learned by using the information-theoretic metric learning (ITML) which helps to define a better weight used in WSRC. In the meantime, the learned distance metric takes advantage of the classification rule of SRC which helps the proposed method classify more accurately. Extensive experiments verify the effectiveness of the proposed method.
Keywords: face recognition; image representation; information theory; learning (artificial intelligence); ITML; Mahalanobis distance; WSRC-LDM; information-theoretic metric learning; learned distance metric; weighted sparse representation; Encoding; Face; Face recognition; Image reconstruction; Measurement; Testing; Training; Face Recognition; Metric Learning; Weighted Sparse Representation Classification (ID#: 16-10269)
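The Mahalanobis distance at the heart of WSRC-LDM is simple to compute once the metric matrix has been learned. A hedged sketch, with a hand-picked diagonal matrix standing in for the ITML-learned metric (function names and the weight form are ours, for illustration only):

```python
import math

def mahalanobis(x, y, M):
    """sqrt((x - y)^T M (x - y)) for a positive-definite metric matrix M."""
    d = [a - b for a, b in zip(x, y)]
    Md = [sum(M[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return sum(di * mdi for di, mdi in zip(d, Md)) ** 0.5

def src_weight(test_face, train_face, M):
    """Distance-based weight: nearer training samples get larger weights,
    biasing the sparse code toward visually similar faces."""
    return math.exp(-mahalanobis(test_face, train_face, M))

M = [[2.0, 0.0], [0.0, 0.5]]  # stand-in for an ITML-learned metric
print(mahalanobis([1.0, 0.0], [0.0, 0.0], M))  # sqrt(2) ~ 1.414
```

With an identity matrix M this degenerates to Euclidean distance; the learned metric is what lets the weighting reflect class structure.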


B. Niu, S. Gao, F. Li, H. Li and Z. Lu, “Protection of Location Privacy in Continuous LBSs Against Adversaries with Background Information,” 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, 2016, pp. 1-6. doi: 10.1109/ICCNC.2016.7440649
Abstract: Privacy issues in continuous Location-Based Services (LBSs) have attracted considerable attention in the literature in recent years. In this paper, we illustrate the limitations of existing work and define an entropy-based privacy metric to quantify the degree of privacy based on a set of vital observations. To tackle the privacy issues, we propose an efficient privacy-preserving scheme, DUMMY-T, which aims to protect an LBS user's privacy against adversaries with background information. With our Dummy Locations Generating (DLG) algorithm, we first generate a set of realistic dummy locations for each snapshot, taking into account the minimum cloaking region and the background information. Our Dummy Paths Constructing (DPC) algorithm then guarantees location reachability by taking the maximum distance of the moving mobile users into consideration. Security analysis and empirical evaluation results further verify the effectiveness and efficiency of our DUMMY-T.
Keywords: data protection; entropy; mobile computing; mobility management (mobile radio); telecommunication security; DLG algorithm; DPC algorithm; DUMMY-T scheme; LBS user privacy protection; adversaries; background information; continuous LBS; continuous location-based services; dummy path-constructing algorithm; dummy-location generating algorithm; empirical evaluation; entropy-based privacy metric; location privacy protection; location reachability; maximum moving-mobile user distance; minimum cloaking region; privacy degree quantification; privacy-preserving scheme; security analysis; snapshots; Entropy; Measurement; Mobile communication; Privacy; Servers; System performance; Uncertainty (ID#: 16-10270)
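The abstract does not spell out its entropy-based metric, but a standard choice in dummy-location work is Shannon entropy over the adversary's belief in the candidate locations; k indistinguishable dummies then yield the maximum of log2(k) bits. A hedged sketch under that assumption (function name ours):

```python
import math

def location_entropy(probabilities):
    """Shannon entropy (in bits) of the adversary's belief over candidate
    locations; higher entropy means the true location is better hidden."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Four indistinguishable locations reach the maximum of log2(4) = 2 bits;
# a skewed belief (one location stands out) yields less privacy.
print(location_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
print(location_entropy([0.85, 0.05, 0.05, 0.05]))
```

This is why dummies must be realistic: background knowledge that makes some candidates implausible skews the probabilities and collapses the entropy.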


F. Qin, Z. Zheng, C. Bai, Y. Qiao, Z. Zhang and C. Chen, “Cross-Project Aging Related Bug Prediction,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 43-48. doi: 10.1109/QRS.2015.17
Abstract: In a long-running system, software tends to encounter performance degradation and an increasing failure rate during execution, a phenomenon called software aging. The bugs contributing to software aging are defined as Aging Related Bugs (ARBs). Considerable manpower and economic cost can be saved if ARBs are found in the testing phase. However, due to their low occurrence probability and the difficulty of reproducing them, it is usually hard to predict ARBs within a project. In this paper, we study whether and how ARBs can be located through cross-project prediction. We propose a transfer learning based aging related bug prediction approach (TLAP), which takes advantage of transfer learning to reduce the distribution difference between training sets and testing sets while preserving their data variance. Furthermore, in order to mitigate the severe class imbalance, class imbalance learning is conducted on the transferred latent space. Finally, we employ machine learning methods to handle the bug prediction tasks. The effectiveness of our approach is validated and evaluated by experiments on two real software systems, which indicate that with TLAP the performance of ARB prediction can be dramatically improved.
Keywords: learning (artificial intelligence); program debugging; program testing; software maintenance; ARB bug prediction; TLAP; aging related bugs; class imbalance learning; cross-project aging; data variance; low presence probability; machine learning method; software aging; software execution; software failure rate; software performance degradation; software system; software testing; transfer learning based aging related bug prediction approach; Aging; Computer bugs; Learning systems; Measurement; Software; Testing; Training; aging related bug; bug prediction; ross-project; transfer learning (ID#: 16-10271)


X. Gong, X. Zhang and N. Wang, “Random-Attack Median Fortification Problems with Different Attack Probabilities,” Control Conference (CCC), 2015 34th Chinese, Hangzhou, 2015, pp. 9127-9133. doi: 10.1109/ChiCC.2015.7261083
Abstract: Critical infrastructure can be lost to random and intentional attacks. The random-attack median fortification problem has previously been formulated to minimize the expected operation cost after purely random attacks with identical facility attack probabilities. This paper discusses the protection problem for supply systems under random attacks with tendentiousness, that is, where some facilities are more attractive to attackers. The random-attack median fortification problem with different attack probabilities (RAMF-DP) is proposed and solved by calculating the service probabilities for all demand-node and facility pairs after attacking. The effectiveness of solving RAMF-DP is verified through experiments with various attack probabilities.
Keywords: cost reduction; critical infrastructures; disasters; dynamic programming; national security; probability; random processes; terrorism; RAM F-DP; critical infrastructure; demand nodes; expected operation cost minimization; facility attack probability; intentional attack; protection problem; pure random attack; random-attack median fortification problem; service probability; supply system; tendentiousness; Computational modeling; Games; Indexes; Linear programming; Mathematical model; Q measurement; Terrorism; Different attack probabilities; Median problems; Random attacks; Tendentiousness (ID#: 16-10272)


R. F. Lima and A. C. M. Pereira, “A Fraud Detection Model Based on Feature Selection and Undersampling Applied to Web Payment Systems,” 2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Singapore, 2015, pp. 219-222. doi: 10.1109/WI-IAT.2015.13
Abstract: The volume of electronic transactions has grown considerably in recent years, mainly due to the popularization of e-commerce. Along with this popularization, we have observed a significant increase in the number of fraud cases, resulting in billions of dollars in losses each year worldwide. It is therefore important to develop and apply techniques that can assist fraud detection in Web transactions. Given the large amount of data generated in electronic transactions, finding the best set of features is an essential task for identifying fraud. Fraud detection is a specific application of anomaly detection, characterized by a large imbalance between the classes (e.g., fraud or non-fraud), which can be a detrimental factor for feature selection techniques. In this work we evaluate the behavior and impact of feature selection techniques for detecting fraud in Web transactions, performing undersampling in the feature selection step. To measure the effectiveness of the feature selection approach we use state-of-the-art classification techniques to identify fraud, using real data from one of the largest electronic payment systems in Latin America. The fraud detection model thus comprises feature selection and classification techniques. To evaluate our results we use the F-Measure and Economic Efficiency metrics. Our results show that the imbalance between the classes reduces the effectiveness of feature selection and that the undersampling strategy applied in this step improves the final results. We achieve very good fraud-detection performance with the proposed methodology, reducing the number of features and yielding financial gains of up to 61% compared to the company's actual scenario.
Keywords: 1/f noise; Internet; electronic commerce; security of data; F-measure; Latin America; Web payment system; Web transactions; e-commerce; economic efficiency; electronic payment system; electronic transactions; feature selection; fraud detection model; undersampling; Computational modeling; Economics; Feature extraction; Frequency modulation; Logistics; Measurement; Yttrium; Anomaly Detection; Electronic Transactions; Feature Selection; Fraud Detection (ID#: 16-10273)
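Random undersampling of the majority class, as applied before feature selection in this work, can be sketched in a few lines (the function and toy data below are illustrative, not the authors' pipeline):

```python
import random

def undersample(X, y, seed=0):
    """Randomly discard majority-class (non-fraud) samples until both
    classes are the same size, so feature selection and the classifier
    are not swamped by legitimate transactions."""
    rng = random.Random(seed)
    pos = [i for i, label in enumerate(y) if label == 1]  # fraud
    neg = [i for i, label in enumerate(y) if label == 0]  # non-fraud
    keep = pos + rng.sample(neg, len(pos))
    rng.shuffle(keep)
    return [X[i] for i in keep], [y[i] for i in keep]

X = [[i] for i in range(100)]
y = [1] * 5 + [0] * 95            # 5% fraud, as in a skewed payment log
Xb, yb = undersample(X, y)
print(sum(yb), len(yb))           # 5 10
```

The balanced subset changes which features look discriminative, which is exactly the interaction between imbalance and feature selection the paper measures.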


I. Kiss, B. Genge and P. Haller, “Behavior-Based Critical Cyber Asset Identification in Process Control Systems Under Cyber Attacks,” Carpathian Control Conference (ICCC), 2015 16th International, Szilvasvarad, 2015, pp. 196-201. doi: 10.1109/CarpathianCC.2015.7145073
Abstract: The accelerated advancement of Process Control Systems (PCS) transformed the traditional and completely isolated systems view into a networked inter-connected “system of systems” perspective, where off-the-shelf Information and Communication Technologies (ICT) are deeply embedded into the heart of PCS. This has brought significant economic and operational benefits, but it has also provided new opportunities for malicious actors targeting critical PCS. To address these challenges, in this work we employ our previously developed Cyber Attack Impact Assessment (CAIA) technique to provide a systematic mechanism to help PCS designers and industry operators assess the impact severity of various cyber threats. Moreover, the question of why one device is more critical than others, which also motivates this work, is answered through extensive numerical results showing the significance of system dynamics in the context of closed-loop PCS. The CAIA approach is validated against the simulated Tennessee Eastman chemical process, including 41 observed variables and 12 control variables involved in cascade controller structures. The results show the application possibilities and effectiveness of CAIA for various attack scenarios.
Keywords: closed loop systems; control engineering computing; interconnected systems; process control; production engineering computing; security of data; CAIA technique; Tennessee Eastman chemical process; behavior-based critical cyber asset identification; cascade controller structures; closed-loop PCS; critical PCS; cyber attack impact assessment; cyber threats impact severity; economical benefits; malicious actors; networked interconnected system of systems; operational benefits; process control systems; systematic mechanism; systems dynamics; Chemical processes; Feeds; Hardware; Inductors; Mathematical model; Process control; Time measurement; Control Variable; Cyber Attack; Impact Assessment; Observed Variable; Process Control Systems; System Dynamics (ID#: 16-10274)


L. Behe, Z. Wheeler, C. Nelson, B. Knopp and W. M. Pottenger, “To Be or Not to Be IID: Can Zipf's Law Help?,” Technologies for Homeland Security (HST), 2015 IEEE International Symposium on, Waltham, MA, 2015, pp. 1-6. doi: 10.1109/THS.2015.7225274
Abstract: Classification is a popular problem within machine learning, and increasing the effectiveness of classification algorithms has many significant applications within industry and academia. In particular, focus will be given to Higher-Order Naive Bayes (HONB), a relational variant of the famed Naive Bayes (NB) statistical classification algorithm that has been shown to outperform Naive Bayes in many cases [1,10]. Specifically, HONB has outperformed NB on character n-gram based feature spaces when the available training data is small [2]. In this paper, a correlation is hypothesized between the performance of HONB on character n-gram feature spaces and how closely the feature space distribution follows Zipf's Law. This hypothesis stems from the overarching goal of ultimately understanding HONB and knowing when it will outperform NB. Textual datasets ranging from several thousand instances to nearly 20,000 instances, some containing microtext, were used to generate character n-gram feature spaces. HONB and NB were both used to model these datasets, using varying character n-gram sizes (2-7) and dictionary sizes up to 5000 features. The performances of HONB and NB were then compared, and the results show potential support for our hypothesis: namely, the results support the hypothesized correlation for the Accuracy and Precision metrics. Additionally, a solution is provided for an open problem which was presented in [1], giving an explicit formula for the number of SDRs from k given sets, which has connections to counting higher-order paths of arbitrary length, which are important in the learning stage of HONB.
Keywords: Bayes methods; learning (artificial intelligence); natural language processing; pattern classification; text analysis; HONB; IID; Zipf's law; accuracy metrics; character n-gram based feature spaces; character n-gram feature spaces; classification algorithms; feature space distribution; higher-order naive Bayes; independent and identically distributed; machine learning; naive Bayes statistical classification algorithm; precision metrics; textual datasets; Accuracy; Classification algorithms; Correlation; Earthquakes; Measurement; Niobium; Prediction algorithms (ID#: 16-10275)
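Measuring how closely a character n-gram feature space follows Zipf's Law amounts to fitting the log-log rank-frequency curve; a slope near -1 is roughly Zipfian. A hedged sketch of that measurement (not the paper's code):

```python
import math
from collections import Counter

def zipf_slope(text, n=2):
    """Least-squares slope of log(frequency) vs. log(rank) for character
    n-grams; a slope near -1 indicates a roughly Zipfian distribution."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    freqs = sorted(grams.values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

sample = "the quick brown fox jumps over the lazy dog " * 20
print(zipf_slope(sample))  # negative: frequency decays with rank
```

Computing this slope per dataset would let one test the hypothesized correlation against the observed HONB-vs-NB performance gap.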


Q. Yang, Rui Min, D. An, W. Yu and X. Yang, “Towards Optimal PMU Placement Against Data Integrity Attacks in Smart Grid,” 2016 Annual Conference on Information Science and Systems (CISS), Princeton, NJ, USA, 2016, pp. 54-58. doi: 10.1109/CISS.2016.7460476
Abstract: State estimation plays a critical role in the self-detection and control of the smart grid. Data integrity attacks (also known as false data injection attacks) have shown significant potential to undermine the state estimation of power systems, and corresponding countermeasures have drawn increased scholarly interest. In this paper, we consider the existing least-effort attack model, which computes the minimum number of sensors that must be compromised in order to manipulate a given number of states, and develop an effective greedy-based algorithm for optimal PMU placement that not only defends against data integrity attacks but also ensures system observability with low overhead. Experimental data obtained on IEEE standard systems demonstrates the effectiveness of the proposed defense scheme against data integrity attacks.
Keywords: Observability; Phasor measurement units; Power grids; Security; Sensors; State estimation; Data integrity attacks; defense strategy; optimal PMU placement; state estimation; system observability (ID#: 16-10276)
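The abstract does not detail the greedy step, but a common greedy formulation for PMU observability places each unit at the bus that newly observes the most buses, a PMU observing its own bus and all adjacent ones. An illustrative sketch under that assumption (toy grid and names ours):

```python
def greedy_pmu_placement(adjacency, budget):
    """Greedy placement: at each step pick the bus whose PMU would newly
    observe the most buses (its own bus plus all neighbors)."""
    observed, placed = set(), []
    for _ in range(budget):
        best = max(adjacency,
                   key=lambda b: len(({b} | adjacency[b]) - observed))
        placed.append(best)
        observed |= {best} | adjacency[best]
        if len(observed) == len(adjacency):
            break  # full observability reached early
    return placed, observed

# Toy 5-bus system: bus 2 is the hub, so it is chosen first.
grid = {1: {2}, 2: {1, 3, 4}, 3: {2, 5}, 4: {2}, 5: {3}}
pmus, seen = greedy_pmu_placement(grid, budget=2)
print(pmus, sorted(seen))
```

A defense-aware variant would additionally score candidate buses by how much they raise the attacker's least-effort cost, which is the coupling the paper exploits.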


Y. Zhauniarovich, A. Philippov, O. Gadyatskaya, B. Crispo and F. Massacci, “Towards Black Box Testing of Android Apps,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 501-510. doi: 10.1109/ARES.2015.70
Abstract: Many state-of-the-art mobile application testing frameworks (e.g., Dynodroid [1], EvoDroid [2]) rely on Emma [3] or other code-coverage libraries to measure the coverage achieved. The underlying assumption for these frameworks is the availability of the app's source code. Yet application markets and security researchers face the need to test third-party mobile applications in the absence of source code. A number of frameworks for both manual and automated test generation address this challenge. However, these frameworks often do not provide any statistics on the code coverage achieved, or provide only coarse-grained ones such as the number of activities or methods covered. At the same time, given two test reports generated by different frameworks, there is no way to tell which achieved better coverage if the reported metrics differ (or if no coverage results were provided). To address these issues we designed a framework called BBOXTESTER that is able to generate code coverage reports and produce uniform coverage metrics when testing without the source code. Security researchers can automatically execute applications using current state-of-the-art tools and use the results of our framework to assess whether the security-critical code was covered by the tests. In this paper we report on the design and implementation of BBOXTESTER and assess its efficiency and effectiveness.
Keywords: Android (operating system); mobile computing; program testing; security of data; Android apps; BBOXTESTER; automated test generation; black box testing; code coverage report generation; coverage metrics; manual test generation; security-critical code; third-party mobile application testing; Androids; Humanoid robots; Instruments; Java; Measurement; Runtime; Testing (ID#: 16-10277)


J. Armin, B. Thompson, D. Ariu, G. Giacinto, F. Roli and P. Kijewski, “2020 Cybercrime Economic Costs: No Measure No Solution,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 701-710. doi: 10.1109/ARES.2015.56
Abstract: Governments need reliable data on crime in order both to devise adequate policies and to allocate the correct revenues so that the measures are cost-effective, i.e., the money spent on prevention, detection, and handling of security incidents is balanced by a decrease in losses from offences. The analysis of the actual scenario of government actions in cyber security shows that the availability of multiple contrasting figures on the impact of cyber-attacks is holding back the adoption of policies for cyber space, as their cost-effectiveness cannot be clearly assessed. The most relevant literature on the topic is reviewed to highlight the research gaps and to determine the related future research issues that need addressing to provide solid ground for future legislative and regulatory actions at national and international levels.
Keywords: government data processing; security of data; cyber security; cyber space; cyber-attacks; cybercrime economic cost; economic costs; Computer crime; Economics; Measurement; Organizations; Reliability; Stakeholders (ID#: 16-10278)


P. Pantazopoulos and I. Stavrakakis, “Low-Cost Enhancement of the Intra-Domain Internet Robustness Against Intelligent Node Attacks,” Design of Reliable Communication Networks (DRCN), 2015 11th International Conference on the, Kansas City, MO, 2015, pp. 219-226. doi: 10.1109/DRCN.2015.7149016
Abstract: Internet vulnerability studies typically consider highly central nodes as favorable targets of intelligent (malicious) attacks. Heuristics that add redundancy in the form of k extra links to the topology are a common class of countermeasures for enhancing Internet robustness. To identify the nodes to be linked, most previous works propose very simple centrality criteria that lack a clear rationale and only occasionally address intra-domain topologies. More importantly, the implementation cost induced by adding lengthy links between nodes at remote network locations is rarely taken into account. In this paper, we explore cost-effective link additions in the locality of the targets, with the k extra links added only between their first neighbors. We introduce an innovative link-utility metric that identifies which pair of a target's neighbors aggregates the most shortest paths coming from the rest of the nodes and could therefore enhance network connectivity if linked. This metric drives the proposed heuristic, which solves the problem of assigning the link budget k to the neighbors of the targets. Employing a rich intra-domain network dataset, we first conduct a proof-of-concept study to validate the effectiveness of the metric. We then compare our approach with the heuristic that has so far been most effective, which does not bound the length of the added links. Our results suggest that the proposed enhancement closely approximates the connectivity levels of that heuristic, yet with up to eight times lower implementation cost.
Keywords: Internet; computer network security; telecommunication links; telecommunication network topology; innovative link utility metric; intelligent node attack; intradomain internet robustness low-cost enhancement; intradomain topology; network connectivity enhancement; proof-of-concept study; Communication networks;  Measurement; Network topology; Reliability engineering; Robustness; Topology (ID#: 16-10279)


M. Bradbury, M. Leeke and A. Jhumka, “A Dynamic Fake Source Algorithm for Source Location Privacy in Wireless Sensor Networks,” Trustcom/BigDataSE/ISPA, 2015 IEEE, Helsinki, 2015, pp. 531-538. doi: 10.1109/Trustcom.2015.416
Abstract: Wireless sensor networks (WSNs) are commonly used in asset monitoring applications, where it is often desirable for the location of the asset being monitored to be kept private. The source location privacy (SLP) problem involves protecting the location of a WSN source node from an attacker who is attempting to locate it. Among the most promising approaches to the SLP problem is the use of fake sources, with much existing research demonstrating their efficacy. Despite the effectiveness of the approach, the most effective algorithms providing SLP require network and situational knowledge that makes their deployment impractical in many contexts. In this paper, we develop a novel dynamic fake sources-based algorithm for SLP. We show that the algorithm provides state-of-the-art levels of location privacy under practical operational assumptions.
Keywords: data privacy; telecommunication security; wireless sensor networks; SLP problem; WSN source node; asset monitoring applications; dynamic fake source algorithm; location protection; source location privacy problem; wireless sensor networks; Context; Heuristic algorithms; Monitoring; Position measurement; Privacy; Temperature sensors; Wireless sensor networks; Dynamic; Sensor Networks; Source Location Privacy (ID#: 16-10280)


J. R. Ward and M. Younis, “Base Station Anonymity Distributed Self-Assessment in Wireless Sensor Networks,” Intelligence and Security Informatics (ISI), 2015 IEEE International Conference on, Baltimore, MD, 2015, pp. 103-108. doi: 10.1109/ISI.2015.7165947
Abstract: In recent years, Wireless Sensor Networks (WSNs) have become valuable assets to both the commercial and military communities with applications ranging from industrial control on a factory floor to reconnaissance of a hostile border. In most applications, the sensors act as data sources and forward information generated by event triggers to a central sink or base station (BS). The unique role of the BS makes it a natural target for an adversary that desires to achieve the most impactful attack possible against a WSN with the least amount of effort. Even if a WSN employs conventional security mechanisms such as encryption and authentication, an adversary may apply traffic analysis techniques to identify the BS. This motivates a significant need for improved BS anonymity to protect the identity, role, and location of the BS. Previous work has proposed anonymity-boosting techniques to improve the BS's anonymity posture, but all require some amount of overhead such as increased energy consumption, increased latency, or decreased throughput. If the BS understood its own anonymity posture, then it could evaluate whether the benefits of employing an anti-traffic analysis technique are worth the associated overhead. In this paper we propose two distributed approaches to allow a BS to assess its own anonymity and correspondingly employ anonymity-boosting techniques only when needed. Our approaches allow a WSN to increase its anonymity on demand, based on real-time measurements, and therefore conserve resources. The simulation results confirm the effectiveness of our approaches.
Keywords: security of data; wireless sensor networks; WSN; anonymity-boosting techniques; anti-traffic analysis technique; base station; base station anonymity distributed self-assessment; conventional security mechanisms; improved BS anonymity; Current measurement; Energy consumption; Entropy; Protocols; Sensors; Wireless sensor networks; anonymity; location privacy (ID#: 16-10281)


L. Ren, C. Gong, Q. Shen and H. Wang, “A Method for Health Monitoring of Power MOSFETs Based on Threshold Voltage,” Industrial Electronics and Applications (ICIEA), 2015 IEEE 10th Conference on, Auckland, 2015, pp. 1729-1734. doi: 10.1109/ICIEA.2015.7334390
Abstract: The prognostics and health management (PHM) of airborne equipment plays an important role in ensuring flight security and improving the ratio of combat readiness. The wide use of electronic equipment in aircraft is making PHM technology for power electronics devices increasingly important. Main circuit devices have been shown to have a high failure rate in power equipment. This paper investigates fault feature extraction for the power metal oxide semiconductor field effect transistor (MOSFET). First, the failure mechanisms and failure features of active power switches are analyzed, and the junction temperature is shown to be an overall parameter for the health monitoring of MOSFETs. Then, a health monitoring method based on the threshold voltage is proposed. For the buck converter, a measuring method for the threshold voltage is proposed that is simple to realize and of high precision. Finally, simulation and experimental results verify the effectiveness of the proposed measuring method.
Keywords: monitoring; power MOSFET; power electronics; active power switches; airborne equipment; buck converter; electronics equipment; failure mechanism; fault feature extraction; health monitoring; junction temperature; power electronics devices; prognostics and health management; threshold voltage; Aging; Degradation; Junctions; MOSFET; Temperature; Temperature measurement; Threshold voltage; Buck converter; The prognostics and health management (PHM); the failure mechanism (ID#: 16-10282)


T. Saito, H. Miyazaki, T. Baba, Y. Sumida and Y. Hori, “Study on Diffusion of Protection/Mitigation Against Memory Corruption Attack in Linux Distributions,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 525-530. doi: 10.1109/IMIS.2015.73
Abstract: Memory corruption attacks that exploit software vulnerabilities have become a serious problem on the Internet. Effective protection and/or mitigation technologies aimed at countering these attacks are currently provided with operating systems, compilers, and libraries. Unfortunately, the attacks continue. One reason for this state of affairs is the uneven diffusion of the latest (and thus most potent) protection and/or mitigation technologies; attackers are likely to have found ways of circumventing most well-known older versions, causing them to lose effectiveness. Therefore, in this paper, we explore the diffusion of relatively new technologies and analyze the results of a survey of Linux distributions.
Keywords: Linux; security of data; Internet; Linux distributions; memory corruption attack mitigation; memory corruption attack protection; software vulnerabilities; Buffer overflows; Geophysical measurement techniques; Ground penetrating radar; Kernel; Libraries;  Anti-thread; Buffer Overflow; Diffusion of countermeasure techniques; Memory corruption attacks (ID#: 16-10283)


X. Zhou, H. Wang and J. Zhao, “A Fault-Localization Approach Based on the Coincidental Correctness Probability,” Software Quality, Reliability and Security (QRS), 2015 IEEE International Conference on, Vancouver, BC, 2015, pp. 292-297. doi: 10.1109/QRS.2015.48
Abstract: Coverage-based fault localization is a spectrum-based technique that identifies the executing program elements that correlate with failure. However, its effectiveness suffers from coincidental correctness, which occurs when a fault is executed but no failure is detected. Coincidental correctness is prevalent and has been proven to be a safety-reducing factor for coverage-based fault-location techniques. In this paper, we propose a new fault-localization approach based on the coincidental correctness probability. We estimate the probability that coincidental correctness happens for each program execution using dynamic data-flow analysis and control-flow analysis. To evaluate our approach, we use safety and precision as evaluation metrics. Our experiment involved 62 seeded versions of C programs from SIR. We discuss the comparison results with Tarantula and two improved CBFL techniques that cleanse test suites of coincidental correctness. The results show that our approach can improve the safety and precision of the fault-localization technique to a certain degree.
Keywords: data flow analysis; probability; program testing; software fault tolerance; C programs; CBFL techniques; Tarantula; coincidental correctness probability; control-flow analysis; coverage-based fault localization; coverage-based fault location techniques; dynamic data-flow analysis; evaluation metrics; failure; fault-localization approach; precision; probability estimation; program elements; program execution; safety reducing factor; software testing; spectrum-based technique; test suites; Algorithm design and analysis; Circuit faults; Estimation; Heuristic algorithms; Lead; Measurement; Safety; coincidental correctness; fault localization (ID#: 16-10284)


P. Xu, Q. Miao, T. Liu and X. Chen, “Multi-Direction Edge Detection Operator,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 187-190. doi: 10.1109/CIS.2015.53
Abstract: Due to noise in images, the edges extracted from noisy images by traditional operators are often discontinuous and inaccurate. To solve these problems, this paper proposes a multi-direction edge detection operator to detect edges in noisy images. The new operator is designed by introducing the shear transformation into the traditional operator. On the one hand, the shear transformation provides a more favorable treatment of directions, which allows the new operator to detect edges in different directions and overcomes the directional limitation of the traditional operator. On the other hand, all the single-pixel edge images in different directions can be fused, so the edge information can complement each other. The experimental results indicate that the new operator is superior to the traditional ones in terms of the effectiveness of edge detection and the ability of noise rejection.
Keywords: edge detection; image denoising; mathematical operators; transforms; edge extraction; multidirection edge detection operator; noise rejection ability; noisy images; shear transformation; single-pixel edge images; Computed tomography; Convolution; Image edge detection; Noise measurement; Sensitivity; Standards; Wavelet transforms; false edges; matched edges; the shear transformation (ID#: 16-10285)


W. Li, B. Niu, H. Li and F. Li, “Privacy-Preserving Strategies in Service Quality Aware Location-Based Services,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7328-7334. doi: 10.1109/ICC.2015.7249497
Abstract: The popularity of Location-Based Services (LBSs) has resulted in serious privacy concerns recently. Mobile users may lose their privacy while enjoying various social activities due to untrusted LBS servers. Many Privacy Protection Mechanisms (PPMs) have been proposed in the literature employing different strategies, which come at the cost of system overhead, service quality, or both. In this paper, we design privacy-preserving strategies for both users and adversaries in service quality aware LBSs. Different from existing approaches, we first define and point out the importance of Fine-Grained Side Information (FGSI) over the existing concept of side information, and propose a Dual-Privacy Metric (DPM) and Service Quality Metric (SQM). Then, we build analytical frameworks that provide privacy-preserving strategies for mobile users and adversaries to achieve their respective goals. Finally, the evaluation results show the effectiveness of our proposed frameworks and strategies.
Keywords: data protection; mobility management (mobile radio); quality of service; DPM; FGSI; LBS; PPM; SQM; dual-privacy metric; fine-grained side information; mobile user; privacy protection mechanism; privacy-preserving strategy; service quality aware location-based service; service quality metric; Information systems; Measurement; Mobile radio mobility management; Privacy; Security; Servers (ID#: 16-10286)


S. Debroy, P. Calyam, M. Nguyen, A. Stage and V. Georgiev, “Frequency-Minimal Moving Target Defense Using Software-Defined Networking,” 2016 International Conference on Computing, Networking and Communications (ICNC), Kauai, HI, 2016, pp. 1-6. doi: 10.1109/ICCNC.2016.7440635
Abstract: With the increase of cyber attacks such as DoS, there is a need for intelligent counter-strategies to protect critical cloud-hosted applications. The challenge for the defense is to minimize the waste of cloud resources and limit loss of availability, yet have effective proactive and reactive measures that can thwart attackers. In this paper we address the defense needs by leveraging moving target defense protection within Software-Defined Networking-enabled cloud infrastructure. Our novelty is in the frequency minimization and consequent location selection of target movement across heterogeneous virtual machines based on attack probability, which in turn minimizes cloud management overheads. We evaluate the effectiveness of our scheme using a large-scale GENI testbed for a just-in-time news feed application setup. Our results show a low attack success rate and higher performance of the target application in comparison to existing static moving target defense schemes that assume homogeneous virtual machines.
Keywords: cloud computing; computer network security; software defined networking; DoS; critical cloud-hosted applications; cyber attacks; frequency-minimal moving target defense; heterogeneous virtual machines; intelligent counter-strategies; software-defined networking-enabled cloud infrastructure; Bandwidth; Cloud computing; Computer crime; Feeds; History; Loss measurement; Time-frequency analysis (ID#: 16-10287)


Y. Nakahira and Y. Mo, “Dynamic State Estimation in the Presence of Compromised Sensory Data,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 5808-5813. doi: 10.1109/CDC.2015.7403132
Abstract: In this article, we consider the state estimation problem of a linear time invariant system in adversarial environment. We assume that the process noise and measurement noise of the system are l∞ functions. The adversary compromises at most γ sensors, the set of which is unknown to the estimation algorithm, and can change their measurements arbitrarily. We first prove that if after removing a set of 2γ sensors, the system is undetectable, then there exists a destabilizing noise process and attacker's input to render the estimation error unbounded. For the case that the system remains detectable after removing an arbitrary set of 2γ sensors, we construct a resilient estimator and provide an upper bound on the l∞ norm of the estimation error. Finally, a numerical example is provided to illustrate the effectiveness of the proposed estimator design.
Keywords: invariance; linear systems; measurement errors; measurement uncertainty; state estimation; compromised sensory data; dynamic state estimation; estimation error; estimator design; l∞ functions; linear time invariant system; measurement noise; measurements arbitrarily; process noise; Estimation error; Robustness; Security; Sensor systems; State estimation (ID#: 16-10288)


M. Kargar, A. An, N. Cercone, P. Godfrey, J. Szlichta and X. Yu, “Meaningful Keyword Search in Relational Databases with Large and Complex Schema,” 2015 IEEE 31st International Conference on Data Engineering, Seoul, 2015, pp. 411-422. doi: 10.1109/ICDE.2015.7113302
Abstract: Keyword search over relational databases offers an alternative to SQL for querying and exploring databases that is effective for lay users who may not be well versed in SQL or the database schema. This becomes more pertinent for databases with large and complex schemas. An answer in this context is a join tree spanning tuples containing the query's keywords. As there are potentially many answers to the query, and the user is often only interested in seeing the top-k answers, how to rank the answers by relevance is of paramount importance. We focus on the relevance of joins as the fundamental means to rank answers. We devise means to measure the relevance of relations and foreign keys in the schema over the information content of the database. This can be done offline with no need for external models. We compare the proposed measures against a gold standard we derive from a real workload over TPC-E and evaluate the effectiveness of our methods. Finally, we test the performance of our measures against existing techniques to demonstrate a marked improvement, and perform a user study to establish the naturalness of the ranking of the answers.
Keywords: SQL; query processing; relational databases; trees (mathematics); SQL; TPC-E; answer ranking; complex schema; database querying; foreign keys; join tree spanning tuples; keyword search; large schema; query answering; relation relevance measurement; relational databases; Companies; Gold; Indexes; Keyword search; Relational databases; Security (ID#: 16-10289)


H. B. M. Shashikala, R. George and K. A. Shujaee, “Outlier Detection in Network Data Using the Betweenness Centrality,” SoutheastCon 2015, Fort Lauderdale, FL, 2015, pp. 1-5. doi: 10.1109/SECON.2015.7133008
Abstract: Outlier detection has been used to detect and, where appropriate, remove anomalous observations from data. It has important applications in the fields of fraud detection, network robustness analysis, and intrusion detection. In this paper, we propose Betweenness Centrality (BEC) as a novel measure for determining outliers in network analyses. The betweenness centrality of a vertex in a graph is a measure of the participation of the vertex in the shortest paths of the graph. Betweenness centrality is widely used in network analyses; especially in social networks, the recursive computation of the betweenness centralities of vertices is performed for community detection and for finding the influential users in the network. In this paper, we propose that this method is efficient in finding outliers in social network analyses. Furthermore, we show the effectiveness of the new method using experimental data.
Keywords: fraud; graph theory; recursive estimation; security of data; social networking (online); BEC; betweenness centrality; community detection; fraud detection; graph analysis; intrusion detection; network data; network robustness analysis; outlier detection; recursive computation; social network analyses; vertices; Atmospheric measurements; Chaos; Particle measurements; Presses; adjacency matrix; (ID#: 16-10290)
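The betweenness-centrality measure described in the abstract above can be illustrated with a small, self-contained sketch. This is not the paper's outlier-detection algorithm, only the underlying centrality computation (Brandes-style shortest-path counting), assuming an unweighted, undirected graph given as an adjacency list:

```python
from collections import deque

def betweenness(graph):
    """Betweenness centrality for a small unweighted, undirected
    graph given as {vertex: [neighbors]} (Brandes-style accumulation)."""
    nodes = list(graph)
    score = {v: 0.0 for v in nodes}
    for s in nodes:
        # BFS from s: distances, shortest-path counts, predecessors.
        dist = {s: 0}
        sigma = {v: 0 for v in nodes}   # number of shortest s->v paths
        sigma[s] = 1
        preds = {v: [] for v in nodes}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Back-propagate dependencies toward the source.
        delta = {v: 0.0 for v in nodes}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                score[w] += delta[w]
    # Each undirected pair is counted in both directions: halve.
    return {v: c / 2 for v, c in score.items()}
```

On a path graph a-b-c-d, the interior vertices b and c each lie on two shortest paths between other vertex pairs, so both score 2 while the endpoints score 0; an outlier-detection scheme like the one above would flag vertices whose centrality deviates sharply from their neighbors'.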


E. Lagunas, M. G. Amin and F. Ahmad, “Through-the-Wall Radar Imaging for Heterogeneous Walls Using Compressive Sensing,” Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), 2015 3rd International Workshop on, Pisa, 2015, pp. 94-98. doi: 10.1109/CoSeRa.2015.7330271
Abstract: Front wall reflections are considered one of the main challenges in sensing through walls using radar. This is especially true under sparse time-space or frequency-space sampling of radar returns which may be required for fast and efficient data acquisition. Unlike homogeneous walls, heterogeneous walls have frequency and space varying characteristics which violate the smooth surface assumption and cause significant residuals under commonly used wall clutter mitigation techniques. In the proposed approach, the phase shift and the amplitude of the wall reflections are estimated from the compressive measurements using a Maximum Likelihood Estimation (MLE) procedure. The estimated parameters are used to model electromagnetic (EM) wall returns, which are subsequently subtracted from the total radar returns, rendering wall-reduced and wall-free signals. Simulation results are provided, demonstrating the effectiveness of the proposed technique and showing its superiority over existing methods.
Keywords: compressed sensing; data acquisition; image sampling; maximum likelihood estimation; radar clutter; radar imaging; EM wall return; MLE procedure; compressive sensing; electromagnetic wall return; frequency-space sampling; front wall reflection; heterogeneous wall; maximum likelihood estimation procedure; sparse time-space sampling; through-the-wall radar imaging; wall clutter mitigation technique; Antenna measurements; Arrays; Maximum likelihood estimation; Radar antennas; Radar imaging (ID#: 16-10291)


Rong Jin and Kai Zeng, “Physical Layer Key Agreement Under Signal Injection Attacks,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 254-262. doi: 10.1109/CNS.2015.7346835
Abstract: Physical layer key agreement techniques derive a symmetric cryptographic key from the wireless fading channel between two wireless devices by exploiting channel randomness and reciprocity. Existing works mainly focus on the security analysis and protocol design of these techniques under passive attacks; their study under active attacks is largely open. In this paper, we present a new form of highly threatening active attack, named the signal injection attack. By injecting similar signals to both keying devices, the attacker aims at manipulating the channel measurements and compromising a portion of the key. We further propose a countermeasure to the signal injection attack, PHY-UIR (PHYsical layer key agreement with User Introduced Randomness). In PHY-UIR, both keying devices independently introduce randomness into the channel probing frames, and compose common random series by combining the randomness in the fading channel with that introduced by the users. With this solution, the composed series and injected signals become uncorrelated. Thus, the final key automatically excludes the contaminated portion related to the injected signal while preserving the portion related to the random fading channel. Moreover, the contaminated composed series at the two keying devices become decorrelated, which helps detect the attack. We analyze the security strength of PHY-UIR and conduct extensive simulations to evaluate it. Both theoretical analysis and simulations demonstrate the effectiveness of PHY-UIR. We also perform proof-of-concept experiments using software-defined radios in a real-world environment. We show that the signal injection attack is feasible in practice and leads to a strong correlation (0.75) between the injected signal and channel measurements at legitimate users for existing key generation methods.
PHY-UIR is immune to the signal injection attack and results in low correlation (0.15) between the injected signal and the composed random signals at legitimate users.
Keywords: cryptography; fading channels; telecommunication security; PHY-UIR; channel measurements; channel probing frames; channel randomness; physical layer key agreement techniques; physical layer key agreement with user introduced randomness; protocol design; random fading channel; reciprocity; security analysis; security strength; signal injection attack; signal injection attacks; symmetric cryptographic key; theoretical analysis; wireless fading channel; Clocks; Cryptography; DH-HEMTs; Niobium; Protocols; Yttrium (ID#: 16-10292)


X. Zhang, X. Yang, J. Lin and W. Yu, “On False Data Injection Attacks Against the Dynamic Microgrid Partition in the Smart Grid,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7222-7227. doi: 10.1109/ICC.2015.7249479
Abstract: To enhance the reliability and efficiency of energy service in the smart grid, the concept of the microgrid has been proposed. Nonetheless, securing the dynamic microgrid partition process is essential in the smart grid. In this paper, we address the security of the dynamic microgrid partition process and systematically investigate three false data injection attacks against it. In particular, we first discuss the dynamic microgrid partition problem based on a Connected Graph Constrained Knapsack Problem (CGKP) algorithm. We then develop a theoretical model and carry out simulations to investigate the impacts of these false data injection attacks on the effectiveness of the dynamic microgrid partition process. Our theoretical and simulation results show that the investigated false data injection attacks can disrupt the dynamic microgrid partition process and pose negative impacts on the balance of energy demand and supply within microgrids, such as an increased number of lack-nodes and increased energy loss in microgrids.
Keywords: computer network security; distributed power generation; graph theory; knapsack problems; power engineering computing; power system management; power system measurement; power system reliability; smart power grids; algorithm; connected graph constrained knapsack problem; dynamic microgrid partition process security; energy service efficiency; false data injection attacks; smart power grid reliability; Energy loss; Heuristic algorithms; Microgrids; Partitioning algorithms; Smart grids; Smart meters (ID#: 16-10293)


X. Zhao, F. Deng, H. Liang and L. Zhou, “Monitoring the Deformation of the Facade of a Building Based on Terrestrial Laser Point-Cloud,” 2015 11th International Conference on Computational Intelligence and Security (CIS), Shenzhen, 2015, pp. 183-186. doi: 10.1109/CIS.2015.52
Abstract: When terrestrial laser point-cloud data are employed to monitor the façade of a building, point-cloud data collected in different phases cannot be used directly to calculate the deformation displacement, because inhomogeneous sampling prevents data points in a homogeneous region from corresponding. To address this problem, a triangular patch is built for the earlier point-cloud data, the distance between the later point-cloud data and the earlier patch is measured in the homogeneous region, and the deformation displacement is thereby determined. On this basis, laser point-cloud monitoring analysis software is developed, and three series of experiments are designed to verify the effectiveness of the method.
Keywords: buildings (structures); condition monitoring; deformation; distance measurement; structural engineering; building façade deformation monitoring; data points; deforming displacement; distance measurement; homogeneous region; inhomogeneous sampling; laser point-cloud monitoring analysis; point-cloud data; terrestrial laser point-cloud; triangular patch; Buildings; Data models; Deformable models; Mathematical model; Monitoring; Reliability; Three-dimensional displays; building façade; deformation monitoring; point-cloud (ID#: 16-10294)


H. Alizadeh, A. Khoshrou and A. Zúquete, “Traffic Classification and Verification Using Unsupervised Learning of Gaussian Mixture Models,” Measurements & Networking (M&N), 2015 IEEE International Workshop on, Coimbra, 2015, pp. 1-6. doi: 10.1109/IWMN.2015.7322980
Abstract: This paper presents the use of unsupervised Gaussian Mixture Models (GMMs) to produce per-application models from flow statistics, to be exploited in two different scenarios: (i) traffic classification, where the goal is to classify traffic flows by application; (ii) traffic verification or traffic anomaly detection, where the aim is to confirm whether or not traffic flow generated by the claimed application conforms to its expected model. Unlike the first scenario, the second one is a new research path that has received less attention in the scope of Intrusion Detection System (IDS) research. The term “unsupervised” refers to the method's ability to select the optimal number of components automatically without the need for careful initialization. Experiments are carried out using a public dataset collected from a real network. Favorable results indicate the effectiveness of unsupervised GMMs.
Keywords: Gaussian processes; computer network security; mixture models; pattern classification; security of data; telecommunication traffic; unsupervised learning; Gaussian mixture model; IDS; intrusion detection system; traffic anomaly detection; traffic classification; traffic flow; traffic verification; unsupervised GMM; unsupervised learning; Accuracy; Feature extraction; Mixture models; Payloads; Ports (Computers); Protocols; Training (ID#: 16-10295)
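The mixture-model idea behind the abstract above can be sketched with a minimal one-dimensional, two-component EM fit in pure Python. This is only an illustration of GMM fitting, not the paper's method: the authors work with multi-dimensional flow statistics and select the component count automatically, neither of which is shown here.

```python
import math

def em_gmm_1d(data, iters=50):
    """Fit a two-component 1-D Gaussian mixture with plain EM.
    Returns (weights, means, variances)."""
    # Crude but deterministic initialization: split at the median.
    xs = sorted(data)
    mid = len(xs) // 2
    mu = [sum(xs[:mid]) / mid, sum(xs[mid:]) / (len(xs) - mid)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]

    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against variance collapse
    return w, mu, var
```

In the classification scenario, one such model would be fit per application; in the verification scenario, a flow's likelihood under the claimed application's model would be thresholded to flag anomalies.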


N. Matyunin, J. Szefer, S. Biedermann and S. Katzenbeisser, “Covert Channels Using Mobile Device's Magnetic Field Sensors,” 2016 21st Asia and South Pacific Design Automation Conference (ASP-DAC), Macau, 2016, pp. 525-532. doi: 10.1109/ASPDAC.2016.7428065
Abstract: This paper presents a new covert channel using smartphone magnetic sensors. We show that modern smartphones are capable of detecting the magnetic field changes induced by different computer components during I/O operations. In particular, we are able to create a covert channel between a laptop and a mobile device without any additional equipment, firmware modifications, or privileged access on either of the devices. We present two encoding schemes for covert channel communication and evaluate their effectiveness.
Keywords: encoding; magnetic field measurement; magnetic sensors; smart phones; I/O operations; computer components; covert channels; encoding schemes; laptop; magnetic field changes; magnetic field sensors; mobile device; smartphone magnetic sensors; Encoding; Hardware; Magnetic heads; Magnetic sensors; Magnetometers; Portable computers (ID#: 16-10296)


H. Wei, Y. Zhang, D. Guo and X. Wei, “CARISON: A Community and Reputation Based Incentive Scheme for Opportunistic Networks,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1398-1403. doi: 10.1109/IMCCC.2015.299
Abstract: Forwarding messages in opportunistic networks incurs storage and energy costs for nodes, so some nodes become selfish or even malicious. Such selfish and malicious behaviors degrade the connectivity of opportunistic networks. To tackle this issue, in this paper we propose CARISON: a community and reputation based incentive scheme for opportunistic networks. In CARISON, every node belongs to a community, manages its reputation evidence, and demonstrates its reputation whenever necessary. To exclude malicious nodes, we propose an altruism function as a critical factor. Besides, considering the social attributes of nodes, we propose two ways to calculate reputation: intra-community reputation calculation and inter-community reputation calculation. This paper also proposes a binary exponent punishment strategy to punish nodes with low reputation. Extensive performance analysis and simulations demonstrate the effectiveness and efficiency of the proposed scheme.
Keywords: cooperative communication; incentive schemes; telecommunication security; CARISON; altruism function; binary exponent punishment strategy; community and reputation based incentive scheme; inter-community reputation calculating; intra-community reputation calculating; malicious behaviors; malicious nodes; opportunistic networks; reputation evidence; selfish behaviors; social attributes; Analytical models; Computational modeling; Computers; History; Incentive schemes; Monitoring; Performance analysis; Altruism function; Binary exponent punishment strategy; Community; Opportunistic networks; Reputation based incentive; Selfish (ID#: 16-10297)


Z. Pang, F. Hou, Y. Zhou and D. Sun, “Design of False Data Injection Attacks for Output Tracking Control of CARMA Systems,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 1273-1277. doi: 10.1109/ICInfA.2015.7279482
Abstract: Considerable attention has focused on the problem of cyber-attacks on cyber-physical systems in recent years. In this paper, we consider a class of single-input single-output systems which are described by a controlled auto-regressive moving average (CARMA) model. A PID controller is designed to make the system output track the reference signal. Then the state-space model of the controlled plant and the corresponding Kalman filter are employed to generate stealthy false data injection attacks for the sensor measurements, which can destroy the control system performance without being detected by an online parameter identification algorithm. Finally, two numerical simulation results are given to demonstrate the effectiveness of the proposed false data injection attacks.
Keywords: Kalman filters; autoregressive moving average processes; control system synthesis; security of data; state-space methods; three-term control; CARMA systems; Kalman filter; PID controller design; controlled auto-regressive moving average; false data injection attacks; online parameter identification algorithm; output tracking control; single-input single-output systems; state-space model; Conferences; Control systems; Detectors; Mathematical model; Parameter estimation; Smart grids; CARMA model; Cyber-physical systems (CPSs); output feedback control (ID#: 16-10298)
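Why Kalman-filter-aware attack design matters can be seen in a toy sketch. The scalar plant, noise levels, detector threshold and ramp-shaped attack below are all illustrative assumptions (not the paper's CARMA/PID setup): a slowly growing sensor bias keeps the innovation under a simple residual detector's threshold while the state estimate drifts far from the true state.

```python
# Toy stealthy false-data injection against a Kalman-filtered scalar plant.

def simulate(attack_rate, steps=50, a=0.9, q=0.01, r=0.1, tau=1.0):
    """Return (detector alarms, final estimation error |x_hat - x|)."""
    x, x_hat, p = 0.0, 0.0, 1.0
    alarms = 0
    for k in range(steps):
        x = a * x + 1.0                  # noiseless plant, constant input
        y = x + attack_rate * k          # sensor reading + slowly ramping bias
        x_pred = a * x_hat + 1.0         # Kalman predict
        p_pred = a * a * p + q
        innovation = y - x_pred
        if abs(innovation) > tau:        # simple residual (innovation) detector
            alarms += 1
        gain = p_pred / (p_pred + r)     # Kalman update
        x_hat = x_pred + gain * innovation
        p = (1.0 - gain) * p_pred
    return alarms, abs(x_hat - x)
```

With `attack_rate=0` the filter tracks the plant exactly and raises no alarms; with a small ramp such as 0.02 the detector stays silent while the estimation error grows well beyond what normal operation would produce.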


Y. Hu and M. Sun, “Synchronization of a Class of Hyperchaotic Systems via Backstepping and Its Application to Secure Communication,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1055-1060. doi: 10.1109/IMCCC.2015.228
Abstract: Research on multi-scroll hyperchaotic systems, which perform well in secure communication, remains comparatively scarce: there are no systematic design methods, and current methods have difficulty handling uncertainties. In this paper, an adaptive backstepping control is proposed. Adaptive updating laws are presented to approximate the uncertainties. The proposed method improves the robustness of the controller using only two control inputs. The asymptotic convergence of the synchronization errors to zero is proved via Lyapunov functions. Finally, simulation examples are presented to demonstrate the effectiveness of the proposed synchronization control scheme and its validity in secure communication.
Keywords: Lyapunov methods; chaotic communication; control nonlinearities; synchronisation; telecommunication security; Lyapunov functions; adaptive back stepping control; multiscroll hyperchaotic systems; secure communication; synchronization control scheme; systematic design methods; Adaptive control; Backstepping; Chaotic communication; Synchronization; adaptive control; backstepping; hyperchaos; multi-scroll (ID#: 16-10299)


I. Kiss, B. Genge and P. Haller, “A Clustering-Based Approach to Detect Cyber Attacks in Process Control Systems,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 142-148. doi: 10.1109/INDIN.2015.7281725
Abstract: Modern Process Control Systems (PCS) exhibit an increasing trend towards the pervasive adoption of commodity, off-the-shelf Information and Communication Technologies (ICT). This has brought significant economical and operational benefits, but it also shifted the architecture of PCS from a completely isolated environment to an open, “system of systems” integration with traditional ICT systems, susceptible to traditional computer attacks. In this paper we present a novel approach to detect cyber attacks targeting measurements sent to control hardware, i.e., typically to Programmable Logical Controllers (PLC). The approach builds on the Gaussian mixture model to cluster sensor measurement values and a cluster assessment technique known as silhouette. We experimentally demonstrate that in this particular problem the Gaussian mixture clustering outperforms the k-means clustering algorithm. The effectiveness of the proposed technique is tested in a scenario involving the simulated Tennessee-Eastman chemical process and three different cyber attacks.
Keywords: Gaussian processes; control engineering computing; mixture models; pattern clustering; process control; production engineering computing; programmable controllers security of data; Gaussian mixture model; ICT systems; Information and Communication Technologies; PCS; PLC; cluster assessment technique; cluster sensor measurement values; computer attacks; cyber attack detection; process control systems; programmable logical controllers; silhouette; simulated Tennessee-Eastman chemical process; system of systems integration; Clustering algorithms; Computer crime; Engines; Mathematical model; Process control (ID#: 16-10300)
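The silhouette assessment the paper pairs with Gaussian mixture clustering can be computed without library support. The sketch below is a plain-Python simplification (1-D measurement values, Euclidean distance, and at least two clusters are assumed): for each point it compares the mean distance to its own cluster (a) with the mean distance to the nearest other cluster (b).

```python
# Plain-Python silhouette coefficient for assessing a clustering.
from collections import defaultdict

def silhouette(values, labels):
    clusters = defaultdict(list)            # label -> member indices
    for i, lab in enumerate(labels):
        clusters[lab].append(i)
    scores = []
    for i, lab in enumerate(labels):
        same = [j for j in clusters[lab] if j != i]
        if not same:                        # singleton cluster: score 0
            scores.append(0.0)
            continue
        # a: mean distance to own cluster; b: mean distance to nearest other.
        a = sum(abs(values[i] - values[j]) for j in same) / len(same)
        b = min(
            sum(abs(values[i] - values[j]) for j in members) / len(members)
            for lab2, members in clusters.items() if lab2 != lab
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)        # mean silhouette in [-1, 1]
```

A labeling that separates two well-spaced groups of sensor values scores near 1; a labeling that mixes them scores much lower, which is exactly the signal used to judge cluster quality.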


H. Wu, X. Dang, L. Zhang and L. Wang, “Kalman Filter Based DNS Cache Poisoning Attack Detection,” 2015 IEEE International Conference on Automation Science and Engineering (CASE), Gothenburg, 2015, pp. 1594-1600. doi: 10.1109/CoASE.2015.7294328
Abstract: Detection of Domain Name System (DNS) cache poisoning attacks is investigated. We exploit the fact that, while an attack is underway, the entropy of the query packet IP addresses at the cache server decreases, and use this to detect the attack. We focus on detection in the case where the entropy sequence exhibits nonstationary dynamics under normal conditions. To handle the nonstationarity, we first model the entropy sequence by a state space equation, and then utilize a Kalman filter to implement the attack detection. The problem is discussed for single and distributed cache poisoning attacks, respectively. For a single attack, we use the measurement errors to detect the anomaly. Under a distributed attack, we utilize the correlation variation of the prediction errors to detect the attack event and identify the attacked cache servers. An experiment is presented to verify the effectiveness of our method.
Keywords: IP networks; Kalman filters; cache storage; computer network security; entropy; file servers; query processing; Kalman filter based DNS cache poisoning attack detection; attack event; attacked cache servers; correlation variation; domain name systems; entropy sequence; measurement errors; query packet IP addresses; state space equation; Correlation; Entropy; Mathematical model; Servers (ID#: 16-10301)
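A minimal sketch of the underlying idea, with the state model and thresholds assumed rather than taken from the paper: compute the entropy of query-source IPs per window, track it with a scalar Kalman filter under a random-walk model, and raise an alarm when the innovation (prediction error) is large, as happens when poisoning traffic collapses the address entropy.

```python
# Entropy tracking with a scalar Kalman filter; models/thresholds assumed.
import math
from collections import Counter

def entropy(ips):
    """Shannon entropy (bits) of the source-IP distribution in one window."""
    counts = Counter(ips)
    n = len(ips)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def detect(entropy_series, q=0.01, r=0.05, tau=1.0):
    """Return indices of windows whose prediction error exceeds tau."""
    x, p = entropy_series[0], 1.0
    alarms = []
    for k, z in enumerate(entropy_series[1:], start=1):
        p_pred = p + q                     # predict (random-walk state model)
        innovation = z - x
        if abs(innovation) > tau:
            alarms.append(k)
        gain = p_pred / (p_pred + r)       # update
        x = x + gain * innovation
        p = (1.0 - gain) * p_pred
    return alarms
```

Small fluctuations around the normal entropy level are absorbed by the filter; a sudden collapse (e.g. from about 3 bits to 0.5 bits) produces a large innovation and an alarm at that window.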


R. Cao, J. Wu, C. Long and S. Li, “Stability Analysis for Networked Control Systems Under Denial-of-Service Attacks,” 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, 2015, pp. 7476-7481. doi: 10.1109/CDC.2015.7403400
Abstract: With the large-scale application of modern information technology in networked control systems (NCSs), the security of NCSs has drawn increasing attention in recent years. However, how severely adversaries can affect NCSs has received little consideration. In this paper, we study a stability problem for NCSs under denial-of-service (DoS) attacks, in which control and measurement packets are transmitted over communication networks. We model the NCS under DoS attacks as a singular system, in which the effect of the DoS attack is described as a time-varying delay. Using a Wirtinger-based integral inequality, a less conservative attack-based delay-dependent criterion for NCS stability is obtained in terms of linear matrix inequalities (LMIs). Finally, examples are given to illustrate the effectiveness of our methods.
Keywords: delays; linear matrix inequalities; networked control systems; stability; time-varying systems; DoS attacks; LMI; NCS stability; Wirtinger-based integral inequality; attack-based delay-dependent criterion; communication networks; control packets; denial-of-service attacks; large-scale application; measurement packets; stability analysis; stability problem; time-varying delay; Computer crime; Delays; Networked control systems; Power system stability; Stability criteria; Symmetric matrices (ID#: 16-10302)


M. Wang, X. Wu, D. Liu, C. Wang, T. Zhang and P. Wang, “A Human Motion Prediction Algorithm for Non-Binding Lower Extremity Exoskeleton,” Information and Automation, 2015 IEEE International Conference on, Lijiang, 2015, pp. 369-374. doi: 10.1109/ICInfA.2015.7279315
Abstract: This paper introduces a novel approach to predicting human motion for the Non-Binding Lower Extremity Exoskeleton (NBLEX). Most exoskeletons must be attached to the pilot, which poses potential safety problems. To solve these problems, the NBLEX is studied and designed to free pilots from the exoskeleton. Rather than applying electromyography (EMG) and ground reaction force (GRF) signals to predict human motion as in binding exoskeletons, the non-binding exoskeleton robot collects Inertial Measurement Unit (IMU) signals from the pilot. Seven basic motions are studied; each is divided into four phases, except the standing-still motion, which has only one. The human motion prediction algorithm adopts a Support Vector Machine (SVM) to classify human motion phases and a Hidden Markov Model (HMM) to predict human motion. The experimental data demonstrate the effectiveness of the proposed algorithm.
Keywords: control engineering computing; hidden Markov models; mobile robots; motion control; support vector machines; EMG signal; GFR signal; HMM; IMU signal; NBLEX; SVM; electromyography; ground reaction force signal; hidden Markov model; human motion phase; human motion prediction algorithm; inertial measurement unit signal; nonbinding exoskeleton robot; nonbinding lower extremity exoskeleton; standing-still motion; support vector machine; Accuracy; Classification algorithms; Exoskeletons; Hidden Markov models; Prediction algorithms; Support vector machines; Training; Exoskeleton; Hidden Markov Model; Human Motion Prediction; Non-binding Lower Extremity Exoskeleton; Support Vector Machine (ID#: 16-10303)
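The HMM half of such a pipeline can be illustrated with a toy Viterbi decoder. The two-motion model below (states, observations, transition and emission probabilities) is entirely hypothetical and merely stands in for the paper's trained seven-motion model; it recovers the most likely hidden motion sequence from a stream of classified phase labels.

```python
# Toy Viterbi decoder over a hypothetical two-motion HMM.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = (probability of best path ending in s at time t, that path)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        V.append({})
        for s in states:
            prob, path = max(
                (V[-2][ps][0] * trans_p[ps][s] * emit_p[s][o], V[-2][ps][1])
                for ps in states
            )
            V[-1][s] = (prob, path + [s])
    return max(V[-1].values())[1]          # path of the best final state

# Hypothetical model standing in for the trained HMM.
STATES = ["walk", "stand"]
START = {"walk": 0.5, "stand": 0.5}
TRANS = {"walk": {"walk": 0.8, "stand": 0.2},
         "stand": {"walk": 0.2, "stand": 0.8}}
EMIT = {"walk": {"swing": 0.7, "still": 0.3},
        "stand": {"swing": 0.1, "still": 0.9}}
```

Decoding the observation stream `["swing", "swing", "still", "still"]` with this model yields the state sequence walk, walk, stand, stand: the sticky transition probabilities smooth over noisy per-frame classifications.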


M. Ingels, A. Valjarevic and H. S. Venter, “Evaluation and Analysis of a Software Prototype for Guidance and Implementation of a Standardized Digital Forensic Investigation Process,” Information Security for South Africa (ISSA), 2015, Johannesburg, 2015, pp. 1-8. doi: 10.1109/ISSA.2015.7335052
Abstract: Performing a digital forensic investigation requires a standardized and formalized process to be followed. The authors have contributed to the creation of an international standard on digital forensic investigation process, namely ISO/IEC 27043:2015, which was published in 2015. However, currently, there exists no application that would guide a digital forensic investigator to implement such a standardized process. The prototype of such an application has been developed by the authors and presented in their previous work. The prototype is in the form of a software application which has two main functionalities. The first functionality is to act as an expert system that can be used for guidance and training of novice investigators. The second functionality is to enable reliable logging of all actions taken within the investigation processes, enabling the validation of use of a correct process. The benefits of such a prototype include possible improvement in efficiency and effectiveness of an investigation and easier training of novice investigators. The last, and possibly most important benefit, includes that higher admissibility of digital evidence will be possible due to the fact that it will be easier to show that the standardized process was followed. This paper presents an evaluation of the prototype. Evaluation was performed in order to measure the usability and the quality of the prototype software, as well as the effectiveness of the prototype. The evaluation of the prototype consisted of two main parts. The first part was a software usability evaluation, which was performed using the Software Usability Measurement Inventory (SUMI), a reliable method of measuring software usability and quality. The second part of evaluation was in a form of a questionnaire set up by the authors, with the aim to evaluate whether the prototype meets its goals. 
The results indicated that the prototype reaches most of its goals, that it does have intended functionalities and that it is relatively easy to learn and use. Areas of improvement and future work were also identified in this work.
Keywords: digital forensics; software performance evaluation; software prototyping; software quality; ISO/IEC 27043:2015; SUMI; digital forensic investigation process; software prototype analysis; software prototype evaluation; software quality; software usability evaluation; software usability measurement inventory; Cryptography; Libraries; Organizations; Software; Standards organizations; Yttrium; digital forensic investigation process model; implementation prototype; software evaluation; standardization (ID#: 16-10304)


J. G. Cui, P. J. Zhou, M. Y. Yu, C. Liu and X. Y. Xu, “Research on Time Optimization Algorithm of Aircraft Support Activity with Limited Resources,” 2015 Fifth International Conference on Instrumentation and Measurement, Computer, Communication and Control (IMCCC), Qinhuangdao, 2015, pp. 1298-1303. doi: 10.1109/IMCCC.2015.279
Abstract: The time required for aircraft turnaround support activity directly affects aircraft combat effectiveness. To address the difficulty of minimizing support-activity time under limited aircraft support resource conditions, a time optimization algorithm for aircraft turnaround support activity based on the Branch and Cut Method (BCM) is given in this paper. The purpose is to achieve the shortest required time for the aircraft turnaround support activity, with constraints given by the logical relationships between the limited support personnel and the support jobs. The shortest-time process is calculated and compiled into a computer program, and a time-optimal simulation system of aircraft turnaround support activity is designed and developed. Finally, a real support job for a certain aircraft type is analyzed. The results show that the calculation is accurate and reliable, matches actual support practice, and can provide guidance for aircraft turnaround support and decision-making. The reliability and automation level of support activities are enhanced, giving the approach good application value in engineering.
Keywords: aircraft; decision making; optimisation; reliability theory; resource allocation; tree searching; BCM; aircraft turnaround support activity; automation level; branch and cut method; reliability level; resource limitation; time optimal simulation system; time optimization algorithm; Aerospace electronics; Aircraft; Aircraft manufacture; Atmospheric modeling; Mathematical model; Optimization; Personnel; Branch and Cut Method; Limited resources; Simulation; Support activity time (ID#: 16-10305)
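Branch-and-cut itself requires an LP relaxation and cutting planes, but the flavor of the time-minimization problem can be sketched with a plain branch-and-bound: assign support jobs to a limited number of personnel so that the overall turnaround time (makespan) is minimized. The job durations and the pruning rule below are illustrative, not the paper's formulation.

```python
# Toy branch-and-bound for makespan minimisation with limited personnel.

def min_makespan(durations, workers):
    best = [sum(durations)]          # upper bound: one worker does everything
    loads = [0] * workers            # current workload per person

    def branch(i):
        if i == len(durations):
            best[0] = min(best[0], max(loads))
            return
        if max(loads) >= best[0]:    # bound: this branch cannot improve
            return
        for w in range(workers):
            loads[w] += durations[i]
            branch(i + 1)
            loads[w] -= durations[i]

    branch(0)
    return best[0]
```

For example, four jobs of durations 3, 3, 2, 2 shared by two people finish in 5 time units at best (3+2 each); the bound prunes any partial assignment already worse than the best complete schedule found.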


J. M. G. Duarte, E. Cerqueira and L. A. Villas, “Indoor Patient Monitoring Through Wi-Fi and Mobile Computing,” 2015 7th International Conference on New Technologies, Mobility and Security (NTMS), Paris, 2015, pp. 1-5. doi: 10.1109/NTMS.2015.7266497
Abstract: Developments in wireless sensor networks, mobile technology and cloud computing have been pushing forward the concept of intelligent or smart cities, and each day smarter infrastructures are being developed with the aim of enhancing the well-being of citizens. These advances in technology can provide considerable benefits for the diverse components of smart cities, including smart health, the facet of smart cities dedicated to healthcare. A considerable challenge that still requires appropriate responses is the development of mechanisms to detect health issues in patients from the very beginning. In this work, we propose a novel solution for indoor patient monitoring for medical purposes. The output of our solution is a report containing the patterns of room occupation by the patient inside her/his home during a certain period of time. This report allows health care professionals to detect changes in the behavior of the patient that can be interpreted as early signs of a health related issue. The proposed solution was implemented on an Android smartphone and tested in a real scenario. To assess our solution, 400 measurements divided into 10 experiments were performed, yielding 391 correct detections, which corresponds to an average effectiveness of 97.75%.
Keywords: cloud computing; indoor radio; mobile computing; patient monitoring; smart cities; smart phones; wireless LAN; wireless sensor networks; Android smartphone; Wi-Fi; indoor patient monitoring; intelligent cities; smart health; wireless sensor networks; IEEE 802.11 Standard; Medical services; Mobile communication; Mobile computing; Monitoring; Sensors; Wireless sensor networks; Behavior; Indoor monitoring; Patient; Smart health; Smartphone; Wi-Fi (ID#: 16-10306)



End to End Security and the Internet of Things 2015



End to End Security and the Internet of Things



End to end security focuses on the concept of uninterrupted protection of data traveling between two communicating partners. Generally, encryption is the method of choice. For the Internet of Things (IoT), “baked in” security is a major challenge. The research cited here was presented during 2015.

S. R. Moosavi et al., “Session Resumption-Based End-to-End Security for Healthcare Internet-of-Things,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 581-588. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.83
Abstract: In this paper, a session resumption-based end-to-end security scheme for the healthcare Internet of Things (IoT) is proposed. The proposed scheme is realized by employing a certificate-based DTLS handshake between end-users and smart gateways as well as utilizing the DTLS session resumption technique. By handing over the necessary security context, smart gateways relieve the sensors of having to authenticate and authorize remote end-users. Session resumption enables end-users and medical sensors to communicate directly without establishing the communication from the initial handshake. Session resumption uses an abbreviated form of the DTLS handshake and requires neither certificate-related nor public-key functionalities, alleviating the burden on medical sensors, which no longer need to perform expensive operations. The energy-performance of the proposed scheme is evaluated by developing a remote patient monitoring prototype based on the healthcare IoT. The evaluation results show that our scheme is about 97% and 10% faster than certificate-based and symmetric key-based DTLS, respectively. Also, certificate-based DTLS consumes about 2.2X more RAM and 2.9X more ROM than our scheme, while our scheme and symmetric key-based DTLS have almost identical RAM and ROM requirements. The security analysis reveals that the proposed scheme fulfills the requirements of end-to-end security and provides a higher security level than related approaches found in the literature. Thus, the presented scheme is a well-suited solution for providing end-to-end security for the healthcare IoT.
Keywords: Internet of Things; health care; public key cryptography; DTLS session resumption technique; IoT; end-to-end security; energy performance evaluations; healthcare Internet-of-Things; medical sensors; public key functionalities; remote end-users; remote patient monitoring prototype; security context; session resumption technique; smart gateways; Computers; Conferences; Information technology; Ubiquitous computing (ID#: 16-11225)
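The session-resumption idea can be caricatured in a few lines. This is not DTLS: the session IDs, key sizes and derivation step below are invented for illustration. The point is the cost structure: one expensive full handshake caches a master secret, and later connections resume from the cache instead of repeating certificate and public-key operations.

```python
# Caricature of session resumption: cache a master secret once, then
# derive fresh keys cheaply on later connections. Not a DTLS implementation.
import hashlib
import os

class SessionCache:
    def __init__(self):
        self._sessions = {}                     # session ID -> master secret

    def full_handshake(self):
        """Stand-in for the costly certificate-based handshake."""
        session_id = os.urandom(8).hex()
        master_secret = os.urandom(32)
        self._sessions[session_id] = master_secret
        return session_id, master_secret

    def resume(self, session_id):
        """Abbreviated handshake: derive a fresh key from the cached secret."""
        master_secret = self._sessions.get(session_id)
        if master_secret is None:
            return None                         # unknown ID: fall back to full
        return hashlib.sha256(master_secret + b"resumption").digest()
```

An unknown session ID forces a fall back to the full handshake; a known one skips straight to key derivation, which is where the reported speedups over certificate-based DTLS come from.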


S. S. Basu, S. Tripathy and A. R. Chowdhury, “Design Challenges and Security Issues in the Internet of Things,” Region 10 Symposium (TENSYMP), 2015 IEEE, Ahmedabad, 2015, pp. 90-93. doi: 10.1109/TENSYMP.2015.25
Abstract: The world is rapidly getting connected. Commonplace everyday things are providing and consuming software services exposed by other things and service providers. A mash-up of such services extends the reach of the current Internet to potentially resource-constrained “Things”, constituting what is being referred to as the Internet of Things (IoT). IoT is finding applications in various fields such as Smart Cities, Smart Grids, Smart Transportation, e-health and e-governance. The complexity of developing IoT solutions arises from diversity ranging from device capability all the way to business requirements. In this paper we focus primarily on the security issues related to design challenges in IoT applications and present an end-to-end security framework.
Keywords: Internet; Internet of Things; security of data; Internet of Things; IoT; e-governance; e-health; end-to-end security framework; service providers; smart cities; smart grids; smart transportation; software services; Computer crime; Encryption; Internet of things; Peer-to-peer computing; Protocols; End-to-end (E2E) security; Internet of Things (IoT); Resource constrained devices; Security (ID#: 16-11226)


D. Bonino et al., “ALMANAC: Internet of Things for Smart Cities,” Future Internet of Things and Cloud (FiCloud), 2015 3rd International Conference on, Rome, 2015, pp. 309-316. doi: 10.1109/FiCloud.2015.32
Abstract: Smart cities advocate future environments where sensor pervasiveness, data delivery and exchange, and information mash-up enable better support of every aspect of (social) life in human settlements. As this vision matures, evolves and is shaped against several application scenarios and adoption perspectives, a common need for scalable, pervasive, flexible and replicable infrastructures emerges. Such a need is currently fostering new design efforts to grant performance, reuse and interoperability while avoiding the knowledge silos typical of early efforts on similar topics, e.g. automation in buildings and homes. This paper introduces a federated smart city platform (SCP) developed in the context of the ALMANAC FP7 EU project and discusses lessons learned during the first experimental application of the platform to a smart waste management scenario in a medium-sized European city. The ALMANAC SCP aims to integrate the Internet of Things (IoT), capillary networks and metro access networks to offer smart services to citizens, and thus enable Smart City processes. The key element of the SCP is a middleware supporting semantic interoperability of heterogeneous resources, devices, services and data management. The platform is built upon a dynamic federation of private and public networks, while supporting end-to-end security and privacy. Furthermore, it also enables the integration of services that, although natively external to the platform itself, enrich the set of data and information used by the Smart City applications it supports.
Keywords: Internet of Things; data privacy; middleware; open systems; smart cities; waste management; ALMANAC FP7 EU project; European city; capillary networks; data management; end-to-end privacy; end-to-end security; heterogeneous devices; heterogeneous resources; heterogeneous services; metro access networks; middleware; private networks; public networks; semantic interoperability; sensor pervasiveness; smart city platform; smart waste management scenario; Cities and towns; Context; Data integration; Metadata; Semantics; Smart cities; federation; internet of things; platform; smart city (ID#: 16-11227)


J. M. Bohli, A. Skarmeta, M. Victoria Moreno, D. García and P. Langendörfer, “SMARTIE Project: Secure IoT Data Management for Smart Cities,” Recent Advances in Internet of Things (RIoT), 2015 International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/RIOT.2015.7104906
Abstract: The vision of SMARTIE (Secure and sMARter ciTIEs data management) is to create a distributed framework for IoT-based applications storing, sharing and processing large volumes of heterogeneous information. This framework is envisioned to enable end-to-end security and trust in information delivery for decision-making purposes following the data owner's privacy requirements. SMARTIE follows a data-centric paradigm, which will offer highly scalable and secure information for smart city applications. The heart of this paradigm will be the “information management and services” plane as a unifying umbrella, which will operate above heterogeneous network devices and data sources, and will provide advanced secure information services enabling powerful higher-layer applications.
Keywords: Internet of Things; data privacy; database management systems; decision making; distributed processing; information services; smart cities; town and country planning; trusted computing; IoT-based applications; SMARTIE project; data owner privacy requirements; data sources; data-centric paradigm; decision-making purposes; distributed framework; end-to-end security; heterogeneous information processing; heterogeneous information sharing; heterogeneous information storing; heterogeneous network devices; information delivery; information management; secure IoT data management; secure and smarter cities data management; secure information services; smart city applications; trust; Authorization; Cities and towns; Cryptography; Heating; Monitoring; IoT; Security; Smart Cities (ID#: 16-11228)


F. Van den Abeele, T. Vandewinckele, J. Hoebeke, I. Moerman and P. Demeester, “Secure Communication in IP-Based Wireless Sensor Networks via a Trusted Gateway,” Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/ISSNIP.2015.7106963
Abstract: As the IP-integration of wireless sensor networks enables end-to-end interactions, solutions to appropriately secure these interactions with hosts on the Internet are necessary. At the same time, burdening wireless sensors with heavy security protocols should be avoided. While Datagram TLS (DTLS) strikes a good balance between these requirements, it entails a high cost for setting up communication sessions. Furthermore, not all types of communication have the same security requirements: e.g. some interactions might only require authorization and do not need confidentiality. In this paper we propose and evaluate an approach that relies on a trusted gateway to mitigate the high cost of the DTLS handshake in the WSN and to provide the flexibility necessary to support a variety of security requirements. The evaluation shows that our approach leads to considerable energy savings and latency reduction when compared to a standard DTLS use case, while requiring no changes to the end hosts themselves.
Keywords: IP networks; Internet; authorisation; computer network security; energy conservation; internetworking; protocols; telecommunication power management; trusted computing; wireless sensor networks; DTLS handshake; WSN authorization; communication security; datagram TLS; end-to-end interactions; energy savings; heavy security protocol; latency reduction; trusted gateway; wireless sensor network IP integration; Bismuth; Cryptography; Logic gates; Random access memory; Read only memory; Servers; Wireless sensor networks; 6LoWPAN; CoAP; DTLS; Gateway; IP; IoT (ID#: 16-11229)


V. L. Shivraj, M. A. Rajan, M. Singh and P. Balamuralidhar, “One Time Password Authentication Scheme Based on Elliptic Curves for Internet of Things (IoT),” Information Technology: Towards New Smart World (NSITNSW), 2015 5th National Symposium on, Riyadh, 2015, pp. 1-6. doi: 10.1109/NSITNSW.2015.7176384
Abstract: Establishing end-to-end authentication between devices and applications in the Internet of Things (IoT) is a challenging task. Due to heterogeneity in terms of devices, topology, communication and the different security protocols used in IoT, existing authentication mechanisms are vulnerable to security threats and can disrupt the progress of IoT in realizing Smart Cities, Smart Homes, Smart Infrastructure, etc. To achieve end-to-end authentication between IoT devices/applications, existing authentication schemes and security protocols require a two-factor authentication mechanism. In this paper we therefore review the suitability of an authentication scheme based on One Time Passwords (OTP) for IoT and propose a scalable, efficient and robust OTP scheme. Our proposed scheme uses the principles of a lightweight Identity Based Elliptic Curve Cryptography scheme and Lamport's OTP algorithm. We evaluate the performance of our scheme analytically and experimentally and observe that, with a smaller key size and less infrastructure, it performs on par with existing OTP schemes without compromising the security level. Our proposed scheme can be implemented in real-time IoT networks and is the right candidate for two-factor authentication among devices, applications and their communications in IoT.
Keywords: Internet of Things; message authentication; public key cryptography; IoT; OTP; end-to-end authentication; identity based elliptic curve cryptography; one time password; password authentication; Algorithm design and analysis; Authentication; Elliptic curves; Logic gates; Protocols; Servers (ID#: 16-11230)
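Since the scheme builds on Lamport's OTP algorithm, a minimal hash-chain sketch may help. SHA-256, the chain length and the byte handling below are illustrative choices, and the paper's elliptic-curve layer is omitted entirely: the server stores only the chain anchor and accepts each preimage exactly once.

```python
# Minimal Lamport hash-chain one-time-password sketch.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def make_chain(seed: bytes, n: int):
    """Return [seed, h(seed), h(h(seed)), ..., h^n(seed)]."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

class Server:
    """Stores only the chain anchor; accepts each preimage exactly once."""
    def __init__(self, anchor: bytes):
        self.current = anchor                  # h^n(seed), registered once

    def verify(self, otp: bytes) -> bool:
        if h(otp) == self.current:             # otp must hash to the anchor
            self.current = otp                 # walk the anchor down the chain
            return True
        return False
```

The client reveals chain values in reverse order (h^(n-1), then h^(n-2), ...). An eavesdropper who captures one OTP cannot produce the next, since that would require inverting the hash, and replaying a used OTP fails because the anchor has already moved.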


N. Zhang, K. Yuan, M. Naveed, X. Zhou and X. Wang, “Leave Me Alone: App-Level Protection Against Runtime Information Gathering on Android,” 2015 IEEE Symposium on Security and Privacy, San Jose, CA, 2015, pp. 915-930. doi: 10.1109/SP.2015.61
Abstract: Stealing of sensitive information from apps is always considered to be one of the most critical threats to Android security. Recent studies show that this can happen even to the apps without explicit implementation flaws, through exploiting some design weaknesses of the operating system, e.g., shared communication channels such as Bluetooth, and side channels such as memory and network-data usages. In all these attacks, a malicious app needs to run side-by-side with the target app (the victim) to collect its runtime information. Examples include recording phone conversations from the phone app, gathering WebMD's data usages to infer the disease condition the user looks at, etc. This runtime-information-gathering (RIG) threat is realistic and serious, as demonstrated by prior research and our new findings, which reveal that the malware monitoring popular Android-based home security systems can figure out when the house is empty and the user is not looking at surveillance cameras, and even turn off the alarm delivered to her phone. To defend against this new category of attacks, we propose a novel technique that changes neither the operating system nor the target apps, and provides immediate protection as soon as an ordinary app (with only normal and dangerous permissions) is installed. This new approach, called App Guardian, thwarts a malicious app's runtime monitoring attempt by pausing all suspicious background processes when the target app (called principal) is running in the foreground, and resuming them after the app stops and its runtime environment is cleaned up. Our technique leverages a unique feature of Android, on which third-party apps running in the background are often considered to be disposable and can be stopped anytime with only a minor performance and utility implication.
We further limit such an impact by only focusing on a small set of suspicious background apps, which are identified by their behaviors inferred from their side channels (e.g., thread names, CPU scheduling and kernel time). App Guardian is also carefully designed to choose the right moments to start and end the protection procedure, and effectively protect itself against malicious apps. Our experimental studies show that this new technique defeated all known RIG attacks, with small impacts on the utility of legitimate apps and the performance of the OS. Most importantly, the idea underlying our approach, including app-level protection, side-channel based defense and lightweight response, not only significantly raises the bar for the RIG attacks and the research on this subject but can also inspire the follow-up effort on new detection systems practically deployable in the fragmented Android ecosystem.
Keywords: Internet of Things; cryptography; invasive software; mobile computing; smart phones; Android security; App Guardian; IoT; RIG threat; app-level protection; malware monitoring; runtime information gathering; side-channel based defense; Androids; Bluetooth; Humanoid robots; Monitoring; Runtime; Security; Smart phones (ID#: 16-11231)


M. Rao, T. Newe, I. Grout, E. Lewis and A. Mathur, “FPGA Based Reconfigurable IPSec AH Core Suitable for IoT Applications,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 2212-2216. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.327
Abstract: Real-world deployments of Internet of Things (IoT) applications require secure communication. IPSec (Internet Protocol Security) is an important and widely used security protocol (in the IP layer) for providing end-to-end secure communication. Implementing IPSec is computationally intensive, which significantly limits the performance of high-speed networks. To overcome this issue, hardware implementation of IPSec is the best solution. IPSec includes two main protocols, namely the Authentication Header (AH) and Encapsulating Security Payload (ESP), with two modes of operation, transport mode and tunnel mode. In this work we present an FPGA implementation of the IPSec AH protocol. This implementation supports both tunnel and transport modes of operation. The cryptographic hash function Secure Hash Algorithm-3 (SHA-3) is used to calculate the hash value for the AH protocol. The proposed IPSec AH core can be used to provide a data authentication security service to IoT applications.
Keywords: IP networks; Internet of Things; cryptographic protocols; field programmable gate arrays; AH; ESP; FPGA based reconfigurable IPSec AH core; IP layer; Internet protocol security; IoT applications; SHA; authentication header; cryptographic hash function; data authentication security service; encapsulating security payload; end to end secure communication; secure hash algorithm; transport mode; tunnel mode; Authentication; Cryptography; Field programmable gate arrays; Internet; Protocols; FPGA; IPSec; SHA-3 (ID#: 16-11232)
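The AH entry above centers on computing a keyed integrity check over each packet. As a rough software illustration of that idea (the paper itself describes an FPGA core, not this code), the sketch below computes an AH-style Integrity Check Value using SHA3-256; the HMAC construction, the 96-bit truncation, and the zeroing of mutable header fields are conventional AH details assumed here rather than taken from the paper.

```python
import hmac
import hashlib

def ah_icv(key: bytes, packet: bytes, mutable_field_ranges) -> bytes:
    """Compute an AH-style Integrity Check Value over a packet.

    Mutable fields (e.g. TTL, and the ICV field itself) are zeroed
    before hashing, as AH requires. SHA3-256 stands in for the
    paper's SHA-3 core; the result is truncated to 96 bits, the
    conventional AH ICV length.
    """
    buf = bytearray(packet)
    for start, end in mutable_field_ranges:
        for i in range(start, end):
            buf[i] = 0
    digest = hmac.new(key, bytes(buf), hashlib.sha3_256).digest()
    return digest[:12]  # 96-bit truncated ICV

def ah_verify(key: bytes, packet: bytes, mutable_field_ranges, icv: bytes) -> bool:
    """Recompute the ICV on receipt and compare in constant time."""
    return hmac.compare_digest(ah_icv(key, packet, mutable_field_ranges), icv)
```

Any modification to an immutable byte of the packet changes the recomputed ICV, so verification fails; changes inside the declared mutable ranges (which routers may legitimately rewrite) do not.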


A. Ahrary and D. Ludena, “Research Studies on the Agricultural and Commercial Field,” Advanced Applied Informatics (IIAI-AAI), 2015 IIAI 4th International Congress on, Okayama, 2015, pp. 669-673. doi: 10.1109/IIAI-AAI.2015.291
Abstract: The new Internet of Things (IoT) paradigm gives the scientific community the possibility to create integrated environments where information can be exchanged among heterogeneous networks in an automated way, in order to provide a richer experience to the user and to give specific relevant information regarding the particular environment the user is interacting with. Those characteristics are highly valuable for the novel nutrition-based vegetable production and distribution system, in which the multiple benefits of Big Data were used to generate healthy food recommendations for the end user and to feed the system different analytics to improve its efficiency. Moreover, the different IoT capabilities, specifically automation and heterogeneous network communication, are valuable for improving the information matrix of our project. This paper discusses the different available IoT technologies, their security capabilities and assessment, and how they could be useful for our project.
Keywords: Big Data; Internet of Things; agriculture; IoT capabilities; agricultural field; commercial field; distribution system; healthy food recommendation; integrated environments; network communication; research studies; scientific community; vegetable production; Agriculture; Big data; Business; Internet of things; Production; Security; Big Data infrastructure; Data Analysis; IoT; IoT Security (ID#: 16-11233)


W. K. Bodin, D. Jaramillo, S. K. Marimekala and M. Ganis, “Security Challenges and Data Implications by Using Smartwatch Devices in the Enterprise,” Emerging Technologies for a Smarter World (CEWIT), 2015 12th International Conference & Expo on, Melville, NY, 2015, pp. 1-5. doi: 10.1109/CEWIT.2015.7338164
Abstract: In the age of the Internet of Things, the use of Smartwatch devices in the enterprise is evolving rapidly, and many companies are exploring, adopting and researching the use of these devices in Enterprise IT (Information Technology). The biggest challenge presented to an organization is understanding how to integrate these devices with back-end systems, building the data correlation and analytics while ensuring the security of the overall systems. The core objective of this paper is to provide a brief overview of such security challenges and the data exposures to be considered. The research effort focuses on three key questions: 1. Data: how will we integrate these data streams of physical-world instrumentation with all of our existing data? 2. Security: how can pervasive sensing and analytics systems preserve and protect user security? 3. Usability: what hardware and software systems will make developing new intelligent and secure Smartwatch applications as easy as a modern web application? This area of research is in its early stages, and through this paper we attempt to bring different views on how data, security and usability are important for Enterprise IT to adopt this type of Internet of Things (IoT) device in the Enterprise.
Keywords: Internet of Things; electronic commerce; mobile computing; security of data; watches; IoT device; analytics systems; data implications; enterprise IT; information technology; pervasive sensing system; security challenges; smartwatch devices; Biomedical monitoring; Internet; Media; Mobile communication; Monitoring; Security; Smart phones; Enterprise IT; Security; Smartwatch; analytics; data correlation (ID#: 16-11234)


K. Lee, D. Kim, D. Ha, U. Rajput and H. Oh, “On Security and Privacy Issues of Fog Computing Supported Internet of Things Environment,” Network of the Future (NOF), 2015 6th International Conference on the, Montreal, QC, 2015, pp. 1-3. doi: 10.1109/NOF.2015.7333287
Abstract: Recently, the concept of the Internet of Things (IoT) has been attracting much attention due to its huge potential. IoT uses the Internet as a key infrastructure to interconnect numerous geographically diversified IoT nodes, which usually have scarce resources, and therefore the cloud is used as a key back-end supporting infrastructure. In the literature, the collection of the IoT nodes and the cloud is collectively called an IoT cloud. Unfortunately, the IoT cloud suffers from various drawbacks, such as huge network latency as the volume of data being processed within the system increases. To alleviate this issue, the concept of fog computing has been introduced, in which fog-like intermediate computing buffers are located between the IoT nodes and the cloud infrastructure to locally process a significant amount of regional data. Compared to the original IoT cloud, the communication latency as well as the overhead at the back-end cloud infrastructure can be significantly reduced in the fog computing supported IoT cloud, which we will refer to as the IoT fog. Consequently, several valuable services, which were difficult to deliver through the traditional IoT cloud, can be effectively offered by the IoT fog. In this paper, however, we argue that the adoption of the IoT fog introduces several unique security threats. We first discuss the concept of the IoT fog as well as the existing security measures that might be useful to secure it. Then, we explore potential threats to the IoT fog.
Keywords: Internet of Things; cloud computing; data privacy; security of data; Internet of Things environment; IoT cloud; IoT fog; IoT nodes; back-end cloud infrastructure; back-end supporting infrastructure; cloud infrastructure; communication latency; fog computing; network latency; privacy issues; security issues; security threats; Cloud computing; Distributed databases; Internet of things; Privacy; Real-time systems; Security; Sensors (ID#: 16-11235)


R. M. Savola, P. Savolainen, A. Evesti, H. Abie and M. Sihvonen, “Risk-Driven Security Metrics Development for an E-Health IoT Application,” Information Security for South Africa (ISSA), 2015, Johannesburg, 2015, pp. 1-6. doi: 10.1109/ISSA.2015.7335061
Abstract: Security and privacy for e-health Internet-of-Things applications is a challenge arising due to the novelty and openness of the solutions. We analyze the security risks of an envisioned e-health application for elderly persons' day-to-day support and chronic disease self-care, from the perspectives of the service provider and end-user. In addition, we propose initial heuristics for security objective decomposition aimed at security metrics definition. Systematically defined and managed security metrics enable higher effectiveness of security controls, enabling informed risk-driven security decision-making.
Keywords: Internet of Things; data privacy; decision making; diseases; geriatrics; health care; risk management; security of data; chronic disease self-care; e-health Internet-of-Things applications; e-health IoT application; elderly person day-to-day support; privacy; risk-driven security decision-making; risk-driven security metrics development; security controls; security objective decomposition; Artificial intelligence; Android; risk analysis; security effectiveness; security metrics (ID#: 16-11236)


E. Vasilomanolakis, J. Daubert, M. Luthra, V. Gazis, A. Wiesmaier and P. Kikiras, “On the Security and Privacy of Internet of Things Architectures and Systems,” 2015 International Workshop on Secure Internet of Things (SIoT), Vienna, 2015, pp. 49-57. doi: 10.1109/SIOT.2015.9
Abstract: The Internet of Things (IoT) brings together a multitude of technologies, with a vision of creating an interconnected world. This will benefit both corporations as well as the end-users. However, a plethora of security and privacy challenges need to be addressed for the IoT to be fully realized. In this paper, we identify and discuss the properties that constitute the uniqueness of the IoT in terms of the upcoming security and privacy challenges. Furthermore, we construct requirements induced by the aforementioned properties. We survey the four most dominant IoT architectures and analyze their security and privacy components with respect to the requirements. Our analysis shows a mediocre coverage of security and privacy requirements. Finally, through our survey we identify a number of research gaps that constitute the steps ahead for future research.
Keywords: Internet of Things; data privacy; IoT architecture; privacy; security; Communication networks; Computer architecture; Internet of things; Privacy; Resilience; Security; Sensors (ID#: 16-11237)


G. Kim, J. Kim and S. Lee, “An SDN Based Fully Distributed NAT Traversal Scheme for IoT Global Connectivity,” Information and Communication Technology Convergence (ICTC), 2015 International Conference on, Jeju, 2015, pp. 807-809. doi: 10.1109/ICTC.2015.7354671
Abstract: Existing NAT solves the IP address exhaustion problem by binding private IP addresses to public IP addresses, and NAT traversal techniques such as hole punching enable end-to-end communication between devices located in different private networks. However, such technologies centralize the workload at the NAT gateway and increase transmission delay caused by per-packet modification. In this paper, we propose an SDN based fully distributed NAT traversal scheme, which can distribute the workload of NAT processing to devices and reduce transmission delay by packet switching instead of packet modification. Furthermore, we describe an SDN based IoT connectivity management architecture for supporting IoT global connectivity and enhanced real-time performance and security.
Keywords: IP networks; Internet of Things; computer network management; packet switching; software defined networking; telecommunication security; IP address; IoT connectivity management architecture; IoT global connectivity; NAT traversal scheme; SDN; end-to-end devices; hole punching scheme; packet modification; packet switching; transmission delay; Computer architecture; Delays; Internet; Performance evaluation; Ports (Computers); Punching; Connectivity; Network Address Translation; Software Defined Networking (ID#: 16-11238)


P. Porambage, A. Braeken, P. Kumar, A. Gurtov and M. Ylianttila, “Efficient Key Establishment for Constrained IoT Devices with Collaborative HIP-Based Approach,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417094
Abstract: Internet of Things (IoT) technologies interconnect wide ranges of network devices irrespective of their resource capabilities and local networks. Device constraints and dynamic link creation make it challenging to use pre-shared keys for every secure end-to-end (E2E) communication scenario in IoT. Variants of the Host Identity Protocol (HIP) are adopted for constructing dynamic and secure E2E connections among heterogeneous network devices with imbalanced resource profiles and little or no previous knowledge of each other. We propose a collaborative HIP solution with an efficient key establishment component for highly constrained devices in IoT, which delegates the expensive cryptographic operations to the resource-rich devices in the local networks. Finally, we demonstrate the applicability of the key establishment in the collaborative HIP solution for constrained IoT devices, rather than the existing HIP variants, by providing performance and security analysis.
Keywords: Internet of Things; computer network security; protocols; E2E; HIP; Internet of Things technologies; collaborative HIP based approach; constrained IoT devices; device constraints; dynamic link creations; efficient key establishment; host identity protocol; local networks; network devices; preshared keys; resource capabilities; secure end-to-end communication; security analysis; Collaboration; Cryptography; DH-HEMTs; Protocols; Visualization (ID#: 16-11239)


H. Derhamy, J. Eliasson, J. Delsing, P. P. Pereira and P. Varga, “Translation Error Handling for Multi-Protocol SOA Systems,” 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), Luxembourg, 2015, pp. 1-8. doi: 10.1109/ETFA.2015.7301473
Abstract: The IoT research area has evolved to incorporate a plethora of messaging protocol standards, both existing and new, emerging as preferred means of communication. The variety of protocols and technologies enables IoT to be used in many application scenarios. However, the use of incompatible communication protocols also creates vertical silos and reduces interoperability between vendors and technology platform providers. In many applications, it is important that maximum interoperability is enabled, for reasons such as efficiency, security, and end-to-end communication requirements. In terms of error handling, each protocol has its own methods, but there is a gap in bridging errors across protocols. Centralized software buses and integrated protocol agents are used for integrating different communication protocols. However, the aforementioned approaches do not fit well in all Industrial IoT application scenarios. This paper therefore investigates error handling challenges for a multi-protocol SOA-based translator. A proof-of-concept implementation is presented based on MQTT and CoAP. Experimental results show that multi-protocol error handling is possible, and furthermore a number of areas that need more investigation have been identified.
Keywords: open systems; protocols; service-oriented architecture; CoAP; MQTT; centralized software bus; communication protocols; industrial IoT; integrated protocol agents; maximum interoperability; messaging protocol standards; multiprotocol SOA systems; multiprotocol SOA-based translator; translation error handling; Computer architecture; Delays; Monitoring; Protocols; Quality of service; Servers; Service-oriented architecture; Arrowhead; Cyber-physical systems; Error handling; Internet of Things; Protocol translation; SOA; Translation (ID#: 16-11240)


P. Porambage, A. Braeken, P. Kumar, A. Gurtov and M. Ylianttila, “Proxy-Based End-to-End Key Establishment Protocol for the Internet of Things,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 2677-2682. doi: 10.1109/ICCW.2015.7247583
Abstract: The Internet of Things (IoT) drives the world towards an always-connected paradigm by interconnecting wide ranges of network devices irrespective of their resource capabilities and local networks. This inevitably enhances the requirements for constructing dynamic and secure end-to-end (E2E) connections among heterogeneous network devices with imbalanced resource profiles and little or no previous knowledge of each other. Device constraints and dynamic link creation make it challenging to use pre-shared keys for every secure E2E communication scenario in IoT. We propose a proxy-based key establishment protocol for the IoT, which enables any two unknown, highly resource-constrained devices to initiate secure E2E communication. The highly constrained devices should be legitimate and maintain secured connections with the neighbouring less constrained devices in the local networks in which they are deployed. The less constrained devices act as proxies and collaboratively carry out the expensive cryptographic operations during session key computation. Finally, we demonstrate the applicability of our solution in constrained IoT devices by providing performance and security analysis.
Keywords: Internet of Things; cryptographic protocols; next generation networks; E2E connections; IoT drives; cryptographic operations; end-to-end connections; heterogenous network devices; preshared keys; proxy-based end-to-end key establishment protocol; secure E2E communication; Conferences; Cryptography; DH-HEMTs; Internet of things; Polynomials; Protocols (ID#: 16-11241)
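Both HIP-related entries above describe delegating expensive cryptographic operations from a constrained device to resource-rich neighbours. A minimal sketch of that delegation idea, assuming a server-aided Diffie-Hellman style split of the secret exponent between two hypothetical proxies; the toy modulus and every function name here are illustrative, not taken from the papers:

```python
import secrets

# Toy group for illustration only: the prime 2^255 - 19 used as a
# multiplicative-group modulus. A real deployment would use a
# standardized DH group or elliptic-curve operations.
P = 2**255 - 19
G = 2

def split_exponent(x: int, order: int):
    """Additively split x mod the group order so that no single
    proxy ever learns the device's secret exponent."""
    x1 = secrets.randbelow(order)
    x2 = (x - x1) % order
    return x1, x2

def proxy_exponentiate(base: int, share: int) -> int:
    """The expensive modular exponentiation, done by a proxy."""
    return pow(base, share, P)

def device_combine(part1: int, part2: int) -> int:
    """Cheap recombination on the constrained device:
    g^x1 * g^x2 = g^(x1+x2) = g^x (mod P)."""
    return (part1 * part2) % P

# Device's DH secret, with the public value computed via the proxies
x = secrets.randbelow(P - 1)
x1, x2 = split_exponent(x, P - 1)
device_pub = device_combine(proxy_exponentiate(G, x1),
                            proxy_exponentiate(G, x2))
assert device_pub == pow(G, x, P)
```

The same split can be reused against a peer's public value to obtain the shared session key, with the device paying only one modular multiplication per exchange.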


L. Kypus, L. Vojtech and L. Kvarda, “Qualitative and Security Parameters Inside Middleware Centric Heterogeneous RFID/IoT Networks, On-Tag Approach,” Telecommunications and Signal Processing (TSP), 2015 38th International Conference on, Prague, 2015, pp. 21-25. doi: 10.1109/TSP.2015.7296217
Abstract: The work presented in this paper began as preliminary research and analysis and ended with testing of radio frequency identification (RFID) middleware. The intention was to get better insight into the architecture and functionalities with respect to their impact on overall quality of service (QoS). The main part of the paper focuses on the lack of QoS awareness due to missing classification of data originating from tags from the very beginning of the delivery process. Our evaluation method follows up on existing research in the area of QoS for RFID, combining it with a new proposal based on the ISO 25010 standard regarding quality requirements and evaluation, and system and software quality models. The idea is to enhance the application identification area in the user memory bank with encoded QoS flags and security attributes. The proof of concept of on-tag specified classes and attributes is able to manage and intentionally influence application and data processing behavior.
Keywords: middleware; quality of service; radiofrequency identification; software quality; telecommunication computing; IoT networks; QoS awareness; middleware centric heterogeneous RFID network; on-tag approach; quality requirements; radio frequency identification middlewares; software quality models; standard ISO 25010; Ecosystems; Middleware; Protocols; Quality of service; Radiofrequency identification; Security; Standards; Application identification; IoT; QoS flags; RFID; Security attributes (ID#: 16-11242)


S. C. Arseni, S. Halunga, O. Fratu, A. Vulpe and G. Suciu, “Analysis of the Security Solutions Implemented in Current Internet of Things Platforms,” Grid, Cloud & High Performance Computing in Science (ROLCG), 2015 Conference, Cluj-Napoca, 2015, pp. 1-4. doi: 10.1109/ROLCG.2015.7367416
Abstract: Our society finds itself at a point where it becomes more and more bound by the use of technology in every activity, no matter how simple. Following this social trend, the IT paradigm called the Internet of Things (IoT) aims to group every technological end-point that has the ability to communicate under the same “umbrella”. In recent years, many private and public organizations have discussed this topic and tried to provide IoT Platforms that allow the grouping of devices scattered worldwide. Yet, while information flows and a certain level of scalability and connectivity have been assured, one key component, security, remains a vulnerable point of IoT Platforms. In this paper we describe the main features of some of these “umbrellas”, either open source or commercial, while analyzing and comparing the security solutions integrated in each of these IoT Platforms. Moreover, through this paper we try to raise users' and organizations' awareness of the possible vulnerabilities that could appear at any moment when using one of the presented IoT Platforms.
Keywords: Internet of Things; data analysis; security of data; IoT platform; security solution analysis; Authentication; Internet of things; Organizations; Protocols; Sensors; Internet of Things architectures; Internet of Things platforms; platforms security (ID#: 16-11243)


U. Celentano and J. Röning, “Framework for Dependable and Pervasive eHealth Services,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 634-639. doi: 10.1109/WF-IoT.2015.7389128
Abstract: Provision of health care and well-being services at the end-user's residence, together with its benefits, brings important concerns to be dealt with. This article discusses selected issues in supporting dependable pervasive eHealth services. Dependable services need to be implemented in a resource-efficient and safe way due to constrained and concurrent pre-existing conditions and the radio environment. Security is a must when dealing with personal information, and even more critical when it concerns health. Once these fundamental requirements are satisfied, and services are designed in an effective manner, social significance can be achieved in various scenarios. After discussing the above viewpoints, the article concludes with future directions in eHealth IoT, including scaling the system down to the nanoscale to interact more intimately with biological organisms.
Keywords: Internet of Things; health care; software reliability; IoT; dependable service; eHealth service; pervasive service; Data analysis; Data privacy; Distributed databases; Medical services; Privacy; Safety; Security; Dependability; diagnostics; inclusive health care; nanoscale; preventative health care; privacy; remote patient monitoring; resource use efficiency; robustness; safety; security; treatment (ID#: 16-11244)


S. Rao, D. Chendanda, C. Deshpande and V. Lakkundi, “Implementing LWM2M in Constrained IoT Devices,” Wireless Sensors (ICWiSe), 2015 IEEE Conference on, Melaka, 2015, pp. 52-57. doi: 10.1109/ICWISE.2015.7380353
Abstract: LWM2M is an emerging Open Mobile Alliance standard that defines a fast deployable client-server specification to provide various machine to machine services. It provides both efficient device management as well as security workflow for Internet of Things applications, making it especially suitable for use in constrained networks. However, most of the ongoing research activities on this topic focus on the server domain of LWM2M. Enabling relevant LWM2M functionalities on the client side is not only critical and important but challenging as well since these end-nodes are invariably resource constrained. In this paper, we address those issues by proposing the client-side architecture for LWM2M and its complete implementation framework carried out over Contiki-based IoT nodes. We also present a lightweight IoT protocol stack that incorporates the proposed LWM2M client engine architecture and its interfaces. Our implementation is based on the recently released OMA LWM2M v1.0 specification, and supports OMA, IPSO as well as third party objects. We employ a real world application scenario to validate its usability and effectiveness. The results obtained indicate that the memory footprint overheads incurred due to the introduction of LWM2M into the client side IoT protocol stack are around 6-9%, thus making this implementation framework very appealing to even Class 1 constrained device types.
Keywords: Internet of Things; client-server systems; computer network security; mobile computing; Constrained Contiki-based IoT node; IPSO; Internet of Things application; LWM2M client engine architecture; OMA; client-server specification; device management; lightweight IoT protocol stack; machine to machine service; open mobile alliance standard; security workflow; Computer architecture; Engines; Logic gates; Microprogramming; Protocols; Servers; Standards; Constrained Nodes; Device Management; IPSO Objects; IoT Gateway; L WM2M; OMA Objects (ID#: 16-11245)


C. Doukas and F. Antonelli, “Developing and Deploying End-To-End Interoperable & Discoverable IoT Applications,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 673-678. doi: 10.1109/ICC.2015.7248399
Abstract: This paper presents COMPOSE: a collection of open source tools that enable the development and deployment of end-to-end Internet of Things applications and services. COMPOSE targets developers and entrepreneurs, providing a full PaaS and the essential IoT tools for applications and services. Device interoperability, service discovery and composition, security and scalability are integrated and demonstrated in use cases around smart cities and smart retail contexts.
Keywords: Internet of Things; cloud computing; open systems; public domain software; smart cities; COMPOSE; IoT tool; PaaS; device interoperability; end-to-end Internet of Things application; open source tool; platform as a service; service discovery; smart city; Intelligent sensors; Internet of things; Mobile communication; Protocols; Internet of Things; IoT development; Smart City; Smart Retail (ID#: 16-11246)


A. Saxena, V. Kaulgud and V. Sharma, “Application Layer Encryption for Cloud,” 2015 Asia-Pacific Software Engineering Conference (APSEC), New Delhi, India, 2015, pp. 377-384. doi: 10.1109/APSEC.2015.52
Abstract: As we move to the next generation of networks such as the Internet of Things (IoT), the amount of data generated and stored on the cloud is going to increase by several orders of magnitude. Traditionally, storage- or middleware-layer encryption has been used for protecting data at rest. However, such mechanisms are not suitable for cloud databases. More sophisticated methods include user-layer encryption (ULE), where encryption is performed in the end-user's browser, and application-layer encryption (ALE), where encryption is done within the web app. In this paper, we study the security and functionality aspects of cloud encryption and present an ALE framework for Java called JADE that is designed to protect data in the event of a server compromise.
Keywords: Cloud computing; Databases; Encryption; Java; PaaS security; application layer encryption; cloud encryption; cloud security; database security (ID#: 16-11247)


P. Srivastava and N. Garg, “Secure and Optimized Data Storage for IoT through Cloud Framework,” Computing, Communication & Automation (ICCCA), 2015 International Conference on, Noida, 2015, pp. 720-723. doi: 10.1109/CCAA.2015.7148470
Abstract: The Internet of Things (IoT) is the future. With the increasing popularity of the internet, internet connectivity in everyday devices will soon be common practice. Hence, this paper encourages IoT adoption by combining it with cloud computing features. A basic setback of IoT is management of the huge quantity of data it produces. In this paper, we suggest a framework with several data compression techniques to store this large amount of data on the cloud using less space, and by using AES encryption techniques we also improve the security of this data. The framework also shows the interaction of data with reporting and analytic tools through the cloud. We conclude the paper with future scope and possible enhancements of our ideas.
Keywords: Internet of Things; cloud computing; cryptography; data compression; optimisation; storage management; AES encryption technique; Internet of Things; IoT; cloud computing feature; data compression technique; data storage optimization; data storage security; Cloud computing; Encryption; Image coding; Internet of things; Sensors; AES; IoT; actuators; compression; encryption; sensors; trigger (ID#: 16-11248)
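The entry above combines compression with AES encryption before cloud storage. The sketch below illustrates only the compress-then-encrypt ordering that makes this work (ciphertext does not compress, so compression must come first). Since Python's standard library has no AES, a keystream derived from SHA-256 in counter mode stands in for AES-CTR purely for illustration; production code should use real AES from a vetted library, and all function names here are assumptions.

```python
import hashlib
import zlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode stands in for AES-CTR here; this is
    # an illustrative substitute, not a recommended cipher.
    out = bytearray()
    counter = 0
    while len(out) < length:
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big"))
        out += block.digest()
        counter += 1
    return bytes(out[:length])

def store(reading: bytes, key: bytes, nonce: bytes) -> bytes:
    """Compress first (encrypted data is incompressible), then
    encrypt the smaller buffer before uploading to the cloud."""
    compressed = zlib.compress(reading, level=9)
    ks = _keystream(key, nonce, len(compressed))
    return bytes(a ^ b for a, b in zip(compressed, ks))

def load(blob: bytes, key: bytes, nonce: bytes) -> bytes:
    """Reverse the pipeline: decrypt, then decompress."""
    ks = _keystream(key, nonce, len(blob))
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, ks)))
```

For the repetitive sensor readings typical of IoT workloads, the stored blob is much smaller than the plaintext while remaining unreadable without the key.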


K. Yasaki, H. Ito and K. Nimura, “Dynamic Reconfigurable Wireless Connection between Smartphone and Gateway,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 228-233. doi: 10.1109/COMPSAC.2015.234
Abstract: In a broad sense, the Internet of Things (IoT) includes devices that do not have Internet access capability themselves but are aided by a gateway (such as a smartphone) that does have such access. The combination of a gateway and devices with a wireless connection can provide flexibility, but there are limitations on the network capability of each gateway in terms of how many network connections can be accommodated. It would be possible to remove this constraint and provide further flexibility and stability if we could deal with multiple gateways and balance the connections among them. Therefore, we propose a dynamically reconfigurable wireless connection system that can hand over device connections between gateways by introducing a driver management framework that migrates the driver module handling the network connection. We have implemented a prototype using smartphones as gateways, Bluetooth low energy (BLE) sensors as devices, and a Web application that works on an extended Web runtime that can directly control a device from the application. The combination of these, composed by the user, can be migrated from the smartphone to other gateways (including the network connection) by dragging and dropping icons, after which the new gateway and devices take over the combined task. We confirmed that the proposed architecture enables end users to utilize devices flexibly and can easily migrate the network connections of a particular service to another gateway.
Keywords: Internet; Internet of Things; internetworking; network servers; smart phones; Bluetooth low energy sensors; Internet access; Internet of Things; IoT; Web application; driver management framework; driver module handling; dynamic reconfigurable wireless connection system; extended Web runtime; multiple gateways; network connections; smartphone; Communication system security; IEEE 802.11 Standard; Logic gates; Protocols; Sensors; Wireless communication; Wireless sensor networks; Internet of Things; Javascript; dynamic reconfiguration; gateway; heterogeneity; mash-up; smartphone (ID#: 16-11249) 


T. F. J. M. Pasquier, J. Singh, J. Bacon and O. Hermant, “Managing Big Data with Information Flow Control,” 2015 IEEE 8th International Conference on Cloud Computing, New York City, NY, 2015, pp. 524-531. doi: 10.1109/CLOUD.2015.76
Abstract: Concern about data leakage is holding back more widespread adoption of cloud computing by companies and public institutions alike. To address this, cloud tenants/applications are traditionally isolated in virtual machines or containers. But an emerging requirement is for cross-application sharing of data, for example, when cloud services form part of an IoT architecture. Information Flow Control (IFC) is ideally suited to achieving both isolation and data sharing as required. IFC enhances traditional Access Control by providing continuous, data-centric, cross-application, end-to-end control of data flows. However, large-scale data processing is a major requirement of cloud computing and is infeasible under standard IFC. We present a novel, enhanced IFC model that subsumes standard models. Our IFC model supports 'Big Data' processing, while retaining the simplicity of standard IFC and enabling more concise, accurate and maintainable expression of policy.
Keywords: Big Data; Internet of Things; authorisation; cloud computing; Big Data management; IFC; IoT architecture; access control; cloud services; cloud tenants; containers; cross-application data sharing; data flows; data leakage; information flow control; large-scale data processing; virtual machines; Access control; Companies; Context; Data models; Hospitals; Standards; Data Management; Information Flow Control; Security (ID#: 16-11250)
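The core rule standard IFC builds on can be sketched in a few lines: data may flow from entity a to entity b only if b's secrecy label contains all of a's tags. This is a generic illustration of tag-based IFC, not the paper's enhanced Big Data model; the example labels are invented:

```python
def can_flow(source_secrecy, target_secrecy):
    """Standard IFC secrecy check: every tag on the source must also
    appear on the target, so data never flows to a less-restricted
    context. (Generic illustration; the paper extends this model.)"""
    return source_secrecy <= target_secrecy  # subset test on tag sets

# Example labels: a medical record tagged for hospital + patient consent.
record = {"hospital", "patient-consent"}
analytics_service = {"hospital", "patient-consent", "audit"}
public_dashboard = {"audit"}
```

The check is evaluated continuously on every flow, which is what gives IFC its end-to-end, data-centric character compared with one-shot access control.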


A. J. Poulter, S. J. Johnston and S. J. Cox, “Using the MEAN Stack to Implement a RESTful Service for an Internet of Things Application,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 280-285. doi: 10.1109/WF-IoT.2015.7389066
Abstract: This paper examines the components of the MEAN development stack (MongoDb, Express.js, Angular.js, & Node.js) and demonstrates their benefits and suitability for implementing RESTful web-service APIs for Internet of Things (IoT) appliances. In particular, we show an end-to-end example of this stack and discuss in detail the various components required. The paper also describes an approach to establishing a secure mechanism for communicating with IoT devices, using pull-communications.
Keywords: Internet of Things; Web services; application program interfaces; security of data; software tools; Angular.js; Express.js; Internet of Things application; IoT devices; MEAN development stack; MongoDb; Node.js; RESTful Web-service API; pull-communications; secure mechanism; Databases; Hardware; Internet of things; Libraries; Logic gates; Servers; Software; IoT; MEAN; REST; web programming (ID#: 16-11251)


Z. Liu, Mianxiong Dong, Bo Gu, Cheng Zhang, Y. Ji and Y. Tanaka, “Inter-Domain Popularity-Aware Video Caching in Future Internet Architectures,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 404-409. doi: (not provided)
Abstract: The current TCP/IP-based network suffers from its reliance on IP, especially in the era of the Internet of Things (IoT). Recently, Content Centric Networking (CCN) has been proposed as an alternative future network architecture. In CCN, the data itself, which is authenticated and secured, carries a name and can be requested directly at the network level instead of through IP and the Domain Name System (DNS). Another difference between CCN and traditional networks is that routers in CCN have caching abilities, so end users can obtain data from routers rather than from the remote server if the content has been stored there. Overall network performance can thus be improved by reducing the required transmission hops, and the advantage of CCN caching has been shown in the literature. In this paper, we design a new popularity-aware video caching policy for CCN to handle the 'redundancy' problem in existing schemes, where the same content may be stored multiple times along the road from server to users, leading to significant performance degradation. Simulations show that the proposed scheme performs better than the existing caching policies.
Keywords: Internet; Internet of Things; CCN; DNS; Internet of things; TCP-IP based network; content centric network; domain name system; future Internet architecture; interdomain popularity-aware video caching; IoT; redundancy problem; remote server; router; Artificial neural networks; Degradation; IP networks; Indexes; Redundancy; Servers; Topology (ID#: 16-11252)
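The "redundancy" problem the authors target — the same content cached at every router along the delivery path — can be contrasted with a popularity threshold in a small sketch. The class, threshold, and behavior below are illustrative only, not the paper's actual policy:

```python
from collections import Counter

class PopularityCache:
    """Cache content at a router only once it has been requested at
    least `threshold` times, instead of caching everything on every
    pass (a simplified stand-in for a popularity-aware CCN policy)."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        self.requests = Counter()   # per-content request counts
        self.store = set()          # content cached at this router

    def request(self, name):
        if name in self.store:
            return "hit"
        self.requests[name] += 1
        if self.requests[name] >= self.threshold:
            self.store.add(name)  # popular enough: cache it now
        return "miss"
```

Placing such a filter at each router means unpopular content is fetched end-to-end without occupying cache space, while popular content is cached after a few requests.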


P. Porambage, A. Braeken, A. Gurtov, M. Ylianttila and S. Spinsante, “Secure End-to-End Communication for Constrained Devices in IoT-Enabled Ambient Assisted Living Systems,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 711-714. doi: 10.1109/WF-IoT.2015.7389141
Abstract: Internet of Things (IoT) technologies interconnect broad ranges of network devices irrespective of their resource capabilities and local networks. In order to improve the quality of life of elderly people, Ambient Assisted Living (AAL) systems are also widely deployed in the context of IoT applications. To preserve user security and privacy in AAL systems, it is important to ensure secure communication link establishment between the medical devices and the remote hosts or servers interested in accessing critical health data. However, due to the limited resources available in such constrained devices, it is challenging to employ the expensive cryptographic operations of conventional security protocols. Therefore, in this paper we propose a novel proxy-based authentication and key establishment protocol, which is lightweight and suitable for safeguarding sensitive data generated by resource-constrained devices in IoT-enabled AAL systems.
Keywords: Internet of Things; assisted living; cryptographic protocols; data privacy; geriatrics; health care; medical computing; Internet of Things technology; IoT-enabled ambient assisted living system; constrained device; critical health data assessment; cryptographic operation; elderly people; key establishment protocol; medical device; proxy-based authentication protocol; remote host; remote server; secure end-to-end communication link; security protocol; user privacy; user security; Authentication; Cryptography; DH-HEMTs; Protocols; Senior citizens; Sensors; authentication; key establishment; proxy; resource-constrained device (ID#: 16-11253)
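A lightweight key establishment of the general kind this abstract describes can be sketched with standard-library HMAC key derivation: the constrained device and the remote server each derive the same session key from a pre-shared master secret and a fresh nonce relayed via the proxy. All names and the message flow here are illustrative assumptions; the paper's actual protocol differs:

```python
import hashlib
import hmac
import os

def derive_session_key(master_secret: bytes, nonce: bytes) -> bytes:
    """Cheap symmetric derivation suitable for constrained devices:
    HMAC-SHA256(master_secret, nonce). Illustrative only -- the paper
    defines its own proxy-based protocol."""
    return hmac.new(master_secret, nonce, hashlib.sha256).digest()

master = b"pre-shared-device-secret"
nonce = os.urandom(16)  # fresh per session; relayed by the proxy
device_key = derive_session_key(master, nonce)
server_key = derive_session_key(master, nonce)
```

Because only hashing is involved, no public-key operation runs on the constrained device, which is the efficiency property such protocols aim for.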


H. C. Pöhls, “JSON Sensor Signatures (JSS): End-to-End Integrity Protection from Constrained Device to IoT Application,” Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS), 2015 9th International Conference on, Blumenau, 2015, pp. 306-312. doi: 10.1109/IMIS.2015.48
Abstract: Integrity of sensor readings or actuator commands is of paramount importance for a secure operation in the Internet-of-Things (IoT). Data from sensors might be stored, forwarded and processed by many different intermediate systems. In this paper we apply digital signatures to achieve end-to-end message level integrity for data in JSON. JSON has become very popular to represent data in the upper layers of the IoT domain. By signing JSON on the constrained device we extend the end-to-end integrity protection starting from the constrained device to any entity in the IoT data-processing chain. Just the JSON message's contents including the enveloped signature and the data must be preserved. We reached our design goal to keep the original data accessible by legacy parsers. Hence, signing does not break parsing. We implemented an elliptic curve based signature algorithm on a class 1 (following RFC 7228) constrained device (Zolertia Z1: 16-bit, MSP 430). Furthermore, we describe the challenges of end-to-end integrity when crossing from IoT to the Web and applications.
Keywords: Internet of Things; Java; data integrity; digital signatures; public key cryptography; Internet-of-Things; IoT data-processing chain; JSON sensor signatures; actuator commands; digital signatures; elliptic curve based signature algorithm; end-to-end integrity protection; end-to-end message level integrity; enveloped signature; legacy parsers; sensor readings integrity; Data structures; Digital signatures; Elliptic curve cryptography; NIST; Payloads; XML; ECDSA; IoT; JSON; integrity (ID#: 16-11254)
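The enveloped-signature idea — embedding the signature inside the JSON object while leaving the payload readable by legacy parsers — can be sketched with standard-library HMAC as a stand-in for the paper's ECDSA (ECDSA needs a third-party library, so this sketch substitutes a symmetric MAC; the key and field names are illustrative):

```python
import hashlib
import hmac
import json

KEY = b"sensor-signing-key"  # stand-in for the device's ECDSA key pair

def sign(message: dict) -> dict:
    """Canonicalize the payload (sorted keys), compute the tag, and
    envelope it inside the message so ordinary JSON parsers still work."""
    payload = json.dumps(message, sort_keys=True).encode()
    tag = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return {**message, "signature": tag}

def verify(signed: dict) -> bool:
    """Strip the enveloped signature, re-canonicalize, and compare."""
    body = {k: v for k, v in signed.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])
```

The design point the paper makes survives the substitution: signing does not break parsing, because the signed object is still plain JSON with one extra field.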


E. Z. Tragos et al., “An IoT Based Intelligent Building Management System for Ambient Assisted Living,” 2015 IEEE International Conference on Communication Workshop (ICCW), London, 2015, pp. 246-252. doi: 10.1109/ICCW.2015.7247186
Abstract: Ambient Assisted Living (AAL) describes an ICT-based environment that exposes personalized and context-aware intelligent services, thus creating an appropriate experience for the end user to support independent living and improve the everyday quality of life of both healthy elderly and disabled people. The social and economic impact of AAL systems has boosted research activities that, combined with the advantages of enabling technologies such as Wireless Sensor Networks (WSNs) and the Internet of Things (IoT), can greatly improve the performance and efficiency of such systems. Sensors and actuators inside buildings can create an intelligent sensing environment that helps gather real-time data on patients, monitor their vital signs, and identify abnormal situations that need medical attention. AAL applications may be life critical and therefore have very strict performance requirements with respect to the reliability of the devices, the ability of the system to gather data from heterogeneous devices, the timeliness of the data transfer, and their trustworthiness. This work presents the functional architecture of SOrBet (Marie Curie IAPP project), which provides a framework for efficiently interconnecting smart devices, equipping them with intelligence that helps automate many of the everyday activities of the inhabitants. SOrBet is a paradigm shift from traditional AAL systems, based on a hybrid architecture including both distributed and centralized functionalities: extensible, self-organising, robust and secure, built on the concept of “reliability by design,” and thus capable of meeting the strict Quality of Service (QoS) requirements of demanding applications such as AAL.
Keywords: Internet of Things; assisted living; building management systems; patient monitoring; quality of service; wireless sensor networks; IoT based intelligent building management system; SOrBet; ambient assisted living; hybrid architecture; Artificial intelligence; Automation; Buildings; Quality of service; Reliability; Security; Sensors (ID#: 16-11255)


N. Pazos, M. Müller, M. Aeberli and N. Ouerhani, “ConnectOpen — Automatic Integration of IoT Devices,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 640-644. doi: 10.1109/WF-IoT.2015.7389129
Abstract: There exists today a wide consensus that the Internet of Things (IoT) is creating a wide range of business opportunities for various industries and sectors such as manufacturing, healthcare, public infrastructure management, and telecommunications. On the other hand, the technological evolution of IoT is facing serious challenges. The fragmentation in terms of communication protocols and data formats at the device level is one of these challenges. Vendor-specific application architectures, proprietary communication protocols and a lack of IoT standards are some of the reasons behind this fragmentation. In this paper we propose a software-enabled framework to address the fragmentation challenge. The framework is based on flexible communication agents that are deployed on a gateway and can be adapted to various devices communicating in different data formats over different communication protocols. The communication agent is automatically generated from specifications and automatically deployed on the gateway in order to connect the devices to a central platform where data are consolidated and exposed via REST APIs to third-party services. Security and scalability aspects are also addressed in this work.
Keywords: Internet of Things; application program interfaces; cloud computing; computer network security; internetworking; transport protocols; ConnectOpen; IoT fragmentation; REST API; automatic IoT device integration; central platform; communication agents; communication protocol; communication protocols; data formats; device level; scalability aspect; security aspect; software enabled framework; third party services; Business; Embedded systems; Logic gates; Protocols; Scalability; Security; Sensors; Communication Agent; End Device; Gateway; IoT; Kura; MQTT; OSGi (ID#: 16-11256)



Expandability 2015






The expansion of a network to more nodes creates security problems. For the Science of Security community, expandability relates to resilience and compositionality. The research work cited here was presented in 2015.

Z. Li and Y. Yang, “ABCCC: An Advanced Cube Based Network for Data Centers,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 547-556. doi: 10.1109/ICDCS.2015.62
Abstract: A new network structure called BCube Connected Crossbars (BCCC) was recently proposed. Its short diameter, good expandability and low cost make it a very promising topology for data center networks. However, it can utilize only two NIC ports of each server, which is suitable for today's technology, even when more ports are available. Due to technology advances, servers with more NIC ports are emerging and will become low-cost commodities. In this paper, we propose a more general server-centric data center network structure, called Advanced BCube Connected Crossbars (ABCCC), which can utilize inexpensive commodity off-the-shelf switches and servers with any fixed number of NIC ports and provide good network properties. Like BCCC, ABCCC has good expandability: when expanding, there is no need to alter the existing system, only to add new components to it. Thus the expansion cost that BCube suffers from can be significantly reduced in ABCCC. We also introduce an addressing scheme and an efficient routing algorithm for one-to-one communication in ABCCC. We make comprehensive comparisons between ABCCC and some popular existing structures in terms of several critical metrics, such as diameter, network size, bisection bandwidth and capital expenditure. We also conduct extensive simulations to evaluate ABCCC, which show that ABCCC achieves the best trade-off among all these critical metrics and suits many different applications through fine-tuning of its parameters.
Keywords: computer centres; computer networks; topology; ABCCC; NIC port; advanced BCube connected crossbar; off-the-shelf switch; one-to-one communication; routing algorithm; server-centric data center network structure; Hardware; Hypercubes; Network topology; Ports (Computers); Routing; Servers; Topology; Data center networks; expandability; network diameter; server-centric (ID#: 16-9991)
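Diameter, one of the metrics the paper compares, can be measured for any small candidate topology by running BFS from every node. This is a generic measurement utility, not the ABCCC construction itself; the 4-cycle example is invented for illustration:

```python
from collections import deque

def diameter(adj):
    """Longest shortest path over all node pairs in a connected graph,
    via BFS from each node. `adj` maps node -> iterable of neighbours."""
    best = 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

# A 4-cycle: opposite corners are 2 hops apart, so the diameter is 2.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

For server-centric topologies like BCCC/ABCCC, the same routine applies once the combined server-and-switch graph is written down as an adjacency map.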


Z. Li and Y. Yang, “GBC3: A Versatile Cube-Based Server-Centric Network for Data Centers,” in IEEE Transactions on Parallel and Distributed Systems, vol. 27, no. 10, pp. 2895-2910, 2016. doi: 10.1109/TPDS.2015.2511725
Abstract: A new network structure called BCube Connected Crossbars (BCCC) was recently proposed. Its short diameter, good expandability and low cost make it a very promising topology for data center networks. However, it can utilize only two NIC ports of each server, which is suitable for today's technology, even though more NIC ports are available. Due to technology advances, servers with more NIC ports are emerging and will become low-cost commodities. In this paper, we propose a more general server-centric data center network structure, called GBC3, which can utilize inexpensive commodity off-the-shelf switches and servers with any fixed number of NIC ports and provide good network properties. Like BCCC, GBC3 has good expandability: when expanding, there is no need to alter the existing system, only to add new components to it. Thus the expansion cost that BCube suffers from can be significantly reduced in GBC3. We also introduce an addressing scheme and several efficient routing algorithms for one-to-one, one-to-all and one-to-many communications in GBC3, respectively. We make comprehensive comparisons between GBC3 and some popular existing structures in terms of several critical metrics, such as diameter, network size, bisection bandwidth and capital expenditure. We also conduct extensive experiments to evaluate GBC3, which show that GBC3 achieves the best flexibility for making trade-offs among all these critical metrics and can suit many different applications through fine-tuning of its parameters.
Keywords: Hardware; Hypercubes; Network topology; Ports (Computers); Routing; Servers; Topology; Data center networks; expandability; network diameter; server-centric; topology (ID#: 16-9992)


Y. Cheng, D. Zhao, F. Tao, L. Zhang and Y. Liu, “Complex Networks Based Manufacturing Service and Task Management in Cloud Environment,” Industrial Electronics and Applications (ICIEA), 2015 IEEE 10th Conference on, Auckland, 2015, pp. 242-247. doi: 10.1109/ICIEA.2015.7334119
Abstract: In the process of development and application of service-oriented manufacturing (SOM) systems, e.g., cloud manufacturing (CMfg), manufacturing resource allocation is always one of the most important issues that need to be addressed. With the permeation of the Internet of things (IoT), big data, and cloud technologies into manufacturing, manufacturing service and task management in SOM faces some new challenges under the cloud environment. In consideration of the characteristics of the cloud environment (i.e., complexity, sociality, dynamics, uncertainty, distribution, expandability, etc.), a manufacturing service and task management method based on complex networks is proposed in this paper. The models of the manufacturing service network (S_Net) and the manufacturing task network (T_Net) are built according to the digital description of manufacturing services and tasks. Then manufacturing service management upon S_Net and manufacturing task management upon T_Net are discussed respectively. Finally, conclusions and future work are presented.
Keywords: cloud computing; manufacturing data processing; service-oriented architecture; Big Data; CMfg; Internet of things; SOM system; S_Net; T_Net; cloud environment characteristics; cloud manufacturing; cloud technologies; complex network-based manufacturing service; complexity characteristic; distribution characteristic; dynamics characteristic; expandability characteristic; manufacturing resource allocation; manufacturing service network; manufacturing task network; service-oriented manufacturing system; sociality characteristic; task management; uncertainty characteristic; Cloud computing; Collaborative work; Complex networks; Computational modeling; Correlation; Manufacturing; Resource management; cloud environment; manufacturing service network (S_Net); manufacturing task network (T_Net); service management; service-oriented manufacturing (SOM) (ID#: 16-9993)


R. Zhao and J. Zhang, “High Efficiency Hybrid Current Balancing Method for Multi-Channel LED Drive,” 2015 IEEE Applied Power Electronics Conference and Exposition (APEC), Charlotte, NC, 2015, pp. 854-860. doi: 10.1109/APEC.2015.7104449
Abstract: In this paper, a novel hybrid current balancing method for multi-channel LED drive, based on a quasi-two-stage converter, is proposed. In the proposed structure, each output module has two outputs whose currents can be balanced by a capacitor based on the charge balancing principle. A switching-mode current regulator is adopted for each output module to balance the currents of the output modules. Since the current regulator processes only part of the total output power, the cost is low and the efficiency is high. The proposed method combines the advantages of passive and active current balancing methods, and is simple and flexible for load expandability. Performance of the proposed method is validated by simulation and experimental results from a 120W prototype with four LED strings.
Keywords: capacitors; driver circuits; electric current control; light emitting diodes; switching convertors; active current balancing method; capacitor; hybrid current balancing method; load expandability; multichannel LED drive; passive current balancing method; power 120 W; quasitwo-stage converter; switching mode current regulator; Adaptive control; Capacitors; DC-DC power converters; Light emitting diodes; Regulators; Switches; Voltage control; Current balancing method; Hybrid; LLC; Multi-output LED driver; high efficiency (ID#: 16-9994)


S. C. Lin, C. Wang, C. Y. Lo, Y. W. Chang, H. Y. Lai and P. L. Hsu, “Using Constructivism as a Basic Idea to Design Multi-Situated Game-Based Learning Platform and ITS Application,” Advanced Applied Informatics (IIAI-AAI), 2015 IIAI 4th International Congress on, Okayama, 2015, pp. 711-712. doi: 10.1109/IIAI-AAI.2015.264
Abstract: Nowadays, e-learning has become a popular learning strategy because of advances in technology and the development of learning platforms. At present, most platforms are designed for a single topic rather than multiple topics and are difficult to extend to different topics, given the learning mode, design, and limitations of game-based learning applications. Therefore, in this study, we developed a tower-defense game-based platform grounded in situated learning theory and the constructivist view of knowledge, and this platform can be applied to diverse learning programs. In this platform, users learn in a simulated scenario. Additionally, the flexible design of the platform provides usability and expandability.
Keywords: computer aided instruction; computer games; diverse learning programs; e-learning; knowledge constructivism; learning mode; learning strategies; multisituated game-based learning platform design; situated learning theory; system expandability; system usability; tower defense game-based learning platform; Electronic learning; Games; Information management; Multimedia communication; Poles and towers; Usability; Constructivist Learning; Game-based learning; Situated learning (ID#: 16-9995)


L. Mossucca et al., “Polar Data Management Based on Cloud Technology,” Complex, Intelligent, and Software Intensive Systems (CISIS), 2015 Ninth International Conference on, Blumenau, 2015, pp. 459-463. doi: 10.1109/CISIS.2015.67
Abstract: IDIPOS, which stands for Italian Database Infrastructure for Polar Observation Sciences, was conceived to carry out a feasibility study on an infrastructure devoted to the management of data coming from Polar areas. The framework adopts a modular approach with two main parts: the first defines the main components of the infrastructure, and the second selects possible cloud solutions to manage and organize these components. The main purpose is the creation of a scalable and flexible infrastructure for the exchange of scientific data from various application fields. The envisaged infrastructure is based on the cutting-edge technology of the Community Cloud Infrastructure for the aggregation and federation of resources, to optimize the use of hardware. The infrastructure is composed of a central node, several nodes distributed across Italy, and interconnections with other systems realized in Polar areas. This paper aims to investigate the cloud solution and explore the key factors that may influence cloud adoption in the project, such as scalability, flexibility and expandability. In particular, the main cloud aspects addressed relate to data storage, data management, data analysis, and infrastructure federation, following recommendations from the Cloud Expert Group to allow information sharing in scientific communities.
Keywords: cloud computing; data analysis; database management systems; open systems; scientific information systems; Cloud Expert Group; IDIPOS; Italian Database Infrastructure for Polar Observation Sciences; central node; cloud technology; community cloud infrastructure; cutting-edge technology; data management; data storage; expandability; flexibility; information sharing; infrastructure federation; modular approach; polar data management; resource aggregation; resource federation; scalability; scientific communities; scientific data exchange; Cloud computing; Clouds; Communities; Computer architecture; Interoperability; Organizations; Servers; Polar Observation Sciences; e-science; interoperability (ID#: 16-9996)


A. Musa, T. Minotani, K. Matsunaga, T. Kondo and H. Morimura, “An 8-Mode Reconfigurable Sensor-Independent Readout Circuit for Trillion Sensors Era,” Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, Singapore, 2015, pp. 1-6. doi: 10.1109/ISSNIP.2015.7106913
Abstract: The Internet of Things (IoT) is opening the doors to many new devices and applications. Such an increase in the variety of applications requires reconfigurable, flexible and expandable hardware to reduce fabrication and development costs. This has been achieved for the digital part with devices like Arduino. However, sensor readout Analog-Front-End (AFE) circuits are mainly designed for a specific sensor type or application. Such an approach is feasible for the current small number of applications and sensors, but cost will increase drastically as the variety and number of applications and sensors grow, and the flexibility and expandability of the system will be limited. Therefore, a universal sensor platform that can be reconfigured to adapt to various sensors and applications is needed. An array of such circuits can be made with the same sensor to increase measurement accuracy and reliability, or used to integrate heterogeneous sensors, making the system adaptable to many applications by activating only the desired sensors. In this paper, an 8-mode reconfigurable sensor readout AFE with an offset-cancellation-resolution enhancing scheme is proposed as a step towards a universal sensor interface. The proposed AFE can be reconfigured to interface resistive, capacitive, current-producing, and voltage-producing sensors through direct or capacitive connection to its terminals. The proposed system is fabricated in a 180nm CMOS process and has successfully measured the four types of sensor outputs. It has also been interfaced to an Arduino board to allow easy interfacing of various sensors. Therefore, the proposed work can serve as a general-purpose AFE, reducing manufacturing and development costs and increasing flexibility and expandability.
Keywords: CMOS digital integrated circuits; Internet of Things; capacitive sensors; cloud computing; digital-analogue conversion; microprocessor chips; readout electronics; 8-mode reconfigurable sensor readout AFE; 8-mode reconfigurable sensor-independent readout circuit; Arduino board; CMOS process; IoT; current producing sensors; development cost reduction; digital part; expandable hardware; flexible hardware; heterogeneous sensors; interface resistive sensors; measurement accuracy; offset-cancellation-resolution enhancing scheme; reconfigurable hardware; reliability; sensor outputs; sensor readout AFE circuit; sensor readout analog-front-end circuits; system flexibility; trillion sensor era; universal sensor interface; universal sensor platform; voltage producing sensors; Arrays; Current measurement; Electrical resistance measurement; Integrated circuits; Signal resolution; Transducers; Voltage measurement (ID#: 16-9997)


Z. Xu and C. Zhang, “Optimal Direct Voltage Control of MTDC Grids for Integration of Offshore Wind Power,” Smart Grid Technologies - Asia (ISGT ASIA), 2015 IEEE Innovative, Bangkok, 2015, pp. 1-6. doi: 10.1109/ISGT-Asia.2015.7387179
Abstract: This paper presents an optimal control of multiterminal high voltage DC (MTDC) networks. Conventional methods of controlling the direct voltages of MTDC networks suffer from a series of issues, such as the inability to steer power flow, limited expandability for scaling up, and poor dynamic responses. In this paper, an innovative strategy for regulating DC voltages is derived through three main steps: calculation of DC load flow, optimization of power flow, and N-1 security for MTDC networks. Further, this strategy is numerically tested by incorporating loss minimization in an MTDC network. The advantages of the control strategy are verified by simulations using the MATLAB/Simulink package.
Keywords: dynamic response; load flow; offshore installations; optimal control; optimisation; power system security; voltage control; wind power plants; DC load flow; DC voltages; MATLAB-Simulink package; MTDC grids; MTDC networks; N-1 security; dynamic responses; loss minimization; multiterminal high voltage DC networks; offshore wind power; optimal direct voltage control; power flow; HVDC transmission; Load flow; Reactive power; Security; Voltage control; Wind power generation; Control; MTDC; Power flow (ID#: 16-9998)


A. Agarwal, V. Mukati and P. Kumar, “Performance Analysis of Variable Rate Multicarrier Transmission Schemes over LMS Channel,” Electronics, Computing and Communication Technologies (CONECCT), 2015 IEEE International Conference on, Bangalore, 2015, pp. 1-6. doi: 10.1109/CONECCT.2015.7383866
Abstract: With the increasing demand for increased coverage area, higher QoS, ubiquitous availability, flexibility and expandability, Land Mobile Satellite (LMS) multimedia communication is gaining popularity over existing Land Mobile Terrestrial (LMT) communication. This paper presents a comparative study of the GO-OFDMA and VSL MC-CDMA variable rate transmission schemes over L- and Ka-band LMS channels. For both schemes, four variable rate classes employing 15 users are considered. It is shown that, for both frequency bands, the BER performance of the GO-OFDMA scheme is better than that of VSL MC-CDMA for all the different data-rate classes of users, though for Ka-band the performance of both schemes is poorer than for L-band. The performance of both schemes at different elevation angles is also illustrated and analyzed. Finally, the composite signal PAPR performance of both transmission schemes is shown and compared, and the PAPR performance of the GO-OFDMA scheme is observed to be better than that of VSL MC-CDMA. Hence the GO-OFDMA scheme is a suitable candidate for variable rate communication over the LMS channel.
Keywords: OFDM modulation; code division multiple access; error statistics; frequency division multiple access; land mobile radio; mobile satellite communication; multimedia communication; quality of service; GO-OFDMA scheme BER performance; Ka-band LMS channel; L-band LMS channel; LMS multimedia communication channel; QoS; VSL MC-CDMA scheme; composite signal PAPR performance; land mobile satellite multimedia communication; multicarrier code division multiple access; orthogonal frequency division multiple access; variable rate multicarrier transmission scheme performance analysis; variable spreading length; Channel models; Mobile communication; Multicarrier code division multiple access; OFDM; Satellite broadcasting; Satellites; Shadow mapping; GO-OFDMA; L and Ka-Band; LMS channel; PAPR; VSL MC-CDMA (ID#: 16-9999)


Q. Qiu, Xiao Yao, Cuiting Chen, Yu Liu and Jinyun Fang, “A Spatial Data Partitioning and Merging Method for Parallel Vector Spatial Analysis,” Geoinformatics, 2015 23rd International Conference on, Wuhan, 2015, pp. 1-5. doi: 10.1109/GEOINFORMATICS.2015.7378651
Abstract: Based on the principles of the proximity of spatial elements and the equilibrium of spatial data size, this paper presents a data partitioning and merging method based on a space-filling curve and collections of spatial features. In the data-reducing stage, the method applies the principle of dynamic tree merging and reduces the number of data serialization and deserialization operations. Experiments show that these methods can cut down each process's computing and merging time, improve the degree of load balancing, and greatly improve the efficiency and expandability of the parallel algorithm.
Keywords: data reduction; geographic information systems; merging; parallel algorithms; vectors; data deserialization; data reducing section; data serialization; dynamic tree merging; load balancing degree; parallel algorithm; parallel vector spatial analysis; spatial data merging method; spatial data partitioning method; spatial data size equilibrium; spatial element proximity; spatial feature collection; spatial filling curve; Algorithm design and analysis; Hardware; Linux cluster; MPI; SLFB; serialize; spatial filling curve  (ID#: 16-10000)
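Partitioning by a space-filling curve, as in this paper, amounts to sorting features by their curve index and cutting the sorted list into near-equal chunks, so nearby features land in the same partition while partition sizes stay balanced. A minimal Z-order (Morton) sketch for 2-D integer points follows; it is illustrative only, not the authors' SLFB implementation:

```python
def morton(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of x and y to get the Z-order curve index,
    so points close in 2-D tend to get close indices."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

def partition(points, n_parts):
    """Sort points along the Z-curve, then cut into near-equal chunks:
    balances data size while preserving spatial proximity."""
    ordered = sorted(points, key=lambda p: morton(*p))
    size = -(-len(ordered) // n_parts)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]
```

The equal-size cut is what gives each parallel process a comparable workload; the curve ordering is what keeps each chunk spatially compact.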


K. Liu, R. Fu, Y. Gao, Y. Sun and P. Yan, “High Voltage Regulating Frequency AC Power Supply Based on CAN Bus Communication Control,” 2015 IEEE Pulsed Power Conference (PPC), Austin, TX, 2015, pp. 1-4. doi: 10.1109/PPC.2015.7297000
Abstract: High-voltage high-frequency AC power supplies (HVHFACPS) are widely used in military, industrial, and scientific-research applications. Different applications call for different functions, such as selectable control modes, selectable work modes, adjustable output voltage, adjustable frequency, and even integratability and expandability. This paper introduces an HVHFACPS whose output voltage can be regulated from 0 to 30 kV and whose output frequency can be regulated from 1 kHz to 50 kHz. Continuous and discontinuous work modes can be chosen for a continuous or discontinuous AC voltage output, and the work time and frequency can be regulated in the discontinuous work mode. Remote and local control modes allow remote control by computer or local control from the keyboard on the cabinet panel. The control system of this power supply includes a CAN bus communication function, so it can be connected to a CAN bus network and work cooperatively with other equipment. Experiments such as dielectric barrier discharge (DBD) and plasma generation were carried out using the power supply, and the results show that the functions are realized and the performance is good.
Keywords: controller area networks; field buses; frequency control; power supply circuits; telecontrol; voltage control; CAN bus communication control; HVHFACPS; cabinet panel; high voltage high frequency AC power supply; remote control; Control systems; Digital signal processing; Frequency control; Inductance; Power supplies; Resonant frequency; Voltage control (ID#: 16-10001)


Z. Li and Y. Yang, “Permutation Generation for Routing in BCube Connected Crossbars,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 5460-5465. doi: 10.1109/ICC.2015.7249192
Abstract: BCube Connected Crossbars (BCCC) is a recently proposed network structure with short diameter and good expandability for cloud-based networks. Its diameter increases linearly with its order (dimension), and it has multiple near-equal parallel paths between any pair of servers. These advantages make BCCC a very promising network structure for next-generation cloud-based networks. An efficient routing algorithm for BCCC has also been proposed, in which a permutation determines which order (or dimension) is routed first. However, there has been no discussion of how to choose the permutation. In this paper, we focus on permutation generation for routing in BCCC. We analyze the impact of choosing different permutations in both theory and simulation, and propose two efficient permutation generation algorithms that take advantage of the BCCC structure and give good performance.
Keywords: cloud computing; multicast communication; telecommunication network routing; BCube connected crossbars; multiple near-equal parallel paths; next generation cloud-based networks; permutation generation; Aggregates; Arrays; Cloud computing; Next generation networking; Routing; Servers; Throughput; BCube Connected Crossbars (BCCC); Cloud-based networks; dual-port server; load balance (ID#: 16-10002)
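To make the role of the permutation concrete: in dimension-order routing, the permutation fixes the order in which the address digits are corrected, and different permutations yield different candidate paths. A toy sketch (assumed names, not the authors' algorithm):

```python
from itertools import permutations

def route(src, dst, perm):
    """Correct the address one dimension at a time, in the order given
    by `perm`; returns the list of addresses visited (the path)."""
    cur = list(src)
    path = [tuple(cur)]
    for d in perm:
        if cur[d] != dst[d]:
            cur[d] = dst[d]
            path.append(tuple(cur))
    return path

def all_paths(src, dst):
    """Enumerate the candidate paths produced by every dimension
    permutation (exhaustive; fine only for small dimension counts)."""
    return {tuple(route(src, dst, p)) for p in permutations(range(len(src)))}
```

Choosing among these permutations (e.g., to balance load across links) is exactly the open question the paper addresses.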


A. S. Bouhouras, K. I. Sgouras and D. P. Labridis, “Multi-Objective Planning Tool for the Installation of Renewable Energy Resources,” in IET Generation, Transmission & Distribution, vol. 9, no. 13, pp. 1782-1789, Oct. 01 2015. doi: 10.1049/iet-gtd.2014.1054
Abstract: This study examines how environmental and socioeconomic criteria affect renewable energy resource (RES) distribution strategic plans with respect to national energy policies. Four criteria are introduced, with respective coefficients formulated to quantify their capacity. Moreover, these coefficients are normalised so that the effect of each criterion can be combined under a uniform formulation. The base-case scenario in this work considers an initially available RES capacity distributed equally among the candidate regions. Six scenarios with different prioritisations are examined. The results show that different prioritisation criteria yield significant variations in the assigned regional RES capacity. The proposed algorithm defines optimisation only in terms of predefined prioritisation criteria; each solution can be considered optimal given that the respective installation strategic plan is subject to specific weighted criteria. The advantages of the proposed algorithm lie in its simplicity and expandability, since both the coefficient formulation and the resizing procedure are easily performed, and additional criteria can easily be incorporated into the resizing procedure. Thus, the algorithm can serve as a multi-objective planning tool for long-term, nationwide RES distribution strategic plans.
Keywords: environmental economics; optimisation; power distribution economics; power distribution planning; renewable energy sources; coefficients formulation; environmental criteria; multiobjective planning tool; national energy policies; optimisation; predefined prioritisation criteria; regional RES capacity; renewable energy resource distribution strategic plans; renewable energy resource installation; resizing procedure; socioeconomic criteria (ID#: 16-10003)


G. Parise, L. Parise, L. Martirano and A. Germolé, “The Relevance of the Architecture of Electrical Power Systems in Hospitals: The Service Continuity Safety by Design,” 2015 IEEE/IAS 51st Industrial & Commercial Power Systems Technical Conference (I&CPS), Calgary, AB, 2015, pp. 1-6. doi: 10.1109/ICPS.2015.7266433
Abstract: The power system architecture of hospitals must by design support enhanced electrical behavior and adequately withstand external events such as earthquake, fire, and flood, applying a “Darwinian” approach. The architecture of the power system, supported by supervisory control systems and business continuity management (BCM), must guarantee operational performance that preserves global service continuity, such as: selectivity of faults and immunity to interference among system areas; easy maintainability of the system and its parts; and flexibility and expandability. The paper presents sample cases of systems in building complexes, applying the micro approach to satisfy hospital requirements and medical quality performance.
Keywords: SCADA systems; business continuity; hospitals; power system security; BCM; Darwinian approach; business continuity management; external forces; global service continuity; hospital requirements; medical quality performances; operational performances; power system architecture; service continuity safety; supervision control systems; Artificial neural networks; Heating; Load modeling; Reliability engineering; Substations; Switches; Critical loads; architecture efficiency; business and service continuity; complex systems; operation efficiency (ID#: 16-10004)


W. Wang, Q. Cao, X. Zhu and S. Liang, “A Framework for Intelligent Service Environments Based on Middleware and General Purpose Task Planner,” Intelligent Environments (IE), 2015 International Conference on, Prague, 2015, pp. 184-187. doi: 10.1109/IE.2015.40
Abstract: Aiming at providing various services for daily living, a framework of Intelligent Service Environment of Ubiquitous Robotics (ISEUR) is presented. This framework mainly addresses two important issues. First, it builds standardized component models for heterogeneous sensing and acting devices based on the middleware technology. Second, it implements a general purpose task planner, which coordinates associated components to achieve various tasks. The video demonstrates how these two functionalities are combined together in order to provide services in intelligent environments. Two different tasks, a localization task and a robopub task, are implemented to show the feasibility, efficiency and expandability of the system.
Keywords: intelligent robots; middleware; mobile robots; robot programming; ISEUR; daily living; general-purpose task planner; heterogeneous acting devices; heterogeneous sensing devices; intelligent environments; intelligent service environment-of-ubiquitous robotics; localization task; middleware technology; robopub task; standardized component models; Cameras; Middleware; Planning; Ports (Computers); Robot kinematics; Robot vision systems; intelligent service environment; middleware; task planning (ID#: 16-10005)


G. Papadopoulos, “Challenges in the Design and Implementation of Wireless Sensor Networks: A Holistic Approach-Development and Planning Tools, Middleware, Power Efficiency, Interoperability,” 2015 4th Mediterranean Conference on Embedded Computing (MECO), Budva, 2015, pp. 1-3. doi: 10.1109/MECO.2015.7181857
Abstract: Wireless Sensor Networks (WSNs) constitute a networking area with promising impact on the environment, health, security, industrial applications and more. Each of these presents different requirements regarding system performance and QoS, and involves a variety of mechanisms such as routing and MAC protocols, algorithms, scheduling policies, security, and the OS, all residing over the hardware: the sensors, actuators and radio Tx/Rx. Furthermore, WSNs have special characteristics, such as constrained energy, CPU and memory resources and multi-hop communication, which raise the level of specialized knowledge required. Although the status of WSNs is nearing the stage of maturity and widespread use, their sustainability hinges upon the implementation of some features of paramount importance: low power consumption to achieve long operational lifetime for battery-powered unattended WSN nodes; joint optimization of connectivity and energy efficiency leading to best-effort utilization of constrained radios and minimum energy cost; self-calibration and self-healing to recover from the failures and errors to which WSNs are prone; efficient data aggregation lessening the traffic load in constrained WSNs; programmable and reconfigurable stations allowing for long life-cycle development; system security enabling protection of data and system operation; short development time making the time-to-market process more efficient; and simple installation and maintenance procedures for wider acceptance. Despite considerable research and important advances in WSNs, large-scale application of the technology is still hindered by technical, complexity and cost impediments. Ongoing R&D is addressing these shortcomings by focusing on energy harvesting, middleware, network intelligence, standardization, network reliability, adaptability and scalability.
However, for efficient WSN development, deployment, testing, and maintenance, a holistic unified approach is necessary which will address the above WSN challenges by developing an integrated platform for smart environments with built-in user friendliness, practicality and efficiency. This platform will enable the user to evaluate his design by identifying critical features and application requirements, to verify by adopting design indicators and to ensure ease of development and long life cycle by incorporating flexibility, expandability and reusability. These design requirements can be accomplished to a significant extent via an integration tool that provides a multiple level framework of functionality composition and adaptation for a complex WSN environment consisting of heterogeneous platform technologies, establishing a software infrastructure which couples the different views and engineering disciplines involved in the development of such a complex system, by means of the accurate definition of all necessary rules and the design of the 'glue-logic' which will guarantee the correctness of composition of the various building blocks. Furthermore, to attain an enhanced efficiency, the design/development tool must facilitate consistency control as well as evaluate the selections made by the user and, based on specific criteria, provide feedback on errors concerning consistency and compatibility as well as warnings on potentially less optimal user selections. Finally, the WSN planning tool will provide answers to fundamental issues such as the number of nodes needed to meet overall system objectives, the deployment of these nodes to optimize network performance and the adjustment of network topology and sensor node placement in case of changes in data sources and network malfunctioning.
Keywords: computer network reliability; computer network security; data protection; energy conservation; energy harvesting; middleware; open systems; optimisation; quality of service; sensor placement; telecommunication network planning; telecommunication network topology; telecommunication power management; telecommunication traffic; time to market; wireless sensor networks; QoS; WSN reliability; constrained radio best-effort utilization; data aggregation; data security enabling protection; design-development tool; energy efficiency; failure recovery; heterogeneous platform technology; holistic unified approach; interoperability; network intelligence; network topology adjustment; power consumption; power efficiency; sensor node placement; time-to-market process; traffic load; wireless sensor network planning tools; Electrical engineering; Embedded computing; Europe; Security; Wireless sensor networks (ID#: 16-10006)


M. Jaekel, P. Schaefer, D. Schacht, S. Patzack and A. Moser, “Modular Probabilistic Approach for Modelling Distribution Grids and Its Application,” International ETG Congress 2015; Die Energiewende — Blueprints for the new energy age; Proceedings of, Bonn, Germany, 2015, pp. 1-7. doi:  (not provided)
Abstract: Due to the large increase in installed distributed renewable energy sources (DRES), new challenges arise in the planning and operation of distribution grids (DG). This paper proposes an approach to generate models of present and future synthetic DGs based on statistical data from existing networks and on operational planning. Compared to the use of grid samples, a probabilistic network generator offers significant advantages, which are demonstrated in this paper. A modular design and simple expandability are among the most important requirements for its application to different problems. In this context, four exemplary use cases are described: reactive power analysis, identification of planning principles, analysis of the benefits of innovative network equipment, and short-circuit protection analysis.
Keywords:  (not provided) (ID#: 16-10007)


S. R. Bandela, S. K and R. K. P, “Implementation of NTCIP in Road Traffic Controllers for Traffic Signal Coordination,” 2015 Fifth International Conference on Advances in Computing and Communications (ICACC), Kochi, 2015, pp. 20-23. doi: 10.1109/ICACC.2015.58
Abstract: The National Transportation Communications for Intelligent Transportation System Protocol (NTCIP) is a family of open standards defining common communications protocols and data definitions for transmitting data and messages between computer systems used in Intelligent Transportation Systems (ITS). Intelligent Transportation Systems make use of Information Technology, Computers, Telecommunication and Electronics (ICTE) in the effort to improve the safety and mobility of automobiles and road users. In this effort, the various devices used in ITS must communicate with each other. At present, many ITS solutions use proprietary protocols for communication, which restrict interoperability and interchangeability on a shared platform. NTCIP provides the benefits of device interoperability and interchangeability, bridging this gap. In ITS, the Adaptive Traffic Control System (ATCS) is widely accepted today for road traffic control and realtime signal coordination. The ATCS receives traffic information from all traffic junctions in a road traffic network in a timely manner. This information is processed centrally by the ATCS, and signal timings at the traffic junctions are updated in realtime for minimum stops and delays, improving travel time. Many vendors manufacture ATCS and traffic controllers with proprietary protocols, which leads to a lack of interoperability between the ATCS and the traffic controllers, restricting expandability and customer choice. This problem can be overcome by adopting NTCIP in the communication process. This paper discusses how a traffic controller is made NTCIP compliant by adding SNMP agent functionality to it, and how communication is carried out in the form of NTCIP standards in spite of the controller's proprietary terminology.
Keywords: automobiles; intelligent transportation systems; protocols; road safety; road traffic control; ATCS; ICTE; ITS; NTCIP; SNMP agent functionality; adaptive traffic control system; automobile mobility; automobile safety; communication process; communications protocols; computer systems; computers; data definitions; device interchangeability; device interoperability; electronics; information technology; national transportation communication for intelligent transportation system protocol; open standards; proprietary protocol; realtime signal coordination; road traffic controllers; road traffic network; telecommunication; traffic signal coordination; Interoperability; Junctions; Protocols; Servers; Standards; Traffic control; Vehicles; ATCS; NTCIP; SNMP TRAP; Traffic Controller (ID#: 16-10008)
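NTCIP device communication rides on SNMP-style get/set access to a MIB of dotted OIDs, which is what the added SNMP agent functionality exposes. As a purely illustrative sketch (a toy dispatch table, not a real SNMP stack; the phase-timing OID shown is hypothetical):

```python
class ToyMibAgent:
    """Toy stand-in for an SNMP agent's MIB dispatch: maps dotted OIDs
    to values and services get/set requests. Not a real SNMP stack."""
    def __init__(self):
        self._mib = {}

    def register(self, oid: str, value):
        """Expose a controller parameter under an OID."""
        self._mib[oid] = value

    def get(self, oid: str):
        if oid not in self._mib:
            raise KeyError(f"noSuchObject: {oid}")
        return self._mib[oid]

    def set(self, oid: str, value):
        if oid not in self._mib:
            raise KeyError(f"notWritable: {oid}")
        self._mib[oid] = value
        return value

# Hypothetical NTCIP-style object: minimum green time for phase 1
agent = ToyMibAgent()
agent.register("1.3.6.1.4.1.1206.4.2.1.1.2.1.4.1", 30)
```

A central ATCS would then issue standard SNMP GET/SET requests against such OIDs instead of speaking each vendor's proprietary protocol.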


Ritu, N. Verma, S. Mishra and S. Shukla, “Implementation of Solar Based PWM Fed Two Phase Interleaved Boost Converter,” 2015 Communication, Control and Intelligent Systems (CCIS), Mathura, 2015, pp. 470-476. doi: 10.1109/CCIntelS.2015.7437962
Abstract: Renewable energy plays a dominant role in electricity production with the increase in global warming. Advantages such as environmental friendliness, expandability and flexibility have led to its wider application. Nowadays, step-up power conversion is widely used in many applications with demanding power requirements. Applications of step-up power conversion include electric vehicles, photovoltaic (PV) systems, uninterruptible power supplies (UPS), and fuel-cell power systems. The boost converter is one type of DC-DC step-up power converter. Step-up power converters are quite popular because they can produce a higher DC output voltage from a low input voltage. In this paper, an interleaved boost converter is analyzed and controlled with interleaved switching signals, which have the same switching frequency but are shifted in phase. By utilizing the parallel operation of converters, the input current can be shared among the inductors, so that high reliability and efficiency in power electronic systems can be obtained. A simulation study of the PWM-fed two-phase IBC for a solar cell has been implemented using MATLAB/Simulink. The simulation results show that the ripple is reduced nearly to zero, which makes the operation of the IBC more reliable and stable when used with a solar cell.
Keywords: DC-DC power convertors; PWM power convertors; power electronics; renewable energy sources; solar cells; solar power stations; DC-DC step up power converter; electricity production; global warning; interleaved switching signals; power electronic systems; renewable energy; solar based PWM fed two phase interleaved boost converter; solar cell; Capacitors; Inductors; Insulated gate bipolar transistors; MATLAB; Mathematical model; Pulse width modulation; Switches; IBC; MATLAB; PWM; Ripple; Solar PV Cell  (ID#: 16-10009)
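The ripple-cancellation claim can be checked numerically from the ideal CCM boost relations (Vout = Vin/(1−D); triangular inductor-current ripple). This sketch sums the ripple of two phases interleaved 180° apart; the component values and function names are illustrative assumptions, not from the paper:

```python
def boost_gain(vin, duty):
    """Ideal CCM boost conversion ratio: Vout = Vin / (1 - D)."""
    return vin / (1.0 - duty)

def phase_ripple(t, duty, T, vin, L):
    """AC (ripple) component of one phase's inductor current in steady-state
    CCM: rises at Vin/L during D*T, falls at Vin*D/((1-D)*L) afterwards."""
    t %= T
    rise = vin / L
    fall = vin * duty / ((1.0 - duty) * L)
    peak = rise * duty * T / 2.0
    if t < duty * T:
        return rise * t - peak
    return peak - fall * (t - duty * T)

def input_ripple_pp(duty, vin=20.0, L=100e-6, fsw=50e3, n=2000):
    """Peak-to-peak input-current ripple of two phases shifted by T/2.
    For an ideal two-phase interleaved boost this cancels at D = 0.5."""
    T = 1.0 / fsw
    s = [phase_ripple(i * T / n, duty, T, vin, L)
         + phase_ripple(i * T / n + T / 2.0, duty, T, vin, L)
         for i in range(n)]
    return max(s) - min(s)
```

At D = 0.5 the two triangular phase currents are exact mirror images, so their sum is flat, matching the near-zero ripple the simulation reports.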


L. Kohútka, M. Vojtko and T. Krajcovic, “Hardware Accelerated Scheduling in Real-Time Systems,” Engineering of Computer Based Systems (ECBS-EERC), 2015 4th Eastern European Regional Conference on the, Brno, 2015, pp. 142-143. doi: 10.1109/ECBS-EERC.2015.32
Abstract: There are two groups of task scheduling algorithms in real-time systems. The first group contains algorithms with constant asymptotic time complexity; these lead to deterministic task-switch duration but lower theoretical CPU utilisation. The second group contains complex algorithms that plan more efficient task sequences and thus achieve better CPU utilisation. The problem is that each task scheduling algorithm belongs to only one of these two groups. This motivates the design of a real-time task scheduler that has all the benefits mentioned above. To reach this goal, we reduce the time complexity of an algorithm from the second group by using hardware acceleration. We propose a scalable hardware representation of the task scheduler in the form of a coprocessor based on the EDF algorithm. Thanks to the achieved constant time complexity, the hardware scheduler can help real-time systems run more tasks that meet their deadlines while keeping high CPU utilisation and system determinism. Another advantage of our task scheduler is that any task can be removed from the scheduler by its task ID, which increases the expandability of the scheduler.
Keywords: computational complexity; coprocessors; real-time systems; scheduling; CPU utilisation; EDF algorithm; asymptotic time complexity; coprocessor; hardware accelerated scheduling; task scheduling algorithms; Computer architecture; Coprocessors; Hardware; Real-time systems; Scheduling algorithms; Software; FPGA; hardware acceleration; performance; task queue; task scheduling (ID#: 16-10010)
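As a software illustration of the EDF queue with remove-by-ID (assumed names; the paper's contribution is a hardware coprocessor that makes these operations constant-time, whereas this heap-based sketch is O(log n) with lazy deletion):

```python
import heapq

class EdfQueue:
    """Earliest-Deadline-First ready queue: pop the task with the
    smallest deadline; any task can be removed by its ID."""
    def __init__(self):
        self._heap = []        # (deadline, task_id) pairs
        self._removed = set()  # IDs removed but still sitting in the heap

    def insert(self, task_id, deadline):
        heapq.heappush(self._heap, (deadline, task_id))

    def remove(self, task_id):
        """Lazy deletion: mark the ID; it is skipped when popped."""
        self._removed.add(task_id)

    def pop_earliest(self):
        while self._heap:
            deadline, task_id = heapq.heappop(self._heap)
            if task_id in self._removed:
                self._removed.discard(task_id)
                continue
            return task_id, deadline
        return None
```

In the hardware version the same interface (insert, remove-by-ID, pop earliest deadline) is realized with parallel comparator logic, which is what removes the O(log n) software cost from the task-switch path.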


J. J. Lin, “Integration of Multiple Automotive Radar Modules Based on Fiber-Wireless Network,” Wireless and Optical Communication Conference (WOCC), 2015 24th, Taipei, 2015, pp. 36-39. doi: 10.1109/WOCC.2015.7346112
Abstract: An integrated millimeter-wave (77/79 GHz) automotive radar system based on a fiber-wireless network is proposed. The purpose of this integrated system is to realize 360° radar protection and to make the overall automotive radar system more affordable. The central module (CM) generates the desired radar signals and processes the received data. The individual radar modules (RMs) only amplify and mix down the signals; no PLL or ADCs are needed in the RMs. A fiber network distributes the millimeter-wave radar signals from the CM to the RMs. An example of integrating four automotive radar modules over a fiber-wireless (Fi-Wi) network is also discussed. A frequency quadrupler is utilized in each RM, so the CM needs to generate only 19-20.25-GHz signals. This lowers the operating frequencies as well as the cost of the optical-to-electrical (O/E) and electrical-to-optical (E/O) converters. The smaller individual RMs provide more installation flexibility. Furthermore, the fiber network can serve as the backbone of Advanced Driver Assistance Systems (ADAS), connecting more sensors and accommodating future big data flows. The Fi-Wi network gives the overall integrated automotive radar system more expandability. The proposed system is a strong candidate for providing the sensing functions of future fully autonomous cars.
Keywords: free-space optical communication; millimetre wave radar; radar signal processing; road vehicle radar; Big Data flow; Fi-Wi network; advanced driver assistance system; central module; electrical-to-optical converter; fiber-wireless network; frequency 77 GHz; frequency 79 GHz; frequency quadrupler; fully autonomous cars; millimeter-wave automotive radar system integration; millimeter-wave radar signal; multiple automotive radar module integration; optical-to-electrical converter; radar module; radar protection; Advanced driver assistance systems; Automotive engineering; Optical fiber amplifiers; Optical fiber networks; Optical fiber sensors; Radar; advanced driver assistance systems; automotive radar; fiber-wireless; optical-wireless; radio-over-fiber; sensor (ID#: 16-10011)


B. Wu, S. Li, K. Ma Smedley and S. Singer, “A Family of Two-Switch Boosting Switched-Capacitor Converters,” in IEEE Transactions on Power Electronics, vol. 30, no. 10, pp. 5413-5424, Oct. 2015. doi: 10.1109/TPEL.2014.2375311
Abstract: A family of “Two-Switch Boosting Switched-Capacitor Converters” (TBSC) is introduced, which distinguishes itself from prior art by symmetrically interleaved operation, reduced output ripple, low and even voltage stress on components, and systematic expandability. Along with the topologies, a modeling method is formulated, which motivates the converter regulation method through duty-cycle and frequency adjustment. The paper also provides guidance on circuit component and parameter selection. A 1-kW 3X TBSC was built to demonstrate the converter's feasibility and its regulation capability via duty cycle and frequency; it achieved a peak efficiency of 97.5% at rated power.
Keywords: power convertors; converter regulation method; duty cycle; efficiency 97.5 percent; frequency adjustment; power 1 kW; two-switch boosting switched-capacitor converters; Capacitors; Integrated circuit modeling; Pulse width modulation; Stress; Switches; Topology; Voltage control; Frequency modulation; TBSC; frequency modulation; interleaved; modeling; switched-capacitor; two-switch boosting switched-capacitor converters (TBSC) (ID#: 16-10012)


S. Khan, W. Dang, L. Lorenzelli and R. Dahiya, “Flexible Pressure Sensors Based on Screen-Printed P(VDF-TrFE) and P(VDF-TrFE)/MWCNTs,” in IEEE Transactions on Semiconductor Manufacturing, vol. 28, no. 4, pp. 486-493, Nov. 2015. doi: 10.1109/TSM.2015.2468053
Abstract: This paper presents large-area-printed flexible pressure sensors developed with an all screen-printing technique. The 4 × 4 sensing arrays are obtained by printing polyvinylidene fluoride-trifluoroethylene P(VDF-TrFE) and their nanocomposite with multi-walled carbon nanotubes (MWCNTs) and are sandwiched between printed metal electrodes in a parallel plate structure. The bottom electrodes and sensing materials are printed sequentially on polyimide and polyethylene terephthalate (PET) substrates. The top electrodes with force concentrator posts on backside are printed on a separate PET substrate and adhered with good alignment to the bottom electrodes. The interconnects, linking the sensors in series, are printed together with metal electrodes and they provide the expandability of the cells. Different weight ratios of MWCNTs are mixed in P(VDF-TrFE) to optimize the percolation threshold for a better sensitivity. The nanocomposite of MWCNTs in piezoelectric P(VDF-TrFE) is also explored for application in stretchable interconnects, where the higher conductivity at lower percolation ratios are of significant importance compared to the nanocomposite of MWCNTs in an insulator material. To examine the functionality and sensitivity of sensor module, the capacitance-voltage analysis at different frequencies, and the piezoelectric and piezoresistive response of the sensor are presented. The whole package of foldable pressure sensor is completely developed by screen-printing and is targeted toward realization of low-cost electronic skin.
Keywords: electrodes; insulating materials; multi-wall carbon nanotubes; nanocomposites; polymers; pressure sensors; P(VDF-TrFE)-MWCNT; bottom electrodes; capacitance-voltage analysis; force concentrator; insulator material; large-area-printed flexible pressure sensors; low-cost electronic skin; multiwalled carbon nanotubes; nanocomposite; parallel plate structure; percolation threshold; piezoelectric response; piezoresistive response; polyethylene terephthalate substrates; polyimide substrates; polyvinylidene fluoride-trifluoroethylene; printed metal electrodes; screen-printed P(VDF-TrFE); sensing arrays; sensing materials; stretchable interconnects; top electrodes; Flexible electronics; Nanocomposites; Piezoelectric devices; Pressure sensors; Printing; Flexible Sensors; P(VDF-TrFE); Piezoelectric; Screen Printing; Spin Coating; flexible sensors; piezoelectric; spin coating (ID#: 16-10013)


Yang Liu and J. Ai, “A Software Evolution Complex Network for Object Oriented Software,” Prognostics and System Health Management Conference (PHM), 2015, Beijing, 2015, pp. 1-6. doi: 10.1109/PHM.2015.7380050
Abstract: With the rapid growth of software complexity, software reliability has become an important issue in recent years. Software complex networks have thus been proposed to express software complexity. Existing software complex networks are insufficient for relating software features to software reliability. In this paper, a software evolution complex network for object-oriented software (OOSEN) is built based on object-oriented code. With detailed structural features and software version update information, OOSEN improves the expression of software features. By analyzing version update dates, OOSEN builds a more effective relationship with software reliability. Its expandability makes OOSEN more suitable for expressing software systems.
Keywords: object-oriented methods; software metrics; software reliability; OOSEN; object oriented software; software complexity; software evolution complex network; software version updating information; Software reliability; Software systems; software code; software complex network; software evolution complex network;  software version (ID#: 16-10014)


Z. Xiao-yan and K. Dan, “Research of Coal Quality Detection Management Information System in Coal Enterprise,” Signal Processing, Communications and Computing (ICSPCC), 2015 IEEE International Conference on, Ningbo, 2015, pp. 1-4. doi: 10.1109/ICSPCC.2015.7338855
Abstract: Based on an in-depth study of the coal quality inspection management business process, combined with open-source framework technology, we designed a coal quality detection management information system based on J2EE. The system uses jxl report-processing technology and the vector graphics library Raphael, making it easy for users to analyze coal seams and coal quality visually. Trial results show that the system has excellent stability and expandability and wide application prospects for the information management of coal enterprises.
Keywords: coal; computer graphic equipment; inspection; public domain software; quality management; J2EE; coal enterprise; coal quality detection management information system; coal quality inspection management business process; coal seam; information management; jxl report processing technology; open source framework technology; vector graphics library Raphael; Coal; Face; Inspection; Management information systems; Personnel; Tunneling; Open source framework; Raphael; coal quality detection (ID#: 16-10015)


B. H. Song, J. Shin, S. Kim and J. Jeong, “On PMIPv6-Based Mobility Support for Hierarchical P2P-SIP Architecture in Intelligent Transportation System,” System Sciences (HICSS), 2015 48th Hawaii International Conference on, Kauai, HI, 2015, pp. 5446-5452. doi: 10.1109/HICSS.2015.637
Abstract: Network service providers face many challenges in providing network services with an expandable, reliable, flexible and low-cost structure in an expanding market environment. Current client-server systems have various problems, such as complexity and the high cost of providing network services. In contrast, these problems can be solved simply if a Peer-to-Peer (P2P) communication terminal supporting access to distributed resources provides the functions that current Session Initiation Protocol (SIP) based network devices have. Because diverse terminals access through networks, partitioning network domains with gateways and applying Proxy Mobile IPv6 (PMIPv6) technology to handle terminal mobility helps achieve a more efficient network structure. In particular, the proposed P2P-SIP structure proves very efficient, offering outstanding expandability among different networks in a region and reduced maintenance costs.
Keywords: IP networks; client-server systems; cost reduction; intelligent transportation systems; internetworking; mobile computing; network servers; peer-to-peer computing; signalling protocols; P2P communication terminal; P2P-SIP structure; PMIPv6 technology; PMIPv6-based mobility support; SIP-based network devices; client-server system; distributed resources; gateways; hierarchical P2P-SIP architecture; intelligent transportation system; maintenance cost reduction; network domain partitioning; peer-to-peer communication terminal; proxy mobile IPv6 technology; session initiation protocol-based network devices; Logic gates; Maintenance engineering; Manganese; Mobile radio mobility management; Overlay networks; Registers; Servers; Intelligent Transportation System; P2P-SIP Architecture; PMIPv6-Based Mobility Management; Proxy Mobile IPv6 (ID#: 16-10016)


A. V. Ho, T. W. Chun and H. G. Kim, “Extended Boost Active-Switched-Capacitor/Switched-Inductor Quasi-Z-Source Inverters,” in IEEE Transactions on Power Electronics, vol. 30, no. 10, pp. 5681-5690, Oct. 2015. doi: 10.1109/TPEL.2014.2379651
Abstract: This paper proposes a new topology named the active-switched-capacitor/switched-inductor quasi-Z-source inverter (ASC/SL-qZSI), which is based on a traditional qZSI topology. Compared to other qZSI-based topologies under the same operating conditions, the proposed ASC/SL-qZSI provides higher boost ability, requires fewer passive components such as inductors and capacitors, and achieves lower voltage stress across the switching devices of the main inverter. Another advantage of the topology is its expandability. If a higher boosting rate is required, additional cells can easily be cascaded at the impedance network by adding one inductor and three diodes. Both the simulation studies and the experimental results obtained from a prototype built in the laboratory validate proper operation and performance of the proposed ASC/SL-qZSI.
Keywords: invertors; power capacitors; power inductors; extended boost active-switched-capacitor quasi-Z-source inverters; extended boost active-switched-inductor quasi-Z-source inverters; impedance network; Capacitors; Inductors; Inverters; Modulation; Network topology; Switches; Topology; Active switched capacitor; Active switched capacitor (ASC); boost ability; quasi-Z-source inverter (qZSI); switched inductor (ID#: 16-10017)


V. S. Latha and D. S. B. Rao, “The Evolution of the Ethernet: Various Fields of Applications,” 2015 Online International Conference on Green Engineering and Technologies (IC-GET), Coimbatore, India, 2015, pp. 1-7. doi: 10.1109/GET.2015.7453807
Abstract: Ethernet technology became predominant due to its proven simplicity, low cost, reliability, ease of installation, and expandability. This attractive nature gave Ethernet a presence in fields of application ranging from industry to avionics, and in video and voice applications intended for higher network speeds. To handle such faster data rates, Ethernet has been adopted as an alternative technology. The main objective of this paper is to describe the evolution of Ethernet toward 400 Gbps technology and its various fields of application.
Keywords: Bandwidth; EPON; IEEE 802.3 Standard; Local area networks; Physical layer; Wavelength division multiplexing; CSMA/CD Standard; IEEE 802.3; Media Independent Interface (MII); Physical Coding Sublayer (PCS); Physical Layer (PHY); Physical Medium Attachment (PMA); Physical Medium Dependent (PMD) (ID#: 16-10018)


A. M. Lalge, A. Shrivastav and S. U. Bhandari, “Implementing PSK MODEMs on FPGA Using Partial Reconfiguration,” Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, Pune, 2015, pp. 917-921. doi: 10.1109/ICCUBEA.2015.182
Abstract: The radio in which as many components as possible are implemented with programmable devices was envisioned as the future of the telecommunication industry by Joseph Mitola in 1991. Traditional, bulky, and costly radios are expected to be replaced by a radio in which properties such as carrier frequency, signal bandwidth, modulation, and network access are defined in software. The key requirements for SDR platforms are flexibility, expandability, scalability, re-configurability, and re-programmability. In SDR, power consumption, configuration time, and hardware usage play a significant role. An FPGA has both high-speed processing capability and good reconfigurable performance; hence, the FPGA architecture is a viable solution for SDR technology. The objective of this paper is to demonstrate simulation and implementation of PSK modems on an FPGA using Partial Reconfiguration (PR), a technique by which hardware usage, configuration time, and power consumption can be reduced. The PSK modulator and demodulator algorithms are simulated using MATLAB R2013a and implemented on an FPGA using the Xilinx ISE 14.2 System Generator, PlanAhead, and the Partial Reconfiguration Tool. The results indicate that the Partial Reconfiguration design leads to negligible reconfiguration time, a 55% saving in resource utilization, and a 75% saving in power consumption. The output waveforms are displayed and analyzed using Xilinx ChipScope Pro.
Keywords: demodulators; field programmable gate arrays; modems; phase shift keying; reconfigurable architectures; software radio; FPGA architecture; MATLAB R2013a; PSK demodulator algorithm; PSK modem; PSK modulator; PlanAhead; SDR; Xilinx ChipScope Pro; Xilinx ISE 14.2 system generator; configuration time; hardware usage; partial reconfiguration design; partial reconfiguration tool; power consumption; reconfigurable performance; Binary phase shift keying; Field programmable gate arrays; Generators; Hardware; Modems; BPSK; LFSR; PR; QPSK (ID#: 16-10019)



Fog Computing Security 2015





Fog computing is a concept that extends the Cloud concept to the end user. As with most new technologies, a survey of the scope and types of security problems is necessary. Much of the research presented relates to the Internet of Things. The articles cited here were presented in 2015.

Y. Wang, T. Uehara and R. Sasaki, “Fog Computing: Issues and Challenges in Security and Forensics,” Computer Software and Applications Conference (COMPSAC), 2015 IEEE 39th Annual, Taichung, 2015, pp. 53-59. doi: 10.1109/COMPSAC.2015.173
Abstract: Although Fog Computing is defined as an extension of the Cloud Computing paradigm, its distinctive characteristics in location sensitivity, wireless connectivity, and geographical accessibility create new security and forensics issues and challenges that have not been well studied in Cloud security and Cloud forensics. In this paper, through an extensive review of the motivation, advantages, and unique features of Fog Computing, as well as a comparison of various scenarios between Fog Computing and Cloud Computing, the new issues and challenges in Fog security and Fog forensics are presented and discussed. The results of this study will encourage and promote more extensive research in the fascinating fields of Fog security and Fog forensics.
Keywords: cloud computing; digital forensics; cloud computing paradigm; cloud forensics; cloud security; fog computing; fog forensics; fog security; geographical accessibility; location sensitivity; wireless connectivity; Cloud computing; Digital forensics; Mobile communication; Security; Wireless communication; Wireless sensor networks; Cloud Computing; Cloud Forensics; Cloud Security; Fog Computing; Fog Forensics; Fog Security (ID#: 16-10307)


K. Lee, D. Kim, D. Ha, U. Rajput and H. Oh, “On Security and Privacy Issues of Fog Computing Supported Internet of Things Environment,” Network of the Future (NOF), 2015 6th International Conference on the, Montreal, QC, 2015, pp. 1-3. doi: 10.1109/NOF.2015.7333287
Abstract: Recently, the concept of the Internet of Things (IoT) has attracted much attention due to its huge potential. IoT uses the Internet as a key infrastructure to interconnect numerous geographically diversified IoT nodes, which usually have scarce resources, and therefore the cloud is used as a key back-end supporting infrastructure. In the literature, the collection of the IoT nodes and the cloud is collectively called an IoT cloud. Unfortunately, the IoT cloud suffers from various drawbacks, such as huge network latency, as the volume of data being processed within the system increases. To alleviate this issue, the concept of fog computing is introduced, in which fog-like intermediate computing buffers are located between the IoT nodes and the cloud infrastructure to locally process a significant amount of regional data. Compared to the original IoT cloud, the communication latency as well as the overhead at the back-end cloud infrastructure can be significantly reduced in the fog computing supported IoT cloud, which we will refer to as the IoT fog. Consequently, several valuable services that were difficult to deliver through the traditional IoT cloud can be effectively offered by the IoT fog. In this paper, however, we argue that the adoption of the IoT fog introduces several unique security threats. We first discuss the concept of the IoT fog as well as the existing security measures that might be useful for securing it. Then, we explore potential threats to the IoT fog.
Keywords: Internet of Things; cloud computing; data privacy; security of data; Internet of Things environment; IoT cloud; IoT fog; IoT nodes; back-end cloud infrastructure; back-end supporting infrastructure; cloud infrastructure; communication latency; fog computing; network latency; privacy issues; security issues; security threats; Cloud computing; Distributed databases; Internet of things; Privacy; Real-time systems; Security; Sensors (ID#: 16-10308)


M. Aazam and E. N. Huh, “Fog Computing Micro Datacenter Based Dynamic Resource Estimation and Pricing Model for IoT,” 2015 IEEE 29th International Conference on Advanced Information Networking and Applications, Gwangiu, 2015, pp. 687-694. doi: 10.1109/AINA.2015.254
Abstract: Pervasive and ubiquitous computing services have recently been under the focus of not only the research community, but developers as well. Prevailing wireless sensor networks (WSNs), the Internet of Things (IoT), and healthcare-related services have made it difficult to handle all the data in an efficient and effective way and to create more useful services. Different devices generate different types of data with different frequencies. Therefore, the amalgamation of cloud computing with IoTs, termed the Cloud of Things (CoT), has recently been under discussion in the research arena. CoT provides ease of management for the growing media content and other data. Besides this, features that come with CoT, such as ubiquitous access, service creation, service discovery, and resource provisioning, play a significant role. Emergency, healthcare, and latency-sensitive services require real-time response. Also, it is necessary to decide what type of data is to be uploaded to the cloud without burdening the core network and the cloud. For this purpose, Fog computing plays an important role. Fog resides between the underlying IoTs and the cloud. Its purpose is to manage resources and perform data filtration, preprocessing, and security measures. To this end, Fog requires an effective and efficient resource management framework for IoTs, which we provide in this paper. Our model covers the issues of resource prediction, customer-type-based resource estimation and reservation, advance reservation, and pricing for new and existing IoT customers, on the basis of their characteristics. The implementation was done using Java, while the model was evaluated using the CloudSim toolkit. The results and discussion show the validity and performance of our system.
Keywords: Internet of Things; Java; cloud computing; computer centres; pricing; resource allocation; wireless sensor networks; CloudSim toolkit; CoT; IoT; WSN; cloud of things; customer type based resource estimation; customer type based resource reservation; data filtration; fog computing microdata center based dynamic resource estimation; healthcare related services; latency sensitive services; media content; pervasive computing services; pricing model; real-time response; resource prediction issues; resource provisioning; service creation; service discovery; ubiquitous access; ubiquitous computing services; wireless sensor networks; Cloud computing; Logic gates; Mobile handsets; Performance evaluation; Pricing; Resource management; Wireless sensor networks; Cloud of Things; Edge computing; Fog computing; Micro Data Center; resource management (ID#: 16-10309)


M. A. Hassan, M. Xiao, Q. Wei and S. Chen, “Help Your Mobile Applications with Fog Computing,” Sensing, Communication, and Networking - Workshops (SECON Workshops), 2015 12th Annual IEEE International Conference on, Seattle, WA, 2015, pp. 1-6. doi: 10.1109/SECONW.2015.7328146
Abstract: Cloud computing has paved the way for resource-constrained mobile devices to speed up their computing tasks and to expand their storage capacity. However, cloud computing is not necessarily a panacea for all mobile applications. The high network latency to cloud data centers may not be ideal for delay-sensitive applications, while storing everything on public clouds risks users' security and privacy. In this paper, we discuss two preliminary ideas, one for mobile application offloading and the other for mobile storage expansion, that leverage the edge intelligence offered by fog computing to help mobile applications. Preliminary experiments conducted on implemented prototypes show that fog computing can provide an effective and sometimes better alternative for mobile applications.
Keywords: cloud computing; mobile computing; cloud data centers; edge intelligence; fog computing; mobile applications; network latency; Androids; Bandwidth; Cloud computing; Mobile applications; Mobile handsets; Servers; Time factors (ID#: 16-10310)


M. Aazam and E. N. Huh, “Dynamic Resource Provisioning Through Fog Micro Datacenter,” Pervasive Computing and Communication Workshops (PerCom Workshops), 2015 IEEE International Conference on, St. Louis, MO, 2015, pp. 105-110. doi: 10.1109/PERCOMW.2015.7134002
Abstract: Lately, pervasive and ubiquitous computing services have been under the focus of not only the research community, but developers as well. Different devices generate different types of data with different frequencies. Emergency, healthcare, and latency-sensitive services require real-time response. Also, it is necessary to decide what type of data is to be uploaded to the cloud without burdening the core network and the cloud. For this purpose, Fog computing plays an important role. Fog resides between the underlying IoTs and the cloud. Its purpose is to manage resources and perform data filtration, preprocessing, and security measures, and for this it requires an effective and efficient resource management framework, which we provide in this paper. Moreover, Fog has to deal with mobile nodes and IoTs, which involve objects and devices of different types with fluctuating connectivity behavior. All such service customers have an unpredictable relinquish probability, since any object or device can quit resource utilization at any moment. In our proposed methodology for resource estimation and management, we take these factors into account and formulate resource management on the basis of the customer's fluctuating relinquish probability, service type, service price, and the variance of the relinquish probability. Implementation of our system was done using Java, while evaluation was done on the CloudSim toolkit. The discussion and results show that these factors can help a service provider estimate the right amount of resources for each type of service customer.
Keywords: Internet of Things; cloud computing; computer centres; mobile computing; probability; resource allocation; CloudSim toolkit; Fog computing; Fog microdatacenter; IoT; Java; data filtration; data preprocessing; dynamic resource provisioning; mobile nodes; pervasive computing services; real-time response; research community; resource management framework; resource utilization; security measures; service price; service provider; service type; ubiquitous computing services; Cloud computing; Conferences; Estimation; Logic gates; Resource management; Sensors; Wireless sensor networks; Cloud of Things; Edge Computing; Fog-Smart Gateway (FSG); IoT; Micro Data Center (MDC); resource management (ID#: 16-10311)


C. Vallati, A. Virdis, E. Mingozzi and G. Stea, “Exploiting LTE D2D Communications in M2M Fog Platforms: Deployment and Practical Issues,” Internet of Things (WF-IoT), 2015 IEEE 2nd World Forum on, Milan, 2015, pp. 585-590. doi: 10.1109/WF-IoT.2015.7389119
Abstract: Fog computing is envisaged as the evolution of the current centralized cloud to support the forthcoming Internet of Things revolution. Its distributed architecture aims at providing location awareness and low-latency interactions to Machine-to-Machine (M2M) applications. In this context, the LTE-Advanced technology and its evolutions are expected to play a major role as a communication infrastructure that guarantees low deployment costs, plug-and-play seamless configuration and embedded security. In this paper, we show how the LTE network can be configured to support future M2M Fog computing platforms. In particular it is shown how a network deployment that exploits Device-to-Device (D2D) communications, currently under definition within 3GPP, can be employed to support efficient communication between Fog nodes and smart objects, enabling low-latency interactions and locality-preserving multicast transmissions. The proposed deployment is presented highlighting the issues that its practical implementation raises. The advantages of the proposed approach against other alternatives are shown by means of simulation.
Keywords: Internet of Things; Long Term Evolution; cloud computing; mobile computing; D2D communication; LTE-Advanced technology; M2M fog platform; device-to-device communication; fog computing; machine-to-machine application; Actuators; Cloud computing; Computer architecture; Intelligent sensors; Long Term Evolution; D2D; Fog Computing; LTE; LTE-Advanced; M2M (ID#: 16-10312)


Hongyu Xiang, Mugen Peng, Yuanyuan Cheng and H. H. Chen, “Joint Mode Selection and Resource Allocation for Downlink Fog Radio Access Networks Supported D2D,” Heterogeneous Networking for Quality, Reliability, Security and Robustness (QSHINE), 2015 11th International Conference on, Taipei, 2015, pp. 177-182. doi: (not provided)
Abstract: Presented as an innovative paradigm incorporating cloud computing into the radio access network, cloud radio access networks (C-RANs) have been shown to be advantageous in curtailing capital and operating expenditures as well as in providing better services to customers. However, the heavy burden on the non-ideal fronthaul limits the performance of C-RANs. Here we focus on alleviating the fronthaul burden via the edge devices' caches and propose a fog computing based RAN (F-RAN) architecture with three candidate transmission modes: device-to-device, local distributed coordination, and global C-RAN. Based on the proposed simple mode selection scheme, an optimization problem for the average energy efficiency (EE) of the system, considering congestion control, is presented. Under the Lyapunov framework, the problem is reformulated as a joint mode selection and resource allocation problem, which can be solved by the block coordinate descent method. The mathematical analysis and simulation results validate the benefits of F-RAN, and an EE-delay tradeoff can be achieved by the proposed algorithm.
Keywords: mathematical analysis; optimisation; radio equipment; radio links; radio networks; C-RANs; F-RAN architecture; Lyapunov framework; capital expenditures; cloud computing; cloud radio access networks; congestion control; device to device; downlink fog radio access networks supported D2D; edge devices; joint mode selection; local distributed coordination; operating expenditures; optimization problem; resource allocation problem; Chlorine; Performance evaluation; Resource management (ID#: 16-10313)


M. Koschuch, M. Hombauer, S. Schefer-Wenzl, U. Haböck and S. Hrdlicka, “Fogging the Cloud — Implementing and Evaluating Searchable Encryption Schemes in Practice,” 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM), Ottawa, ON, 2015, pp. 1365-1368. doi: 10.1109/INM.2015.7140497
Abstract: With the rise of cloud computing, new ways to secure outsourced data have to be devised. Traditional approaches, like simply encrypting all data before it is transferred, only partially alleviate this problem. Searchable Encryption (SE) schemes enable the cloud provider to search for user-supplied strings in the encrypted documents, while learning nothing about the content of the documents or about the search terms. There are currently many different SE schemes defined in the literature, with their number steadily growing, but experimental results on real-world performance, and direct comparisons between different schemes, are severely lacking. In this work we propose a simple Java client-server framework to efficiently implement different SE algorithms and compare their efficiency in practice. In addition, we demonstrate the possibilities of such a framework by implementing two existing SE schemes from slightly different domains and comparing their behavior in a real-world setting.
Keywords: Java; cloud computing; cryptography; document handling; Java client-server framework; SE schemes; encrypted documents; outsourced data security; searchable encryption schemes; user supplied strings; Arrays; Conferences; Encryption; Indexes; Servers (ID#: 16-10314)


R. Gupta and R. Garg, “Mobile Applications Modelling and Security Handling in Cloud-Centric Internet of Things,” Advances in Computing and Communication Engineering (ICACCE), 2015 Second International Conference on, Dehradun, 2015, pp. 285-290. doi: 10.1109/ICACCE.2015.119
Abstract: Mobile Internet of Things (IoT) applications are already a part of the technical world. Integrating these applications with the Cloud can increase storage capacity and help users collect and process their personal data in an organized manner. A number of techniques are adopted for sensing, communicating, and intelligently transmitting data from mobile devices onto the Cloud in IoT applications; thus, security must be maintained during transmission. The paper outlines the need for Cloud-centric IoT applications that use mobile phones as the medium for communication. An overview of different techniques for using mobile IoT applications with the Cloud is presented. Four major techniques, namely the Mobile Sensor Data Processing Engine (MOSDEN), Mobile Fog, Embedded Integrated Systems (EIS), and Dynamic Configuration using Mobile Sensor Hub (MosHub), are discussed, and a few of the similarities and comparisons between them are mentioned. Confidentiality and security of the data transmitted by these methodologies must be maintained; therefore, cryptographic mechanisms like Public Key Encryption (PKI) and digital certificates are used for data management, while TSCM allows trustworthy sensing of data for the public in IoT applications. The above technologies are used to implement an application called Smart Helmet, developed by us to bring better understanding of the concept of Cloud IoT and to support Assisted Living for the betterment of society. The application makes use of Nordic BLE board transmission and stores data onto the Cloud to be used by a large number of people.
Keywords: Internet of Things; cloud computing; data acquisition; embedded systems; mobile computing; public key cryptography; trusted computing; EIS; MOSDEN; MosHub; Nordic BLE board transmission; PKI; Smart Helmet; TSCM; assisted living; cloud-centric Internet of Things; cloud-centric IoT applications; communication; cryptographic mechanisms; data confidentiality; data management; data mechanisms; data security; data transmission; digital certificates; dynamic configuration; embedded integrated systems; mobile Internet of Things; mobile IoT applications; mobile applications modelling; mobile devices; mobile fog; mobile phones; mobile sensor data processing engine; mobile sensor hub; personal data collection; personal data processing; public key encryption; security handling; sensing; storage capacity; trustworthy data; Bluetooth; Cloud computing; Mobile applications; Mobile communication; Mobile handsets; Security; Cloud IoT; Embedded Integrated Systems; Mobile Applications; Mobile Sensor Data Processing Engine; Mobile Sensor Hub; Nordic BLE board; Public Key Encryption; Smart Helmet (ID#: 16-10315)


M. Dong, K. Ota and A. Liu, “Preserving Source-Location Privacy Through Redundant Fog Loop for Wireless Sensor Networks,” Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on, Liverpool, 2015, pp. 1835-1842. doi: 10.1109/CIT/IUCC/DASC/PICOM.2015.274
Abstract: A redundant fog loop-based scheme is proposed to preserve source node-location privacy and achieve energy efficiency through two important mechanisms in wireless sensor networks (WSNs). The first mechanism creates fogs with loop paths. The second creates fogs in the real source node region as well as many interference fogs in other regions of the network. In addition, the fogs change dynamically, and the communication among fogs also forms loop paths. The simulation results show that for medium-scale networks, our scheme can improve privacy security 8-fold compared to the phantom routing scheme, while energy efficiency can be improved 4-fold.
Keywords: data privacy; energy conservation; telecommunication power management; telecommunication security; wireless sensor networks; energy efficiency; medium-scale network; privacy security improvement; redundant fog loop-based scheme; source-location privacy preservation; wireless sensor network; Energy consumption; Phantoms; Position measurement; Privacy; Protocols; Routing; Wireless sensor networks; performance optimization; redundant fog loop; source-location privacy; wireless sensor networks (ID#: 16-10316)


M. Zhanikeev, “A Cloud Visitation Platform to Facilitate Cloud Federation and Fog Computing,” in Computer, vol. 48, no. 5, pp. 80-83, May 2015. doi: 10.1109/MC.2015.122
Abstract: Evolving from hybrid clouds to true cloud federations and, ultimately, fog computing will require that cloud platforms allow for—and embrace—local hardware awareness.
Keywords: cloud computing; cloud federations; cloud visitation platform; fog computing; hybrid clouds; local hardware awareness; Cloud computing; Computer security; Software architecture; Streaming media; Cloud; cloud federations; hardware awareness; hardware virtualization (ID#: 16-10317)



Integrated Security Technologies in Cyber-Physical Systems 2015




For the past two decades, cybersecurity has largely been a “bolt-on” product added as an afterthought. Achieving composability will require built-in, integrated security as a key factor. The research cited here was presented in 2015.

H. Hidaka, “How Future Mobility Meets IT: Cyber-Physical System Designs Revisit Semiconductor Technology,” Solid-State Circuits Conference (A-SSCC), 2015 IEEE Asian, Xiamen, 2015, pp. 1-4. doi: 10.1109/ASSCC.2015.7387514
Abstract: Cyber-Physical Systems (CPS), exemplified by future mobility application systems, necessitate unconventional design considerations in embedded systems: multiple latency-aware computing and communication constructs; the importance of once non-functional requirements, like security and safety, that cover both physical and cyber systems; and VLSI life-time design by ecology. All in all, we have to reexamine and re-organize current semiconductor technology to produce platform bases for connected open collaborations that tackle global human challenges.
Keywords: VLSI; circuit analysis computing; cyber-physical systems; integrated circuit design; semiconductor technology; CPS; IT; VLSI life-time design; communication construction; cyber-physical system designs; embedded design; embedded systems; mobility application systems; multiple latency-aware computing; semiconductor technology; Automotive engineering; Cyber-physical systems; Safety; Security; Sensors; System analysis and design; Very large scale integration (ID#: 16-11257)


Y. Peng et al., “Cyber-Physical Attack-Oriented Industrial Control Systems (ICS) Modeling, Analysis and Experiment Environment,” 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP),
Adelaide, SA, 2015, pp. 322-326. doi: 10.1109/IIH-MSP.2015.110
Abstract: The most essential difference between information technology (IT) and industrial control systems (ICS) is that ICSs are Cyber-Physical Systems (CPS) with direct effects on the physical world. In the context of this paper, attacks that can lead to physical damage via cyber means are named Cyber-Physical Attacks. In the real world, malware-associated events, such as Stuxnet, have proven that this kind of attack is both feasible and destructive. We propose an ICS-CPS operation dual-loop analysis model (ICONDAM) for analyzing ICS' human-cyber-physical interdependences, and we present the architecture and features of our CPS-based Critical Infrastructure Integrated Experiment Platform (C2I2EP) ICS experiment environment. Through both theoretical analysis and experiments with Cyber-Physical Attacks performed on our ICS experiment environment, we can say that the ICONDAM model and the C2I2EP experiment environment have promising prospects in the field of ICS cyber-security research.
Keywords: industrial control; invasive software; production engineering computing; C2I2EP; CPS-based critical infrastructure integrated experiment platform; ICONDAM model; ICS cyber-security research; ICS experiment environment; ICS human-cyber-physical interdependences; ICS modeling; ICS-CPS operation dual-loop analysis model; IT; Stuxnet; cyber-physical attack-oriented industrial control systems; information technology; malware associated events; Analytical models; Biological system modeling; Integrated circuit modeling; Malware; Process control; Sensors; Cyber-Physical Attacks; Cyber-Physical Systems (CPS); Industrial Control Systems (ICS); cyber security; experiment environment (ID#: 16-11258)


L. Vegh and L. Miclea, “A Simple Scheme for Security and Access Control in Cyber-Physical Systems,” 2015 20th International Conference on Control Systems and Computer Science, Bucharest, 2015, pp. 294-299. doi: 10.1109/CSCS.2015.13
Abstract: In a time when technology changes continuously, when things you need today to run a certain system might not be needed tomorrow, security is a constant requirement. No matter what systems we have or how we structure them, no matter what means of digital communication we use, we are always interested in aspects like security, safety, and privacy. Cyber-physical systems are one example of this ever-advancing technology. We propose a complex security architecture that integrates several established methods such as cryptography, steganography, and digital signatures. This architecture is designed not only to ensure secure communication by transforming data into secret code, but also to control access to the system and to detect and prevent cyber attacks.
Keywords: authorisation; cryptography; digital signatures; steganography; access control; cyber attacks; cyber-physical system; security architecture; security requirement; system security; Computer architecture; Digital signatures; Encryption; Public key; multi-agent systems; (ID#: 16-11259)


M. Heiss, A. Oertl, M. Sturm, P. Palensky, S. Vielguth and F. Nadler, “Platforms for Industrial Cyber-Physical Systems Integration: Contradicting Requirements as Drivers for Innovation,” Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, Seattle, WA, 2015, pp. 1-8. doi: 10.1109/MSCPES.2015.7115405
Abstract: The full potential of distributed cyber-physical systems (CPS) can only be leveraged if their functions and services can be flexibly integrated. Challenges like communication quality, interoperability, and amounts of data are massive. The design of such integration platforms therefore requires radically new concepts. This paper shows the industrial view, the business perspective on such envisioned platforms. It turns out that there are not only huge technical challenges to overcome but also fundamental dilemmas. Contradicting requirements and conflicting trends force us to re-think the task of interconnecting services of distributed CPS.
Keywords: embedded systems; manufacturing data processing; business perspective; distributed CPS; distributed cyber-physical system; industrial cyber-physical system integration; Business; Complexity theory; Computer architecture; Optimization; Reliability; Security; Software; IT platforms; complexity management; cyber-physical systems; distributed systems; software integration 
(ID#: 16-11260)


D. Chen, K. Meinke, F. Asplund and C. Baumann, “A Knowledge-in-the-Loop Approach to Integrated Safety & Security for Cooperative System-of-Systems,” 2015 IEEE Seventh International Conference on Intelligent Computing and Information Systems (ICICIS), Cairo, 2015, pp. 13-20. doi: 10.1109/IntelCIS.2015.7397237
Abstract: A system-of-systems (SoS) is inherently open in configuration and evolutionary in lifecycle. For the next generation of cooperative cyber-physical systems-of-systems, safety and security constitute two key issues of public concern that affect their deployment and acceptance. In engineering, this openness and evolutionary nature also entail radical paradigm shifts. This paper presents a novel approach to the development of qualified cyber-physical systems-of-systems, with Cooperative Intelligent Transport Systems (C-ITS) as one target. The approach, referred to as knowledge-in-the-loop, aims to allow a synergy of well-managed lifecycles, formal quality assurance, and smart system features. One research goal is to enable evolutionary development with continuous and traceable flows of system rationale from design time to post-deployment time and back, supporting automated knowledge inference and enrichment. Another research goal is to develop a formal approach to risk-aware dynamic treatment of safety and security as a whole in the context of systems-of-systems. Key base technologies include: (1) EAST-ADL for the consolidation of system-wide concerns and for the creation of an ontology for advanced run-time decisions; (2) Learning-Based Testing for run-time and post-deployment model inference, safety monitoring, and testing; (3) Provable Isolation for run-time attack detection and enforcement of security in real-time operating systems.
Keywords: cyber-physical systems; evolutionary computation; formal verification; intelligent transportation systems; learning (artificial intelligence); ontologies (artificial intelligence); security of data; C-ITS; EAST-ADL; cooperative intelligent transport systems; cooperative system-of-systems; cyber-physical system-of-systems; evolutionary development; formal quality assurance; integrated safety and security; knowledge-in-the-loop approach; learning based-testing; ontology; risk-aware dynamic treatment; run-time attack detection; safety monitoring; smart system feature; Analytical models; Ontologies; Organizations; Risk management; Roads; Security; System analysis and design; cyber-physical system; knowledge modeling; machine learning; model-based development; ontology; quality-of-service; safety; security; systems-of-systems; verification and validation (ID#: 16-11261)


H. Derhamy, J. Eliasson, J. Delsing, P. P. Pereira and P. Varga, “Translation Error Handling for Multi-Protocol SOA Systems,” 2015 IEEE 20th Conference on Emerging Technologies & Factory Automation (ETFA), Luxembourg, 2015, pp. 1-8. doi: 10.1109/ETFA.2015.7301473
Abstract: The IoT research area has evolved to incorporate a plethora of messaging protocol standards, both existing and new, that are emerging as preferred means of communication. The variety of protocols and technologies enables IoT to be used in many application scenarios. However, the use of incompatible communication protocols also creates vertical silos and reduces interoperability between vendors and technology platform providers. In many applications, it is important that maximum interoperability is enabled, for reasons such as efficiency, security, and end-to-end communication requirements. Each protocol has its own error-handling methods, but there is a gap in bridging errors across protocols. Centralized software buses and integrated protocol agents are used for integrating different communication protocols; however, these approaches do not fit well in all Industrial IoT application scenarios. This paper therefore investigates error-handling challenges for a multi-protocol SOA-based translator. A proof-of-concept implementation based on MQTT and CoAP is presented. Experimental results show that multi-protocol error handling is possible; furthermore, a number of areas that need further investigation have been identified.
Keywords: open systems; protocols; service-oriented architecture; CoAP; MQTT; centralized software bus; communication protocols; industrial IoT; integrated protocol agents; maximum interoperability; messaging protocol standards; multiprotocol SOA systems; multiprotocol SOA-based translator; translation error handling; Computer architecture; Delays; Monitoring; Protocols; Quality of service; Servers; Service-oriented architecture; Arrowhead; Cyber-physical systems; Error handling; Internet of Things; Protocol translation; SOA; Translation (ID#: 16-11262)
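One way to picture the error-bridging gap the abstract describes is a normalization table in the translator. The sketch below is our own illustration (the category names and mapping are assumptions, not the paper's scheme): protocol-specific error codes, here CoAP response codes, are mapped to a protocol-neutral category before the translator re-emits an error on the other protocol's terms.

```python
# Hedged sketch (our invention, not the paper's design): a multi-protocol
# translator can normalize protocol-specific error codes into a shared,
# protocol-neutral category before reporting them on the other side.

# Subset of CoAP response codes (RFC 7252) mapped to hypothetical categories.
COAP_TO_COMMON = {
    "4.03": "forbidden",
    "4.04": "not_found",
    "5.00": "server_error",
    "5.03": "unavailable",
}

def translate_coap_error(code):
    """Map a CoAP response code to a protocol-neutral error category."""
    return COAP_TO_COMMON.get(code, "unknown")

print(translate_coap_error("4.04"))  # not_found
```

An MQTT-facing component could then decide how to surface each category (e.g., drop, retry, or notify), independently of which protocol produced the original error.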


V. Meza, X. Gomez and E. Perez, “Quantifying Observability in State Estimation Considering Network Infrastructure Failures,” Innovative Smart Grid Technologies Latin America (ISGT LATAM), 2015 IEEE PES, Montevideo, 2015, pp. 171-176. doi: 10.1109/ISGT-LA.2015.7381148
Abstract: The smart grid integrates the electrical network, communication systems, and information technologies; this increasing architectural interdependency introduces new challenges in evaluating how possible threats could affect the security and reliability of the power system. While cyber-attacks have been widely studied, the consequences of physical failures for real-time applications are starting to receive attention due to their implications for power system security. This paper presents a methodology to quantify the impact of possible disruptive failures of a shared transmission infrastructure on observability in state estimation. Numerical results are obtained by calculating observability indicators on an IEEE 14-bus test case, considering the simultaneous disconnection of power transmission lines and communication links installed on the same infrastructure.
Keywords: computer network reliability; computer network security; power engineering computing; power system measurement; power system reliability; power system security; smart power grids; state estimation; common transmission infrastructure; communication link disconnection; disruptive failures; network infrastructure failures; observability quantification; physical failure; power transmission lines; smart power grid; Jacobian matrices; Mathematical model; Observability; Power measurement; Power systems; Security; State estimation; Observability; cyber-physical security; power systems; (ID#: 16-11263)
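The observability question above can be illustrated with a toy DC state-estimation example (our own construction, not taken from the paper): observability is commonly checked via the rank of the measurement matrix H, and losing a transmission corridor, together with the communication link on the same towers, removes the corresponding measurement rows. If the rank falls below the number of states, the system is unobservable.

```python
# Toy sketch (assumptions ours): rank of the DC state-estimation measurement
# matrix H as an observability indicator. Removing rows models the loss of the
# measurements carried on a failed line/communication corridor.

def rank(mat, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(m[i][c]), default=None)
        if pivot is None or abs(m[pivot][c]) < tol:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hypothetical 3-bus system, bus 1 as reference: states are the angles at
# buses 2 and 3. Flow measurements (unit susceptances) on three lines:
H = [
    [-1.0,  0.0],   # P12 = theta1 - theta2
    [ 0.0, -1.0],   # P13 = theta1 - theta3
    [ 1.0, -1.0],   # P23 = theta2 - theta3
]
print(rank(H))       # 2 -> fully observable
print(rank(H[:1]))   # 1 -> with only the 1-2 flow left, bus 3 is unobservable
```

The paper's indicators are more refined than a bare rank test, but the same row-removal mechanism underlies the simultaneous line/link disconnection scenario it studies.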


C. C. Sun, J. Hong and C. C. Liu, “A Co-Simulation Environment for Integrated Cyber and Power Systems,” 2015 IEEE International Conference on Smart Grid Communications (SmartGridComm), Miami, FL, 2015, pp. 133-138. doi: 10.1109/SmartGridComm.2015.7436289
Abstract: Due to the development of new power technologies, cyber infrastructures have been widely deployed for monitoring, control, and operation of the power grid. Information and Communications Technology (ICT) provides connectivity between the cyber and power systems. As a result, cyber intrusions become a threat that may cause damage to the physical infrastructure. Research on cyber security for the power grid is a high-priority subject for the emerging smart grid environment, and a cyber-physical testbed is critical for the study of cyber-physical security of power systems. For confidentiality reasons, measurements (e.g., voltages, currents, and binary status) and ICT data (e.g., communication protocols, system logs, and security logs) from real power grids are not publicly accessible. A realistic testbed is therefore a good alternative for studying the interactions between the physical and cyber systems of a power grid.
Keywords: power engineering computing; power system security; security of data; smart power grids; ICT; co-simulation environment; cyber infrastructures; cyber intrusions; cyber systems; cyber-physical security; cyber-physical testbed; information and communications technology; physical infrastructures; power grid; power systems; smart grid environment; Computer security; Protocols; Real-time systems; Smart grids; Substations; Co-simulations; Cyber Security; Cyber-Physical Security; Intrusion Detection System for Substations; Smart Grid Testbed (ID#: 16-11264)


R. Liu and A. Srivastava, “Integrated Simulation to Analyze the Impact of Cyber-Attacks on the Power Grid,” Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2015 Workshop on, Seattle, WA, 2015, pp. 1-6. doi: 10.1109/MSCPES.2015.7115395
Abstract: With the development of smart grid technology, Information and Communication Technology (ICT) plays a significant role in the smart grid. ICT enables the realization of the smart grid, but also introduces cyber vulnerabilities, so it is important to analyze the impact of possible cyber-attacks on the power grid. In this paper, a real-time, cyber-physical co-simulation testbed with hardware-in-the-loop capability is discussed. A Real-Time Digital Simulator (RTDS), synchrophasor devices, DeterLab, and a wide-area monitoring application with closed-loop control are utilized in the developed testbed. Two different real-life cyber-attacks, a TCP SYN flood attack and a man-in-the-middle attack, are simulated on an IEEE standard power system test case to analyze the impact of these cyber-attacks on the power grid.
Keywords: closed loop systems; digital simulation; phasor measurement; power system simulation; smart power grids; DeterLab; ICT; IEEE standard power system test case; RTDS; TCP SYN flood attack; closed loop control; cyber vulnerability; cyber-attack impact analysis; hardware-in-the-loop capability; information and communication technology; integrated simulation; man-in-the-middle attack; real-time cyber-physical cosimulation testbed; real-time digital simulator; smart power grid technology; synchrophasor devices; wide-area monitoring application; Capacitors; Loading; Phasor measurement units; Power grids; Power system stability; Reactive power; Real-time systems; Cyber Security; Cyber-Physical; DeterLab; Real-Time Co-Simulation; Synchrophasor Devices (ID#: 16-11265)


Bowen Zheng, W. Li, P. Deng, L. Gérardy, Q. Zhu and N. Shankar, “Design and Verification for Transportation System Security,” 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), San Francisco, CA, 2015, pp. 1-6. doi: 10.1145/2744769.2747920
Abstract: Cyber-security has emerged as a pressing issue for transportation systems. Studies have shown that attackers can attack modern vehicles from a variety of interfaces and gain access to the most safety-critical components. Such threats become even broader and more challenging with the emergence of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication technologies. Addressing the security issues in transportation systems requires comprehensive approaches that encompass considerations of security mechanisms, safety properties, resource constraints, and other related system metrics. In this work, we propose an integrated framework that combines hybrid modeling, formal verification, and automated synthesis techniques for analyzing the security and safety of transportation systems and carrying out design space exploration of both in-vehicle electronic control systems and vehicle-to-vehicle communications. We demonstrate the ideas of our framework through a case study of cooperative adaptive cruise control.
Keywords: formal verification; on-board communications; road safety; security of data; traffic engineering computing; automated synthesis techniques; cooperative adaptive cruise control; design space exploration; formal verification; hybrid modeling; in-vehicle electronic control systems; transportation system safety; transportation system security; vehicle-to-vehicle communications; Delays; Safety; Security; Sensors; Vehicles (ID#: 16-11266)


M. S. Mispan, B. Halak, Z. Chen and M. Zwolinski, “TCO-PUF: A Subthreshold Physical Unclonable Function,” Ph.D. Research in Microelectronics and Electronics (PRIME), 2015 11th Conference on, Glasgow, 2015, pp. 105-108. doi: 10.1109/PRIME.2015.7251345
Abstract: A Physical Unclonable Function (PUF) is a promising technology towards comprehensive security protection for integrated circuit applications. It provides a secure method of hardware identification and authentication by exploiting inherent manufacturing process variations to generate a unique response for each device. Subthreshold Current Array PUFs, which are based on the non-linearity of currents and voltages in MOSFETs in the subthreshold region, provide higher security against machine learning-based attacks compared with delay-based PUFs. However, their implementation is not practical due to the low output voltages generated from transistor arrays. In this paper, a novel architecture for a PUF, called the “Two Chooses One” PUF or TCO-PUF, is proposed to improve the output voltage ranges. The proposed PUF shows excellent quality metrics. The average inter-chip Hamming distance is 50.23%. The reliability over the temperature and ±10% supply voltage fluctuations is 91.58%. In terms of security, on average TCO-PUF shows higher security compared to delay-based PUFs and existing designs of Subthreshold Current Array PUFs against machine learning attacks.
Keywords: MOSFET; cryptographic protocols; integrated circuit design; integrated circuit reliability; learning (artificial intelligence); security of data; TCO-PUF; current nonlinearity; hardware authentication; hardware identification; integrated circuit applications; interchip Hamming distance; machine learning-based attacks; security protection; subthreshold current array PUF; two chooses one physical unclonable function; Arrays; Measurement; Reliability; Security; Subthreshold current; Transistors; Modelling attacks; Physical Unclonable Function; Subthreshold (ID#: 16-11267)
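The quality metrics quoted in the abstract above (e.g., an average inter-chip Hamming distance of 50.23%) follow a standard recipe: uniqueness is the mean pairwise inter-chip Hamming distance of responses to the same challenge, as a percentage of the response length, with 50% being ideal. The sketch below is our own minimal illustration with made-up 8-bit responses, not data from the paper.

```python
# Hedged sketch (example responses are invented): computing PUF uniqueness as
# the mean pairwise inter-chip Hamming distance, in percent (ideal: 50%).

from itertools import combinations

def hamming_distance(a, b):
    """Number of differing bits between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def uniqueness(responses):
    """Mean pairwise inter-chip Hamming distance as % of response length."""
    n_bits = len(responses[0])
    pairs = list(combinations(responses, 2))
    total = sum(hamming_distance(a, b) for a, b in pairs)
    return 100.0 * total / (len(pairs) * n_bits)

# Three hypothetical 8-bit responses from three chips to the same challenge:
chips = ["10110010", "01100110", "11011010"]
print(uniqueness(chips))  # 50.0
```

Reliability is measured analogously, but using the intra-chip Hamming distance between responses of the same chip under varying temperature and supply voltage (ideal: 0%).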


V. Casola, A. D. Benedictis and M. Rak, “Security Monitoring in the Cloud: An SLA-Based Approach,” Availability, Reliability and Security (ARES), 2015 10th International Conference on, Toulouse, 2015, pp. 749-755. doi: 10.1109/ARES.2015.74
Abstract: In this paper we present a monitoring architecture that is automatically configured and activated based on a signed Security SLA. The monitoring architecture integrates different security-related monitoring tools (either developed ad hoc or already available as open-source or commercial products) to collect measurements related to specific metrics associated with the set of security Service Level Objectives (SLOs) specified in the Security SLA. To demonstrate our approach, we discuss a case study related to the detection and management of vulnerabilities and illustrate the integration of the popular open-source monitoring system OpenVAS into our monitoring architecture. We show how the system is configured and activated by means of available Cloud automation technologies and provide a concrete example of related SLOs and metrics.
Keywords: cloud computing; contracts; public domain software; security of data; system monitoring; OpenVAS; SLA-based approach; SLO; cloud automation technologies; monitoring architecture; open source monitoring system; open-source products; security monitoring; security service level objectives; security-related monitoring tools; signed security SLA; vulnerability management; Automation; Computer architecture; Measurement; Monitoring; Protocols; Security; Servers; Cloud security monitoring; OpenVAS; Security Service Level Agreements; vulnerability monitoring (ID#: 16-11268)


M. Ennahbaoui, H. Idrissi and S. E. Hajji, “Secure and Flexible Grid Computing Based Intrusion Detection System Using Mobile Agents and Cryptographic Traces,” Innovations in Information Technology (IIT), 2015 11th International Conference on, Dubai, 2015, pp. 314-319. doi: 10.1109/INNOVATIONS.2015.7381560
Abstract: Grid computing is one of the new and innovative information technologies that attempt to make resource sharing global and easier. Integrated into networked areas, the resources and services in a grid are dynamic, heterogeneous, and belong to multiple dispersed domains, which effectively enables large-scale collection, sharing, and diffusion of data. However, grid computing is still a new paradigm that raises many security issues and conflicts in the computing infrastructures where it is integrated. In this paper, we propose an intrusion detection system (IDS) based on the autonomy, intelligence, and independence of mobile agents to record the behaviors and actions on the grid resource nodes and detect malicious intruders. This is achieved through the use of cryptographic traces combined with a chaining mechanism to elaborate hashed statements of the executed agent code, which are then compared to detect intrusions. We have conducted experiments based on three metrics, network load, response time, and detection ability, to evaluate the effectiveness of our proposed IDS.
Keywords: cryptography; grid computing; mobile agents; IDS; chaining mechanism; cryptographic traces; data collection; data diffusion; data sharing; detection ability metric; intrusion detection system; network load metric; resources sharing; response time metric; security issues; Computer architecture; Cryptography; Grid computing; Intrusion detection; Mobile agents; Monitoring
(ID#: 16-11269)
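The "cryptographic traces with a chaining mechanism" idea can be pictured as a hash chain over an agent's recorded actions. The sketch below is our own minimal illustration under that assumption (the action names and seed are invented, and this is not the paper's exact construction): each digest covers the previous digest plus the next action, so tampering with any recorded step changes every later link.

```python
# Illustrative sketch (assumptions ours): a hash chain over an agent's actions.
# Tampering with one recorded step invalidates the tail of the chain, which is
# what makes a chained trace useful for detecting a modified execution.

import hashlib

def chain_trace(actions, seed=b"trace-seed"):
    """Return the chained digests h_i = SHA-256(h_{i-1} || action_i)."""
    digests, prev = [], seed
    for action in actions:
        prev = hashlib.sha256(prev + action.encode()).digest()
        digests.append(prev.hex())
    return digests

legit = chain_trace(["open_file", "read_config", "send_report"])
tampered = chain_trace(["open_file", "read_secret", "send_report"])
print(legit[-1] == tampered[-1])  # False: the change propagates to the tail
```

A verifier holding only the final digest of a legitimate run can therefore detect any modification of the recorded behavior, without storing the full trace.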


Y. Bi, K. Shamsi, J. S. Yuan, F. X. Standaert and Y. Jin, “Leverage Emerging Technologies for DPA-Resilient Block Cipher Design,” 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), Dresden, Germany, 2016, pp. 1538-1543.
doi: (not provided)
Abstract: Emerging devices have been designed and fabricated to extend Moore's Law. While improvements in traditional metrics such as power, energy, delay, and area certainly apply to emerging device technologies, new devices may offer benefits beyond those metrics. In this sense, we consider how new transistor technologies could also have a positive impact on hardware security. More specifically, we consider how tunneling FETs (TFETs) and silicon nanowire FETs (SiNW FETs) could offer superior protection to integrated circuits and embedded systems that are subject to hardware-level attacks such as differential power analysis (DPA). Experimental results on SiNW FET and TFET CML gates are presented. In addition, simulation results of utilizing TFET CML on a lightweight cryptographic circuit, KATAN32, show that TFET-based current mode logic (CML) can both improve DPA resilience and preserve low power consumption in the target design. Compared to CMOS-based CML designs, the TFET CML circuit consumes 15 times less power while achieving a similar level of DPA resistance.
Keywords: cryptography; current-mode logic; field effect transistors; nanowires; security; silicon; tunnel transistors; CMOS-based CML design; DPA resilience; DPA-resilient block cipher design; KATAN32; Moore law; Si; SiNW FET; TFET CML gate; complementary metal oxide semiconductor; current mode logic; differential power analysis; hardware security; hardware-level attack; integrated circuit; leverage emerging technology; light-weight cryptographic circuit; low power consumption; silicon nanowire FET; transistor technologies; tunneling field effect transistor; CMOS integrated circuits; Cryptography; Logic gates; Power demand; TFETs; Current Mode Logic (CML); Differential Power Analysis (DPA); Emerging Technologies (ID#: 16-11270)


S. R. Sahoo, S. Kumar and K. Mahapatra, “A Modified Configurable RO PUF with Improved Security Metrics,” 2015 IEEE International Symposium on Nanoelectronic and Information Systems, Indore, 2015, pp. 320-324. doi: 10.1109/iNIS.2015.37
Abstract: Physical Unclonable Functions (PUFs) are promising security primitives used to produce a unique signature for an integrated circuit (IC), which is useful in hardware security and cryptographic applications. Of the several PUFs proposed by researchers, such as the Ring Oscillator (RO) PUF, the Arbiter PUF, and the configurable RO (CRO) PUF, the RO PUF is widely used because of its higher uniqueness. As the frequency of an RO is highly susceptible to temperature and voltage fluctuations, it affects the reliability of the IC signature, so configurable ROs (CROs) are used to improve reliability. In this paper we present a modified CRO PUF in which the inverters used to build the RO employ different logic styles: static CMOS and feed-through logic (FTL). The FTL-based CRO PUF improves both the uniqueness and the reliability of the signature against environmental fluctuations (temperature and voltage) because of its higher leakage current and low switching threshold. The security metrics of uniqueness and reliability are calculated for the proposed modified CRO PUF and compared with an earlier CRO PUF by carrying out simulations in a 90 nm technology.
Keywords: CMOS logic circuits; copy protection; cryptography; integrated circuit design; integrated circuit reliability; leakage currents; logic design; logic gates; oscillators; CRO PUF; FTL; IC signature; arbiter PUF; configurable RO PUF; cryptographic applications; feed-through logic; hardware security; integrated circuit; inverters; leakage current; logic styles; physical unclonable functions; ring oscillator; security metrics; size 90 nm; static CMOS; switching threshold; voltage fluctuation; Information systems; Challenge-Response pair (CRP); Configurable Ring Oscillator (CRO); Feed-through logic (FTL); Physical Unclonable Function (PUF); process variation (PV)
(ID#: 16-11271)


K. E. Lever, K. Kifayat and M. Merabti, “Identifying Interdependencies Using Attack Graph Generation Methods,” Innovations in Information Technology (IIT), 2015 11th International Conference on, Dubai, 2015, pp. 80-85. doi: 10.1109/INNOVATIONS.2015.7381519
Abstract: Information and communication technologies have augmented interoperability and rapidly advanced varying industries, with vast, complex interconnected networks being formed in areas such as safety-critical systems, which can be further categorised as critical infrastructures. Also to be considered is the Internet of Things paradigm, which is rapidly gaining prevalence within the field of wireless communications and is being incorporated into areas such as e-health and automation for industrial manufacturing. As critical infrastructures and the Internet of Things begin to integrate into much wider networks, their reliance upon communication assets supplied by third parties to ensure collaboration and control of their systems will significantly increase, along with system complexity and the requirement for improved security metrics. We present a critical analysis of the risk assessment methods developed for generating attack graphs. The failings of these existing schemas include the inability to accurately identify the relationships and interdependencies between risks, and the failure to reduce attack graph size and generation complexity. Many existing methods also fall short due to their heavy reliance upon human intervention for input, identification of vulnerabilities, and analysis of results. We outline our approach to modelling interdependencies within large heterogeneous collaborative infrastructures, proposing a distributed schema which utilises network modelling and attack graph generation methods to provide a means for vulnerabilities, exploits, and conditions to be represented within a unified model.
Keywords: graph theory; risk management; security of data; Internet of Things; attack graph generation methods; communication assets; complex interconnected networks; critical infrastructures; distributed schema; e-health; generation complexity; heterogeneous collaborative infrastructures; industrial manufacturing automation; information and communication technologies; interdependencies identification; interdependencies modelling; interoperability; risk assessment methods; safety-critical systems; security metrics; system complexity; vulnerabilities identification; wireless communications; Collaboration; Complexity theory; Internet of things; Power system faults; Power system protection; Risk management; Security; Attack Graphs; Cascading Failures; Collaborative Infrastructures; Interdependency
(ID#: 16-11272)
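The attack graph generation the abstract discusses can be boiled down to a toy model (our own illustration, not the paper's schema): starting from host connectivity and per-host vulnerabilities, a graph search enumerates every host an attacker can reach and compromise. The hosts and vulnerability set below are invented for the example.

```python
# Illustrative sketch (toy network, assumptions ours): a minimal attack graph
# from host connectivity plus per-host vulnerabilities. An edge is traversable
# when a host the attacker owns can reach an exploitable host; BFS then yields
# the full set of compromisable hosts.

from collections import deque

# Hypothetical network: which hosts can talk to which.
reachable = {
    "internet": ["web"],
    "web": ["app"],
    "app": ["db"],
    "db": [],
}
# Hosts with a remotely exploitable vulnerability.
vulnerable = {"web", "app", "db"}

def compromised_hosts(start):
    """Return the set of hosts an attacker starting at `start` can take over."""
    owned, queue = {start}, deque([start])
    while queue:
        host = queue.popleft()
        for nxt in reachable.get(host, []):
            if nxt in vulnerable and nxt not in owned:
                owned.add(nxt)
                queue.append(nxt)
    return owned

print(sorted(compromised_hosts("internet")))
# ['app', 'db', 'internet', 'web']
```

Real attack graph generators track privilege levels and exploit preconditions rather than a flat "vulnerable" set, which is exactly where the size and generation-complexity problems the abstract criticises come from.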


C. Herber, A. Saeed and A. Herkersdorf, “Design and Evaluation of a Low-Latency AVB Ethernet Endpoint Based on ARM SoC,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1128-1134. doi: 10.1109/HPCC-CSS-ICESS.2015.52
Abstract: Communication requirements in automotive electronics are steadily increasing. To satisfy this demand and enable future automotive embedded architectures, new interconnect technologies are needed. Audio Video Bridging (AVB) Ethernet is a promising candidate to accomplish this as it features time sensitive and synchronous communication in combination with high bit rates. However, there is a lack of commercial products as well as research regarding AVB-capable system-on-chips (SoCs). In this paper, we investigate how and at what cost a legacy Ethernet MAC can be enhanced into an AVB Ethernet controller. Using FPGA prototyping and a real system based on an ARM Cortex-A9 SoC running Linux, we conducted a series of experiments to evaluate important performance metrics and to validate our design decisions. We achieved frame release latencies of less than 6 μs and time-synchronization with an endpoint-induced inaccuracy of up to 8 μs.
Keywords: Linux; automotive electronics; field programmable gate arrays; local area networks; system-on-chip; ARM Cortex-A9; ARM SoC; Ethernet MAC; FPGA; audio video bridging; automotive electronic; bit rate; field programmable gate array; low-latency AVB Ethernet endpoint; synchronous communication; system-on-chip; Automotive engineering; Field programmable gate arrays; Hardware; Random access memory; Software; Synchronization; Audio Video Bridging; Automotive Electronics; Ethernet
(ID#: 16-11273)


H. Manem, K. Beckmann, M. Xu, R. Carroll, R. Geer and N. C. Cady, “An Extendable Multi-Purpose 3D Neuromorphic Fabric Using Nanoscale Memristors,” 2015 IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA), Verona, NY, 2015, pp. 1-8. doi: 10.1109/CISDA.2015.7208625
Abstract: Neuromorphic computing offers an attractive means for processing and learning complex real-world data. With the emergence of the memristor, the physical realization of cost-effective artificial neural networks is becoming viable, due to reduced area and improved performance metrics compared with strictly CMOS implementations. In the work presented here, memristors are utilized as synapses in the realization of a multi-purpose heterogeneous 3D neuromorphic fabric. This paper details our in-house memristor and 3D technologies in the design of a fabric that can perform real-world signal processing (e.g., image/video) as well as everyday Boolean logic applications. The applicability of this fabric is therefore diverse, with applications ranging from general-purpose and high-performance logic computing to power-conservative image detection for mobile and defense applications. The proposed system is an area-effective heterogeneous 3D integration of memristive neural networks that consumes significantly less power and allows for higher speeds (3D ultra-high-bandwidth connectivity) in comparison to a purely CMOS 2D implementation. Images and results provided illustrate our state-of-the-art 3D and memristor technology capabilities for the realization of the proposed 3D memristive neural fabric. Simulation results also show the mapping of Boolean logic functions and images onto perceptron-based neural networks. These results demonstrate a proof of concept of the system, which is the first step in the physical realization of the multi-purpose heterogeneous 3D memristive neuromorphic fabric.
Keywords: Boolean functions; CMOS integrated circuits; fabrics; memristors; neural chips; perceptrons; signal processing; three-dimensional integrated circuits; 3D memristive neural fabric; 3D technology; Boolean logic function application; CMOS implementation; area effective heterogeneous 3D integration; artificial neural network; complementary metal oxide semiconductor; defense application; extendable multipurpose 3D neuromorphic fabric; logic computing; memristive neural network; mobile application; nanoscale memristor; neuromorphic computing; perceptron; power conservative image detection; Decision support systems; Fabrics; Memristors; Metals; Neuromorphics; Neurons; Three-dimensional displays; 3D integrated circuits; image processing; memristor; nanoelectronics; neural networks (ID#: 16-11274)


P. R. da Paz Ferraz Santos, R. P. Esteves and L. Z. Granville, “Evaluating SNMP, NETCONF, and RESTful Web Services for Router Virtualization Management,” 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM), Ottawa, ON, 2015, pp. 122-130. doi: 10.1109/INM.2015.7140284
Abstract: In network virtualization environments (NVEs), the physical infrastructure is shared among different users (or service providers) who create multiple virtual networks (VNs). As part of VN provisioning, virtual routers (VRs) are created inside physical routers supporting virtualization. Currently, the management of NVEs is mostly realized by proprietary solutions. Heterogeneous NVEs (i.e., with different equipment and technologies) are difficult to manage due to the lack of standardized management solutions. As a first step to achieve management interoperability, good performance, and high scalability, we implemented, evaluated, and compared four management interfaces for physical routers that host virtual ones. The interfaces are based on SNMP (v2c and v3), NETCONF, and RESTful Web Services, and are designed to perform three basic VR management operations: VR creation, VR retrieval, and VR removal. We evaluate these interfaces with regard to the following metrics: response time, CPU time, memory consumption, and network usage. Results show that the SNMPv2c interface is the most suitable one for small NVEs without strict security requirements and NETCONF is the best choice to compose a management interface to be deployed in more realistic scenarios, where security and scalability are major concerns.
Keywords: Web services; open systems; security of data; virtualisation; NETCONF; NVEs; RESTful Web services; SNMPv2c interface; VN provisioning; VR creation; VR management operations; VR removal; VR retrieval; management interoperability; network virtualization environments; router virtualization management; security; virtual networks; virtual routers; Data models; Memory management; Protocols; Servers; Virtual machine monitors; Virtualization; XML (ID#: 16-11275)


E. Takamura, K. Mangum, F. Wasiak and C. Gomez-Rosa, “Information Security Considerations for Protecting NASA Mission Operations Centers (MOCs),” 2015 IEEE Aerospace Conference, Big Sky, MT, 2015, pp. 1-14. doi: 10.1109/AERO.2015.7119207
Abstract: In NASA space flight missions, the Mission Operations Center (MOC) is often considered “the center of the (ground segment) universe,” at least by those involved with ground system operations. It is at and through the MOC that the spacecraft is commanded and controlled, and science data are acquired. This critical element of the ground system must be protected to ensure the confidentiality, integrity and availability of the information and information systems supporting mission operations. This paper identifies and highlights key information security aspects affecting MOCs that should be taken into consideration when reviewing and/or implementing protective measures in and around MOCs. It stresses the need for compliance with information security regulations and mandates, and the need for the reduction of IT security risks that can potentially have a negative impact on the mission if not addressed. This compilation of key security aspects was derived from numerous observations, findings, and issues discovered by IT security audits the authors have conducted of NASA mission operations centers in the past few years. It is not a recipe for securing MOCs, but rather an insight into key areas that must be secured to strengthen the MOC and enable mission assurance. Most concepts and recommendations in the paper can be applied to non-NASA organizations as well. Finally, the paper emphasizes the importance of integrating information security into the MOC development life cycle as configuration, risk and other management processes are tailored to support the delicate environment in which mission operations take place.
Keywords: aerospace computing; command and control systems; data integrity; information systems; risk management; security of data; space vehicles; IT security audits; IT security risk reduction; MOC development life cycle; NASA MOC protection; NASA mission operation center protection; NASA space flight missions; ground system operations; information availability; Information confidentiality; information integrity; information security considerations; information security regulation; information systems; nonNASA organizations; spacecraft command and control; Access control; Information security; Monitoring; NASA; Software; IT security metrics; access control; asset protection; automation; change control; connection protection; continuous diagnostics and mitigation; continuous monitoring; ground segment ground system; incident handling; information assurance; information security; information security leadership; information technology leadership; infrastructure protection; least privilege; logical security; mission assurance; mission operations; mission operations center; network security; personnel screening; physical security; policies and procedures; risk management; scheduling restrictions; security controls; security hardening; software updates; system cloning and software licenses; system security; system security life cycle; unauthorized change detection; unauthorized change deterrence; unauthorized change prevention (ID#: 16-11276)


S. R. Sahoo, S. Kumar and K. Mahapatra, “A Novel ROPUF for Hardware Security,” VLSI Design and Test (VDAT), 2015 19th International Symposium on, Ahmedabad, 2015, pp. 1-2. doi: 10.1109/ISVDAT.2015.7208093
Abstract: Physical Unclonable Functions (PUFs) are promising security primitives in recent times. A PUF is a die-specific random function, or silicon biometric, that is unique for every instance of the die. PUFs derive their randomness from the uncontrolled random variations in the IC manufacturing process, which is used to generate cryptographic keys. Researchers have proposed different kinds of PUF in the last decade, with varying properties; the quality of a PUF is determined by properties such as uniqueness, reliability, and uniformity. In this paper we design a novel CMOS-based RO PUF with improved quality metrics at the cost of additional hardware. The novel PUF is a modified Ring Oscillator PUF (RO-PUF) in which the CMOS inverters of the RO-PUF are replaced with Feedthrough Logic (FTL) inverters. The FTL inverters in the RO-PUF improve the security metrics because of their high leakage current, and the use of a pulse injection circuit (PIC) increases the number of challenge-response pairs (CRPs). A comparative analysis has been carried out by simulating both PUFs in 90 nm technology; the simulation results show that the proposed modified FTL PUF provides a uniqueness of 45.24% with a reliability of 91.14%.
Keywords: CMOS analogue integrated circuits; copy protection; cryptography; elemental semiconductors; integrated circuit modelling; leakage currents; logic circuits; logic design; logic gates; oscillators; random functions; silicon; CMOS based RO PUF; CMOS inverters; CRP; FTL PUF; FTL inverters; IC manufacturing process; PIC; Si; challenge-response pairs; cryptographic keys; die-specific random function; feedthrough logic inverters; hardware security; leakage current; physical unclonable functions; pulse injection circuit; ring oscillator PUF; security metrics; silicon biometric; size 90 nm; CMOS integrated circuits; Inverters; Leakage currents; Measurement; Reliability; Security; Silicon; Challenge-Response pair (CRP); Feedthrough logic (FTL); Physical Unclonable Function (PUF); Ring Oscillator (RO); process variation (PV) (ID#: 16-11277)
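The uniqueness and reliability figures quoted in the abstract above (45.24% and 91.14%) are standard PUF quality metrics. The sketch below shows one common way these are computed from response bitstrings; the function names and the sample 8-bit responses are invented for illustration, not taken from the paper.

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of differing bit positions between two equal-length bitstrings."""
    return sum(x != y for x, y in zip(a, b))

def uniqueness(responses: list) -> float:
    """Average pairwise inter-chip Hamming distance, as a percentage of
    the response length. The ideal value is 50%."""
    n = len(responses[0])
    pairs = list(combinations(responses, 2))
    return 100.0 * sum(hamming(a, b) for a, b in pairs) / (len(pairs) * n)

def reliability(reference: str, remeasurements: list) -> float:
    """100% minus the average intra-chip Hamming distance between a
    reference response and repeated measurements of the same chip."""
    n = len(reference)
    avg_hd = sum(hamming(reference, r) for r in remeasurements) / len(remeasurements)
    return 100.0 * (1 - avg_hd / n)

# three hypothetical 8-bit responses from three different chips
print(uniqueness(["10110010", "01101100", "10011001"]))
# a chip remeasured twice, once with a single noisy bit
print(reliability("10110010", ["10110010", "10110011"]))  # → 93.75
```

A 45.24% uniqueness, as reported for the FTL PUF, is close to the 50% ideal, meaning different dies produce nearly uncorrelated responses.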



Kerberos 2015






Kerberos supports authentication in distributed systems. Used in intelligent systems, it relies on an encrypted data structure, the ticket, which names a user and a service the user may access. For the Science of Security community, it is relevant to the broad issues of cryptography and to resilience, human behavior, and metrics. The work cited here was presented in 2015.
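The ticket described above can be sketched as a toy data structure. This is a minimal illustration, not the real Kerberos wire format: it uses an HMAC tag (stdlib `hmac`) to make the ticket tamper-evident rather than actual encryption, and the function and field names (`issue_ticket`, `verify_ticket`, `session_key`) are invented for the example.

```python
import hashlib
import hmac
import json
import os
import time

def seal(key: bytes, payload: dict) -> dict:
    """'Seal' a payload under a key: serialize it and attach an HMAC tag.
    (A real ticket is encrypted; the tag here only illustrates that the
    ticket is opaque and tamper-evident without the service's key.)"""
    blob = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, blob, hashlib.sha256).hexdigest()
    return {"blob": blob.decode(), "tag": tag}

def issue_ticket(service_key: bytes, client: str, service: str, lifetime: int = 300):
    """A KDC-style ticket: it names the client and the service the client
    may access, plus a fresh session key and an expiry time."""
    session_key = os.urandom(16).hex()
    ticket = {"client": client, "service": service,
              "session_key": session_key, "expires": time.time() + lifetime}
    return seal(service_key, ticket), session_key

def verify_ticket(service_key: bytes, sealed: dict) -> dict:
    """The service checks the tag and the lifetime before trusting the ticket."""
    blob = sealed["blob"].encode()
    expected = hmac.new(service_key, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sealed["tag"]):
        raise ValueError("ticket rejected: bad tag")
    ticket = json.loads(blob)
    if time.time() > ticket["expires"]:
        raise ValueError("ticket rejected: expired")
    return ticket

svc_key = os.urandom(32)
sealed, _ = issue_ticket(svc_key, "alice", "fileserver")
print(verify_ticket(svc_key, sealed)["client"])  # → alice
```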

Hoa Quoc Le, Hung Phuoc Truong, Hoang Thien Van and Thai Hoang Le, “A New Pre-Authentication Protocol in Kerberos 5: Biometric Authentication,” Computing & Communication Technologies - Research, Innovation, and Vision for the Future (RIVF), 2015 IEEE RIVF International Conference on, Can Tho, 2015, pp. 157-162. doi: 10.1109/RIVF.2015.7049892
Abstract: Kerberos is a well-known network authentication protocol that allows nodes to communicate over a non-secure network connection. After Kerberos is used to prove the identity of objects in the client-server model, it encrypts all of their subsequent communications to assure privacy and data integrity. In this paper, we modify the initial authentication exchange in Kerberos 5 by using biometric data and asymmetric cryptography. The proposed method creates a new pre-authentication protocol in order to make Kerberos 5 more secure, and it overcomes the limitation of password-based authentication in Kerberos 5: it becomes very difficult for a user to repudiate having accessed the application, and the mechanism of user authentication is more convenient. The method is a strong authentication scheme that resists several attacks.
Keywords: cryptographic protocols; data integrity; data privacy; message authentication; Kerberos 5; asymmetric cryptography; attacks; authentication exchange; biometric authentication; biometric data; client-server model; data integrity; encryption; network authentication protocol; nonsecure network connection; objects identity; password-based authentication; preauthentication protocol; privacy; user authentication; Authentication; Bioinformatics; Cryptography; Fingerprint recognition; Protocols; Servers; Authentication; Kerberos; biometric; cryptography; fingerprint (ID#: 16-9978)


N. S. Khandelwal and P. Kamboj, “Two Factor Authentication Using Visual Cryptography and Digital Envelope in Kerberos,” Electrical, Electronics, Signals, Communication and Optimization (EESCO), 2015 International Conference on, Visakhapatnam, 2015, pp. 1-6. doi: 10.1109/EESCO.2015.7253638
Abstract: Impersonation is an obvious security risk in an undefended distributed network: an adversary pretends to be a client and can gain illicit access to the server. To counter this threat, user authentication is used, which is treated as the first line of defense in a networked environment. The most popular and widely used authentication protocol is Kerberos, the de facto standard, which authenticates users mutually through a trusted third party. But this strong protocol is vulnerable to various security attacks. This paper gives an overview of the Kerberos protocol and its existing security problems. To enhance security and combat these attacks, it also describes a novel approach of incorporating the features of visual cryptography and the digital envelope into Kerberos. Using visual cryptography, we add one more layer of security by considering a secret share as one factor of mutual authentication, while the session key is securely distributed using a digital envelope in which the user's private key is considered as another factor of authentication. Thus, our proposed scheme makes the Kerberos protocol highly robust, secure and efficient.
Keywords: computer network security; cryptographic protocols; image coding; private key cryptography; Kerberos protocol; authentication protocol; digital envelope; distributed network; factor authentication; security attacks; security risk; session key; user authentication; user private key; visual cryptography; Authentication; Encryption; Protocols; Servers; Visualization; Digital Envelope; Kerberos; Visual cryptography (ID#: 16-9979)


B. Bakhache and R. Rostom, “Kerberos Secured Address Resolution Protocol (KARP),” Digital Information and Communication Technology and its Applications (DICTAP), 2015 Fifth International Conference on, Beirut, 2015, pp. 210-215. doi: 10.1109/DICTAP.2015.7113201
Abstract: Network security has become more significant for users' computers, for organizations, and even in military applications; with the spread of the internet, security has turned into a considerable issue. The Address Resolution Protocol (ARP) is used by computers on a Local Area Network (LAN) to map each network address (IP) to its physical address (MAC). This protocol has been verified to function well under regular conditions. However, it is a stateless and all-trusting protocol, which makes it vulnerable to numerous ARP cache poisoning attacks such as Man-in-the-Middle (MITM) and Denial of Service (DoS); ARP spoofing is a simple attack on the data link layer that profits from the weak points of the ARP protocol. In this paper, we propose a new method called KARP (Kerberos ARP) to secure ARP by integrating the Kerberos protocol. KARP is designed to add authentication to ARP, drawing on the procedures used in the well-known Kerberos protocol. Simulation results show the advantage of KARP in securing ARP against spoofing attacks at the lowest possible computational cost.
Keywords: access protocols; security of data; ARP cache poisoning attacks; KARP; LAN; MAC; all trusting protocol; denial of service attacks; kerberos secured address resolution protocol; local area network; man-in-the-middle attacks; network security; physical address; spoofing attacks; Authentication; IP networks; Protocols; Public key; Servers; ARP; ARP Spoofing; K-ARP; Kerberos Protocol; authentication (ID#: 16-9980)
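KARP's exact message format is not given in the abstract, so the sketch below only illustrates the general idea of an authenticated ARP reply: the requester's nonce plus an HMAC tag computed under a shared (Kerberos-style) session key, so that a spoofed IP-to-MAC binding is rejected before it can poison the cache. The names and the tag construction are assumptions made for the example, not KARP's actual design.

```python
import hashlib
import hmac
import os

def make_reply(session_key: bytes, ip: str, mac: str, nonce: bytes) -> dict:
    """An ARP-style reply (an IP -> MAC binding) with an authenticator tag.
    The nonce comes from the requester, so a captured reply cannot be replayed."""
    msg = f"{ip}|{mac}".encode() + nonce
    return {"ip": ip, "mac": mac,
            "tag": hmac.new(session_key, msg, hashlib.sha256).hexdigest()}

def accept_reply(session_key: bytes, reply: dict, nonce: bytes, cache: dict) -> bool:
    """Update the ARP cache only if the tag verifies; a spoofed reply from a
    host that does not hold the session key is dropped."""
    msg = f"{reply['ip']}|{reply['mac']}".encode() + nonce
    expected = hmac.new(session_key, msg, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, reply["tag"]):
        cache[reply["ip"]] = reply["mac"]
        return True
    return False

key, nonce, cache = os.urandom(16), os.urandom(8), {}
good = make_reply(key, "10.0.0.7", "aa:bb:cc:dd:ee:ff", nonce)
print(accept_reply(key, good, nonce, cache))        # True: cache updated
spoofed = dict(good, mac="11:22:33:44:55:66")       # attacker rewrites the MAC
print(accept_reply(key, spoofed, nonce, cache))     # False: tag no longer matches
```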


T. A. T. Nguyen and T. K. Dang, “Combining Fuzzy Extractor in Biometric-Kerberos Based Authentication Protocol,” 2015 International Conference on Advanced Computing and Applications (ACOMP), Ho Chi Minh City, 2015, pp. 1-6. doi: 10.1109/ACOMP.2015.23
Abstract: Kerberos is a distributed authentication protocol which guarantees mutual authentication between client and server over an insecure network. After the identification, all subsequent communications are encrypted by session keys to ensure privacy and data integrity. In this paper, we propose a biometric authentication protocol based on the Kerberos scheme. This protocol is not only resistant to attacks on the insecure network, such as man-in-the-middle and replay attacks, but is also able to protect the biometric by using a fuzzy extractor. This technique conceals the user's biometric in a cryptographic key called the biometric key, which is used to verify a user in the authentication phase; therefore, there is no need to store users' biometrics in the database. Even if the biometric key is revealed, it is impossible for an attacker to infer the user's biometric, owing to the high security of the fuzzy extractor scheme. The protocol also supports multi-factor authentication to enhance the security of the entire system.
Keywords: client-server systems; cryptographic protocols; data integrity; data privacy; fuzzy set theory; private key cryptography; public key cryptography; Kerberos scheme; biometric key; biometric-Kerberos based authentication protocol; client-server mutual authentication; cryptographic key; distributed authentication protocol; fuzzy extractor scheme; insecure network; man-in-the-middle attack; replay attack; session keys; user biometric; Authentication; Cryptography; Databases; Mobile communication; Protocols; Servers; Kerberos; biometric; fuzzy extractor; mutual authentication; remote authentication (ID#: 16-9981)


R. Maheshwari, A. Gupta and N. Chandra, “Secure Authentication Using Biometric Templates in Kerberos,” Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, New Delhi, 2015, pp. 1247-1250. doi: (not provided)
Abstract: The paper suggests the use of biometric templates for achieving authentication in distributed systems and networks using Kerberos. The most important advantage of biometric templates is that they employ biologically inspired credentials such as the pupil, fingerprints, face, iris, hand geometry, voice, palm print, handwritten signatures and gait. Using biometric templates in Kerberos gives more reliability to client-server architectures for analysis on distributed platforms handling sensitive and confidential information. Even today, companies face the challenge of securing confidential data, although the development of technologies such as Hadoop and CDBMS was primarily oriented towards big data analysis, data management, and the conversion of huge chunks of raw data into useful information. Hence, implementing biometric templates in Kerberos makes frameworks built on the master-slave architecture more reliable by providing an added security advantage.
Keywords: biometrics (access control); client-server systems; cryptographic protocols; message authentication; parallel processing; software architecture; CDBMS; Hadoop; Kerberos; biologically inspired passwords; biometric templates; client server architectures; confidential data security; confidential information; distributed networks; distributed platform; distributed systems; face; fingerprints; gait; hand geometry; handwritten signatures; Iris; master slave architecture; palm print; pupil; secure authentication; sensitive information; voice; Authentication; Authorization; Computer architecture; Cryptography; Databases; Servers; Biometric templates; Data Security; Hadoop; Kerberos; distributed system; master slave architecture (ID#: 16-9982)


M. Colombo, S. N. Valeije and L. Segura, “Issues and Disadvantages that Prevent the Native Implementation of Single Sign On Using Kerberos on Linux Based Systems,” 2015 CHILEAN Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON), Santiago, 2015, pp. 885-889. doi: 10.1109/Chilecon.2015.7404677
Abstract: This paper discusses the problems and disadvantages users face when they attempt to use the Single Sign On mechanism with the Kerberos V5 protocol as a means of authenticating users in Linux-based environments. Known incompatibilities and security problems are exposed that explain why, today, native Single Sign On with Kerberos is not standard on Linux. Finally, the future prospects for accomplishing this goal are discussed.
Keywords: Linux; authorisation; user interfaces; Kerberos V5 protocol; Linux based systems; single sign; user authentication; Java; Protocols; Security; Servers; Silicon compounds; Standards; Authenticaton; Kerberos;  Single Sign On (ID#: 16-9983)


S. Gulhane and S. Bodkhe, “DDAS Using Kerberos with Adaptive Huffman Coding to Enhance Data Retrieval Speed and Security,” Pervasive Computing (ICPC), 2015 International Conference on, Pune, 2015, pp. 1-6. doi: 10.1109/PERVASIVE.2015.7086987
Abstract: There is an increasing trend of deploying applications over the web that store and retrieve databases to/from a particular server. As data is stored in a distributed manner, scalability, flexibility, reliability and security are important aspects to consider when establishing a data management system, and several systems for database management exist. A review of the Distributed Data Aggregation Service (DDAS) system, which relies on BlobSeer, found that it provides high performance in aspects such as data storage as a Blob (binary large object) and data aggregation; for complicated analysis and on-the-fly mining of scientific data, BlobSeer serves as a repository backend. WS-Aggregation is another framework, exposed as a web service, that carries out aggregation of data; in this framework a single-site interface is provided to clients for executing multi-site queries. Simple Storage Service (S3) is another type of storage utility, providing an always-available and low-cost service. Kerberos is a method that provides secure authentication, as only authorized clients are able to access the distributed database; it consists of four steps: authentication key exchange, ticket-granting-service key exchange, client/server service exchange, and building secure communication. Adaptive Huffman coding (also referred to as dynamic Huffman coding) is an adaptive coding technique based on Huffman coding. It permits both compression and decompression of data and allows building the code as the symbols are being transmitted, with no initial knowledge of the source distribution, which enables one-pass encoding and adaptation to changing conditions in the data.
Keywords: Huffman codes; Web services; cryptography; data mining; distributed databases; query processing; Blob; Blobseer; DDAS; Kerberos; WS-Aggregation; Web services; adaptive Huffman coding; authentication key exchange; binary large objects; client-server service exchange; data aggregation; data management system; data retrieval security; data retrieval speed; data storage; distributed data aggregation service system; distributed database; dynamic Huffman method; instinctive scientific data mining; multisite queries; one-pass cryptography; secure communication; Authentication; Catalogs; Distributed databases; Memory; Servers; XML; adaptive huffman method; blobseer; distributed database; kerberos; simple storage service; ws aggregation (ID#: 16-9984)
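The Huffman coding described in the abstract above can be illustrated in code. For brevity this sketch implements the *static* variant (the paper's adaptive/dynamic variant additionally rebuilds the code tree as each symbol arrives, with no initial frequency pass); the function names are invented for the example.

```python
import heapq
from collections import Counter

def build_codes(data: str) -> dict:
    """Build a prefix-free Huffman code from symbol frequencies: repeatedly
    merge the two lowest-frequency nodes until one tree remains."""
    heap = [[freq, i, sym] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)  # unique tie-breaker so the heap never compares payloads
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, [lo[0] + hi[0], next_id, (lo, hi)])
        next_id += 1
    codes = {}
    def walk(node, prefix):
        payload = node[2]
        if isinstance(payload, str):          # leaf: a symbol
            codes[payload] = prefix or "0"    # single-symbol edge case
        else:                                 # internal: (left, right)
            walk(payload[0], prefix + "0")
            walk(payload[1], prefix + "1")
    walk(heap[0], "")
    return codes

def encode(data: str, codes: dict) -> str:
    return "".join(codes[s] for s in data)

def decode(bits: str, codes: dict) -> str:
    inv, out, cur = {v: k for k, v in codes.items()}, [], ""
    for b in bits:
        cur += b
        if cur in inv:            # prefix-free: first match is the symbol
            out.append(inv[cur])
            cur = ""
    return "".join(out)

codes = build_codes("abracadabra")
bits = encode("abracadabra", codes)
print(decode(bits, codes) == "abracadabra", len(bits) < 8 * 11)  # True True
```

Frequent symbols get short codes, so the 11-character string compresses well below the 88 bits of its 8-bit encoding.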


H. Zhang, Q. You and J. Zhang, “A Lightweight Electronic Voting Scheme Based on Blind Signature and Kerberos Mechanism,” Electronics Information and Emergency Communication (ICEIEC), 2015 5th International Conference on, Beijing, 2015, pp. 210-214. doi: 10.1109/ICEIEC.2015.7284523
Abstract: Blind signatures have been widely used in electronic voting because of their anonymity. However, all existing electronic voting schemes based on them require maintaining a Certificate Authority to distribute key pairs to voters, which is a huge burden on the electronic voting system. In this paper, we present a lightweight electronic voting system based on blind signatures that removes the Certificate Authority by integrating the Kerberos authentication mechanism into the blind-signature electronic voting scheme. It uses symmetric keys instead of asymmetric keys to encrypt the exchanged information, avoiding the requirement for a Certificate Authority and thus greatly reducing the cost of the electronic voting system. We have implemented the proposed system and demonstrated that it not only satisfies all the criteria for a practical and secure electronic voting system but can also resist the most likely attacks depicted by three threat models.
Keywords: cryptography; digital signatures; government data processing; Kerberos authentication mechanism; anonymity; blind signature; certificate authority; encryption; exchanged information; lightweight electronic voting scheme; lightweight electronic voting system; secure electronic voting system; symmetric keys; threat models; Authentication; Cryptography; Electronic voting; Nominations and elections; Radiation detectors; Servers; Kerberos; electronic voting; security (ID#: 16-9985)
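The blind-signature primitive this scheme builds on can be sketched with textbook RSA blinding: the voter blinds the ballot with a random factor, the authority signs without seeing it, and unblinding yields an ordinary RSA signature on the ballot. The tiny parameters and function names below are illustrative assumptions; real systems use large moduli and padded hashes.

```python
from math import gcd

# Toy RSA parameters (assumed tiny primes for illustration only).
p, q = 61, 53
n, e = p * q, 17
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # authority's private signing exponent

def blind(ballot: int, r: int) -> int:
    """Voter hides the ballot: m' = m * r^e mod n."""
    assert gcd(r, n) == 1
    return (ballot * pow(r, e, n)) % n

def sign_blinded(m_blind: int) -> int:
    """Authority signs without seeing the ballot: s' = (m')^d mod n."""
    return pow(m_blind, d, n)

def unblind(s_blind: int, r: int) -> int:
    """Voter removes the blinding: s = s' * r^-1 = m^d mod n."""
    return (s_blind * pow(r, -1, n)) % n

def verify(ballot: int, s: int) -> bool:
    """Anyone can check the signature with the public key (e, n)."""
    return pow(s, e, n) == ballot % n

ballot, r = 42, 99
sig = unblind(sign_blinded(blind(ballot, r)), r)
print(verify(ballot, sig))   # → True
```

The algebra works because (m · r^e)^d · r^-1 = m^d · r · r^-1 = m^d (mod n), so the authority learns nothing about the ballot it signed.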


P. P. Gaikwad, J. P. Gabhane and S. S. Golait, “3-level Secure Kerberos Authentication for Smart Home Systems Using IoT,” Next Generation Computing Technologies (NGCT), 2015 1st International Conference on, Dehradun, 2015, pp. 262-268. doi: 10.1109/NGCT.2015.7375123
Abstract: The use of the Internet-of-Things has increased in almost all domains, and a smart home system can be built with it. This paper presents the design and an effective implementation of a smart home system using the Internet of Things. The designed system is effective and eco-friendly, with the advantage of low cost. It eases the home automation task: the user can easily monitor and control home appliances from anywhere, at any time, using the internet. An embedded system, a GPRS module and RF modules are used to build the system. Security has been strengthened on the server side by using 3-level Kerberos authentication, making the system more secure than current smart home systems. The design of the hardware and software is also presented in the paper.
Keywords: Internet of Things; authorisation; cellular radio; domestic appliances; embedded systems; home automation; packet radio networks; 3-level secure Kerberos authentication; GPRS module; Internet-of-Things; IoT; RF modules; embedded system; hardware design; home appliance monitoring; home automation task; server side; smart home systems; software design; Microcontrollers; Modems; Radio frequency; Relays; Servers; Smart homes; Switches; Kerberos; RF Identification; Smart home (ID#: 16-9986)


A. Desai, Nagegowda K S and Ninikrishna T, “Secure and QoS Aware Architecture for Cloud Using Software Defined Networks and Hadoop,” 2015 International Conference on Computing and Network Communications (CoCoNet), Trivandrum, 2015, pp. 369-373. doi: 10.1109/CoCoNet.2015.7411212
Abstract: Cloud services have become a daily norm in today's world, and many services are being migrated to the cloud. Although the cloud has its benefits, it is difficult to manage due to the sheer volume of data and the variety of services provided, so adhering to the Service Level Agreement (SLA) becomes a challenging task. The security of the cloud is also very important, since if it is broken, all the services provided by the cloud distributor are at risk. Thus there is a need for an architecture that is better equipped for security while adhering to the quality of service (QoS) written into the SLA given to the tenants of the cloud. In this paper we propose an architecture that uses software defined networking (SDN) and Hadoop to provide QoS awareness and security, with Kerberos for authentication and single sign on (SSO). We show the sequence of data flows in a cloud center and how the proposed architecture handles them and is equipped to manage the cloud, compared to the existing system.
Keywords: cloud computing; contracts; cryptographic protocols; data handling; quality of service; software defined networking; Hadoop; Kerberos; QoS aware architecture; SDN; SLA; SSO; cloud center; cloud distributor; cloud services; secure architecture; service level agreement; single sign on; software defined network; Authentication; Cloud computing; Computer architecture; Control systems; Quality of service; Servers; Big data; Quality of service (QoS); Software defined networks (SDN) (ID#: 16-9987)


S. C. Patel, R. S. Singh and S. Jaiswal, “Secure and Privacy Enhanced Authentication Framework for Cloud Computing,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, Coimbatore, 2015, pp. 1631-1634. doi: 10.1109/ECS.2015.7124863
Abstract: Cloud computing is a revolution in information technology. Cloud consumers outsource their sensitive data and personal information to the cloud provider's servers, which are not within the same trusted domain as the data owner, so the most challenging issues arising in the cloud are data security, user privacy and access control. In this paper we propose a method to achieve fine-grained security with a combined approach of PGP and Kerberos in cloud computing. The proposed method provides authentication, confidentiality, integrity, and privacy features to cloud service providers and cloud users.
Keywords: authorisation; cloud computing; data integrity; data privacy; outsourcing; personal information systems; sensitivity; trusted computing; Kerberos approach; PGP approach; access control; authentication features cloud computing; cloud consumer; cloud provider servers; cloud service providers; cloud users; confidentiality features; data security user privacy; data-owner; information technology; integrity features; personal information outsourcing; privacy enhanced authentication framework; privacy features; secure authentication framework; sensitive data outsourcing; Access control; Authentication; Cloud computing; Cryptography; Privacy; Servers; Kerberos; Pretty Good Privacy; access control; authentication; privacy; security (ID#: 16-9988)


S. V. Baghel and D. P. Theng, “A Survey for Secure Communication of Cloud Third Party Authenticator,” Electronics and Communication Systems (ICECS), 2015 2nd International Conference on, Coimbatore, 2015, pp. 51-54. doi: 10.1109/ECS.2015.7124959
Abstract: Cloud computing is an information technology in which users can remotely store their outsourced data so as to enjoy on-demand, high-quality applications and services from configurable resources. Through such data exchange, users can be relieved of the burden of local data storage and protection. Thus, enabling publicly available auditability for cloud data storage is important, so that users have the chance to check data integrity through an external audit party. To securely establish an efficient third party auditor (TPA), two primary requirements must be met: 1) the TPA should be able to audit outsourced data without demanding a local copy of the user's outsourced data; 2) the TPA process should not introduce new threats to user data privacy. To achieve these goals, this system provides a solution that uses Kerberos as a third party auditor/authenticator, the RSA algorithm for secure communication, and the MD5 algorithm to verify data integrity; data centers are used for storing data on the cloud in an effective manner within a secured environment, providing multilevel security to the database.
Keywords: authorisation; cloud computing; computer centres; data integrity; data protection; outsourcing; public key cryptography; MD5 algorithm; RSA algorithm; TPA; cloud third party authenticator; data centers; data outsourcing; external audit party; information data exchange; information technology; local data protection; local data storage; multilevel security; on demand high quality application; on demand services; secure communication; third party auditor; user data privacy; user outsourced data; Algorithm design and analysis; Authentication; Cloud computing; Heuristic algorithms; Memory; Servers; Cloud Computing; Data center; Multilevel database; Public Auditing; Third Party Auditor (ID#: 16-9989)
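The audit idea in the abstract above, a TPA verifying integrity via MD5 without holding the data itself, can be sketched as a challenge-response check. This is a deliberate simplification (a single nonce fixed at upload time; real public-auditing schemes issue a fresh challenge per audit and use homomorphic tags), and the class and function names are invented for the example.

```python
import hashlib
import os

def server_prove(data: bytes, nonce: bytes) -> str:
    """The storage server's proof for an audit challenge: MD5 over nonce || data.
    Binding the digest to a nonce stops the server from precomputing it
    over data it no longer holds intact."""
    return hashlib.md5(nonce + data).hexdigest()

class Auditor:
    """A third-party auditor holding only per-file verification digests,
    never the outsourced data itself."""
    def __init__(self):
        self.records = {}          # file_id -> (nonce, expected digest)

    def enroll(self, file_id: str, data: bytes):
        """Done once at upload time, while the data is briefly visible."""
        nonce = os.urandom(16)
        self.records[file_id] = (nonce, hashlib.md5(nonce + data).hexdigest())

    def audit(self, file_id: str, prove) -> bool:
        """Challenge the server: it must recompute the digest from the data."""
        nonce, expected = self.records[file_id]
        return prove(nonce) == expected

data = b"outsourced block"
tpa = Auditor()
tpa.enroll("blk-1", data)
print(tpa.audit("blk-1", lambda nc: server_prove(data, nc)))          # True
print(tpa.audit("blk-1", lambda nc: server_prove(b"tampered", nc)))   # False
```

Note that MD5 is collision-broken and kept here only because it is the hash the cited paper names; a deployment today would use SHA-256.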


J. Song, H. Kim and S. Park, “Enhancing Conformance Testing Using Symbolic Execution for Network Protocols,” in IEEE Transactions on Reliability, vol. 64, no. 3, pp. 1024-1037, Sept. 2015. doi: 10.1109/TR.2015.2443392
Abstract: Security protocols are notoriously difficult to get right, and most go through several iterations before their hidden security vulnerabilities, which are hard to detect, are triggered. To help protocol designers and developers efficiently find non-trivial bugs, we introduce SYMCONF, a practical conformance testing tool that generates high-coverage test input packets using a conformance test suite and symbolic execution. Our approach can be viewed as the combination of conformance testing and symbolic execution: (1) it first selects symbolic inputs from an existing conformance test suite; (2) it then symbolically executes a network protocol implementation with the symbolic inputs; and (3) it finally generates high-coverage test input packets using a conformance test suite. We demonstrate the feasibility of this methodology by applying SYMCONF to the generation of a stream of high quality test input packets for multiple implementations of two network protocols, the Kerberos Telnet protocol and Dynamic Host Configuration Protocol (DHCP), and discovering non-trivial security bugs in the protocols.
Keywords: conformance testing; cryptographic protocols; DHCP; Kerberos Telnet protocol; SYMCONF; conformance testing enhancement; dynamic host configuration protocol; hidden security vulnerability; high-coverage test input packets; network protocols; nontrivial security bugs; security protocols; symbolic execution; symbolic inputs; Computer bugs; IP networks; Interoperability; Protocols; Security; Software; Testing; Conformance testing; Kerberos; Telnet; protocol verification; test packet generation (ID#: 16-9990)



Location-Based Services 2015






Location is an important element of many wireless telephone applications. Location tracking offers potential for privacy invasion and can serve as an attack vector. For the Science of Security community, location-based services relate to cyber-physical systems, resilience, and metrics. The work cited here was presented in 2015.

M. Yassin and E. Rachid, “A Survey of Positioning Techniques and Location Based Services in Wireless Networks,” Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, Kozhikode, 2015, pp. 1-5. doi: 10.1109/SPICES.2015.7091420
Abstract: Positioning techniques are known in a wide variety of wireless radio access technologies. Traditionally, the Global Positioning System (GPS) is the most popular outdoor positioning system. Localization also exists in mobile networks such as the Global System for Mobile communications (GSM). Recently, Wireless Local Area Networks (WLANs) have become widely deployed, and they are also used for localizing wireless-enabled clients. Many techniques are used to estimate client position in a wireless network. They are based on the characteristics of the received wireless signals: power, time, or angle of arrival. In addition, hybrid positioning techniques make use of the collaboration between different wireless radio access technologies existing in the same geographical area. Client positioning allows the introduction of numerous services like real-time tracking, security alerts, informational services, and entertainment applications. Such services are known as Location Based Services (LBS), and they are useful in both the commerce and security sectors. In this paper, we explain the principles behind positioning techniques used in satellite networks, mobile networks, and Wireless Local Area Networks. We also describe hybrid localization methods that exploit the coexistence of several radio access technologies in the same region, and we classify location based services into several categories. When localization accuracy is improved, position-dependent services become more robust and efficient, and user satisfaction increases.
Keywords: Global Positioning System; direction-of-arrival estimation; mobile radio; radio access networks; wireless LAN; GPS; GSM; Global Positioning System; LBS; WLAN; angle of arrival; client position estimation; entertainment applications; geographical area; global system for mobile communication network; hybrid positioning techniques; informational services; location based services; outdoor positioning system; real-time tracking; received wireless signals; satellite networks; security alerts; security sectors; time-of-arrival; wireless local area networks; wireless radio access technology; wireless-enabled client localization; Accuracy; IEEE 802.11 Standards; Mobile communication; Mobile computing; Position measurement; Satellites; Location Based Services; Positioning techniques; Wi-Fi; hybrid positioning systems (ID#: 16-10146)
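As a concrete illustration of positioning from received-signal characteristics (not taken from the survey itself), ranges to three known anchors can be turned into a position estimate by linearizing the circle equations, i.e., classic trilateration. The anchor layout and distances below are made-up example data:

```python
import math

def trilaterate(anchors, dists):
    """Estimate a 2-D position from distances to three known anchors
    by linearizing the circle equations into a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    # Subtracting the circle equations pairwise removes the quadratic terms.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # nonzero if the anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]  # ideal, noise-free ranges
est = trilaterate(anchors, dists)
```

With noisy real-world ranges the same linear system would be solved in a least-squares sense over more than three anchors.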


U. P. Rao and H. Girme, “A Novel Framework for Privacy Preserving in Location Based Services,” 2015 Fifth International Conference on Advanced Computing & Communication Technologies, Haryana, 2015, pp. 272-277. doi: 10.1109/ACCT.2015.30
Abstract: As the availability of mobile devices has increased, many providers have started offering Location Based Services (LBS). The potential of location-aware computing is undoubted, but location awareness also brings inherent threats, perhaps the most important of which is the loss of location privacy. Tracking of location information results in unauthorized access to users' location data and can cause serious consequences. It is a challenge to develop effective security schemes that allow users to freely navigate through different applications and services while ensuring that the user's private information cannot be revealed elsewhere. This paper presents a detailed overview of existing schemes applied to Location Based Services (LBS). It also proposes a novel privacy preserving method (based on PIR) to provide location privacy to the user.
Keywords: mobile computing; security of data; telecommunication security; trusted computing; location based services; location data; location information; location-aware computing; privacy preserving; Accuracy; Collaboration; Computer architecture; Databases; Mobile communication; Privacy; Security; Location based service; Location privacy; Private Information Retrieval (PIR); Trusted third party (TTP) (ID#: 16-10147)


G. Zhuo, Q. Jia, L. Guo, M. Li and Y. Fang, “Privacy-Preserving Verifiable Proximity Test for Location-Based Services,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417154
Abstract: The prevalence of smartphones with geo-positioning functionalities gives rise to a variety of location-based services (LBSs). Proximity test, an important branch of location-based services, enables LBS users to determine whether they are in close proximity to their friends, which can be extended to numerous applications in location-based mobile social networks. Unfortunately, serious security and privacy issues may occur in the current solutions to proximity test. On the one hand, users' private location information is usually revealed to the LBS server and other users, which may lead to physical attacks on users. On the other hand, the correctness of proximity test results computed by the LBS server cannot be verified in the existing schemes, and thus the credibility of the LBS is greatly reduced. Besides, privacy should be defined by users themselves, not by the LBS server. In this paper, we propose a privacy-preserving verifiable proximity test for location-based services. Our scheme enables LBS users to verify the correctness of proximity test results from the LBS server without revealing their location information. We show the security, efficiency, and feasibility of our proposed scheme through detailed performance evaluation.
Keywords: data privacy; mobile computing; smart phones; social networking (online); geo-positioning; location-based mobile social networks; location-based services; privacy-preserving verifiable proximity test; private location information; smartphones; Cryptography; Mobile radio mobility management; Privacy; Protocols; Servers (ID#: 16-10148)
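The abstract does not spell out the protocol; as a much simpler, non-verifiable baseline for private proximity testing (purely illustrative, not the authors' scheme), two friends can compare salted hashes of their quantized grid cells, so a matching token reveals only "same cell", not the coordinates:

```python
import hashlib

def cell_token(lat, lon, cell_deg, salt):
    """Quantize a coordinate to a grid cell and hash it with a shared salt:
    equal tokens mean 'same cell' without exposing the raw coordinates."""
    cell = (int(lat // cell_deg), int(lon // cell_deg))
    return hashlib.sha256(f"{salt}:{cell}".encode()).hexdigest()

salt = "shared-secret"  # agreed between the friends, unknown to the server
alice = cell_token(48.1374, 11.5755, 0.01, salt)
bob   = cell_token(48.1371, 11.5760, 0.01, salt)   # same 0.01-degree cell
carol = cell_token(40.7128, -74.0060, 0.01, salt)  # far away
```

Note the boundary artifact: neighbors straddling a cell edge compare unequal, which is one reason real schemes (including verifiable ones like the paper's) use cryptographic protocols rather than plain cell hashing.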


P. P. Lindenberg, Bo-Chao Cheng and Yu-Ling Hsueh, “Novel Location Privacy Protection Strategies for Location-Based Services,” 2015 Seventh International Conference on Ubiquitous and Future Networks, Sapporo, 2015, pp. 866-870.
doi: 10.1109/ICUFN.2015.7182667
Abstract: The usage of Location-Based Services (LBSs) holds a potential privacy issue when people exchange their locations for information related to those locations. While most people perceive these information exchange services as useful, others do not, because an adversary might take advantage of the users' sensitive data. In this paper, we propose k-path, an algorithm for privacy protection in continuous location-tracking LBSs. We take inspiration from k-anonymity to hide the user's location or trajectory among k locations or trajectories. We introduce our simulator as a tool to test several strategies for hiding users' locations. We then evaluate the effectiveness of several approaches using the simulator and data provided by the GeoLife data set.
Keywords: mobile communication; telecommunication security; GeoLife data set; LBS; continuous location tracking; information exchange services; location based services; mobile devices; novel location privacy protection strategies; privacy protection; user sensitive data; Data privacy; History; Mobile radio mobility management; Privacy; Sensitivity; Trajectory; Uncertainty; Location-Based Service; Privacy; k-anonymity (ID#: 16-10149)
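As a minimal illustration of the k-anonymity idea that k-path builds on (this is not the paper's algorithm, and the uniform dummy sampling is an assumption), a client can report its true location hidden among k-1 random dummies:

```python
import random

def anonymize(true_loc, k, area, rng):
    """Return the true location hidden among k-1 dummies drawn uniformly
    from a bounding area, shuffled so list position leaks nothing."""
    (min_x, min_y), (max_x, max_y) = area
    dummies = [(rng.uniform(min_x, max_x), rng.uniform(min_y, max_y))
               for _ in range(k - 1)]
    report = dummies + [true_loc]
    rng.shuffle(report)
    return report

rng = random.Random(7)  # seeded only to make the example reproducible
report = anonymize((3.2, 4.8), 5, ((0, 0), (10, 10)), rng)
```

A smarter dummy selector (as in k-path or the caching-aware work below) would pick dummies that are plausible given maps and movement history, since uniform dummies are often easy for an adversary to rule out.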


B. Niu, Q. Li, X. Zhu, G. Cao and H. Li, “Enhancing Privacy Through Caching in Location-Based Services,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 1017-1025. doi: 10.1109/INFOCOM.2015.7218474
Abstract: Privacy protection is critical for Location-Based Services (LBSs). In most previous solutions, users query service data from the untrusted LBS server when needed, and discard the data immediately after use. However, the data can be cached and reused to answer future queries. This prevents some queries from being sent to the LBS server and thus improves privacy. Although a few previous works recognize the usefulness of caching for better privacy, they use caching in a pretty straightforward way, and do not show the quantitative relation between caching and privacy. In this paper, we propose a caching-based solution to protect location privacy in LBSs, and rigorously explore how much caching can be used to improve privacy. Specifically, we propose an entropy-based privacy metric which for the first time incorporates the effect of caching on privacy. Then we design two novel caching-aware dummy selection algorithms which enhance location privacy through maximizing both the privacy of the current query and the dummies' contribution to cache. Evaluations show that our algorithms provide much better privacy than previous caching-oblivious and caching-aware solutions.
Keywords: data privacy; entropy; query processing; caching-aware dummy selection; caching-based solution; entropy-based privacy metric; location-based services; privacy enhancement; privacy protection; untrusted LBS server; users query service data; Algorithm design and analysis; Computers; Entropy; Measurement; Mobile communication; Privacy; Servers (ID#: 16-10150)
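The entropy-based metric can be sketched directly: privacy is the Shannon entropy of the attacker's belief over the k reported locations, maximal when all candidates look equally plausible. The example distributions below are assumptions for illustration:

```python
import math

def entropy_bits(probs):
    """Shannon entropy (in bits) of the attacker's belief distribution
    over the candidate locations in a query."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# If all k = 8 reported locations look equally plausible, privacy is maximal:
uniform = [1 / 8] * 8
# If poorly chosen dummies are easy to rule out, the attacker's belief
# concentrates on a few candidates and the entropy (privacy) drops:
skewed = [0.65, 0.2, 0.05, 0.05, 0.05, 0.0, 0.0, 0.0]
```

The paper's contribution is to fold the cache's effect into such a metric, so dummies are chosen to maximize both current-query entropy and future cache hits.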


B. Niu, X. Zhu, W. Li, H. Li, Y. Wang and Z. Lu, “A Personalized Two-Tier Cloaking Scheme for Privacy-Aware Location-Based Services,” Computing, Networking and Communications (ICNC), 2015 International Conference on, Garden Grove, CA, 2015,
pp. 94-98. doi: 10.1109/ICCNC.2015.7069322
Abstract: The ubiquity of modern mobile devices with GPS modules and Internet connectivity via 3G/4G techniques has resulted in the rapid development of Location-Based Services (LBSs). However, users enjoy the convenience provided by the untrusted LBS server at the cost of their privacy. To protect users' sensitive information against adversaries with side information, we design a personalized spatial cloaking scheme, termed TTcloak, which simultaneously provides k-anonymity for the user's location privacy, l-diversity for query privacy, and a cloaking region of the desired size for mobile users in LBSs. TTcloak uses a Dummy Query Determining (DQD) algorithm and a Dummy Location Determining (DLD) algorithm to find a set of realistic cells as candidates, and employs a CR-refinement Module (CRM) to guarantee that dummy users are assigned to a cloaking region of the desired size. Finally, thorough security analysis and empirical evaluation results validate our proposed TTcloak.
Keywords: 3G mobile communication; 4G mobile communication; Global Positioning System; Internet; data privacy; mobile computing; mobility management (mobile radio); telecommunication security; telecommunication services; 3G techniques; 4G techniques; CR-refinement module; CRM; DLD algorithm; DQD algorithm; GPS modules; Internet connectivity; LBS server; TTcloak; cloaking region; dummy location determining algorithm; dummy query determining algorithm; dummy users; mobile users; modern mobile devices; personalized spatial cloaking scheme; personalized two-tier cloaking scheme; privacy-aware location-based services; query privacy; security analysis; user location privacy; Algorithm design and analysis; Complexity theory; Entropy; Mobile radio mobility management; Privacy; Servers (ID#: 16-10151)
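The abstract does not give TTcloak's internals, but the generic spatial-cloaking step it refines can be sketched as growing a square region of grid cells around the user until it covers at least k users and the desired number of cells. The grid model and occupancy map here are hypothetical:

```python
def cloak(user_cell, users_per_cell, k, min_cells):
    """Grow a square region of grid cells centered on user_cell until it
    contains at least k users and at least min_cells cells."""
    cx, cy = user_cell
    r = 0
    while True:
        cells = [(x, y) for x in range(cx - r, cx + r + 1)
                        for y in range(cy - r, cy + r + 1)]
        population = sum(users_per_cell.get(c, 0) for c in cells)
        if population >= k and len(cells) >= min_cells:
            return cells, population
        r += 1  # expand the region by one ring of cells

occupancy = {(0, 0): 1, (0, 1): 2, (1, 1): 3, (2, 2): 4}
region, pop = cloak((0, 0), occupancy, k=5, min_cells=9)
```

Schemes like TTcloak go further by filling the region with realistic dummy users instead of relying on genuine neighbors being present.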


A. K. Tyagi and N. Sreenath, “Location Privacy Preserving Techniques for Location Based Services over Road Networks,” Communications and Signal Processing (ICCSP), 2015 International Conference on, Melmaruvathur, 2015, pp. 1319-1326. doi: 10.1109/ICCSP.2015.7322723
Abstract: With the rapid development of wireless and mobile technologies, the privacy of the personal location information of vehicular ad-hoc network (VANET) users in location-based services (LBSs) is becoming an increasingly important issue. While LBSs provide enhanced functionalities, they open up new vulnerabilities that can be exploited to cause security and privacy breaches. During communication with LBSs, individuals (vehicle users) face privacy risks (for example location privacy, identity privacy, and data privacy) when providing personal location data to potentially untrusted LBSs. However, as vehicle users with mobile (or wireless) devices are highly autonomous and heterogeneous, it is challenging to design generic location privacy protection techniques with the desired level of protection. Location privacy is an important issue in vehicular networks since knowledge of a vehicle's location can result in leakage of sensitive information. This paper focuses on and discusses both potential location privacy threats and preserving mechanisms in LBSs over road networks. The research carries significant intellectual merit and potential broader impacts: a) it investigates the impact of inferential attacks (for example inference, position co-relation, transition, and timing attacks) on LBSs for VANET users, and demonstrates the vulnerability of using long-term pseudonyms (or other approaches such as silent periods or random encryption periods) for camouflaging users' real identities; b) it discusses an effective and extensible location privacy architecture that combines the mix zone model with other approaches to protect location privacy; c) it addresses the location privacy preservation problem in detail from a novel angle and provides a solid foundation for future research on protecting users' location information.
Keywords: data privacy; mobile computing; risk management; road traffic; security of data; telecommunication security; vehicular ad hoc networks; VANET;  extensible location privacy architecture; identity privacy; inference attack; intellectual merits; location privacy preserving techniques; location privacy threats; location-based services; long-term pseudonyms; mix zone model; mobile technologies; personal location information; position correlation attack; privacy breach; privacy risks; road networks; security breach; timing attack; transition attack; vehicle ad-hoc network; wireless technologies; Communication system security; Mobile communication; Mobile computing; Navigation; Privacy; Vehicles; Wireless communication; Location privacy; Location-Based Service; Mix zones; Mobile networks; Path confusion; Pseudonyms; k-anonymity (ID#: 16-10152)


D. Liao, H. Li, G. Sun and V. Anand, “Protecting User Trajectory in Location-Based Services,” 2015 IEEE Global Communications Conference (GLOBECOM), San Diego, CA, 2015, pp. 1-6. doi: 10.1109/GLOCOM.2015.7417512
Abstract: Preserving user location and trajectory privacy while using a location-based service (LBS) is an important issue. To address this problem, we first construct three kinds of attack models that can expose a user's trajectory or path while the user is sending continuous queries to an LBS server. Then we propose the k-anonymity trajectory (KAT) algorithm, which is suitable for both single and continuous queries. Different from existing works, the KAT algorithm selects k-1 dummy locations using the sliding-window-based k-anonymity mechanism when the user is making single queries and selects k-1 dummy trajectories using the trajectory selection mechanism for continuous queries. We evaluate and validate the effectiveness of the proposed algorithm by conducting simulations for the single and continuous query scenarios.
Keywords: data privacy; mobility management (mobile radio); telecommunication security; LBS server; attack models; continuous queries; k-1 dummy locations; k-1 dummy trajectories; k-anonymity trajectory algorithm; location-based services; query scenarios; sliding window based k-anonymity mechanism; trajectory privacy; trajectory selection mechanism; user location; user trajectory; Algorithm design and analysis; Entropy; Handheld computers; Mobile radio mobility management; Privacy; Probability; Trajectory (ID#: 16-10153)


W. Li, B. Niu, H. Li and F. Li, “Privacy-Preserving Strategies in Service Quality Aware Location-Based Services,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7328-7334. doi: 10.1109/ICC.2015.7249497
Abstract: The popularity of Location-Based Services (LBSs) has resulted in serious privacy concerns recently. Mobile users may lose their privacy while enjoying various social activities due to untrusted LBS servers. Many Privacy Protection Mechanisms (PPMs) employing different strategies have been proposed in the literature, which come at the cost of system overhead, service quality, or both. In this paper, we design privacy-preserving strategies for both users and adversaries in service quality aware LBSs. Different from existing approaches, we first define the Fine-Grained Side Information (FGSI), point out its importance over the existing concept of side information, and propose a Dual-Privacy Metric (DPM) and a Service Quality Metric (SQM). Then, we build analytical frameworks that provide privacy-preserving strategies for mobile users and adversaries to achieve their respective goals. Finally, the evaluation results show the effectiveness of our proposed frameworks and strategies.
Keywords: data protection; mobility management (mobile radio); quality of service; DPM; FGSI; LBS; PPM; SQM; dual-privacy metric; fine-grained side information; mobile user; privacy protection mechanism; privacy-preserving strategy; service quality aware location-based service; service quality metric; Information systems; Measurement; Mobile radio mobility management; Privacy; Security; Servers (ID#: 16-10154)


S. Ishida, S. Tagashira, Y. Arakawa and A. Fukuda, “On-demand Indoor Location-Based Service Using Ad-hoc Wireless Positioning Network,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1005-1013. doi: 10.1109/HPCC-CSS-ICESS.2015.111
Abstract: WiFi-based localization is a promising candidate for indoor localization because the localization systems can be implemented on WiFi devices widely used today. In this paper, we present a distributed localization system to realize on-demand location-based services. We define characteristics of on-demand from both the service providers' and users' perspectives. From the service providers' perspective, we utilize our previous work, a WiFi ad-hoc wireless positioning network (AWPN). From the users' perspective, we address two challenges: the elimination of a user-application installation process and a reduction in network traffic. We design a localization system using the AWPN and provide a location-based service as a Web service, which allows the use of Web browsers. The proposed localization system is built on WiFi access points and distributes network traffic over the network. We describe the design and implementation and include a design analysis of the proposed localization system. Experimental evaluations confirm that the proposed localization system can localize a user device within 220 milliseconds. We also perform simulations and demonstrate that the proposed localization system reduces network traffic by approximately 24% compared to a centralized localization system.
Keywords: Web services; ad hoc networks; wireless LAN; AWPN; Web browsers; Web service; WiFi ad-hoc wireless positioning network; WiFi-based localization; ad-hoc wireless positioning network; distributed localization system; location-based service; on-demand indoor location-based service; Accuracy; Ad hoc networks; IEEE 802.11 Standard; Mobile radio mobility management; Web servers; Wireless communication; WiFi mesh network; indoor localization; location-based Web service; on-demand (ID#: 16-10155)


D. Goyal and M. B. Krishna, “Secure Framework for Data Access Using Location Based Service in Mobile Cloud Computing,” 2015 Annual IEEE India Conference (INDICON), New Delhi, 2015, pp. 1-6. doi: 10.1109/INDICON.2015.7443761
Abstract: Mobile Cloud Computing (MCC) extends the services of cloud computing with respect to mobility of the cloud and the user device. MCC offloads computation and storage to the cloud, since mobile devices are resource constrained with respect to computation, storage, and bandwidth. A task can be partitioned so that different sub-tasks are offloaded to the cloud to achieve better performance. Security and privacy are the primary factors affecting the adoption of MCC applications. In this paper we present a security framework for data access that uses a Location-Based Service (LBS) as an additional layer in the authentication process: users with valid credentials at a location within the organization are enabled as authenticated users.
Keywords: authorisation; cloud computing; data privacy; message authentication; mobile computing; resource allocation; LBS; MCC; data access; location based service; mobile cloud computing; security framework; task partitioning; user authentication process; Cloud computing; Mobile communication; Mobile computing; Organizations; Public key; Cloud Computing; Encryption; Geo-encryption; Location-based Service; Mobile Cloud Computing; Security in MCC (ID#: 16-10156)


Anju S and J. Joseph, “Location Based Service Applications to Secure Locations with Dual Encryption,” Innovations in Information, Embedded and Communication Systems (ICIIECS), 2015 International Conference on, Coimbatore, 2015, pp. 1-4. doi: 10.1109/ICIIECS.2015.7193061
Abstract: Location Based Service Applications (LBSAs) are becoming a part of our lives. Through these applications, such as Foursquare, users can interact with the physical world and get the data they need. But such applications can also be misused in many ways, extracting users' personal information and exposing them to many threats. To improve location privacy we use the LocX technique, in which the location and the data related to it are encrypted before being stored on different servers. A third party therefore cannot track the location from the server, and the server itself cannot see the location. In addition, to improve the security of location points and data points, we introduce a dual encryption method in LocX: asymmetric keys are used to encrypt the data with two keys, a public key and the user's private key, whereas plain LocX uses random, inexpensive symmetric keys.
Keywords: data privacy; mobile computing; mobility management (mobile radio); private key cryptography; public key cryptography; Foursquare; LBSA; LocX random inexpensive symmetric keys; LocX technique; dual encryption method; location based service applications; location privacy; personal information; public key; user private key; Encryption; Indexes; Privacy; Public key; Servers; Asymmetric; Encrypt; Location Privacy (ID#: 16-10157)


V. A. Kachore, J. Lakshmi and S. K. Nandy, “Location Obfuscation for Location Data Privacy,” 2015 IEEE World Congress on Services, New York City, NY, 2015, pp. 213-220. doi: 10.1109/SERVICES.2015.39
Abstract: Advances in wireless internet, sensor technologies, mobile technologies, and global positioning technologies have renewed interest in location based services (LBSs) among mobile users. LBSs on smartphones allow consumers to locate nearby products and services in exchange for their location information. Precise location data helps accurate query processing in LBSs, but it may lead to severe security violations and privacy threats, as intruders can easily determine a user's common paths or actual locations. Encryption is the most explored approach for ensuring security: it can protect against third-party attacks, but it cannot protect against privacy threats on the server, which can still obtain the user's location and use it for malicious purposes. Location obfuscation is a technique to protect user privacy by altering the locations of users while preserving the server's ability to compute, over the obfuscated location information, the few mathematical functions that are useful to the user. This work mainly concentrates on LBSs that need to know the distance travelled by the user in order to provide their services, and it compares encryption and obfuscation techniques. The study proposes various methods of location obfuscation for GPS location data, which are used to hide the user's path and location from the service provider. Our work shows that user privacy can be maintained without affecting LBS results and without incurring significant overheads.
Keywords: Global Positioning System; cryptography; data protection; mobile computing; query processing; smart phones; GPS location data privacy; LBS; encryption; location based service; location obfuscation; mobile user; query processing; smart phone; user privacy protection; Data privacy; Encryption;  Privacy; Servers; Location Based Services; Location data protection; Path Obfuscation Techniques; User Privacy (ID#: 16-10158)
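The paper's premise is that the LBS only needs the distance travelled. One simple way to realize that (an assumption here, not necessarily one of the paper's methods) is a rigid transform: rotating and translating the whole path moves every reported point while preserving all pairwise distances, and hence the path length the server computes:

```python
import math

def rigid_obfuscate(path, angle, dx, dy):
    """Rotate a path by `angle` radians and translate it by (dx, dy):
    every point changes, but all pairwise distances are preserved."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in path]

def path_length(path):
    """Total length of a polyline, the quantity the LBS actually needs."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

path = [(0.0, 0.0), (3.0, 4.0), (3.0, 10.0)]
obfuscated = rigid_obfuscate(path, angle=1.1, dx=250.0, dy=-80.0)
```

The server sees only the obfuscated points yet computes the same travelled distance; the transform parameters stay on the client.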


K. Kasori and F. Sato, “Location Privacy Protection Considering the Location Safety,” Network-Based Information Systems (NBiS), 2015 18th International Conference on, Taipei, 2015, pp. 140-145. doi: 10.1109/NBiS.2015.24
Abstract: With rapid advances in mobile communication technologies and the continued price reduction of location tracking devices, location-based services (LBSs) are widely recognized as an important feature of the future computing environment. Though LBSs provide many new opportunities, the ability to locate mobile users also presents a new threat: the intrusion of location privacy. Many different techniques for securing location privacy have been proposed, for instance the concepts of the silent period, the dummy node, and the cloaking region. However, many of these approaches share a problem: the quality of service (QoS) of the LBS decreases when anonymity is improved, and anonymity degrades when QoS is improved. In this paper, we propose a location privacy scheme that utilizes a cloaking region together with a regional safety degree. The regional safety degree is a measure of how much anonymity the location information needs; if a node is in a place of high regional safety, the node does not need any anonymization. The proposed method is evaluated by the quality of the location information and the location safety. The location safety is calculated by multiplying the regional safety degree and the identification level. Our simulation results show that the proposed method improves the quality of the location information without degrading location safety.
Keywords: data privacy; mobile computing; security of data; tracking; LBSs; cloaking region; cloaking-region; dummy node; identification level; location information anonymity; location privacy intrusion; location privacy protection; location safety; location tracking devices; location-based services; mobile communication technologies; price reduction; regional safety degree; silent period; Measurement; Mobile radio mobility management; Privacy; Quality of service; Safety; Servers; k-anonymity; location anonymization; location based services (ID#: 16-10159)
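The abstract defines location safety as the product of the regional safety degree and the identification level, and exempts nodes in highly safe regions from anonymization. A toy sketch of that decision rule (the normalization to [0, 1] and the threshold value are assumptions):

```python
def location_safety(regional_safety, identification_level):
    """Location safety as described in the abstract: the product of the
    regional safety degree and the identification level (both in [0, 1])."""
    return regional_safety * identification_level

def needs_anonymization(regional_safety, threshold=0.8):
    # A node in a region of high regional safety can skip anonymization,
    # preserving full quality of the location information there.
    return regional_safety < threshold

safe_area = location_safety(0.9, 1.0)    # safe region, fully identifiable node
risky_area = location_safety(0.3, 1.0)   # risky region: cloaking is applied
```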


B. G. Patel, V. K. Dabhi, U. Tyagi and P. B. Shah, “A Survey on Location Based Application Development for Android Platform,” Computer Engineering and Applications (ICACEA), 2015 International Conference on Advances in, Ghaziabad, 2015, pp. 731-739. doi: 10.1109/ICACEA.2015.7164786
Abstract: Android is currently the fastest growing mobile platform, and one of the fastest growing areas in Android applications is Location Based Services (LBS). LBS provides information services based on the current or a known location and is supported by mobile positioning systems. Presently, MOSDAC (Meteorological and Oceanographic Satellite Data Archival Centre) disseminates weather forecast information through the web. Android is one of the most widely used mobile operating systems, which makes it a natural platform on which to develop such an application. The application for disseminating location-based weather forecasts is a client-server application on the Android platform; it provides weather forecast information according to the user's location or a location of interest. While developing a client-server application, the communication between client and database server becomes imperative. This paper presents a detailed analysis for choosing the appropriate type of web service, data exchange protocol, data exchange format, and mobile positioning technology for a client-server application. It also highlights issues such as memory capacity, security, poor response time, and battery consumption in mobile devices. The paper explores effective options for establishing the dissemination service on smartphones running Android.
Keywords: Global Positioning System; Web services; client-server systems; electronic data interchange; information dissemination; protocols; smart phones; LBS; MOSDAC; Meteorological and Oceanographic Satellite Data Archival Centre; Web service; android applications; battery consumption; client-server application; data exchange format; data exchange protocols; database server; dissemination service; information services; location based application development; location based service; memory capacity; mobile OS; mobile devices; mobile positioning system; response time; smart phones; weather forecast information dissemination; Batteries; Mobile communication; Simple object access protocol; Smart phones; XML; Android; Battery Consumption; Location Based Services; Response time; Security (ID#: 16-10160)


Z. Riaz, F. Dürr and K. Rothermel, “Optimized Location Update Protocols for Secure and Efficient Position Sharing,” Networked Systems (NetSys), 2015 International Conference and Workshops on, Cottbus, 2015, pp. 1-8. doi: 10.1109/NetSys.2015.7089083
Abstract: Although location-based applications have seen fast growth in the last decade due to pervasive adoption of GPS enabled mobile devices, their use raises privacy concerns. To mitigate these concerns, a number of approaches have been proposed in literature, many of which rely on a trusted party to regulate user privacy. However, trusted parties are known to be prone to data breaches [1]. Consequently, a novel solution, called Position Sharing, was proposed in [2] to secure location privacy in fully non-trusted systems. In Position Sharing, obfuscated position shares of the actual user location are distributed among several location servers, each from a different provider, such that there is no single point of failure if the servers get breached. While Position Sharing can exhibit useful properties such as graceful degradation of privacy, it incurs significant communication overhead as position shares are sent to several location servers instead of one. To this end, we propose a set of location update protocols to minimize the communication overhead of Position Sharing while maintaining the privacy guarantees that it originally provided. As we consider the scenario of frequent location updates, i.e., movement trajectories, our protocols additionally add protection against an attack based on spatio-temporal correlation in published locations. By evaluating on a set of real-world GPS traces, we show that our protocols can reduce the communication overhead by 75% while significantly improving the security guarantees of the original Position Sharing algorithm.
Keywords: Global Positioning System; correlation theory; mobility management (mobile radio); protocols; security of data; GPS; Position Sharing algorithm; communication overhead minimization; data breach; location privacy security; location server; location update protocol optimization; mobile device; movement trajectory; spatio-temporal correlation; trusted party; user privacy; Correlation; Dead reckoning; Mobile handsets; Privacy; Protocols; Servers; dead reckoning; efficient communication; location-based services; privacy; selective update (ID#: 16-10161)
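Position Sharing distributes obfuscated position shares over several providers' servers so that no single breach reveals the location. An additive-offset sketch of that idea (illustrative only; the published algorithm uses graded obfuscation with graceful degradation of privacy, not the exact cancellation shown here):

```python
import random

def make_shares(loc, n, spread, rng):
    """Split a location into n displaced shares: each share alone is a
    randomly offset position, but the offsets across all shares sum to
    zero, so combining every share recovers the true location."""
    x, y = loc
    offsets = [(rng.uniform(-spread, spread), rng.uniform(-spread, spread))
               for _ in range(n - 1)]
    last = (-sum(dx for dx, _ in offsets), -sum(dy for _, dy in offsets))
    offsets.append(last)
    return [(x + dx, y + dy) for dx, dy in offsets]

def combine(shares, n):
    # The offsets cancel, so the mean of the shares is the true location.
    return (sum(x for x, _ in shares) / n, sum(y for _, y in shares) / n)

rng = random.Random(42)  # seeded only for reproducibility of the example
shares = make_shares((48.137, 11.575), 4, spread=0.05, rng=rng)
recovered = combine(shares, 4)
```

Each share would go to a different provider's server; an attacker breaching one server learns only a position displaced by up to the spread, which is the "no single point of failure" property the update protocols in this paper aim to preserve cheaply.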


L. Haukipuro, I. M. Shabalina and M. Ylianttila, “Preventing Social Exclusion for Persons with Disabilities Through ICT Based Services,” Information, Intelligence, Systems and Applications (IISA), 2015 6th International Conference on, Corfu, 2015, pp. 1-7. doi: 10.1109/IISA.2015.7388102
Abstract: The paper addresses opportunities that the fast diffusion of Information and Communication Technology is opening for people with different levels of physical restrictions, or disabilities. For these people, mobile technology not only allows ubiquitous communication but also anytime access to services that are vital for their security and autonomy, thus preventing social exclusion. More specifically, the paper describes an evaluation study and four ICT-based services developed to prevent social exclusion and ease the everyday life of persons with disabilities. Findings of the study show that there is an enormous need for services aimed at persons with disabilities to promote their equal status in society.
Keywords: handicapped aids; mobile computing; ICT based services; information and communication technology; mobile technology; persons with disabilities; social exclusion; Cities and towns; Cultural differences; Government; Information and communication technology; Interviews; Mobile communication; Navigation; Location Based Services; Preventing Social Exclusion; Services for Disabled; Social Services (ID#: 16-10162)


M. Maier, L. Schauer and F. Dorfmeister, “ProbeTags: Privacy-Preserving Proximity Detection Using Wi-Fi Management Frames,” Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, Abu Dhabi, 2015, pp. 756-763. doi: 10.1109/WiMOB.2015.7348038
Abstract: Since the beginning of the ubiquitous computing era, context-aware applications have been envisioned and pursued, with location and especially proximity information being one of the primary building blocks. To date, there is still a lack of feasible solutions to perform proximity tests between mobile entities in a privacy-preserving manner, i.e., one that does not disclose one's location in case the other party is not in proximity. In this paper, we present our novel approach based on location tags built from surrounding Wi-Fi signals originating only from mobile devices. Since the set of mobile devices at a given location changes over time, this approach ensures the user's privacy when performing proximity tests. To improve the robustness of similarity calculations, we introduce a novel extension of the commonly used cosine similarity measure to allow for weighting its components while preserving the signal strength semantics. Our system is evaluated extensively in various settings, ranging from office scenarios to crowded mass events. The results show that our system allows for robust short-range proximity detection while preserving the participants' privacy.
Keywords: computer network management; computer network security; data privacy; mobile computing; wireless LAN; ProbeTags; Wi-Fi management frames; Wi-Fi signals; context-aware applications; cosine similarity measure; location tags; mobile devices; mobile entities; privacy-preserving proximity detection; proximity tests; signal strength semantics; similarity calculation robustness improvement; ubiquitous computing era; Euclidean distance; IEEE 802.11 Standard; Mobile communication; Mobile computing; Mobile handsets; Privacy; Wireless communication; 802.11; location-based services; proximity detection (ID#: 16-10163)
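The weighted extension of cosine similarity the authors describe can be approximated as follows (a generic weighted cosine over signal-strength fingerprints; the paper's exact weighting scheme is not given in the abstract, so the weight handling here is an assumption):

```python
import math

def weighted_cosine(a, b, w):
    """Weighted cosine similarity between two Wi-Fi fingerprints.

    a, b map device MAC -> signal strength (shifted to positive values);
    w maps MAC -> weight, defaulting to 1.0. Devices absent from a
    fingerprint contribute 0, so strength semantics are preserved.
    """
    macs = set(a) | set(b)
    dot = sum(w.get(m, 1.0) * a.get(m, 0.0) * b.get(m, 0.0) for m in macs)
    na = math.sqrt(sum(w.get(m, 1.0) * a.get(m, 0.0) ** 2 for m in macs))
    nb = math.sqrt(sum(w.get(m, 1.0) * b.get(m, 0.0) ** 2 for m in macs))
    return dot / (na * nb) if na and nb else 0.0

# two scans taken close together share devices with similar strengths
tag1 = {"aa:bb": 40.0, "cc:dd": 55.0}
tag2 = {"aa:bb": 42.0, "cc:dd": 50.0}
similarity = weighted_cosine(tag1, tag2, {"aa:bb": 2.0})
```

A high similarity between two such "ProbeTags" indicates proximity without either party revealing an absolute position.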


S. M. H. Sharhan and S. Zickau, “Indoor Mapping for Location-Based Policy Tooling Using Bluetooth Low Energy Beacons,” Wireless and Mobile Computing, Networking and Communications (WiMob), 2015 IEEE 11th International Conference on, Abu Dhabi, 2015, pp. 28-36. doi: 10.1109/WiMOB.2015.7347937
Abstract: Most service providers and data owners desire to control the access to sensitive resources. The user may express restrictions, such as who can access the resources, at which point in time and from which location. However, the location requirement is difficult to achieve in an indoor environment. Determining user locations inside of buildings is based on a variety of solutions. Moreover, current access control solutions do not consider restricting access to sensitive data in indoor environments. This article presents a graphical web interface based on OpenStreetMap (OSM), called Indoor Mapping Web Interface (IMWI), which is designed to use indoor maps and floor plans of several real-world objects, such as hospitals, universities and other premises. By placing Bluetooth Low Energy (BLE) beacons inside buildings and by labeling them on digital indoor maps, the web interface back-end will provide the stored location data within an access control environment. Using the stored information will enable users to express indoor access control restrictions. Moreover, the IMWI enables and ensures the accurate determination of a user device location in indoor scenarios. By defining several scenarios the usability of the IMWI and the validity of the policies have been evaluated.
Keywords: Bluetooth; indoor radio; Indoor Mapping Web Interface; OpenStreetMap; access control environment; bluetooth low energy beacons; device location; indoor access control; indoor mapping; indoor scenarios; location-based policy tooling; service providers; Access control; Communication system security; Medical services; Wireless LAN; Wireless communication; Wireless sensor networks; Access Control; Bluetooth Low Energy Beacons; Indoor Mapping; Location-based Services; XACML Policies (ID#: 16-10164)


C. Piao and X. Li, “Privacy Preserving-Based Recommendation Service Model of Mobile Commerce and Anonimity Algorithm,” e-Business Engineering (ICEBE), 2015 IEEE 12th International Conference on, Beijing, 2015, pp. 420-427. doi: 10.1109/ICEBE.2015.77
Abstract: The wide use of location-based services in mobile commerce has brought great convenience to people's work and lives, while the risk of privacy disclosure has received growing attention from academia and industry. After analyzing the privacy issues in mobile commerce, a cloud-based privacy-preserving recommendation service framework is established. According to the defined personalized privacy requirements of mobile users, the (K, L, P)-anonymity model is formally described. Based on the anonymity model, a dynamically structured minimum anonymous sets algorithm, DSMAS, is proposed, which can be used to protect the location, identifier and other sensitive information of mobile users on the road network. Finally, based on a real road network and generated privacy profiles of the mobile users, the feasibility of the algorithm is validated by experimental analysis using metrics including information entropy, query cost, anonymization time and dummy ratio.
Keywords: cloud computing; data privacy; entropy; mobile commerce; recommender systems; security of data; anonymization time; dummy ratio; information entropy; privacy preserving-based recommendation service model; query cost; Business; Cloud computing; Mobile communication; Mobile computing; Privacy; Roads; Sensitivity; Anonymity model; Cloud platform; Location-based service; Privacy preserving algorithm (ID#: 16-10165)


R. Beniwal, P. Zavarsky and D. Lindskog, “Study of Compliance of Apple's Location Based APIs with Recommendations of the IETF Geopriv,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 214-219. doi: 10.1109/ICITST.2015.7412092
Abstract: Location Based Services (LBS) are services offered by smartphone applications that use device location data to offer location-related services. Privacy of location information is a major concern in LBS applications. This paper compares the location APIs of iOS with the IETF Geopriv architecture to determine what mechanisms are in place to protect the location privacy of an iOS user. The focus of the study is on the distribution phase of the Geopriv architecture and its applicability in enhancing location privacy on iOS mobile platforms. The presented review shows that two iOS API features, Geocoder and the ability to turn off location services, provide some degree of location privacy for iOS users. However, only a limited number of functionalities can be considered compliant with Geopriv's recommendations. The paper also presents possible ways to address the limited location privacy offered by iOS mobile devices based on Geopriv recommendations.
Keywords: application program interfaces; data privacy; iOS (operating system); recommender systems; smart phones; Apple location based API; Geocoder; Geopriv recommendation; IETF Geopriv architecture; LBS; device location data; distribution phase; iOS mobile device; iOS mobile platform; iOS user; location based service; location information privacy; location privacy; location-related service; off location service; smart phone application; Global Positioning System; Internet; Mobile communication; Operating systems; Privacy; Servers; Smart phones; APIs; Geopriv; iOS; location information (ID#: 16-10166)


C. Lyu, A. Pande, X. Wang, J. Zhu, D. Gu and P. Mohapatra, “CLIP: Continuous Location Integrity and Provenance for Mobile Phones,” Mobile Ad Hoc and Sensor Systems (MASS), 2015 IEEE 12th International Conference on, Dallas, TX, 2015, pp. 172-180. doi: 10.1109/MASS.2015.33
Abstract: Many location-based services require a mobile user to continuously prove his location. In the absence of a secure mechanism, malicious users may lie about their locations to get these services. A mobility trace, a sequence of past mobility points, provides evidence for the user's locations. In this paper, we propose a Continuous Location Integrity and Provenance (CLIP) scheme to provide authentication for mobility traces and protect users' privacy. CLIP uses a low-power inertial accelerometer sensor with a light-weight entropy-based commitment mechanism and is able to authenticate the user's mobility trace without the cost of trusted hardware. CLIP maintains the user's privacy, allowing the user to submit a portion of his mobility trace with which the commitment can also be verified. Wireless Access Points (APs) or co-located mobile devices are used to generate the location proofs. We also propose a light-weight spatial-temporal trust model to detect fake location proofs from collusion attacks. The prototype implementation on Android demonstrates that CLIP requires low computational and storage resources. Our extensive simulations show that the spatial-temporal trust model can achieve high (> 0.9) detection accuracy against collusion attacks.
Keywords: data privacy; mobile computing; mobile handsets; radio access networks; AP; CLIP; computational resources; continuous location integrity and provenance; light-weight entropy-based commitment mechanism; location-based services; low-power inertial accelerometer sensor; mobile phones; mobility trace; storage resources; user privacy; wireless access points; Communication system security; Mobile communication; Mobile handsets; Privacy; Security; Wireless communication; Wireless sensor networks (ID#: 16-10167)
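The abstract does not spell out CLIP's entropy-based commitment, but the commit-then-reveal pattern it relies on can be sketched with a generic hash commitment (illustrative only; CLIP's actual mechanism is lighter-weight and more elaborate than this):

```python
import hashlib
import os

def commit(point, nonce=None):
    """Commit to one mobility point (lat, lon, timestamp).

    Generic hash commitment: hiding comes from the random nonce,
    binding from SHA-256. Returns (digest, nonce); the digest can be
    published now and the point revealed (with the nonce) later.
    """
    nonce = nonce if nonce is not None else os.urandom(16)
    digest = hashlib.sha256(nonce + repr(point).encode()).hexdigest()
    return digest, nonce

def verify(point, digest, nonce):
    """Check that a revealed point matches an earlier commitment."""
    return hashlib.sha256(nonce + repr(point).encode()).hexdigest() == digest

c, n = commit((48.1351, 11.5820, 1700000000))
assert verify((48.1351, 11.5820, 1700000000), c, n)
assert not verify((48.1351, 11.5821, 1700000000), c, n)   # altered point fails
```

Committing to each point separately is what lets a user later reveal only a portion of the trace, as the paper describes.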


P. Hallgren, M. Ochoa and A. Sabelfeld, “InnerCircle: A Parallelizable Decentralized Privacy-Preserving Location Proximity Protocol,” Privacy, Security and Trust (PST), 2015 13th Annual Conference on, Izmir, 2015, pp. 1-6. doi: 10.1109/PST.2015.7232947
Abstract: Location Based Services (LBS) are becoming increasingly popular. Users enjoy a wide range of services from tracking a lost phone to querying for nearby restaurants or nearby tweets. However, many users are concerned about sharing their location. A major challenge is achieving the privacy of LBS without hampering the utility. This paper focuses on the problem of location proximity, where principals are willing to reveal whether they are within a certain distance from each other. Yet the principals are privacy-sensitive, not willing to reveal any further information about their locations, nor the distance. We propose InnerCircle, a novel secure multi-party computation protocol for location privacy, based on partially homomorphic encryption. The protocol achieves precise fully privacy-preserving location proximity without a trusted third party in a single round trip. We prove that the protocol is secure in the semi-honest adversary model of Secure Multi-party Computation, and thus guarantees the desired privacy properties. We present the results of practical experiments of three instances of the protocol using different encryption schemes. We show that, thanks to its parallelizability, the protocol scales well to practical applications.
Keywords: cryptographic protocols; data privacy; mobile computing; InnerCircle; LBS privacy; location based services; parallelizability; parallelizable decentralized privacy-preserving location proximity protocol; partially homomorphic encryption; privacy properties; round trip; secure multiparty computation protocol; secure protocol; semihonest adversary model; Approximation methods; Encryption; Privacy; Protocols; Public key (ID#: 16-10168)
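The "partially homomorphic encryption" InnerCircle builds on is additively homomorphic: multiplying ciphertexts adds the underlying plaintexts, which lets distances be combined under encryption. A toy Paillier instance demonstrates the property (tiny primes for readability; a real deployment would use 2048-bit moduli and a vetted library):

```python
import math
import random

# Toy Paillier cryptosystem with g = n + 1 (illustrative only).
p, q = 1009, 1013
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # valid because g = n + 1

def encrypt(m):
    """Encrypt m in [0, n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Standard Paillier decryption: L(c^lam mod n^2) * mu mod n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
a, b = encrypt(123), encrypt(456)
assert decrypt((a * b) % n2) == 579
```

Because each encryption uses fresh randomness, repeated encryptions of the same value look unrelated, which is essential for a proximity test that leaks only an in/out-of-range bit.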


P. G. Kolapwar and H. P. Ambulgekar, “Location Based Data Encryption Methods and Applications,” Communication Technologies (GCCT), 2015 Global Conference on, Thuckalay, 2015, pp. 104-108. doi: 10.1109/GCCT.2015.7342632
Abstract: In today’s world, mobile communication is in tremendous demand in our daily life. The use of the mobile user’s location in encryption, called Geo-encryption, produces more secure systems that can be used in different mobile applications. Location Based Data Encryption Methods (LBDEM) are used to enhance the security of such applications, called Location-based Services (LBS). A Geo-encryption scheme collects the position, time, and latitude and longitude coordinates of mobile nodes and uses them in the encryption and decryption process. Geo-encryption plays an important role in raising the security of LBS. Different Geo-protocols have been developed in this area to add security with better throughput. AES-GEDTD is one such approach, which gives higher security with great throughput. In this paper, we discuss AES-GEDTD as an LBDEM approach and its role in applications such as Digital Cinema Distribution, Patient Telemonitoring Systems (PTS) and military applications.
Keywords: cryptographic protocols; mobile communication; decryption process; digital cinema distribution; encryption process; geo-encryption; location based data encryption methods; location based services; military application; mobile nodes; mobile user location; patient telemonitoring system; Encryption; Mobile nodes; Protocols; Receivers; AES-GEDTD; DES-GEDTD; Geo-encryption; Geo-protocol; LBDEM; LBS (ID#: 16-10169)


S. S. Kumar and A. Pandharipande, “Secure Indoor Positioning: Relay Attacks and Mitigation Using Presence Sensing Systems,” 2015 IEEE 13th International Conference on Industrial Informatics (INDIN), Cambridge, 2015, pp. 82-87. doi: 10.1109/INDIN.2015.7281714
Abstract: Secure indoor positioning is critical to successful adoption of location-based services in buildings. However, positioning based on off-the-air signal measurements is prone to various security threats. In this paper, we provide an overview of security threats encountered in such indoor positioning systems, and particularly focus on the relay threat. In a relay attack, a malicious entity may gain unauthorized access by introducing a rogue relay device in a zone of interest, which is then validly positioned by the location network, and then transfer the control rights to a malicious device outside the zone. This enables the malicious entity to gain access to the application network using the rogue device. We present multiple solutions relying on a presence sensing system to deal with this attack scenario. In one solution, a localized presence sensing system is used to validate the user presence in the vicinity of the position before location-based control is allowed. In another solution, the user device is required to respond to a challenge by a physical action that may be observed and validated by the presence sensing system.
Keywords: indoor navigation; relay networks (telecommunication); telecommunication security; location-based services; off-the-air signal measurement; presence sensing system; relay attack mitigation; rogue relay device; secure indoor positioning; security threat; Lighting; Lighting control; Mobile handsets; Mobile radio mobility management; Relays; Sensors; Servers; Secure indoor positioning; presence sensing systems; relay attacks (ID#: 16-10170)


C. Yara, Y. Noriduki, S. Ioroi and H. Tanaka, “Design and Implementation of Map System for Indoor Navigation — An Example of an Application of a Platform Which Collects and Provides Indoor Positions,” Inertial Sensors and Systems (ISISS), 2015 IEEE International Symposium on, Hapuna Beach, HI, 2015, pp. 1-4. doi: 10.1109/ISISS.2015.7102376
Abstract: Many kinds of indoor positioning systems have been investigated, and location-based services have been developed and introduced. They are individually designed and developed based on the requirements for each service. This paper presents a map platform that accommodates any positioning system in order to utilize the platform for various application systems. The requirement conditions are summarized and the platform has been implemented using open source software. The software allows the required functions to be assigned to two servers, realizes the independence of each function and allows for future function expansion. The study has verified the basic functions required for a mapping system that can incorporate several indoor positioning systems, including dead reckoning calculated by inertial sensors installed in a smartphone and an odometry system operated by rotary encoders installed in an electric wheelchair.
Keywords: Global Positioning System; cartography; distance measurement; geophysics computing; indoor navigation; mobile computing; public domain software; wheelchairs; basic function; dead reckoning; electric wheel chair; future function expansion; indoor positioning system; inertia sensor; location-based services; mapping system; odometry system; open source software; rotary encoder; smartphone; Browsers; Dead reckoning; History; Information management; Security; Servers; Indoor Positioning Data; Map; Navigation (ID#: 16-10171)


L. Xiao, J. Liu, Q. Li and H. V. Poor, “Secure Mobile Crowdsensing Game,” 2015 IEEE International Conference on Communications (ICC), London, 2015, pp. 7157-7162. doi: 10.1109/ICC.2015.7249468
Abstract: By recruiting sensor-equipped smartphone users to report sensing data, mobile crowdsensing (MCS) provides location-based services such as environmental monitoring. However, due to the distributed and potentially selfish nature of smartphone users, mobile crowdsensing applications are vulnerable to faked sensing attacks by users who bid a low price in an MCS auction and provide faked sensing reports to save sensing costs and avoid privacy leakage. In this paper, the interactions among an MCS server and smartphone users are formulated as a mobile crowdsensing game, in which each smartphone user chooses its sensing strategy such as its sensing time and energy to maximize its expected utility while the MCS server classifies the received sensing reports and determines the payment strategy accordingly to stimulate users to provide accurate sensing reports. Nash equilibrium (NE) of a static MCS game is evaluated and a closed-form expression for the NE in a special case is presented. Moreover, a dynamic mobile crowdsensing game is investigated, in which the sensing parameters of a smartphone are unknown by the server and the other users. A Q-learning discriminated pricing strategy is developed for the server to determine the payment to each user. Simulation results show that the proposed pricing mechanism stimulates users to provide high-quality sensing services and suppress faked sensing attacks.
Keywords: mobility management (mobile radio); pricing; smart phones; telecommunication security; MCS auction; MCS server; NE; Nash equilibrium; Q-learning discriminated pricing strategy; closed-form expression; dynamic mobile crowdsensing game security; faked sensing attack suppression; high-quality sensing service; location-based service; sensor-equipped smartphone; Games; Information systems; Mobile communication; Pricing; Security; Sensors; Servers (ID#: 16-10172)
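The server-side Q-learning pricing can be sketched with a standard tabular update (the states, payment levels, and reward model below are hypothetical stand-ins for illustration, not the paper's game):

```python
import random
from collections import defaultdict

# Tabular Q-learning sketch of discriminated pricing: the MCS server
# observes a report-quality "state" and learns which payment level
# maximizes its utility.
random.seed(0)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration
PAYMENTS = [0, 1, 2]                # discretized payment levels
Q = defaultdict(float)

def choose(state):
    """Epsilon-greedy action selection over payment levels."""
    if random.random() < EPS:
        return random.choice(PAYMENTS)
    return max(PAYMENTS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard Q-learning update rule."""
    best_next = max(Q[(next_state, a)] for a in PAYMENTS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy dynamics: an accurate report is worth 3 units per unit paid,
# a faked report is worth nothing, and the payment itself is a cost.
for _ in range(2000):
    s = random.choice(["accurate", "faked"])
    a = choose(s)
    update(s, a, (3 * a if s == "accurate" else 0) - a, s)
```

After training, the learned policy pays the highest level for accurate reports and nothing for faked ones, which is the discrimination effect the paper's pricing mechanism aims for.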


G. Sarath and Megha Lal S. H, “Privacy Preservation and Content Protection in Location Based Queries,” Contemporary Computing (IC3), 2015 Eighth International Conference on, Noida, 2015, pp. 325-330. doi: 10.1109/IC3.2015.7346701
Abstract: Location based services are widely used to access location information such as the nearest ATMs and hospitals. These services are accessed by sending location queries containing the user's current location to the Location based service (LBS) server. The LBS server can retrieve the user's current location from this query and misuse it, threatening the user's privacy. In security-critical applications like defense, protecting the location privacy of authorized users is a critical issue. This paper describes the design and implementation of a solution to this privacy problem, which provides location privacy to authorized users and preserves the confidentiality of data in the LBS server. Our solution is a two-stage approach, where the first stage is based on Oblivious Transfer and the second stage is based on Private Information Retrieval. Here the whole service area is divided into cells and the location information of each cell is stored in the server in encrypted form. A user who wants to retrieve location information creates a cloaking region (a subset of the service area) containing his current location and generates a query embedding it. The server can only identify that the user is somewhere in this cloaking region, so the user's privacy can be improved by increasing the size of the cloaking region. Even though the server sends the location information of all the cells in the cloaking region, the user can decrypt service information only for his exact location, so the confidentiality of server data is preserved.
Keywords: authorisation; data privacy; mobile computing; query processing; ATM; LBS server; authorized user; content protection; data confidentiality; hospital; location based query; location based service server; location information retrieval; location privacy; oblivious transfer; privacy preservation; private information retrieval; security critical application; Cryptography; Information retrieval; Privacy; Protocols; Receivers; Servers; Location based query (ID#: 16-10173)


X. Gong, X. Chen, K. Xing, D. H. Shin, M. Zhang and J. Zhang, “Personalized Location Privacy in Mobile Networks: A Social Group Utility Approach,” 2015 IEEE Conference on Computer Communications (INFOCOM), Kowloon, 2015, pp. 1008-1016. doi: 10.1109/INFOCOM.2015.7218473
Abstract: With increasing popularity of location-based services (LBSs), there have been growing concerns for location privacy. To protect location privacy in a LBS, mobile users in physical proximity can work in concert to collectively change their pseudonyms, in order to hide spatial-temporal correlation in their location traces. In this study, we leverage the social tie structure among mobile users to motivate them to participate in pseudonym change. Drawing on a social group utility maximization (SGUM) framework, we cast users' decision making of whether to change pseudonyms as a socially-aware pseudonym change game (PCG). The PCG further assumes a general anonymity model that allows a user to have its specific anonymity set for personalized location privacy. For the SGUM-based PCG, we show that there exists a socially-aware Nash equilibrium (SNE), and quantify the system efficiency of the SNE with respect to the optimal social welfare. Then we develop a greedy algorithm that myopically determines users' strategies, based on the social group utility derived from only the users whose strategies have already been determined. It turns out that this algorithm can efficiently find a Pareto-optimal SNE with social welfare higher than that for the socially-oblivious PCG, pointing out the impact of exploiting social tie structure. We further show that the Pareto-optimal SNE can be achieved in a distributed manner.
Keywords: data privacy; game theory; mobile computing; optimisation; telecommunication security; LBS; Pareto-optimal SNE; SGUM-based PCG; location traces; location-based services; mobile networks; optimal social welfare; personalized location privacy; physical proximity; social group utility maximization framework; social tie structure; socially-aware Nash equilibrium; socially-aware pseudonym change game; spatial-temporal correlation; system efficiency quantification; Computers; Games; Mobile communication; Mobile handsets; Nash equilibrium; Privacy; Tin (ID#: 16-10174)


M. Movahedi, J. Saia and M. Zamani, “Shuffle to Baffle: Towards Scalable Protocols for Secure Multi-Party Shuffling,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 800-801. doi: 10.1109/ICDCS.2015.116
Abstract: In secure multi-party shuffling, multiple parties, each holding an input, want to agree on a random permutation of their inputs while keeping the permutation secret. This problem is important as a primitive in many privacy-preserving applications such as anonymous communication, location-based services, and electronic voting. Known techniques for solving this problem suffer from poor scalability, load-balancing issues, trusted party assumptions, and/or weak security guarantees. In this paper, we propose an unconditionally-secure protocol for multi-party shuffling that scales well with the number of parties and is load-balanced. In particular, we require each party to send only a polylogarithmic number of bits and perform a polylogarithmic number of operations while incurring only a logarithmic round complexity. We show security under universal composability against up to about n/3 fully-malicious parties. We also provide simulation results in the full version of this paper showing that our protocol improves significantly over previous work. For example, for one million parties, when compared to the state of the art, our protocol reduces the communication and computation costs by at least three orders of magnitude and slightly decreases the number of communication rounds.
Keywords: computational complexity; cryptographic protocols; data privacy; resource allocation; anonymous communication; electronic voting; load-balancing; location-based services; logarithmic round complexity; permutation secret; polylogarithmic number; privacy-preserving; random permutation; scalable protocols; secure multiparty shuffling; trusted party assumptions; unconditionally-secure protocol; Electronic voting; Logic gates; Mobile radio mobility management; Privacy; Protocols; Security; Sorting; Multi-Party Computation; Privacy-Preserving Applications; Secure Shuffling (ID#: 16-10175)


H. Ngo and J. Kim, “Location Privacy via Differential Private Perturbation of Cloaking Area,” 2015 IEEE 28th Computer Security Foundations Symposium, Verona, 2015, pp. 63-74. doi: 10.1109/CSF.2015.12
Abstract: The increasing use of mobile devices has triggered the development of location based services (LBS). By providing location information to an LBS, mobile users can enjoy a variety of useful applications utilizing location information, but may suffer leakage of private information. Location information of mobile users needs to be kept secret while maintaining utility to achieve desirable service quality. Existing location privacy enhancing techniques based on K-anonymity and Hilbert-curve cloaking area generation show advantages in privacy protection and service quality but disadvantages due to the generation of large cloaking areas that make query processing and communication less effective. In this paper we propose a novel location privacy preserving scheme that leverages differential privacy based notions and mechanisms to publish optimal-size cloaking areas from multiple rotated and shifted versions of the Hilbert curve. With experimental results, we show that our scheme significantly reduces the average size of cloaking areas compared to the previous Hilbert curve method. We also show how to quantify an adversary's ability to perform an inference attack on user location data and how to limit the adversary's success rate under a designed threshold.
Keywords: curve fitting; data privacy; mobile computing; mobile handsets; perturbation techniques; Hilbert curve method; Hilbert-curve cloaking area generation; LBS; differential privacy based notions; differential private perturbation; inference attack; k-anonymity; location based services; location information; location privacy enhancing techniques; location privacy preserving scheme; mobile devices; mobile users; optimal size cloaking areas; private information leakage; service quality; Cryptography; Data privacy; Databases; Mobile communication; Privacy; Protocols; Servers; Hilbert curve; differential identifiability; geo-indistinguishability; location privacy (ID#: 16-10176)
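Hilbert-curve cloaking rests on mapping 2D grid cells to a 1D curve index so that an interval of indices forms a compact region. The standard cell-to-index conversion looks like this (textbook bit-twiddling algorithm; the paper's rotated and shifted curve versions would be applied on top of it):

```python
def xy2d(order, x, y):
    """Map grid cell (x, y) to its index on a Hilbert curve of the
    given order (a 2**order x 2**order grid). Nearby cells tend to get
    nearby indices, which is why Hilbert curves yield compact cloaking
    regions."""
    d = 0
    s = 1 << (order - 1)
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the recursion lines up
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s >>= 1
    return d

# the four cells of an order-1 curve are visited in a U shape
print([xy2d(1, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]])  # → [0, 1, 2, 3]
```

A cloaking area containing a user's cell can then be published as a contiguous index range rather than a bounding box.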


Y. Lin, W. Huang and Y. Tang, “Map-Based Multi-Path Routing Protocol in VANETs,” 2015 IEEE 9th International Conference on Anti-counterfeiting, Security, and Identification (ASID), Xiamen, 2015, pp. 145-149. doi: 10.1109/ICASID.2015.7405680
Abstract: Due to vehicle movement and propagation loss of the radio channel, providing a routing protocol for reliable multihop communication in VANETs is particularly challenging. In this paper, we present a map-based multi-path routing protocol for VANETs, MBMPR, which utilizes GPS, a digital map and in-vehicle sensors. With global road information, MBMPR finds an optimal forward path and an alternate path using Dijkstra's algorithm, which improves the reliability of data transmission. Considering the load balance problem at junctions, a congestion detection mechanism is proposed. Aiming at the packet loss problem due to the target vehicle's mobility, MBMPR adopts a recovery strategy using location-based services and target vehicle mobility prediction. Simulations demonstrate that MBMPR performs significantly better than classical VANET routing protocols.
Keywords: multipath channels; resource allocation; routing protocols; telecommunication network reliability; vehicular ad hoc networks; Dijkstra algorithm; GPS; MBMPR; VANET; congestion detection mechanism; data transmission reliability; digital map; global road information; load balance problem; location-based services; map-based multipath routing protocol; optimal forward path; packet loss problem; propagation loss; radio channel; recovery strategy; reliable multihop communication; target vehicle mobility prediction; vehicle movement; Load balance; Multi-path routing; VANETs (ID#: 16-10177)
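MBMPR's forward and alternate paths come from Dijkstra's algorithm over the road graph. A minimal sketch, with the alternate path obtained by simply dropping the primary route's first edge (a simplification; the paper's path-selection criterion is not given in the abstract):

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest path by Dijkstra's algorithm on a weighted junction graph.

    graph: {node: [(neighbor, weight), ...]}. Returns (cost, path).
    """
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# toy junction graph; weights could encode distance or congestion
G = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
     "C": [("D", 1)], "D": []}
cost, primary = dijkstra(G, "A", "D")
print(cost, primary)                # 3 ['A', 'B', 'C', 'D']
# a simple alternate: drop the primary's first edge and re-run
G2 = {**G, "A": [e for e in G["A"] if e[0] != primary[1]]}
alt_cost, alternate = dijkstra(G2, "A", "D")
print(alt_cost, alternate)          # 5 ['A', 'C', 'D']
```

Keeping the alternate path precomputed lets a forwarding node fail over immediately when congestion is detected on the primary route.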


X. Chen, A. Mizera and J. Pang, “Activity Tracking: A New Attack on Location Privacy,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, 2015, pp. 22-30. doi: 10.1109/CNS.2015.7346806
Abstract: The exposure of location information in location-based services (LBS) raises users' privacy concerns. Recent research reveals that LBS users are more concerned about the activities they have performed than the places they have visited. In this paper, we propose a new attack with which the adversary can accurately infer users' activities. Compared to existing attacks, our attack provides the adversary not only with the places where users perform activities but also with when they stay at each of these places. To achieve this objective, we propose a new model to capture users' mobility and their LBS requests in continuous time, which naturally expresses users' behaviour in LBSs. We then formally implement our attack by extending an existing framework for quantifying location privacy. Through experiments on a real-life dataset, we show the effectiveness of our new tracking attack.
Keywords: data privacy; mobility management (mobile radio); telecommunication security; telecommunication services; activity tracking; attack implementation; location information; location privacy; location-based services; real-life dataset; tracking attack; user privacy concerns; users activity; users mobility; Communication networks; Conferences; Privacy; Real-time systems; Security; Semantics; Trajectory (ID#: 16-10178)


J. R. Shieh, “An End-to-End Encrypted Domain Proximity Recommendation System Using Secret Sharing Homomorphic Cryptography,” Security Technology (ICCST), 2015 International Carnahan Conference on, Taipei, 2015, pp. 1-6. doi: 10.1109/CCST.2015.7389682
Abstract: Location privacy preservation means that a person's location is revealed to other entities, such as a service provider or the person's friends, only if this release is strictly necessary and authorized by the person. This is especially important for location-based services. Other current systems use only a 2D geometric model. We develop 3D geometric location privacy for a service that alerts people of nearby friends. Using a robust encryption algorithm, our location privacy scheme guarantees that users can protect their exact location but still be alerted if and only if the service or friend is nearby, and can then determine whether they are getting closer. This is in contrast to other non-secure systems, systems that lack secret sharing, and systems that use location cloaking. In our system, such proximity information can be reconstructed only when a sufficient number of shared keys are combined together; individual shared keys are of no use on their own. The proposed ring homomorphism cryptography combines secret keys from each user to compute relative distances from each user's encrypted location. Our secret sharing scheme does not allow anyone to deceive, mislead, or defraud others of their rights, or to gain an unfair advantage. This relative distance is computed entirely in the encryption domain and is based on the philosophy that everyone has the same right to privacy. We also propose a novel protocol to provide personal anonymity for users of the system. Experiments show that the proposed scheme offers secure, accurate, fast, and anonymous privacy-preserving proximity information. This new approach can potentially be applied to various location-based computing environments.
Keywords: data privacy; mobile computing; private key cryptography; recommender systems; 2D geometric model; 3D geometric location privacy; end-to-end encrypted domain proximity recommendation system; location privacy preservation; location privacy scheme; location-based computing environments; location-based services; personal anonymity; privacy-preserving proximity information; relative distance; ring homomorphism cryptography; robust encryption algorithm; secret keys; secret sharing homomorphic cryptography; user location end encryption; Encryption; Measurement; Mobile radio mobility management; Multimedia communication; Privacy; Three-dimensional displays; Personalization;  Recommender Systems (ID#: 16-10179)
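The ring-homomorphic construction above reconstructs proximity only when enough shares are combined. A minimal additive secret-sharing sketch (illustrative only; the function names and modulus are assumptions, not the paper's construction) shows both properties: reconstruction requires all shares, and share-wise addition is homomorphic, so a relative-distance term can be accumulated without decryption:

```python
import random

P = 2**61 - 1  # large prime modulus (illustrative choice)

def share(secret, n):
    """Split `secret` into n additive shares mod P; all n shares are
    required to reconstruct, and any strict subset is uniformly random."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

assert reconstruct(share(1234, 3)) == 1234

# Additive homomorphism: adding shares component-wise yields shares of
# the sum, without any party seeing the individual inputs.
a, b = share(120, 3), share(45, 3)
combined = [(x + y) % P for x, y in zip(a, b)]
assert reconstruct(combined) == 165
```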


N. W. Lo, M. C. Chiang and C. Y. Hsu, “Hash-Based Anonymous Secure Routing Protocol in Mobile Ad Hoc Networks,” Information Security (AsiaJCIS), 2015 10th Asia Joint Conference on, Kaohsiung, 2015, pp. 55-62. doi: 10.1109/AsiaJCIS.2015.27
Abstract: A mobile ad hoc network (MANET) is composed of multiple wireless mobile devices in which an infrastructureless network with dynamic topology is built based on wireless communication technologies. Novel applications such as location-based services and personal communication Apps used by mobile users with handheld wireless devices utilize MANET environments. In consequence, communication anonymity and message security have become critical issues for MANET environments. In this study, a novel secure routing protocol with communication anonymity, named the Hash-based Anonymous Secure Routing (HASR) protocol, is proposed to support identity anonymity, location anonymity and route anonymity, and defend against major security threats such as replay attack, spoofing, route maintenance attack, and denial of service (DoS) attack. Security analyses show that HASR can achieve both communication anonymity and message security with efficient performance in MANET environments.
Keywords: cryptography; mobile ad hoc networks; mobile computing; mobility management (mobile radio); routing protocols; telecommunication network topology; telecommunication security; DoS attack; HASR protocol; Hash-based anonymous secure routing protocol; MANET; denial of service attack; dynamic network topology; handheld wireless devices; location-based services; message security; mobile ad hoc networks; mobile users; personal communication Apps; route maintenance attack; wireless communication technologies; wireless mobile devices; Cryptography; Mobile ad hoc networks; Nickel; Routing; Routing protocols; communication anonymity; message security; mobile ad hoc network; routing protocol (ID#: 16-10180)


B. Wang, M. Li, H. Wang and H. Li, “Circular Range Search on Encrypted Spatial Data,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 794-795. doi: 10.1109/ICDCS.2015.113
Abstract: Searchable encryption is a promising technique enabling meaningful search operations to be performed on encrypted databases while protecting user privacy from untrusted third-party service providers. However, while most of the existing works focus on common SQL queries, geometric queries on encrypted spatial data have not been well studied. Especially, circular range search is an important type of geometric query on spatial data which has wide applications, such as proximity testing in Location-Based Services and Delaunay triangulation in computational geometry. In this poster, we propose two novel symmetric-key searchable encryption schemes supporting circular range search. Informally, both of our schemes can correctly verify whether a point is inside a circle on encrypted spatial data without revealing data privacy or query privacy to a semi-honest cloud server. We formally define the security of our proposed schemes, prove that they are secure under Selective Chosen-Plaintext Attacks, and evaluate their performance through experiments in a real-world cloud platform (Amazon EC2). To the best of our knowledge, this work represents the first study in secure circular range search on encrypted spatial data.
Keywords: SQL; computational geometry; data privacy; mesh generation; private key cryptography; query processing; Amazon EC2; Delaunay triangulation; SQL query; circular range search; computational geometry; encrypted database; encrypted spatial data; geometric query; location-based service; proximity testing; query privacy; selective chosen-plaintext attack; semi-honest cloud server; symmetric-key searchable encryption scheme; user privacy protection; Companies; Data privacy; Encryption; Servers; Spatial databases (ID#: 16-10181)
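At its core, a circular range query evaluates a simple geometric predicate; the cited schemes evaluate it over ciphertexts, but the plaintext test they must reproduce can be sketched as follows (an illustration, not the paper's protocol):

```python
def inside_circle(point, center, radius):
    """Squared-distance comparison avoids sqrt, keeping the test in
    integer arithmetic as searchable-encryption schemes typically require."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    return dx * dx + dy * dy <= radius * radius

assert inside_circle((3, 4), (0, 0), 5)        # boundary point counts as inside
assert not inside_circle((4, 4), (0, 0), 5)
```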


Z. Zhou, Z. Yang, C. Wu, Y. Liu and L. M. Ni, “On Multipath Link Characterization and Adaptation for Device-Free Human Detection,” Distributed Computing Systems (ICDCS), 2015 IEEE 35th International Conference on, Columbus, OH, 2015, pp. 389-398. doi: 10.1109/ICDCS.2015.47
Abstract: Wireless-based device-free human sensing has raised increasing research interest and stimulated a range of novel location-based services and human-computer interaction applications for recreation, asset security and elderly care. A primary functionality of these applications is to first detect the presence of humans before extracting higher-level contexts such as physical coordinates, body gestures, or even daily activities. In the presence of dense multipath propagation, however, it is non-trivial to even reliably identify the presence of humans. The multipath effect can invalidate simplified propagation models and distort received signal signatures, thus deteriorating detection rates and shrinking detection range. In this paper, we characterize the impact of human presence on wireless signals via ray-bouncing models, and propose a measurable metric on commodity WiFi infrastructure as a proxy for detection sensitivity. To achieve higher detection rate and wider sensing coverage in multipath-dense indoor scenarios, we design a lightweight subcarrier and path configuration scheme harnessing frequency diversity and spatial diversity. We prototype our scheme with standard WiFi devices. Evaluations conducted in two typical office environments demonstrate a detection rate of 92.0% with a false positive rate of 4.5%, and almost 1x gain in detection range given a minimal detection rate of 90%.
Keywords: diversity reception; human computer interaction; indoor radio; multipath channels; radio links; radiowave propagation; wireless LAN; Wi-Fi infrastructure; dense multipath propagation; device-free human detection; frequency diversity; higher-level context extraction; human-computer interaction application; lightweight subcarrier; location-based service; multipath dense indoor scenario; multipath link adaptation; multipath link characterization; path configuration scheme; ray bouncing model; received signal signature; shrinking detection range; spatial diversity; wireless based device-free human sensing; IEEE 802.11 Standard; OFDM; Sensitivity; Sensors; Shadow mapping; Wireless communication; Wireless sensor networks (ID#: 16-10182)


Y. Utsunomiya, K. Toyoda and I. Sasase, “LPCQP: Lightweight Private Circular Query Protocol for Privacy-Preserving k-NN Search,” 2015 12th Annual IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, NV, 2015, pp. 59-64. doi: 10.1109/CCNC.2015.7157947
Abstract: With the recent growth of mobile communication, location-based services (LBSs) are getting much attention. While LBSs provide beneficial information about points of interest (POIs) such as restaurants or cafes near users, users' current locations could be revealed to the server. Lien et al. have recently proposed a privacy-preserving k-nearest neighbor (k-NN) search with additive homomorphic encryption. However, it requires heavy computation due to unnecessary multiplication in the encryption domain, which places an intolerable burden on the server. In this paper, we propose a lightweight private circular query protocol (LPCQP) for privacy-preserving k-NN search with additive and multiplicative homomorphism. Our proposed scheme divides a POI table into sub-tables and aggregates them with homomorphic cryptography in order to remove POI information that is unnecessary for the requesting user, and thus the computational cost on the server is reduced. We evaluate the performance of our proposed scheme and show that it reduces the computational cost on the LBS server while keeping high security and high accuracy.
Keywords: cryptographic protocols; data privacy; mobility management (mobile radio); telecommunication security; LBS server; LPCQP; additive homomorphic encryption; computational cost reduction; homomorphic cryptography; lightweight private circular query protocol; location-based service; mobile communication; points of interest; privacy-preserving k-NN search; privacy-preserving k-nearest neighbor search; Accuracy; Additives; Computational efficiency; Encryption; Servers (ID#: 16-10183)
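The query LPCQP protects is an ordinary k-nearest-neighbor lookup over a POI table; a plaintext sketch of that baseline (illustrative names, not the LPCQP construction) is:

```python
import heapq

def knn(pois, query, k):
    """Return the k POIs nearest to `query` by squared Euclidean distance."""
    def sqdist(p):
        return (p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2
    return heapq.nsmallest(k, pois, key=sqdist)

assert knn([(0, 0), (1, 1), (5, 5), (2, 2)], (0, 0), 2) == [(0, 0), (1, 1)]
```

LPCQP's contribution is performing this lookup in the encrypted domain while pruning sub-tables so the server avoids unnecessary multiplications.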


S. K. Mazumder, C. Chowdhury and S. Neogy, “Tracking Context of Smart Handheld Devices,” Applications and Innovations in Mobile Computing (AIMoC), 2015, Kolkata, 2015, pp. 176-181. doi: 10.1109/AIMOC.2015.7083849
Abstract: As researchers have noted, the ability to locate wireless devices has many benefits. Applications include sport tracking, friend finders, security-related jobs, surveillance, etc. The performance of such security-related services and surveillance would improve significantly if, in addition to location, the context of the user were also known. Thus, in this paper, a location-based service is designed and implemented that uses context sensing along with location to track the context (location and state of the device user) of a smart handheld in an energy-efficient manner. The service can be used for surveillance and to act proactively. It can also be used to track the location of individuals (relatives, children) as well as that of a lost or stolen device (say, a phone) from any type of other handheld device. The service can be initiated securely and remotely (not necessarily from smartphones); thus it does not always work in the background and hence saves significant battery power. Once initiated, it does not stop even when the SIM card is changed or the phone is restarted. The performance analysis shows the efficiency of the application.
Keywords: smart phones; telecommunication security; SIM card; battery power; locate wireless devices; location based service; security related services; smart handheld devices; stolen device; track context; track location; tracking context; Context; Global Positioning System; Google; Mobile communication; Sensors; Servers; Smart phones; Android; Context; Location tracking; SmartPhone; tracking (ID#: 16-10184)


M. Ahmadian, J. Khodabandehloo and D. C. Marinescu, “A Security Scheme for Geographic Information Databases in Location Based Systems,” SoutheastCon 2015, Fort Lauderdale, FL, 2015, pp. 1-7. doi: 10.1109/SECON.2015.7132941
Abstract: LBSs (Location-Based Services) are ubiquitous nowadays; they are used by a wide variety of applications ranging from social networks to military applications. Moreover, smartphones and handheld devices are increasingly being used for mobile transactions. These devices are mostly GPS-enabled and can provide location information. In some cases, the geographical location of clients is integrated with applications as an authentication factor to enhance security. However, it is easy for attackers to forge location information, so the security of geographical information is a critical issue. In this paper we discuss geographical database features and propose an effective security scheme for mobile devices with limited resources.
Keywords: Global Positioning System; cryptography; geographic information systems; mobile computing; GPS-enabled devices; authentication factor; geographic information databases; geographical database features; geographical location information; hand held devices; location based systems; military applications; mobile devices; mobile transactions; security scheme; smart phones; social networks; Data structures; Encryption; Hardware; Spatial databases; Cryptography; Digital map; Location based system; Security; databases (ID#: 16-10185)


B. Wang, M. Li, H. Wang and H. Li, “Circular Range Search on Encrypted Spatial Data,” Communications and Network Security (CNS), 2015 IEEE Conference on, Florence, Italy, 2015, pp. 182-190. doi: 10.1109/CNS.2015.7346827
Abstract: Searchable encryption is a promising technique enabling meaningful search operations to be performed on encrypted databases while protecting user privacy from untrusted third-party service providers. However, while most of the existing works focus on common SQL queries, geometric queries on encrypted spatial data have not been well studied. Especially, circular range search is an important type of geometric query on spatial data which has wide applications, such as proximity testing in Location-Based Services and Delaunay triangulation in computational geometry. In this paper, we propose two novel symmetric-key searchable encryption schemes supporting circular range search. Informally, both of our schemes can correctly verify whether a point is inside a circle on encrypted spatial data without revealing data privacy or query privacy to a semi-honest cloud server. We formally define the security of our proposed schemes, prove that they are secure under Selective Chosen-Plaintext Attacks, and evaluate their performance through experiments in a real-world cloud platform (Amazon EC2).
Keywords: Cloud computing; Data privacy; Encryption; Nearest neighbor searches; Servers; Spatial databases (ID#: 16-10186)


L. Chen and K. D. Kang, “A Framework for Real-Time Information Derivation from Big Sensor Data,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 1020-1026. doi: 10.1109/HPCC-CSS-ICESS.2015.46
Abstract: In data-intensive real-time applications, e.g., transportation management and location-based services, the amount of sensor data is exploding. In these applications, it is desirable to extract value-added information, e.g., fast driving routes, from sensor data streams in real-time rather than overloading users with massive raw data. However, achieving the objective is challenging due to the data volume and complex data analysis tasks with stringent timing constraints. Most existing big data management systems, e.g., Hadoop, are not directly applicable to real-time sensor data analytics, since they are timing agnostic and focus on batch processing of previously stored data that are potentially outdated and subject to I/O overheads. To address the problem, we design a new real-time big data management framework, which supports a non-preemptive periodic task model for continuous in-memory sensor data analysis and a schedulability test based on the EDF (Earliest Deadline First) algorithm to derive information from current sensor data in real-time by extending the map-reduce model originated in functional programming. As a proof-of-concept case study, a prototype system is implemented. In the performance evaluation, it is empirically shown that all deadlines can be met for the tested sensor data analysis benchmarks.
Keywords: Big Data; batch processing (computers); data analysis; functional programming; parallel processing; performance evaluation; EDF algorithm; I/O overheads; batch processing; complex data analysis tasks; continuous in-memory sensor data analysis; data volume; data-intensive real-time applications; earliest deadline first algorithm; functional programming; nonpreemptive periodic task model; performance evaluation; real-time big data management framework; real-time information derivation; real-time sensor big data analytics; schedulability test; stringent timing constraints; timing agnostic; value-added information extraction; Analytical models; Big data; Data analysis; Data models; Mobile radio mobility management; Real-time systems; Timing; Big Sensor Data; Real-Time Information (ID#: 16-10187)
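The schedulability test mentioned above can be illustrated with the classic EDF utilization bound. Note this is the preemptive-EDF condition; the paper's non-preemptive setting needs an additional blocking term, so this sketch is a simplification rather than the paper's exact test:

```python
def edf_utilization_test(tasks):
    """tasks: list of (C, T) pairs — worst-case execution time and period.
    Under preemptive EDF, a periodic task set is schedulable iff the
    total utilization sum(C/T) is at most 1."""
    return sum(c / t for c, t in tasks) <= 1.0

assert edf_utilization_test([(1, 4), (2, 8), (1, 2)])      # U = 1.0, schedulable
assert not edf_utilization_test([(3, 4), (2, 8), (1, 2)])  # U = 1.5, not
```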


J. Peng, Y. Meng, M. Xue, X. Hei and K. W. Ross, “Attacks and Defenses in Location-Based Social Networks: A Heuristic Number Theory Approach,” Security and Privacy in Social Networks and Big Data (SocialSec), 2015 International Symposium on, Hangzhou, 2015, pp. 64-71. doi: 10.1109/SocialSec2015.19
Abstract: The rapid growth of location-based social network (LBSN) applications — such as WeChat, Momo, and Yik Yak — has in essence facilitated the promotion of anonymously sharing instant messages and open discussions. These services breed a unique anonymous atmosphere for users to discover their geographic neighborhoods and then initiate private communications. In this paper, we demonstrate how such location-based features of WeChat can be exploited to determine the user's location with sufficient accuracy in any city from any location in the world. Guided by the number theory, we design and implement two generic localization attack algorithms to track anonymous users' locations that can be potentially adapted to any other LBSN services. We evaluated the performance of the proposed algorithms using Matlab simulation experiments and also deployed real-world experiments for validating our methodology. Our results show that WeChat, and other LBSN services as such, have a potential location privacy leakage problem. Finally, k-anonymity based countermeasures are proposed to mitigate the localization attacks without significantly compromising the quality-of-service of LBSN applications. We expect our research to bring this serious privacy pertinent issue into the spotlight and hopefully motivate better privacy-preserving LBSN designs.
Keywords: data privacy; social networking (online); LBSN applications; Matlab simulation; Momo; WeChat; Yik Yak; heuristic number theory approach; k-anonymity based countermeasures; localization attack algorithms; location privacy leakage problem; location-based social networks; privacy-preserving LBSN design; private communications; quality-of-service; user location; Algorithm design and analysis; Global Positioning System; Prediction algorithms; Privacy; Probes; Smart phones; Social network services; Wechat; localization attack; location-based social network; number theory; privacy (ID#: 16-10188)
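The attack class above boils down to localizing a target from distances reported at several probe positions. A brute-force trilateration sketch conveys the geometry (the paper's number-theoretic algorithms are far more efficient; all names here are illustrative):

```python
def frange(lo, hi, step):
    """Yield floats from lo to hi inclusive in increments of step."""
    x = lo
    while x <= hi:
        yield x
        x += step

def trilaterate(probes, step=0.5):
    """probes: [((x, y), reported_distance), ...]. Grid-search the point
    minimizing the sum of squared range errors."""
    xs = [p[0] for p, _ in probes]
    ys = [p[1] for p, _ in probes]
    candidates = [(x, y)
                  for x in frange(min(xs) - 10, max(xs) + 10, step)
                  for y in frange(min(ys) - 10, max(ys) + 10, step)]
    def err(pt):
        return sum((((pt[0] - px) ** 2 + (pt[1] - py) ** 2) ** 0.5 - d) ** 2
                   for (px, py), d in probes)
    return min(candidates, key=err)

# Target at (3, 4); three probes report true distances of 5.
est = trilaterate([((0, 0), 5.0), ((6, 0), 5.0), ((0, 8), 5.0)])
```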


S. Kim, S. Ha, A. Saad and J. Kim, “Indoor Positioning System Techniques and Security,” e-Technologies and Networks for Development (ICeND), 2015 Fourth International Conference on, Lodz, 2015, pp. 1-4. doi: 10.1109/ICeND.2015.7328540
Abstract: Nowadays location-based techniques are used in various fields such as traffic navigation, map services, etc. Because people spend a lot of time indoors, it is important for users and service providers to get exact indoor positioning information. There are many technologies for obtaining indoor location information, such as Wi-Fi, Bluetooth, and Radio Frequency Identification (RFID). Despite the importance of IPS, there is no standard for IPS techniques. In addition, because of the characteristics of the data, security and privacy problems become an issue. This paper introduces IPS techniques, analyzes each in terms of cost, accuracy, etc., and then introduces the related security threats.
Keywords: indoor communication; indoor navigation; radionavigation; security of data; telecommunication traffic; Bluetooth; IPS techniques; RFID; Radio Frequency Identification; Wi-Fi; Indoor location information; indoor positioning system techniques; location based techniques; map services; traffic navigation; Accuracy; Base stations; Fingerprint recognition; Global Positioning System; IEEE 802.11 Standard; Security; Bluetooth4.0; Indoor Positioning System (IPS); Location; Privacy (ID#: 16-10189)


J. Wang, M. Qiu, B. Guo, Y. Shen and Q. Li, “Low-Power Sensor Polling for Context-Aware Services on Smartphones,” 2015 IEEE 17th International Conference on High Performance Computing and Communications (HPCC), 2015 IEEE 7th International Symposium on Cyberspace Safety and Security (CSS), 2015 IEEE 12th International Conference on Embedded Software and Systems (ICESS), New York, NY, 2015, pp. 617-622. doi: 10.1109/HPCC-CSS-ICESS.2015.255
Abstract: The growing availability of sensors integrated in smartphones provides many opportunities for context-aware services, such as location-based and profile-based applications. Power consumption of the sensors contributes a significant part of the overall power consumption of current smartphones. Furthermore, smartphone sensors have to be activated at a stable period to match the request frequency of those context-aware applications, known as full polling-based detection, which wastes a large amount of energy in unnecessary detection. In this paper, we propose low-power sensor polling for context-aware applications, which can dynamically shrink extra sensor activity so that unrelated sensors can stay in sleeping status for a longer time. We also provide an algorithm to find the relationship between application invocations and sensor activities, which is always hidden in the context middleware. With this method, the polling scheduler is able to calculate and match the detection frequency of the various application combinations invoked by the user. We evaluate this framework with different kinds of context-aware applications. The results show that our new low-power polling incurs only a small response delay (97 ms) in the middleware while saving 70% of sensor energy consumption compared with the traditional exhaustive polling operation.
Keywords: mobile computing; power aware computing; scheduling; sensors; smart phones; application invoking relationship; context middleware; context-aware services; frequency detection; full-polling-based detection; location-based application; low-power sensor polling; polling scheduler; profile-based application; sensor activities; sensor energy consumption; sensor power consumption; sleeping status; smart phones; stable period; Accelerometers; Context; Electronic mail; Feature extraction; IEEE 802.11 Standard; Matrix converters; Smart phones; Contextaware; attribute; detecting; energy consumption; polling; smartphone sensor (ID#: 16-10190)
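One plausible way to "match the detecting frequency of various application combinations" is to poll at the greatest common divisor of the apps' request periods, the slowest rate that still serves every app on time (an assumption for illustration; the paper's scheduler also models the hidden app-to-sensor relationships):

```python
from functools import reduce
from math import gcd

def polling_period_ms(request_periods_ms):
    """Slowest polling period (ms) that divides every app's request period,
    so each app's request always coincides with a poll."""
    return reduce(gcd, request_periods_ms)

assert polling_period_ms([1000, 1500, 500]) == 500
assert polling_period_ms([400, 600]) == 200
```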


S. Imran, R. V. Karthick and P. Visu, “DD-SARP: Dynamic Data Secure Anonymous Routing Protocol for MANETs in Attacking Environments,” Smart Technologies and Management for Computing, Communication, Controls, Energy and Materials (ICSTM), 2015 International Conference on, Chennai, 2015, pp. 39-46. doi: 10.1109/ICSTM.2015.7225388
Abstract: The most important application of MANETs is to maintain anonymous communication in attacking environments. Although many anonymous protocols for secure routing have been proposed, the proposed solutions remain vulnerable at some point: service rejection (DoS) attacks and timing attacks make both the system and the protocol vulnerable. This paper studies and discusses various existing protocols and how efficient they are in an attacking environment, including ALARM (Anonymous Location-Aided Routing in Suspicious MANETs), ARM (Anonymous Routing Protocol for Mobile Ad Hoc Networks), Privacy-Preserving Location-Based On-Demand Routing in MANETs, AO2P (Ad Hoc On-Demand Position-Based Private Routing Protocol), and anonymous connections. In this paper we propose a new concept by combining two previously proposed geographical-location-based protocols: ALERT, which is based mainly on node-to-node hop encryption and bursty traffic, and Greedy Perimeter Stateless Routing (GPSR), a geographical-location-based protocol for wireless networks that uses the router's position and a packet's destination to forward packets, following a greedy method that relies on information about the immediate neighboring routers in the network. Simulation results demonstrate the efficiency of the proposed DD-SARP protocol, with improved performance compared to the existing protocols.
Keywords: mobile ad hoc networks; routing protocols; telecommunication security; ALARM; ALERT; AO2P; Ad Hoc on-Demand Position-Based Private Routing Protocol, Anonymous Connections; Anonymous Location-Aided Routing in Suspicious MANET; Anonymous Routing Protocol for Mobile Ad Hoc Networks; DD-SARP; DoS; GPSR; Greedy Perimeter Stateless Routing; anonymous communication; attacking environments; bursty traffic; dynamic data secure anonymous routing protocol; geographical location; neighboring router; node-to-node hop encryption; packet destination; packet forwarding; privacy-preserving location-based on-demand routing; router position; secure routing; service rejection attacks; timing attacks; Ad hoc networks; Encryption; Mobile computing; Public key; Routing; Routing protocols; Mobile adhoc network; adversarial; anonymous; privacy (ID#: 16-10191)
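GPSR's greedy mode, referenced above, forwards each packet to the neighbor geographically closest to the destination and falls back to perimeter routing at a local maximum. A minimal sketch (illustrative only, not the DD-SARP implementation):

```python
def greedy_next_hop(current, neighbors, dest):
    """GPSR greedy mode: pick the neighbor closest to `dest`; return None
    (triggering perimeter mode) if no neighbor improves on the current
    node's own distance to the destination."""
    def sqdist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    best = min(neighbors, key=lambda n: sqdist(n, dest), default=None)
    if best is None or sqdist(best, dest) >= sqdist(current, dest):
        return None  # local maximum: fall back to perimeter routing
    return best

assert greedy_next_hop((0, 0), [(1, 0), (0, 1)], (5, 0)) == (1, 0)
assert greedy_next_hop((0, 0), [(-1, 0)], (5, 0)) is None
```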


A. C. M. Fong, “Conceptual Analysis for Timely Social Media-Informed Personalized Recommendations,” 2015 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, 2015, pp. 150-151. doi: 10.1109/ICCE.2015.7066358
Abstract: Integrating sensor networks and human social networks can provide rich data for many consumer applications. Conceptual analysis offers a way to reason about real-world concepts, which can assist in discovering hidden knowledge from the fused data. Knowledge discovered from such data can be used to provide mobile users with location-based, personalized and timely recommendations. Taking a multi-tier approach that separates concerns of data gathering, representation, aggregation and analysis, this paper presents a conceptual analysis framework that takes unified aggregated data as an input and generates semantically meaningful knowledge as an output. Preliminary experiments suggest that a fusion of sensor network and social media data improves the overall results compared to using either source of data alone.
Keywords: data analysis; data mining; mobile computing; recommender systems; sensor fusion; social networking (online); conceptual analysis; data aggregation; data analysis; data fusion; data gathering; data representation; hidden knowledge discovery; location-based recommendation; multitier approach; sensor network; timely social media-informed personalized recommendations; Conferences; Consumer electronics; Formal concept analysis; Media; Ontologies; Security; Social network services (ID#: 16-10192)


C. Su, Y. Yu, M. Sui and H. Zhang, “Friend Recommendation Algorithm Based on User Activity and Social Trust in LBSNs,” 2015 12th Web Information System and Application Conference (WISA), Jinan, 2015, pp. 15-20. doi: 10.1109/WISA.2015.11
Abstract: In LBSNs (Location-Based Social Networks), friend recommendation results are mainly decided by the number of common friends or by similar user preferences. However, the lack of semantic information about user activity preferences, insufficient modeling of social trust among user relationships, and individual score ranking by a crowd or by a third party make recommendation quality undesirable. To address this issue, the FRBTA algorithm is proposed in this paper to recommend best friends by considering multiple factors such as users' semantic activity preferences and social trust. Experimental results show that the proposed algorithm is feasible and effective.
Keywords: recommender systems; security of data; social networking (online); FRBTA algorithm; LBSN; friend recommendation algorithm; location-based social networks; similar user preferences; social trust; user activity; user semantic activity preferences; Buildings; Multimedia communication; Semantics; Social network services; Streaming media; User-generated content; Activity Similarity; Friend Recommendation; LBSNs; Social Trust (ID#: 16-10193)


F. A. Mansoori and C. Y. Yeun, “Emerging New Trends of Location Based Systems Security,” 2015 10th International Conference for Internet Technology and Secured Transactions (ICITST), London, 2015, pp. 158-163. doi: 10.1109/ICITST.2015.7412078
Abstract: The Location-Based System (LBS) is considered one of the most beneficial technologies in modern life, commonly embedded in various devices. It helps people find their required services in the least amount of time based on their positions. Users submit a query with their locations and required services to an untrusted LBS server. This raises user privacy concerns: users should have the right to use services while keeping their location or identity concealed. This research covers an introduction to LBS services and architecture components; security threats to LBS; related work on providing security while using LBS services, including checking the integrity of provided location information (LI); and the privacy of end users versus identifying end users for security purposes. End-user privacy is based on key anonymity, and four LBS security approaches based on k-anonymity are examined: Encryption-based K-anonymity, MobiCache, FGcloak, and the Pseudo-Location Updating System. The paper concludes with a comparison and analysis of the four stated LBS security approaches, followed by enhancements and recommendations.
Keywords: cryptography; data privacy; mobile computing; FGcloak; LBS; LI; MobiCache; architecture components; encryption-based k-anonymity; end user privacy; key anonymity; location based systems security; location information; pseudo-location updating system; Computer architecture; Internet; Privacy; Public key; Servers; Location Based Systems; Privacy; Security (ID#: 16-10194)
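All four k-anonymity approaches listed above share the same generalization step: hide the user's exact point inside a region containing at least k users. A naive cloaking sketch (illustrative only; real systems rely on trusted anonymizers or peer groups, and the names here are assumptions):

```python
def cloak(user, others, k, init=0.001):
    """Grow a square region around `user` until it covers at least k users
    total (the user plus k-1 others), then report the region to the LBS
    server instead of the exact point."""
    half = init
    while True:
        inside = sum(1 for (x, y) in others
                     if abs(x - user[0]) <= half and abs(y - user[1]) <= half)
        if inside + 1 >= k:  # +1 counts the user themself
            return (user[0] - half, user[1] - half,
                    user[0] + half, user[1] + half)
        half *= 2  # double the region until k-anonymity is reached
```

For example, `cloak((0.0, 0.0), [(0.01, 0.0), (0.5, 0.5), (3.0, 3.0)], 3)` keeps doubling until the square is wide enough to also contain the user at (0.5, 0.5).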


A. Vasilateanu and A. Buga, “AsthMate — Supporting Patient Empowerment Through Location-Based Smartphone Applications,” 2015 20th International Conference on Control Systems and Computer Science, Bucharest, 2015, pp. 411-417. doi: 10.1109/CSCS.2015.61
Abstract: The ever-changing challenges and pressures on the healthcare domain have made it urgent to find a replacement for traditional systems. Breakthroughs in information systems and advances in data storage and processing, sustained by the ubiquity of gadgets and an efficient infrastructure for networks and services, have driven a shift of medical systems towards digital healthcare. The AsthMate application is an e-health tool for asthma patients, acting as an enabler for patient empowerment. The application contributes both to the individual and to the community, exposing a web application that allows citizens to check the state of the air in the area they live in. The ongoing implementation can benefit from the advantages of cloud computing solutions in order to ensure better deployment and data accessibility. However, data privacy is a key aspect for such systems. In consideration of this reason, a proper trade-off between the functionality, d